Hi Hans,
I'm sorry to report that I've hit a new problem. I was executing a zfs recv of a few GBs of data while, at the same time, running a dd if=/dev/zero of=/... onto my test pool (which currently has four disks in two mirror vdevs) when the pool stopped responding.
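For reference, the concurrent workload looked roughly like this (dataset and file names below are placeholders, not the exact ones I used):

```shell
# Rough sketch of the concurrent workload (names are placeholders):
#   zfs send somewhere/fs@snap | zfs recv dati/fs &
#   dd if=/dev/zero of=/dati/fill bs=1M &
# The dd half can be exercised harmlessly against a scratch file anywhere:
dd if=/dev/zero of=/tmp/zfill.$$ bs=1M count=8 2>/dev/null
wc -c < /tmp/zfill.$$     # 8388608 (8 MiB written)
rm -f /tmp/zfill.$$
```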

This is what I have in /var/adm/messages:

Feb  9 19:38:54 pg-1 ipmi: [ID 183295 kern.info] SMBIOS type 0x1, addr 0xca2
Feb  9 19:38:54 pg-1 ipmi: [ID 306142 kern.info] device rev. 3, firmware rev. 2.65, version 2.0
Feb  9 19:38:54 pg-1 ipmi: [ID 935091 kern.info] number of channels 2
Feb  9 19:38:54 pg-1 ipmi: [ID 699450 kern.info] watchdog supported
Feb  9 19:38:55 pg-1 scsi: [ID 583861 kern.info] ses0 at lmrc2: target-port w300162b20dde3640 lun 0
Feb  9 19:38:55 pg-1 genunix: [ID 936769 kern.info] ses0 is /pci@0,0/pci8086,43b8@1c/pci1590,32b@0/iport@p0/enclosure@w300162b20dde3>
Feb  9 19:42:05 pg-1 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-GH, TYPE: Fault, VER: 1, SEVERITY: Major#012EVENT-TIME: Fri>
Feb  9 19:51:46 pg-1 rootnex: [ID 349649 kern.info] xsvc0 at root: space 0 offset 0
Feb  9 19:51:46 pg-1 genunix: [ID 936769 kern.info] xsvc0 is /xsvc@0,0
Feb 10 08:24:08 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 08:24:16 pg-1 lmrc: [ID 998901 kern.warning] WARNING: lmrc0: AEN failed, status = 255
Feb 10 08:24:16 pg-1 lmrc: [ID 831201 kern.warning] WARNING: lmrc0: PD map sync failed, status = 255
Feb 10 08:24:17 pg-1 zfs: [ID 961531 kern.warning] WARNING: Pool 'dati' has encountered an uncorrectable I/O failure and has been su>
Feb 10 08:24:22 pg-1 lmrc: [ID 864919 kern.notice] NOTICE: lmrc0: FW is in fault state!
Feb 10 08:24:22 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 08:28:02 pg-1 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major#012EVENT-TIME: Sat>
Feb 10 08:28:02 pg-1 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major#012EVENT-TIME: Sat>
Feb 10 08:28:03 pg-1 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major#012EVENT-TIME: Sat>
Feb 10 08:28:03 pg-1 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major#012EVENT-TIME: Sat>
Feb 10 08:28:10 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 08:28:18 pg-1 lmrc: [ID 380853 kern.warning] WARNING: lmrc0: LD target map sync failed, status = 255
Feb 10 08:34:49 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 08:41:27 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 08:48:06 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 08:54:45 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 09:01:24 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 09:08:03 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 09:14:42 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...
Feb 10 09:21:21 pg-1 lmrc: [ID 408335 kern.warning] WARNING: lmrc0: resetting...

and it was still trying to reset it this morning. iostat -indexC shows this:

r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0  123.3    0.0  651.6  0.0  0.0    0.0    0.3   0   4   0   0   0   0 c2
    0.0   61.1    0.0  325.8  0.0  0.0    0.0    0.4   0   3   0   0   0   0 c2t3d0
    0.0   62.1    0.0  325.8  0.0  0.0    0.0    0.2   0   1   0   0   0   0 c2t4d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c3
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c3t001B448B4A7140BFd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0 116 116 c4
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0  24  24 c4t50014EE6B2513C38d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0  24  24 c4t5000CCA85EE5ECB0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0  18  18 c4t5000C500AAF9B0C3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0  50  50 c4t5000C500AAF9BF2Fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c5
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c5tACE42E0005CFF480d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 dati
    0.0  119.3    0.0  651.6  0.4  0.0    3.3    0.3   3   3   0   0   0   0 rpool
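
As an aside, the devices that have accumulated transport errors can be picked out of that output with a quick awk filter (assuming the column layout above, where the 13th field is trn and the last is the device name):

```shell
# Filter iostat -indexC rows whose "trn" (transport error) count is nonzero.
# Field positions assumed from the header above: $13 = trn, $NF = device.
# Two sample rows taken from the output above:
iostat_sample='    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0  24  24 c4t50014EE6B2513C38d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c5'
printf '%s\n' "$iostat_sample" | awk '$13+0 > 0 { print $NF, $13 }'
# prints: c4t50014EE6B2513C38d0 24
```

In live use the same filter can be piped directly from iostat; header lines are dropped automatically because their 13th field ("trn") is non-numeric.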


Here is the configuration of the pool where the problem occurred:

  pool: dati
state: ONLINE
  scan: scrub repaired 0 in 0 days 00:53:43 with 0 errors on Fri Feb  9 19:03:47 2024
config:

        NAME                       STATE     READ WRITE CKSUM
        dati                       ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c4t5000CCA85EE5ECB0d0  ONLINE       0     0     0
            c4t50014EE6B2513C38d0  ONLINE       0     0     0
          mirror-2                 ONLINE       0     0     0
            c4t5000C500AAF9B0C3d0  ONLINE       0     0     0
            c4t5000C500AAF9BF2Fd0  ONLINE       0     0     0
        logs
          c3t001B448B4A7140BFd0s0  ONLINE       0     0     0

errors: No known data errors

So I forced a reboot with a dump, which you can find here:

https://mega.nz/file/YqczlQbS#XJ7q0-NIDezq3czIu3qyqVdEs8JA7aM2uAicEUEjX0E

Best regards,
Maurilio