* [discuss] Fwd: [OpenIndiana-discuss] Installation on a partition (UEFI System)
[not found] ` <CABC65rNbiPwBosCJ-Pm2reMGW1OB1enr1=4pso1fyA_xTe8now@mail.gmail.com>
@ 2025-08-13 2:17 ` Atiq Rahman
2025-08-13 6:29 ` Stephan Althaus via illumos-discuss
0 siblings, 1 reply; 20+ messages in thread
From: Atiq Rahman @ 2025-08-13 2:17 UTC (permalink / raw)
To: illumos-discuss
[-- Attachment #1.1.1: Type: text/plain, Size: 3368 bytes --]
Copying this to the illumos discussion list as well, in case anyone else knows.
---------- Forwarded message ---------
From: Atiq Rahman <atiqcx@gmail.com>
Date: Tue, Aug 12, 2025 at 6:23 PM
Subject: Re: [OpenIndiana-discuss] Installation on a partition (UEFI System)
To: Discussion list for OpenIndiana <openindiana-discuss@openindiana.org>
Cc: John D Groenveld <groenveld@acm.org>
Hi,
I tried the live USB text installer again. Right after launch, the text
installer shows the following as the second option at the bottom of the screen:
*F5_InstallToExistingPool*
A screenshot is below. If you zoom in, you will see the option at the top of
the blue background in the bottom part of the image.
[image: text_install_screen.jpg]
Can I partition and create a pool using FreeBSD as you mentioned, and then
use this option to install on that pool?
*Note that my system is EFI Only (no legacy / no CSM).*
Sincerely,
Atiq
On Tue, Aug 12, 2025 at 2:27 PM Atiq Rahman <atiqcx@gmail.com> wrote:
> Hi,
> Following up again to request confirmation regarding the "partition the
> disk" option in the installer.
>
> At present, it’s impractical for me to carry an external storage device
> solely for OpenIndiana. With other operating systems, I can safely install
> and test them on my internal SSD without risking my existing setup.
> Unfortunately, with OI, this specific limitation and uncertainty are
> creating unnecessary friction and confusion.
>
> Could you please confirm the current state of EFI support — specifically,
> whether it is possible to safely perform a custom-partition installation on
> an EFI-only system, using a FreeBSD live USB image with the -d option?
>
> Thank you,
> Atiq
>
>
> On Mon, Aug 4, 2025 at 3:54 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>
>> Hi John,
>> That's very helpful.
>>
>> So, even though I chose "Partition the disk (MBR)" in the text
>> installer, it should work fine on my UEFI system after I set up the pool
>> using a FreeBSD live image, right? I get confused since it says "MBR"
>> next to it and my system is entirely GPT, with no support for legacy/MBR
>> booting at all (no CSM).
>>
>> Have a smooth Monday,
>>
>> Atiq
>>
>>
>> On Tue, Jul 15, 2025 at 5:56 AM John D Groenveld via openindiana-discuss <
>> openindiana-discuss@openindiana.org> wrote:
>>
>>> In message <CABC65rOmBY6YmMSB=HQdXq=TDN7d1XR+K8m9-__RQPBzhJZeMQ@mail.gmail.com>, Atiq Rahman writes:
>>> >Most likely, I am gonna go with this. It's been a while since I last had a
>>> >Solaris machine. I have to figure out the zpool stuff.
>>>
>>> If you create the pool for OI under FreeBSD, you'll likely need to
>>> disable features with the -d option as FreeBSD's ZFS is more feature
>>> rich.
>>> <URL:
>>> https://man.freebsd.org/cgi/man.cgi?query=zpool-create&apropos=0&sektion=0&manpath=FreeBSD+14.3-RELEASE+and+Ports&arch=default&format=html
>>>
>>> Once you get OI installed on your OI pool, you can enable features:
>>> <URL:https://illumos.org/man/7/zpool-features>
>>>
>>> John
>>> groenveld@acm.org
>>>
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T9fe2cf48232789d6-M0e1116a9284e82c4b10de66b
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #1.1.2: Type: text/html, Size: 6710 bytes --]
[-- Attachment #1.2: text_install_screen.jpg --]
[-- Type: image/jpeg, Size: 155658 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [discuss] Fwd: [OpenIndiana-discuss] Installation on a partition (UEFI System)
2025-08-13 2:17 ` [discuss] Fwd: [OpenIndiana-discuss] Installation on a partition (UEFI System) Atiq Rahman
@ 2025-08-13 6:29 ` Stephan Althaus via illumos-discuss
2025-08-13 7:17 ` [discuss] " Toomas Soome via illumos-discuss
0 siblings, 1 reply; 20+ messages in thread
From: Stephan Althaus via illumos-discuss @ 2025-08-13 6:29 UTC (permalink / raw)
To: discuss
[-- Attachment #1: Type: text/plain, Size: 4800 bytes --]
On 8/13/25 04:17, Atiq Rahman wrote:
Hello!
You can install to an existing pool created with FreeBSD; be sure to use
the "-d" option, as in "zpool create -d <newpool> <partition>", to disable
additional ZFS features. See details here:
https://man.freebsd.org/cgi/man.cgi?query=zpool-create&sektion=8&apropos=0&manpath=FreeBSD+14.3-RELEASE+and+Ports
And an EFI partition should be there, too.
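A minimal sketch of what this pool-creation step might look like from a FreeBSD live environment (the device name da0p4 is hypothetical; adjust to your system):

```shell
# -d creates the pool with all feature flags disabled, so an older or
# different ZFS implementation (such as the one in the OI installer) can
# import it. -m none avoids mounting the new pool over the live system.
zpool create -d -m none newpool /dev/da0p4

# Verify that no feature flags are active before rebooting into the installer.
zpool get all newpool | grep 'feature@'
```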
HTH,
Stephan
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T9fe2cf48232789d6-M6f576759d85ce1f109acb2fb
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2.1: Type: text/html, Size: 9544 bytes --]
[-- Attachment #2.2: text_install_screen.jpg --]
[-- Type: image/jpeg, Size: 155658 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [discuss] [OpenIndiana-discuss] Installation on a partition (UEFI System)
2025-08-13 6:29 ` Stephan Althaus via illumos-discuss
@ 2025-08-13 7:17 ` Toomas Soome via illumos-discuss
2025-08-14 0:53 ` [discuss] zpool create on specified GPT partition Atiq Rahman
0 siblings, 1 reply; 20+ messages in thread
From: Toomas Soome via illumos-discuss @ 2025-08-13 7:17 UTC (permalink / raw)
To: illumos-discuss
[-- Attachment #1: Type: text/plain, Size: 5280 bytes --]
> On 13. Aug 2025, at 09:29, Stephan Althaus via illumos-discuss <discuss@lists.illumos.org> wrote:
>
>> On Tue, Aug 12, 2025 at 2:27 PM Atiq Rahman <atiqcx@gmail.com <mailto:atiqcx@gmail.com>> wrote:
>>> Hi,
>>> Following up again to request confirmation regarding the "partition the disk" option in the installer.
>>>
>>> At present, it’s impractical for me to carry an external storage device solely for OpenIndiana. With other operating systems, I can safely install and test them on my internal SSD without risking my existing setup. Unfortunately, with OI, this specific limitation and uncertainty are creating unnecessary friction and confusion.
>>>
You can create your single illumos partition on OI as well; it does not have to be a whole-disk setup.
>>> Could you please confirm the current state of EFI support — specifically, whether it is possible to safely perform a custom-partition installation on an EFI-only system, using a FreeBSD live USB image with the -d option.
>>>
>>> Thank you,
>>> Atiq
Yes, it is possible. However, if you intend to share the pool with illumos, you need to know the specifics. The illumos GPT partition type in FreeBSD is named apple-zfs (it's actually the UUID of the illumos usr slice from the VTOC). The FreeBSD boot loader searches for a freebsd-zfs partition to boot the FreeBSD system.
In general, I'd suggest using virtualization to get a first glimpse of the OS and learn its details, and only then attempting to build complicated multi-OS/multi-boot setups.
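As an illustration of the partition-type naming above, a hypothetical gpart invocation from FreeBSD might look like this (disk name and size are assumptions):

```shell
# Hypothetical: nvd0 is the target disk, 100G is a placeholder size.
# "apple-zfs" is FreeBSD's alias for GUID 6A898CC3-1DD2-11B2-99A6-080020736631,
# the Solaris/illumos usr-slice type that illumos looks for; by contrast,
# "freebsd-zfs" is what the FreeBSD loader itself searches for.
gpart add -t apple-zfs -l illumos -s 100G nvd0

# Confirm the resulting partition table.
gpart show nvd0
```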
rgds,
toomas
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T9fe2cf48232789d6-M840c3abaeb9b893e30fed36a
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 14474 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] zpool create on specified GPT partition
2025-08-13 7:17 ` [discuss] " Toomas Soome via illumos-discuss
@ 2025-08-14 0:53 ` Atiq Rahman
2025-08-14 1:23 ` Joshua M. Clulow via illumos-discuss
` (2 more replies)
0 siblings, 3 replies; 20+ messages in thread
From: Atiq Rahman @ 2025-08-14 0:53 UTC (permalink / raw)
To: illumos-discuss, Discussion list for OpenIndiana
Cc: illumos-developer, OpenIndiana Developer mailing list
[-- Attachment #1: Type: text/plain, Size: 6303 bytes --]
Hi,
> You can create your single illumos partition on OI as well, it does not
have to be a whole disk setup.
Yep, I would prefer to create the pool using an OI live image. So I created
a new partition with mkpart, using the latest parted on a Linux system for
this purpose:
parted > mkpart illumos 767GB 100%
and then set the partition type to Solaris by launching
$ sudo gdisk /dev/nvme0n1
(set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631")
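The steps above could be scripted non-interactively with sgdisk instead of interactive gdisk (an assumption; the partition index 5 is also assumed, so check your table first):

```shell
# Hypothetical non-interactive equivalent of the steps above: create a
# partition from 767GB to the end of the disk, then tag it with the same
# Solaris/illumos type GUID used in the interactive gdisk session.
sudo parted /dev/nvme0n1 -- mkpart illumos 767GB 100%
sudo parted /dev/nvme0n1 -- print   # note the new partition's number
sudo sgdisk --typecode=5:6A85CF4D-1DD2-11B2-99A6-080020736631 /dev/nvme0n1
```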
I booted using the OI live GUI image (flashed/created using the 07-23 live
USB image from the test dir under hipster).
Time to kill the switch!
I see that parted can list all the partitions and format can list some.
However, not all of them were listed under /dev/dsk or /dev/rdsk; I could
only see p0 to p4 (I guess illumos skipped the Linux partitions).
When I ran `zpool create zfs_main /dev/rdsk/c2t0xxxxxxxxxxd0p4`, it
reported "no such file or directory"! I tried various forms of `zpool
create`; nothing would succeed.
Wondering if this is some NVMe storage support / enumeration issue.
Atiq
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T74729cd92612e3e6-Me58145382d7be8f4abc86331
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 14967 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [discuss] zpool create on specified GPT partition
2025-08-14 0:53 ` [discuss] zpool create on specified GPT partition Atiq Rahman
@ 2025-08-14 1:23 ` Joshua M. Clulow via illumos-discuss
2025-08-14 7:43 ` [OpenIndiana-discuss] " Atiq Rahman
[not found] ` <CABC65rOztb9dAbdpC7xcWhun6Le9uO5NyNvSgkhr7kypNdMXBQ@mail.gmail.com>
[not found] ` <198abc29f87.10236f39f26274.6456493104881031520@zoho.com>
2 siblings, 1 reply; 20+ messages in thread
From: Joshua M. Clulow via illumos-discuss @ 2025-08-14 1:23 UTC (permalink / raw)
To: illumos-discuss
Cc: Discussion list for OpenIndiana, illumos-developer,
OpenIndiana Developer mailing list
On Wed, 13 Aug 2025 at 17:53, Atiq Rahman <atiqcx@gmail.com> wrote:
> I see that parted can list all the partitions and format can list some. However, not all of them were listed under /dev/dsk or /dev/rdsk I could only see p0 to p4 (I guess illumos skipped the linux partitions).
The p* devices are probably not what you're looking for. The p0
device is generally "the whole disk", including access to the
partition table, etc. The p1-4 devices are for access to the classic
MBR primary partitions, of which there can only be four.
If you've created a modern, EFI/GPT style partition table, and
format(8) can see those GPT partitions, then what you want are the s*
or "slice" devices, which is how we refer to them. You should ideally
have one slice device for each GPT partition.
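A quick way to check this from the OI live environment (the cXtYdZ device name below reuses the redacted name from the report and is hypothetical; the slice number is an assumption):

```shell
# List the slice (sN) devices for the disk; each GPT partition should
# have a corresponding slice device.
ls /dev/dsk/c2t0xxxxxxxxxxd0s*

# Create the pool on the slice device, not on a pN (MBR partition) device.
# The slice number here is assumed; verify it with format(8) first.
sudo zpool create zfs_main /dev/dsk/c2t0xxxxxxxxxxd0s6
```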
Cheers.
--
Joshua M. Clulow
http://blog.sysmgr.org
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T74729cd92612e3e6-Mf02168fa42cf780d8de49971
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [OpenIndiana-discuss] [discuss] zpool create on specified GPT partition
2025-08-14 1:23 ` Joshua M. Clulow via illumos-discuss
@ 2025-08-14 7:43 ` Atiq Rahman
2025-08-14 8:31 ` Toomas Soome via illumos-discuss
0 siblings, 1 reply; 20+ messages in thread
From: Atiq Rahman @ 2025-08-14 7:43 UTC (permalink / raw)
To: Discussion list for OpenIndiana
Cc: illumos-discuss, Joshua M. Clulow, illumos-developer,
OpenIndiana Developer mailing list
[-- Attachment #1.1: Type: text/plain, Size: 1762 bytes --]
Hi Joshua,
Thanks.
I found that s0 mapped to the ESP (EFI boot) partition on my system. I also
found that the newly created Solaris partition mapped to s6. So slice
numbers mostly seem to be one less than the partition numbers shown by
`parted`, as can be seen in the attached screenshot.
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T74729cd92612e3e6-M5957fc4fb0ecdf75d59ec7f8
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #1.2: Type: text/html, Size: 3044 bytes --]
[-- Attachment #2: 1000003926.jpg --]
[-- Type: image/jpeg, Size: 3095443 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [OpenIndiana-discuss] [discuss] zpool create on specified GPT partition
2025-08-14 7:43 ` [OpenIndiana-discuss] " Atiq Rahman
@ 2025-08-14 8:31 ` Toomas Soome via illumos-discuss
2025-08-16 5:50 ` [oi-dev] " Atiq Rahman
0 siblings, 1 reply; 20+ messages in thread
From: Toomas Soome via illumos-discuss @ 2025-08-14 8:31 UTC (permalink / raw)
To: illumos-discuss
Cc: Discussion list for OpenIndiana, Joshua M. Clulow,
illumos-developer, OpenIndiana Developer mailing list
[-- Attachment #1: Type: text/plain, Size: 3838 bytes --]
> On 14. Aug 2025, at 10:43, Atiq Rahman <atiqcx@gmail.com> wrote:
>
> Hi Joshua,
> Thanks.
>
> I found that s0 mapped to ESP (boot EFI) partition on my system. I also found that the newly created solaris partition mapped to s6. So, mostly slice numbers seem to be 1 less than part# showed by `parted` that can be found in the photo screenshot attached.
>
>
The “native” tools in illumos (and Solaris) are format, prtvtoc, fmthard and fdisk; they report disk partition/slice numbers as they can actually be found in the system.
fdisk/MBR partitions are named p1-p4 (as already written), with p0 historically used to access the whole disk (so you can actually write the MBR record).
format and prtvtoc/fmthard use the term “slice” for partitions, and slices are numbered from 0. In the case of GPT partitioning, the device name for accessing the whole disk is the device name without the trailing slice component. The confusing part is that if the disk has no GPT table, then there is no device name without a slice component.
When using other software, you need to pay attention to slice/partition numbering (as you have discovered already).
Fun fact: FreeBSD has switched around the terms “slice” and “partition”. They are using “slice” for the fdisk/MBR case and “partition” for everything else.
For the time being, illumos installs its EFI boot program on the ESP as efi/boot/bootx64.efi. If you want to share the ESP between different OSes, you need to back up the existing efi/boot/bootx64.efi, put the illumos version somewhere like efi/illumos/loader64.efi, restore the original, and create the corresponding UEFI boot manager configuration. At this time, we cannot create a UEFI boot manager entry from inside illumos, so we rely on the default path.
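The ESP-sharing procedure described above might look like this from a Linux environment using efibootmgr (the mount point, disk, partition number, and backup path are all assumptions):

```shell
# Assumptions: the ESP is mounted at /boot/efi and is partition 1 of
# /dev/nvme0n1; paths below are relative to the ESP root.
cd /boot/efi

# 1. Keep a copy of the loader that illumos installed as the default.
mkdir -p EFI/illumos
cp EFI/boot/bootx64.efi EFI/illumos/loader64.efi

# 2. Restore the previous default loader from a backup made beforehand
#    (hypothetical backup location).
cp /root/bootx64.efi.orig EFI/boot/bootx64.efi

# 3. Register a dedicated UEFI boot manager entry for illumos.
sudo efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label "illumos" --loader '\EFI\illumos\loader64.efi'
```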
rgds,
toomas
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T74729cd92612e3e6-Mc86a7a5a29280f13ac258787
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 5268 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [oi-dev] [OpenIndiana-discuss] [discuss] zpool create on specified GPT partition
2025-08-14 8:31 ` Toomas Soome via illumos-discuss
@ 2025-08-16 5:50 ` Atiq Rahman
0 siblings, 0 replies; 20+ messages in thread
From: Atiq Rahman @ 2025-08-16 5:50 UTC (permalink / raw)
To: OpenIndiana Developer mailing list
Cc: illumos-discuss, Toomas Soome, Discussion list for OpenIndiana,
illumos-developer
[-- Attachment #1: Type: text/plain, Size: 2249 bytes --]
> FreeBSD has switched around the terms “slice” and “partition”. They are
> using “slice” for fdisk/mbr case and “partition” for everything else.
Thank you for sharing the fun fact!
Atiq
On Thu, Aug 14, 2025 at 1:31 AM Toomas Soome via oi-dev <
oi-dev@openindiana.org> wrote:
>
>
> On 14. Aug 2025, at 10:43, Atiq Rahman <atiqcx@gmail.com> wrote:
>
> Hi Joshua,
> Thanks.
>
> I found that s0 mapped to ESP (boot EFI) partition on my system. I also
> found that the newly created solaris partition mapped to s6. So, mostly
> slice numbers seem to be 1 less than part# showed by `parted` that can be
> found in the photo screenshot attached.
>
>
>
>
> The “native” tools in illumos (and solaris) are format, prtvtoc, fmthard
> and fdisk; they tell you disk partition/slices numbers as they actually can
> be found in the system.
>
> fdisk/mbr partitions are named p1-p4 (as already written), with p0
> historically used to access all of the disk (so you can actually write down
> the MBR record).
>
> format and prtvtoc/fmthard are using term “slice” for partitions; and
> slices are numbered from 0. In case of GPT partitioning, the device name to
> access all of the disk is the device name without trailing slice component.
> The confusing part is, if the disk has no GPT table, then there is no
> device name without slice component.
>
> When using other software, you need to pay attention to slice/partition
> numbering (as you have discovered already).
>
> For the time being, illumos installs the EFI boot program on the ESP as
> efi/boot/bootx64.efi — if you want to share the ESP between different OSes,
> you need to back up the existing efi/boot/bootx64.efi, put the illumos
> version at, say, efi/illumos/loader64.efi, restore the original and create a
> corresponding UEFI boot manager configuration. At this time, we cannot
> create a UEFI boot manager config from inside illumos, so we rely on the
> default.
>
> rgds,
> toomas
>
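For reference, the ESP-sharing procedure described above could be sketched roughly as follows. This is illustrative only: it assumes a Linux environment with efibootmgr, an ESP mounted at /boot/efi as the first partition of /dev/nvme0n1, and hypothetical source paths — adjust everything for your own layout.

```
# Back up the existing default EFI boot program
cp /boot/efi/efi/boot/bootx64.efi /boot/efi/efi/boot/bootx64.efi.bak

# Put the illumos loader in its own directory on the ESP
mkdir -p /boot/efi/efi/illumos
cp /path/to/illumos/bootx64.efi /boot/efi/efi/illumos/loader64.efi

# Restore the original default loader (if the illumos install overwrote it)
cp /boot/efi/efi/boot/bootx64.efi.bak /boot/efi/efi/boot/bootx64.efi

# Create a UEFI boot manager entry pointing at the illumos loader
# (-d/-p select the disk and the ESP partition number)
efibootmgr -c -d /dev/nvme0n1 -p 1 -L "illumos" -l '\EFI\illumos\loader64.efi'
```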
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T74729cd92612e3e6-Mf6681cb570f1356dba09f5a8
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 3806 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] GPT Partitioning for Illumos/Solaris
[not found] ` <198abc29f87.10236f39f26274.6456493104881031520@zoho.com>
@ 2025-08-17 1:43 ` Atiq Rahman
2025-08-17 10:06 ` Toomas Soome via illumos-discuss
2025-08-20 16:33 ` [discuss] " Atiq Rahman
1 sibling, 1 reply; 20+ messages in thread
From: Atiq Rahman @ 2025-08-17 1:43 UTC (permalink / raw)
To: Eric J Bowman
Cc: illumos-discuss, Discussion list for OpenIndiana,
illumos-developer, OpenIndiana Developer mailing list
[-- Attachment #1: Type: text/plain, Size: 2249 bytes --]
Hi Eric,
> The "Solaris Root" partition type is relevant to UFS, not ZFS. On UEFI,
set s0/p1 to be an ESP, "boot, hidden, esp" flags and format it fat32
Yep, I have done that quite a while ago. Linux is booting alright with that.
> your "Solaris Reserved" should be s8/p9, 8Mib, -1MiB from end of disk
. That's for legacy / MBR partitions. Mine is GPT. I don't need that or any
other partition for Solaris. I only need one single partition of zfs /
illumos and that's working so far.
> /EFI/OpenIndiana/bootx64.efi (copy of loader.efi)
I put mine on /EFI/Solaris/
I might try rEFInd in the future. Right now, I'm booting everything using
Linux's bootloader.
Thanks,
Atiq
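Booting the illumos loader from the Linux bootloader can be done with a GRUB chainloader entry along these lines. This is a sketch under assumptions: the loader was copied to /EFI/Solaris/ as mentioned above, and XXXX-XXXX stands in for the ESP's filesystem UUID (left as a placeholder).

```
menuentry "OpenIndiana (illumos loader)" {
    insmod part_gpt
    insmod fat
    # XXXX-XXXX: filesystem UUID of the ESP (placeholder)
    search --no-floppy --fs-uuid --set=root XXXX-XXXX
    chainloader /EFI/Solaris/bootx64.efi
}
```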
On Thu, Aug 14, 2025 at 8:25 PM Eric J Bowman <mellowmutt@zoho.com> wrote:
> >
> > parted > mkpart illumos 767GB 100%
> >
> > and then set the partition type to solaris launching,
> >
> > $ sudo gdisk /dev/nvme0n1
> >
> > (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
> >
>
>
> The "Solaris Root" partition type is relevant to UFS, not ZFS. On UEFI,
> set s0/p1 to be an ESP, "boot, hidden, esp" flags and format it fat32, your
> "Solaris Reserved" should be s8/p9, 8Mib, -1MiB from end of disk. Your ZFS
> partition should be set to ZFS for illumos, FreeBSD ZFS for freebsd. Being
> bootable is a property of the ZFS filesystem, not its partition.
>
> /EFI
> /EFI/OpenIndiana/bootx64.efi (copy of loader.efi)
> /EFI/boot
>
> I put rEFInd Plus in /EFI/boot, with a backup of its bootx64.efi in case
> it gets overwritten.
>
> /EFI/boot/drivers/x64_zfs.efi
>
> This driver will allow rEFInd to stub-load loader64.efi from your ZFS
> partition on most firmware, some will still need the
> /EFI/OpenIndiana/bootx64.efi. I don't know if the string "OpenIndiana" is
> registered, this one is:
>
> /EFI/FreeBSD
>
> That's the gist of multiboot on UEFI, more work if you want more than one
> illumos or freebsd partition, but they will coexist peacefully.
>
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T6c1d3abbcce146a4-Mbe2f97f9fbca160e3826c0f8
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 5403 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [discuss] GPT Partitioning for Illumos/Solaris
2025-08-17 1:43 ` [discuss] GPT Partitioning for Illumos/Solaris Atiq Rahman
@ 2025-08-17 10:06 ` Toomas Soome via illumos-discuss
2025-09-04 5:22 ` [discuss] " Atiq Rahman
0 siblings, 1 reply; 20+ messages in thread
From: Toomas Soome via illumos-discuss @ 2025-08-17 10:06 UTC (permalink / raw)
To: illumos-discuss
Cc: Eric J Bowman, Discussion list for OpenIndiana, illumos-developer,
OpenIndiana Developer mailing list
[-- Attachment #1: Type: text/plain, Size: 3082 bytes --]
> On 17. Aug 2025, at 04:43, Atiq Rahman <atiqcx@gmail.com> wrote:
>
> Hi Eric,
> > The "Solaris Root" partition type is relevant to UFS, not ZFS. On UEFI, set s0/p1 to be an ESP, "boot, hidden, esp" flags and format it fat32
>
> Yep, I have done that quite a while ago. Linux is booting alright with that.
>
> > your "Solaris Reserved" should be s8/p9, 8Mib, -1MiB from end of disk
> . That's for legacy / MBR partitions. Mine is GPT. I don't need that or any other partition for Solaris. I only need one single partition of zfs / illumos and that's working so far.
It is created automatically on GPT and is used to store a fabricated disk identifier in case the disk does not provide one itself. It originates from Solaris, and since illumos is a fork of Solaris, it is still there. An MBR+VTOC setup uses space from the ‘alternates’ slice for the same purpose.
rgds,
toomas
> > /EFI/OpenIndiana/bootx64.efi (copy of loader.efi)
> I put mine on /EFI/Solaris/
>
> I might try rEFInd in future. Right now, I'm booting everything using Linux's bootloader.
>
> Thanks,
> Atiq
>
> On Thu, Aug 14, 2025 at 8:25 PM Eric J Bowman <mellowmutt@zoho.com <mailto:mellowmutt@zoho.com>> wrote:
>> >
>> > parted > mkpart illumos 767GB 100%
>> >
>> > and then set the partition type to solaris launching,
>> >
>> > $ sudo gdisk /dev/nvme0n1
>> >
>> > (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
>> >
>>
>> The "Solaris Root" partition type is relevant to UFS, not ZFS. On UEFI, set s0/p1 to be an ESP, "boot, hidden, esp" flags and format it fat32, your "Solaris Reserved" should be s8/p9, 8Mib, -1MiB from end of disk. Your ZFS partition should be set to ZFS for illumos, FreeBSD ZFS for freebsd. Being bootable is a property of the ZFS filesystem, not its partition.
>>
>> /EFI
>> /EFI/OpenIndiana/bootx64.efi (copy of loader.efi)
>> /EFI/boot
>>
>> I put rEFInd Plus in /EFI/boot, with a backup of its bootx64.efi in case it gets overwritten.
>>
>> /EFI/boot/drivers/x64_zfs.efi
>>
>> This driver will allow rEFInd to stub-load loader64.efi from your ZFS partition on most firmware, some will still need the /EFI/OpenIndiana/bootx64.efi. I don't know if the string "OpenIndiana" is registered, this one is:
>>
>> /EFI/FreeBSD
>>
>> That's the gist of multiboot on UEFI, more work if you want more than one illumos or freebsd partition, but they will coexist peacefully.
>
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T6c1d3abbcce146a4-Mccbe1a848c176b8fa1442fc8
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 6337 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] Re: [oi-dev] [developer] Install into existing zfs pool on a GPT partition
[not found] ` <2C6FD009-8737-489C-8B30-07BEA38832B3@me.com>
@ 2025-08-19 21:40 ` Atiq Rahman
2025-08-20 16:26 ` Atiq Rahman
2025-08-20 18:38 ` Atiq Rahman
0 siblings, 2 replies; 20+ messages in thread
From: Atiq Rahman @ 2025-08-19 21:40 UTC (permalink / raw)
To: Toomas Soome, OpenIndiana Developer mailing list
Cc: illumos-developer, illumos-discuss
[-- Attachment #1: Type: text/plain, Size: 20657 bytes --]
Hi Toomas,
booted into live env:
> zpool import -R /mnt rpool
did that
/mnt/var is pretty much empty.
----
# ls -a /mnt/var
.
..
user
----
> zfs create -V 8G -o volblocksize=4k -o primarycache=metadata rpool/swap
did that.
> make sure your /mnt/etc/vfstab has line
here's vfstab
----
# cat /mnt/etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
/devices - /devices devfs - no -
/proc - /proc proc - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
fd - /dev/fd fd - no -
swap - /tmp tmpfs - yes -
----
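Note that the vfstab above is still missing the swap entry Toomas asked for; for the swap zvol to be used at boot, it would need this additional line:

```
/dev/zvol/dsk/rpool/swap	-	-	swap	-	no	-
```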
> zfs create -V size -o primarycache=metadata rpool/dump
did that
However, following throws error,
> dumpadm -r /mnt -d /dev/zvol/dsk/rpool/dump
dumpadm: failed to open /dev/dump: No such file or directory
I also tried dumpadm after booting into the installed version from the
internal SSD; it gives the same error.
# dumpadm -d /dev/zvol/dsk/rpool/dump
dumpadm: failed to open /dev/dump: No such file or directory
Additionally, fs structure,
----
# ls -a /mnt/
.
..
.cdrom
bin
boot
dev
devices
etc
export
home
jack
kernel
lib
media
mnt
net
opt
platform
proc
reconfigure
root
rpool
save
sbin
system
tmp
usr
var
---
It feels like the installer did not create the filesystem layout properly.
Should I reinstall?
Thanks,
Atiq
On Mon, Aug 18, 2025 at 2:02 AM Toomas Soome via oi-dev <
oi-dev@openindiana.org> wrote:
>
>
> On 18. Aug 2025, at 11:36, Atiq Rahman <atiqcx@gmail.com> wrote:
>
> Hi,
> Thanks so much for explaining the bits.
>
> > However, the system goes into maintenance mode after booting. the root
> password I specified during installation wasn't set. It sets the root user
> with the default maintenance password.
>
> How do I make it run the remaining parts of the installer and complete it so
> I can at least get OI in text mode?
>
> Best!
>
> Atiq
>
>
>
> First of all, we would need to understand why it dropped to maintenance
> mode. If you can’t get in as root (in maintenance mode), then I’d suggest
> booting to the live environment with the install medium, then
>
> zpool import -R /mnt rpool — now you have your OS instance available in
> /mnt; from it you should see log files in /mnt/var/svc/log and most likely
> some of the system-* files have recorded the reason.
>
> The install to existing pool does not create swap and dump devices for you
>
> zfs create -V size -o volblocksize=4k -o primarycache=metadata rpool/swap
> and make sure your /mnt/etc/vfstab has line:
>
>
> /dev/zvol/dsk/rpool/swap - - swap - no -
>
> zfs create -V size -o primarycache=metadata rpool/dump
> dumpadm -r /mnt -d */dev/zvol/dsk/rpool/dump*
>
> other than the above depends on what you will find in the logs.
>
> rgds,
> toomas
>
>
> On Sun, Aug 17, 2025 at 2:50 AM Toomas Soome via oi-dev <
> oi-dev@openindiana.org> wrote:
>
>> On 16. Aug 2025, at 08:31, Atiq Rahman <atiqcx@gmail.com> wrote:
>> > bootadm update-archive -R /mntpointfor_rootfs will create it.
>> This indeed created it. Now, I am able to boot into it without the USB
>> stick.
>>
>> > the finalization steps are likely missing.
>> Yep, more of the final steps were missed by the installer.
>>
>> Regarding the Install log (bottom of the email after this section).
>> 1. set_partition_active: is it running stuff for legacy MBR even though
>> it is a UEFI system?
>>
>> it is likely the installer logic may need some updating.
>>
>> 2. is /usr/sbin/installboot a script? If yes, I plan to run it manually
>> next time.
>>
>>
>> no. You can still run it manually, or use bootadm install-bootloader
>> (which runs installboot). Note that installboot assumes that the
>> existing boot programs are from illumos; if not, you will see an error
>> (in this case, the multiboot data structure is not found and therefore the
>> program is not “ours”).
>>
>> 3. EFI/Boot/bootia32.efi: why is it installing 32 bit boot file? Mine is
>> amd64 system.
>>
>>
>> Because bootia32.efi (loader32.efi) implements support for systems which
>> start with 32-bit UEFI firmware but whose CPU can be switched to 64-bit
>> mode. Granted, most systems do not need it, but one might also want to
>> transfer this boot disk, so it is nice to have it prepared. And since the
>> ESP has spare space, it does not really hurt to have it installed. In the
>> same way we install both the UEFI boot programs in the ESP and the BIOS
>> boot programs into the pmbr and the OS partition start - this way it really
>> does not matter whether you use CSM or not; you can still boot.
>>
>> 4. Finally, why did _curses.error: endwin() returned ERR ?
>>
>>
>> from man endwin:
>>
>> • *endwin* returns an error if
>>
>> • the terminal was not initialized, or
>>
>> • *endwin* is called more than once without updating the
>> screen, or
>>
>> • *reset_shell_mode*(3X) returns an error.
>>
>> So some debugging is needed there.
>>
>> rgds,
>> toomas
>>
>>
>>
>> Install log is attached here
>> ------------------------
>> 2025-08-15 16:00:35,533 - INFO : text-install:120 **** START ****
>> 2025-08-15 16:00:36,166 - ERROR : ti_install_utils.py:428 Error occured
>> during zpool call: no pools available to import
>>
>> 2025-08-15 16:00:38,146 - ERROR : ti_install_utils.py:428 Error occured
>> during zpool call: no pools available to import
>>
>> 2025-08-15 09:11:38,777 - ERROR : install_utils.py:411
>> be_do_installboot_walk: child 0 of 1 device c2t00A075014881A463d0s6
>> 2025-08-15 09:11:38,927 - ERROR : install_utils.py:411 Command:
>> "/usr/sbin/installboot -F -m -f -b /a/boot
>> /dev/rdsk/c2t00A075014881A463d0s6"
>> 2025-08-15 09:11:38,927 - ERROR : install_utils.py:411 Output:
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>> written for /dev/rdsk/c2t00A075014881A463d0s6, 280 sectors starting at 1024
>> (abs 1500001280)
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 /a/boot/pmbr is
>> newer than one in /dev/rdsk/c2t00A075014881A463d0s6
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 stage1 written
>> to slice 6 sector 0 (abs 1500000256)
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>> written to /tmp/ibootBxW1oa/EFI/Boot/bootx64.efi
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>> written to /tmp/ibootBxW1oa/EFI/Boot/bootia32.efi
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 /a/boot/pmbr is
>> newer than one in /dev/rdsk/c2t00A075014881A463d0p0
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 stage1 written
>> to slice 0 sector 0 (abs 0)
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Errors:
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Error reading
>> bootblock from /tmp/ibootBxW1oa/EFI/Boot/bootia32.efi
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Unable to find
>> multiboot header
>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Error reading
>> bootblock from /tmp/ibootBxW1oa/EFI/Boot/bootx64.efi
>> 2025-08-15 09:11:38,967 - ERROR : ti_install.py:680 One or more ICTs
>> failed. See previous log messages
>> 2025-08-15 09:12:54,907 - ERROR : text-install:254 Install Profile:
>> Pool: rpool
>> BE name: solaris
>> Overwrite boot configuration: True
>> NIC None:
>> Type: none
>> System Info:
>> Hostname: solaris
>> TZ: America - US - America/Los_Angeles
>> Time Offset: 0:00:17.242116
>> Keyboard: None
>> Locale: en_US.UTF-8
>> User Info(root):
>> Real name: None
>> Login name: root
>> Is Role: False
>> User Info():
>> Real name:
>> Login name:
>> Is Role: False
>> None
>> 2025-08-15 09:12:54,910 - ERROR : text-install:255 Traceback (most
>> recent call last):
>> File "/usr/bin/text-install", line 247, in <module>
>> cleanup_curses()
>> File "/usr/bin/text-install", line 93, in cleanup_curses
>> curses.endwin()
>> _curses.error: endwin() returned ERR
>>
>> 2025-08-15 09:12:54,911 - INFO : text-install:99 **** END ****
>>
>>
>>
>> Thanks,
>>
>> Atiq
>>
>>
>> On Thu, Aug 14, 2025 at 1:54 AM Toomas Soome via oi-dev <
>> oi-dev@openindiana.org> wrote:
>>
>>>
>>>
>>> On 14. Aug 2025, at 11:26, Atiq Rahman <atiqcx@gmail.com> wrote:
>>>
>>> So my disk name came out to be a long one: "c2t00A075014881A463d0" and I
>>> am creating a fresh new pool in s6 (7th slice).
>>> I am trying to get away with my mid week sorcery!
>>>
>>> Don't judge me. I barely concocted a majestic `zpool create` command
>>> doing few searches online (been a while, I don't remember any of the
>>> arguments)
>>>
>>> $ sudo zpool create -o ashift=12 -O compression=lz4 -O atime=off -O
>>> normalization=formD -O mountpoint=none -O canmount=off rpool
>>> /dev/dsk/c2t00A075014881A463d0s6
>>>
>>> I had a tiny bit of hesitation on putting in the mount restriction
>>> switches, thinking whether it will prevent the *BSD Loader* from
>>> looking up inside the pool.
>>>
>>> Then before running the text-installer (PS: no GUI yet for me, iGPU:
>>> radeon not supported, dGPU: Nvidia RTX 4060, driver does modeset unloading)
>>> I take the zfs pool off,
>>>
>>> $ sudo zpool export rpool
>>>
>>> pondered though if that would be a good idea before running the
>>> installer. And then, I traveled to light blue OpenIndiana land.
>>>
>>> $ sudo /usr/bin/text-install
>>>
>>> I got away with a lot of firsts today probably in a decade (zpool create
>>> worked, was able to treasure hunt the partition/slice that I created from
>>> another FOSS OS with help) so while that installer is running I am almost
>>> getting overconfident.
>>>
>>> An error in the end on /tmp/install_log put a dent on that just a tiny
>>> bit. Screenshot attached: one of the py scripts invoked by
>>> /sbin/install-finish: osol_install/ict.py has a TypeError which might have
>>> caused due to NIC not being supported. I thought for a second whether
>>> installer missing anything important by terminating after that TypeError.
>>>
>>> Life goes on: I created an entry on EFI boot manager pointing to
>>> /Boot/bootx64.efi
>>> As I try to boot, the BSD Loader complains that the rootfs module is not
>>> found.
>>>
>>>
>>> As your install did end up with errors, the finalization steps are
>>> likely missing.
>>>
>>> bootadm update-archive -R /mntpointfor_rootfs will create it.
>>>
>>>
>>> While running some beadm command from OI live image I also noticed it
>>> said few times that
>>> Boot menu config missing. I guess I need to mount /boot/efi
>>>
>>>
>>> no, beadm manages boot environments, and one step there is maintaining
>>> the BE menu list for the bootloader's boot-environment menu. beadm
>>> create/destroy/activate will update this menu (it's located in <pool
>>> dataset>/boot/menu.lst).
>>>
>>>
>>> As I am still learning how to make this work, any comment / suggestion
>>> is greatly appreciated.
>>>
>>> Next, I will follow this after above is resolved,
>>>
>>> per
>>> https://docs.openindiana.org/handbook/getting-started/#install-openindiana-to-existing-zfs-pool
>>> ,
>>> > As there's possibly already another OS instance installed to the
>>> selected ZFS pool, no additional users or filesystems (like `rpool/export`)
>>> are created during installation. Swap and dump ZFS volumes also are not
>>> created and should be added manually after installation. Only root user
>>> will be available in created boot environment.
>>>
>>>
>>> using an existing pool implies you need to take care of many things
>>> yourself; IMO it leaves just a bit too many things undone, but the
>>> thing is, because it ends up with things to be done by the operator,
>>> it is definitely not the best option for someone trying to do their first
>>> install. It really is for “advanced mode”, when you know what to do, and
>>> how, to get the setup completed.
>>>
>>> I have not used the OI installer for some time, but I do believe you
>>> should be able to point the installer at an existing slice to create a
>>> pool there (not reuse an existing pool). Or, as I have suggested before,
>>> just use a VM for your first setup to learn from.
>>>
>>> rgds,
>>> toomas
>>>
>>> which means our installation process (into an existing zfs pool) missed a
>>> few steps of the usual installation, even though we started with a fresh
>>> new zfs pool and no other OS instance.
>>>
>>> Appreciate the help of everyone who chimed in: Joshua, Toomas, John and
>>> others to help me discover the magic to install into specific partition
>>> instead of using the whole disk.
>>>
>>>
>>> Cheers!
>>>
>>> Atiq
>>>
>>>
>>> On Wed, Aug 13, 2025 at 5:53 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>
>>>> Hi,
>>>> > You can create your single illumos partition on OI as well, it does
>>>> not have to be a whole disk setup.
>>>>
>>>> Yep, I would prefer to create the pool using an OI live image. So, I
>>>> created a new partition from mkpart using latest parted from a linux system
>>>> for this purpose,
>>>>
>>>> parted > mkpart illumos 767GB 100%
>>>>
>>>> and then set the partition type to solaris launching,
>>>>
>>>> $ sudo gdisk /dev/nvme0n1
>>>>
>>>> (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
>>>>
>>>> I booted using OI live gui image (flashed / created it using 07-23 live
>>>> usb image from test dir under hipster),
>>>> Time to kill the switch!
>>>> I see that parted can list all the partitions and format can list some.
>>>> However, not all of them were listed under /dev/dsk or /dev/rdsk I could
>>>> only see p0 to p4 (I guess illumos skipped the linux partitions).
>>>>
>>>> However, when I ran `zpool create zfs_main
>>>> /dev/rdsk/c2t0xxxxxxxxxxd0p4`, it reported "no such file or directory"!
>>>> I tried various versions of `zpool create`; nothing would succeed!
>>>>
>>>> Wondering if this is some nvme storage support / enumeration issue.
>>>>
>>>> Atiq
>>>>
>>>>
>>>> On Wed, Aug 13, 2025 at 12:18 AM Toomas Soome via illumos-discuss <
>>>> discuss@lists.illumos.org> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 13. Aug 2025, at 09:29, Stephan Althaus via illumos-discuss <
>>>>> discuss@lists.illumos.org> wrote:
>>>>>
>>>>> On 8/13/25 04:17, Atiq Rahman wrote:
>>>>>
>>>>> Copying this to illumos discussion as well if anyone else knows.
>>>>>
>>>>> ---------- Forwarded message ---------
>>>>> From: Atiq Rahman <atiqcx@gmail.com>
>>>>> Date: Tue, Aug 12, 2025 at 6:23 PM
>>>>> Subject: Re: [OpenIndiana-discuss] Installation on a partition (UEFI
>>>>> System)
>>>>> To: Discussion list for OpenIndiana <
>>>>> openindiana-discuss@openindiana.org>
>>>>> Cc: John D Groenveld <groenveld@acm.org>
>>>>>
>>>>> Hi,
>>>>> I tried a live usb text installer again. Right after the launch text
>>>>> installer shows following as second option on the bottom of the screen,
>>>>> *F5_InstallToExistingPool*
>>>>>
>>>>> Screenshot is below. If you zoom in, you will see the option at the
>>>>> top of the blue background in the bottom part of the image.
>>>>>
>>>>> <text_install_screen.jpg>
>>>>>
>>>>> Can I partition and create a pool using FreeBSD as you mentioned. And,
>>>>> then use this option to install on that pool.
>>>>>
>>>>> *Note that my system is EFI Only (no legacy / no CSM).*
>>>>>
>>>>> Sincerely,
>>>>>
>>>>> Atiq
>>>>>
>>>>>
>>>>> On Tue, Aug 12, 2025 at 2:27 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>> Following up again to request confirmation regarding the "partition
>>>>>> the disk" option in the installer.
>>>>>>
>>>>>> At present, it’s impractical for me to carry an external storage
>>>>>> device solely for OpenIndiana. With other operating systems, I can safely
>>>>>> install and test them on my internal SSD without risking my existing setup.
>>>>>> Unfortunately, with OI, this specific limitation and uncertainty are
>>>>>> creating unnecessary friction and confusion.
>>>>>>
>>>>>>
>>>>>
>>>>> You can create your single illumos partition on OI as well; it does
>>>>> not have to be a whole-disk setup.
>>>>>
>>>>>
>>>>> Could you please confirm the current state of EFI support —
>>>>>> specifically, whether it is possible to safely perform a custom-partition
>>>>>> installation on an EFI-only system, using a FreeBSD live USB image with the
>>>>>> -d option.
>>>>>>
>>>>>> Thank you,
>>>>>> Atiq
>>>>>>
>>>>>
>>>>>
>>>>> yes, it is possible. However, if you intend to share the pool with
>>>>> illumos, you need to know the specifics. The illumos GPT partition type
>>>>> in FreeBSD is named apple-zfs (it's actually the UUID of the illumos usr
>>>>> slice from the VTOC). The FreeBSD boot loader searches for a freebsd-zfs
>>>>> partition to boot the FreeBSD system.
>>>>>
>>>>> In general, I’d suggest using virtualization to get a first glimpse of
>>>>> the OS, to learn its details, and then attempting to build
>>>>> complicated multi-OS/multi-boot setups.
>>>>>
>>>>> rgds,
>>>>> toomas
>>>>>
>>>>>
>>>>>>
>>>>>> On Mon, Aug 4, 2025 at 3:54 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>>>
>>>>>>> Hi John,
>>>>>>> That's very helpful..
>>>>>>>
>>>>>>> So, even though I choose "Partition the disk (MBR)" on the text
>>>>>>> installer, it should work fine on my UEFI System after I set up the pool
>>>>>>> using FreeBSD live image.
>>>>>>> Right? I get confused since it mentions "MBR" next to it and my
>>>>>>> System is entirely GPT, no support for legacy / MBR booting at all (no CSM).
>>>>>>>
>>>>>>> Have a smooth Monday,
>>>>>>>
>>>>>>> Atiq
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Jul 15, 2025 at 5:56 AM John D Groenveld via
>>>>>>> openindiana-discuss <openindiana-discuss@openindiana.org> wrote:
>>>>>>>
>>>>>>>> In message <CABC65rOmBY6YmMSB=HQdXq=
>>>>>>>> TDN7d1XR+K8m9-__RQPBzhJZeMQ@mail.gmail.com>
>>>>>>>> , Atiq Rahman writes:
>>>>>>>> >Most likely, I am gonna go with this. It's been a while since I
>>>>>>>> last had a
>>>>>>>> >Solaris machine. I have to figure out the zpool stuff.
>>>>>>>>
>>>>>>>> If you create the pool for OI under FreeBSD, you'll likely need to
>>>>>>>> disable features with the -d option as FreeBSD's ZFS is more feature
>>>>>>>> rich.
>>>>>>>> <URL:
>>>>>>>> https://man.freebsd.org/cgi/man.cgi?query=zpool-create&apropos=0&sektion=0&manpath=FreeBSD+14.3-RELEASE+and+Ports&arch=default&format=html
>>>>>>>>
>>>>>>>> Once you get OI installed on your OI pool, you can enable features:
>>>>>>>> <URL:https://illumos.org/man/7/zpool-features>
>>>>>>>>
>>>>>>>> John
>>>>>>>> groenveld@acm.org
>>>>>>>>
>>>>>>> Hello!
>>>>>
>>>>> You can install to an existing pool created with FreeBSD; be sure to
>>>>> use the "-d" option, as in "zpool create -d <newpool> <partition>", to
>>>>> disable additional ZFS features. See details here:
>>>>>
>>>>>
>>>>> https://man.freebsd.org/cgi/man.cgi?query=zpool-create&sektion=8&apropos=0&manpath=FreeBSD+14.3-RELEASE+and+Ports
>>>>>
>>>>> And an EFI partition should be there, too.
>>>>>
>>>>> HTH,
>>>>>
>>>>> Stephan
>>>>>
>
>
> _______________________________________________
> oi-dev mailing list
> oi-dev@openindiana.org
> https://openindiana.org/mailman/listinfo/oi-dev
>
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T240e6222b19f3259-M57ba9985887ec2930b383436
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 43170 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] Re: [oi-dev] [developer] Install into existing zfs pool on a GPT partition
2025-08-19 21:40 ` [discuss] Re: [oi-dev] [developer] Install into existing zfs pool on a " Atiq Rahman
@ 2025-08-20 16:26 ` Atiq Rahman
2025-08-20 18:30 ` John D Groenveld via illumos-discuss
2025-08-20 18:38 ` Atiq Rahman
1 sibling, 1 reply; 20+ messages in thread
From: Atiq Rahman @ 2025-08-20 16:26 UTC (permalink / raw)
To: Toomas Soome, OpenIndiana Developer mailing list
Cc: illumos-developer, illumos-discuss
[-- Attachment #1: Type: text/plain, Size: 21277 bytes --]
Hi,
I have an idea as a temporary workaround.
Say I install OI on an external SSD using the install-to-whole-disk (GPT)
option. Then I zfs send/receive the pool to a specific GPT partition on the
internal disk. And then I update paths, install the bootloader, and run
update-archive (and update the root path for the BSD Loader) so it uses the
internal disk and I don't have to carry the external disk anymore.
Will this work? Did anyone ever try?
Best!
Atiq
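In principle, this kind of migration is what zfs send/receive is for. A rough sketch of the idea follows — the pool name "newpool", the snapshot name, and the target slice are hypothetical examples, and the bootloader/boot-archive steps at the end are exactly the ones discussed earlier in this thread, not a tested recipe:

```
# On the external-SSD install, snapshot everything recursively
zfs snapshot -r rpool@migrate

# Create the target pool on the internal GPT partition
# (slice name is an example; use your actual device)
zpool create -o ashift=12 newpool /dev/dsk/c2t00A075014881A463d0s6

# Replicate the pool contents, including all datasets and properties
zfs send -R rpool@migrate | zfs receive -F newpool

# Then: bootadm install-bootloader against the new pool,
# bootadm update-archive -R <mountpoint of new BE>,
# and fix the bootfs property / boot menu as needed
```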
On Tue, Aug 19, 2025 at 2:40 PM Atiq Rahman <atiqcx@gmail.com> wrote:
> Hi Toomas,
> booted into live env:
>
> > zpool import -R /mnt rpool
> did that
>
> /mnt/var is pretty much empty.
> ----
> # ls -a /mnt/var
> .
> ..
> user
> ----
>
>
> > zfs create -V 8G -o volblocksize=4k -o primarycache=metadata rpool/swap
> did that.
>
> > make sure your /mnt/etc/vfstab has line
> here's vfstab
> ----
> # cat /mnt/etc/vfstab
> #device device mount FS fsck mount mount
> #to mount to fsck point type pass at boot options
> /devices - /devices devfs - no -
> /proc - /proc proc - no -
> ctfs - /system/contract ctfs - no -
> objfs - /system/object objfs - no -
> sharefs - /etc/dfs/sharetab sharefs - no -
> fd - /dev/fd fd - no -
> swap - /tmp tmpfs - yes -
> ----
>
> > zfs create -V size -o primarycache=metadata rpool/dump
> did that
>
> However, following throws error,
> > dumpadm -r /mnt -d /dev/zvol/dsk/rpool/dump
> dumpadm: failed to open /dev/dump: No such file or directory
>
> I also tried dumpadm after booting into the installed version from the
> internal SSD; it gives the same error.
> # dumpadm -d /dev/zvol/dsk/rpool/dump
> dumpadm: failed to open /dev/dump: No such file or directory
>
> Additionally, fs structure,
>
> ----
> # ls -a /mnt/
> .
> ..
> .cdrom
> bin
> boot
> dev
> devices
> etc
> export
> home
> jack
> kernel
> lib
> media
> mnt
> net
> opt
> platform
> proc
> reconfigure
> root
> rpool
> save
> sbin
> system
> tmp
> usr
> var
> ---
>
>
> It feels like the installer did not create the filesystem layout properly.
> Should I reinstall?
>
> Thanks,
>
> Atiq
>
>
> On Mon, Aug 18, 2025 at 2:02 AM Toomas Soome via oi-dev <
> oi-dev@openindiana.org> wrote:
>
>>
>>
>> On 18. Aug 2025, at 11:36, Atiq Rahman <atiqcx@gmail.com> wrote:
>>
>> Hi,
>> Thanks so much for explaining the bits.
>>
>> > However, the system goes into maintenance mode after booting. the root
>> password I specified during installation wasn't set. It sets the root user
>> with the default maintenance password.
>>
>> How do I make it run the remaining parts of the installer and complete it
>> so I can at least get OI in text mode?
>>
>> Best!
>>
>> Atiq
>>
>>
>>
>> First of all, we would need to understand why it dropped to maintenance
>> mode. If you can’t get in as root (in maintenance mode), then I’d suggest
>> booting to the live environment with the install medium, then
>>
>> zpool import -R /mnt rpool — now you have your OS instance available in
>> /mnt; from it you should see log files in /mnt/var/svc/log and most likely
>> some of the system-* files have recorded the reason.
>>
>> The install to existing pool does not create swap and dump devices for you
>>
>> zfs create -V size -o volblocksize=4k -o primarycache=metadata
>> rpool/swap and make sure your /mnt/etc/vfstab has line:
>>
>>
>> /dev/zvol/dsk/rpool/swap - - swap - no -
>>
>> zfs create -V size -o primarycache=metadata rpool/dump
>> dumpadm -r /mnt -d */dev/zvol/dsk/rpool/dump*
>>
>> other than the above depends on what you will find in the logs.
>>
>> rgds,
>> toomas
>>
>>
>> On Sun, Aug 17, 2025 at 2:50 AM Toomas Soome via oi-dev <
>> oi-dev@openindiana.org> wrote:
>>
>>> On 16. Aug 2025, at 08:31, Atiq Rahman <atiqcx@gmail.com> wrote:
>>> > bootadm update-archive -R /mntpointfor_rootfs will create it.
>>> This indeed created it. Now, I am able to boot into it without the USB
>>> stick.
>>>
>>> > the finalization steps are likely missing.
>>> Yep, more of the final steps were missed by the installer.
>>>
>>> Regarding the Install log (bottom of the email after this section).
>>> 1. set_partition_active: is it running steps for legacy MBR even though
>>> it is a UEFI system?
>>>
>>> it is likely the installer logic needs some updating.
>>>
>>> 2. is /usr/sbin/installboot a script? If yes, I plan to run it manually
>>> next time.
>>>
>>>
>>> no, it is a binary. You can still run it manually, or use bootadm install-bootloader
>>> (which does run installboot). Note that installboot assumes the
>>> existing boot programs are from illumos; if not, you will see some error
>>> (in this case, the multiboot data structure is not found and therefore the
>>> program is not “ours”).
>>>
>>> 3. EFI/Boot/bootia32.efi: why is it installing a 32-bit boot file? Mine is
>>> an amd64 system.
>>>
>>>
>>> Because bootia32.efi (loader32.efi) implements support for systems which
>>> start with 32-bit UEFI firmware but whose CPU can be switched to 64-bit mode.
>>> Granted, most systems do not need it, but one might also want to transfer
>>> this boot disk, so it is nice to have it prepared. And since the ESP does
>>> have spare space, it does not really hurt to have it installed. In the same
>>> way, we install both the UEFI boot programs in the ESP and the BIOS boot
>>> programs into the pmbr and the OS partition start; this way it really does
>>> not matter whether you use CSM or not, you can still boot.
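For reference, a manual re-run of the loader install could be sketched like this (the pool name and device path are taken from the log below and would differ per system):

```shell
# Reinstall the boot programs via bootadm, which runs installboot internally
bootadm install-bootloader -v -P rpool

# Or run installboot directly against the pool's slice, as the installer does
/usr/sbin/installboot -F -m -f -b /a/boot /dev/rdsk/c2t00A075014881A463d0s6
```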
>>>
>>> 4. Finally, why did it raise _curses.error: endwin() returned ERR?
>>>
>>>
>>> from man endwin:
>>>
>>> • *endwin* returns an error if
>>>
>>> • the terminal was not initialized, or
>>>
>>> • *endwin* is called more than once without updating the
>>> screen, or
>>>
>>> • *reset_shell_mode*(3X) returns an error.
>>>
>>> So some debugging is needed there.
>>>
>>> rgds,
>>> toomas
>>>
>>>
>>>
>>> Install log is attached here
>>> ------------------------
>>> 2025-08-15 16:00:35,533 - INFO : text-install:120 **** START ****
>>> 2025-08-15 16:00:36,166 - ERROR : ti_install_utils.py:428 Error
>>> occured during zpool call: no pools available to import
>>>
>>> 2025-08-15 16:00:38,146 - ERROR : ti_install_utils.py:428 Error
>>> occured during zpool call: no pools available to import
>>>
>>> 2025-08-15 09:11:38,777 - ERROR : install_utils.py:411
>>> be_do_installboot_walk: child 0 of 1 device c2t00A075014881A463d0s6
>>> 2025-08-15 09:11:38,927 - ERROR : install_utils.py:411 Command:
>>> "/usr/sbin/installboot -F -m -f -b /a/boot
>>> /dev/rdsk/c2t00A075014881A463d0s6"
>>> 2025-08-15 09:11:38,927 - ERROR : install_utils.py:411 Output:
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>>> written for /dev/rdsk/c2t00A075014881A463d0s6, 280 sectors starting at 1024
>>> (abs 1500001280)
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 /a/boot/pmbr is
>>> newer than one in /dev/rdsk/c2t00A075014881A463d0s6
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 stage1 written
>>> to slice 6 sector 0 (abs 1500000256)
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>>> written to /tmp/ibootBxW1oa/EFI/Boot/bootx64.efi
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>>> written to /tmp/ibootBxW1oa/EFI/Boot/bootia32.efi
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 /a/boot/pmbr is
>>> newer than one in /dev/rdsk/c2t00A075014881A463d0p0
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 stage1 written
>>> to slice 0 sector 0 (abs 0)
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Errors:
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Error reading
>>> bootblock from /tmp/ibootBxW1oa/EFI/Boot/bootia32.efi
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Unable to find
>>> multiboot header
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Error reading
>>> bootblock from /tmp/ibootBxW1oa/EFI/Boot/bootx64.efi
>>> 2025-08-15 09:11:38,967 - ERROR : ti_install.py:680 One or more ICTs
>>> failed. See previous log messages
>>> 2025-08-15 09:12:54,907 - ERROR : text-install:254 Install Profile:
>>> Pool: rpool
>>> BE name: solaris
>>> Overwrite boot configuration: True
>>> NIC None:
>>> Type: none
>>> System Info:
>>> Hostname: solaris
>>> TZ: America - US - America/Los_Angeles
>>> Time Offset: 0:00:17.242116
>>> Keyboard: None
>>> Locale: en_US.UTF-8
>>> User Info(root):
>>> Real name: None
>>> Login name: root
>>> Is Role: False
>>> User Info():
>>> Real name:
>>> Login name:
>>> Is Role: False
>>> None
>>> 2025-08-15 09:12:54,910 - ERROR : text-install:255 Traceback (most
>>> recent call last):
>>> File "/usr/bin/text-install", line 247, in <module>
>>> cleanup_curses()
>>> File "/usr/bin/text-install", line 93, in cleanup_curses
>>> curses.endwin()
>>> _curses.error: endwin() returned ERR
>>>
>>> 2025-08-15 09:12:54,911 - INFO : text-install:99 **** END ****
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Atiq
>>>
>>>
>>> On Thu, Aug 14, 2025 at 1:54 AM Toomas Soome via oi-dev <
>>> oi-dev@openindiana.org> wrote:
>>>
>>>>
>>>>
>>>> On 14. Aug 2025, at 11:26, Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>
>>>> So my disk name came out to be a long one: "c2t00A075014881A463d0" and
>>>> I am creating a fresh new pool in s6 (7th slice).
>>>> I am trying to get away with my mid week sorcery!
>>>>
>>>> Don't judge me. I barely concocted a majestic `zpool create` command
>>>> after doing a few searches online (it's been a while; I don't remember any
>>>> of the arguments):
>>>>
>>>> $ sudo zpool create -o ashift=12 -O compression=lz4 -O atime=off -O
>>>> normalization=formD -O mountpoint=none -O canmount=off rpool
>>>> /dev/dsk/c2t00A075014881A463d0s6
>>>>
>>>> I had a tiny bit of hesitation about putting in the mount restriction
>>>> switches, wondering whether they would prevent the *BSD Loader* from
>>>> looking up inside the pool.
>>>>
>>>> Then before running the text-installer (PS: no GUI yet for me, iGPU:
>>>> radeon not supported, dGPU: Nvidia RTX 4060, driver does modeset unloading)
>>>> I take the zfs pool off,
>>>>
>>>> $ sudo zpool export rpool
>>>>
>>>> pondered though if that would be a good idea before running the
>>>> installer. And then, I traveled to light blue OpenIndiana land.
>>>>
>>>> $ sudo /usr/bin/text-install
>>>>
>>>> I pulled off a lot of firsts today, probably the most in a decade (zpool
>>>> create worked; I was able to treasure-hunt the partition/slice that I created
>>>> from another FOSS OS, with help), so while that installer is running I am
>>>> almost getting overconfident.
>>>>
>>>> An error at the end in /tmp/install_log put a dent in that, just a tiny
>>>> bit. Screenshot attached: one of the py scripts invoked by
>>>> /sbin/install-finish, osol_install/ict.py, has a TypeError which might have
>>>> been caused by the NIC not being supported. I wondered for a second whether
>>>> the installer missed anything important by terminating after that TypeError.
>>>>
>>>> Life goes on: I created an entry on EFI boot manager pointing to
>>>> /Boot/bootx64.efi
>>>> As I try to boot, the BSD Loader complains that the rootfs module was not found.
>>>>
>>>>
>>>> As your install did end up with errors, the finalization steps are
>>>> likely missing.
>>>>
>>>> bootadm update-archive -R /mntpointfor_rootfs will create it.
>>>>
>>>>
>>>> While running some beadm commands from the OI live image, I also noticed
>>>> it said a few times that the
>>>> boot menu config was missing. I guess I need to mount /boot/efi
>>>>
>>>>
>>>> no, beadm manages boot environments, and one step there is to
>>>> maintain the BE menu list for the bootloader's boot environments menu. beadm
>>>> create/destroy/activate will update this menu (it is located in <pool
>>>> dataset>/boot/menu.lst).
>>>>
>>>>
>>>> As I am still learning how to make this work, any comment / suggestion
>>>> is greatly appreciated.
>>>>
>>>> Next, I will follow this after above is resolved,
>>>>
>>>> per
>>>> https://docs.openindiana.org/handbook/getting-started/#install-openindiana-to-existing-zfs-pool
>>>> ,
>>>> > As there's possibly already another OS instance installed to the
>>>> selected ZFS pool, no additional users or filesystems (like `rpool/export`)
>>>> are created during installation. Swap and dump ZFS volumes also are not
>>>> created and should be added manually after installation. Only root user
>>>> will be available in created boot environment.
>>>>
>>>>
>>>> using an existing pool implies you need to take care of many things
>>>> yourself; IMO it leaves just a bit too many things undone, but the
>>>> thing is, because it ends up with things to be done by the operator,
>>>> it is definitely not the best option for someone doing their first
>>>> install. It really is an “advanced mode”, for when you know what to do, and
>>>> how, to get the setup completed.
>>>>
>>>> I have not used the OI installer for some time, but I do believe you should
>>>> be able to point the installer to create a pool on an existing slice (not to
>>>> reuse an existing pool). Or, as I have suggested before, just use a VM to get
>>>> a first setup to learn from.
>>>>
>>>> rgds,
>>>> toomas
>>>>
>>>> which means our installation process (into an existing zfs pool) missed a
>>>> few steps of the usual installation, even though we started with a fresh new
>>>> zfs pool and no other OS instance.
>>>>
>>>> Appreciate the help of everyone who chimed in: Joshua, Toomas, John and
>>>> others to help me discover the magic to install into specific partition
>>>> instead of using the whole disk.
>>>>
>>>>
>>>> Cheers!
>>>>
>>>> Atiq
>>>>
>>>>
>>>> On Wed, Aug 13, 2025 at 5:53 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>> > You can create your single illumos partition on OI as well, it does
>>>>> not have to be a whole disk setup.
>>>>>
>>>>> Yep, I would prefer to create the pool using an OI live image. So I
>>>>> created a new partition with mkpart, using the latest parted on a Linux
>>>>> system for this purpose,
>>>>>
>>>>> parted > mkpart illumos 767GB 100%
>>>>>
>>>>> and then set the partition type to Solaris by launching,
>>>>>
>>>>> $ sudo gdisk /dev/nvme0n1
>>>>>
>>>>> (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
>>>>>
>>>>> I booted using OI live gui image (flashed / created it using 07-23
>>>>> live usb image from test dir under hipster),
>>>>> Time to kill the switch!
>>>>> I see that parted can list all the partitions and format can list
>>>>> some. However, not all of them were listed under /dev/dsk or /dev/rdsk; I
>>>>> could only see p0 to p4 (I guess illumos skipped the linux partitions).
>>>>>
>>>>> However, when I ran `zpool create zfs_main
>>>>> /dev/rdsk/c2t0xxxxxxxxxxd0p4` it reported no such file or directory! I tried
>>>>> various versions of `zpool create`; nothing would succeed!
>>>>>
>>>>> Wondering if this is some nvme storage support / enumeration issue.
>>>>>
>>>>> Atiq
>>>>>
>>>>>
>>>>> On Wed, Aug 13, 2025 at 12:18 AM Toomas Soome via illumos-discuss <
>>>>> discuss@lists.illumos.org> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 13. Aug 2025, at 09:29, Stephan Althaus via illumos-discuss <
>>>>>> discuss@lists.illumos.org> wrote:
>>>>>>
>>>>>> On 8/13/25 04:17, Atiq Rahman wrote:
>>>>>>
>>>>>> Copying this to illumos discussion as well if anyone else knows.
>>>>>>
>>>>>> ---------- Forwarded message ---------
>>>>>> From: Atiq Rahman <atiqcx@gmail.com>
>>>>>> Date: Tue, Aug 12, 2025 at 6:23 PM
>>>>>> Subject: Re: [OpenIndiana-discuss] Installation on a partition (UEFI
>>>>>> System)
>>>>>> To: Discussion list for OpenIndiana <
>>>>>> openindiana-discuss@openindiana.org>
>>>>>> Cc: John D Groenveld <groenveld@acm.org>
>>>>>>
>>>>>> Hi,
>>>>>> I tried a live usb text installer again. Right after the launch text
>>>>>> installer shows following as second option on the bottom of the screen,
>>>>>> *F5_InstallToExistingPool*
>>>>>>
>>>>>> Screenshot is below. If you zoom in, you will see the option at the
>>>>>> top of the blue background in the bottom part of the image.
>>>>>>
>>>>>> <text_install_screen.jpg>
>>>>>>
>>>>>> Can I partition and create a pool using FreeBSD, as you
>>>>>> mentioned, and then use this option to install on that pool?
>>>>>>
>>>>>> *Note that my system is EFI Only (no legacy / no CSM).*
>>>>>>
>>>>>> Sincerely,
>>>>>>
>>>>>> Atiq
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 12, 2025 at 2:27 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>> Following up again to request confirmation regarding the "partition
>>>>>>> the disk" option in the installer.
>>>>>>>
>>>>>>> At present, it’s impractical for me to carry an external storage
>>>>>>> device solely for OpenIndiana. With other operating systems, I can safely
>>>>>>> install and test them on my internal SSD without risking my existing setup.
>>>>>>> Unfortunately, with OI, this specific limitation and uncertainty are
>>>>>>> creating unnecessary friction and confusion.
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> You can create your single illumos partition on OI as well; it does
>>>>>> not have to be a whole-disk setup.
>>>>>>
>>>>>>
>>>>>> Could you please confirm the current state of EFI support —
>>>>>>> specifically, whether it is possible to safely perform a custom-partition
>>>>>>> installation on an EFI-only system, using a FreeBSD live USB image with the
>>>>>>> -d option.
>>>>>>>
>>>>>>> Thank you,
>>>>>>> Atiq
>>>>>>>
>>>>>>
>>>>>>
>>>>>> yes, it is possible. However, if you intend to share the pool with
>>>>>> illumos, you need to know the specifics. The illumos GPT partition type in
>>>>>> FreeBSD is named apple-zfs (it's actually the UUID of the illumos usr slice
>>>>>> from the VTOC). The FreeBSD boot loader searches for a freebsd-zfs
>>>>>> partition to boot the FreeBSD system.
>>>>>>
>>>>>> In general, I’d suggest using virtualization to get a first glimpse of
>>>>>> the OS, to learn its details and facts, and only then attempt to build
>>>>>> complicated multi-OS/multi-boot setups.
>>>>>>
>>>>>> rgds,
>>>>>> toomas
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> On Mon, Aug 4, 2025 at 3:54 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi John,
>>>>>>>> That's very helpful..
>>>>>>>>
>>>>>>>> So, even though I choose "Partition the disk (MBR)" on the text
>>>>>>>> installer, it should work fine on my UEFI System after I set up the pool
>>>>>>>> using FreeBSD live image.
>>>>>>>> Right? I get confused since it mentions "MBR" next to it and my
>>>>>>>> System is entirely GPT, no support for legacy / MBR booting at all (no CSM).
>>>>>>>>
>>>>>>>> Have a smooth Monday,
>>>>>>>>
>>>>>>>> Atiq
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jul 15, 2025 at 5:56 AM John D Groenveld via
>>>>>>>> openindiana-discuss <openindiana-discuss@openindiana.org> wrote:
>>>>>>>>
>>>>>>>>> In message <CABC65rOmBY6YmMSB=HQdXq=
>>>>>>>>> TDN7d1XR+K8m9-__RQPBzhJZeMQ@mail.gmail.com>
>>>>>>>>> , Atiq Rahman writes:
>>>>>>>>> >Most likely, I am gonna go with this. It's been a while since I
>>>>>>>>> last had a
>>>>>>>>> >Solaris machine. I have to figure out the zpool stuff.
>>>>>>>>>
>>>>>>>>> If you create the pool for OI under FreeBSD, you'll likely need to
>>>>>>>>> disable features with the -d option as FreeBSD's ZFS is more
>>>>>>>>> feature
>>>>>>>>> rich.
>>>>>>>>> <URL:
>>>>>>>>> https://man.freebsd.org/cgi/man.cgi?query=zpool-create&apropos=0&sektion=0&manpath=FreeBSD+14.3-RELEASE+and+Ports&arch=default&format=html
>>>>>>>>>
>>>>>>>>> Once you get OI installed on your OI pool, you can enable features:
>>>>>>>>> <URL:https://illumos.org/man/7/zpool-features>
>>>>>>>>>
>>>>>>>>> John
>>>>>>>>> groenveld@acm.org
>>>>>>>>>
>>>>>>>> Hello!
>>>>>>
>>>>>> You can install to an existing pool created with FreeBSD; be sure to
>>>>>> use the "-d" option, as in "zpool create -d <newpool> <partition>", to
>>>>>> disable additional ZFS features. See details here:
>>>>>>
>>>>>>
>>>>>> https://man.freebsd.org/cgi/man.cgi?query=zpool-create&sektion=8&apropos=0&manpath=FreeBSD+14.3-RELEASE+and+Ports
>>>>>>
>>>>>> And an EFI partition should be there, too.
>>>>>>
>>>>>> HTH,
>>>>>>
>>>>>> Stephan
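Combining John's and Stephan's advice, the round trip might look like this sketch (the pool name and the /dev/da0p4 device are placeholder assumptions):

```shell
# On FreeBSD: create the pool with feature flags disabled so illumos can import it
zpool create -d rpool /dev/da0p4
zpool export rpool

# Later, once OI is installed and booted from that pool, enable the supported features
zpool upgrade rpool
```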
>>>>>>
>>>>>>
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T240e6222b19f3259-M0d134591f7e8ea3e98c7774c
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 44572 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] GPT Partitioning for Illumos/Solaris
[not found] ` <198abc29f87.10236f39f26274.6456493104881031520@zoho.com>
2025-08-17 1:43 ` [discuss] GPT Partitioning for Illumos/Solaris Atiq Rahman
@ 2025-08-20 16:33 ` Atiq Rahman
2025-08-20 16:41 ` [discuss] " bronkoo via illumos-discuss
[not found] ` <198d980ae49.1166dae1613934.6527363511160319029@zoho.com>
1 sibling, 2 replies; 20+ messages in thread
From: Atiq Rahman @ 2025-08-20 16:33 UTC (permalink / raw)
To: Eric J Bowman, OpenIndiana Developer mailing list
Cc: illumos-discuss, illumos-developer
[-- Attachment #1: Type: text/plain, Size: 1671 bytes --]
> That's the gist of multiboot on UEFI, more work if you want more than one
illumos or freebsd partition, but they will coexist peacefully.
Have any of you ever tried ZFS Boot Menu? I have heard good things about it and am
thinking of trying it.
Best!
Atiq
On Thu, Aug 14, 2025 at 8:25 PM Eric J Bowman <mellowmutt@zoho.com> wrote:
> >
> > parted > mkpart illumos 767GB 100%
> >
> > and then set the partition type to solaris launching,
> >
> > $ sudo gdisk /dev/nvme0n1
> >
> > (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
> >
>
>
> The "Solaris Root" partition type is relevant to UFS, not ZFS. On UEFI,
> set s0/p1 to be an ESP, "boot, hidden, esp" flags and format it fat32, your
> "Solaris Reserved" should be s8/p9, 8Mib, -1MiB from end of disk. Your ZFS
> partition should be set to ZFS for illumos, FreeBSD ZFS for freebsd. Being
> bootable is a property of the ZFS filesystem, not its partition.
>
> /EFI
> /EFI/OpenIndiana/bootx64.efi (copy of loader.efi)
> /EFI/boot
>
> I put rEFInd Plus in /EFI/boot, with a backup of its bootx64.efi in case
> it gets overwritten.
>
> /EFI/boot/drivers/x64_zfs.efi
>
> This driver will allow rEFInd to stub-load loader64.efi from your ZFS
> partition on most firmware, some will still need the
> /EFI/OpenIndiana/bootx64.efi. I don't know if the string "OpenIndiana" is
> registered, this one is:
>
> /EFI/FreeBSD
>
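A layout along Eric's lines could be created with something like the following sketch (disk name, sizes, and offsets are placeholder assumptions; 6A898CC3-1DD2-11B2-99A6-080020736631 is, to my understanding, the Solaris /usr (ZFS) type GUID that gdisk shows as code BF01 and FreeBSD names apple-zfs):

```shell
DISK=/dev/nvme0n1   # placeholder device name

# ESP at the front of the disk: FAT32, flagged as an EFI System Partition
parted -s "$DISK" mkpart ESP fat32 1MiB 513MiB
parted -s "$DISK" set 1 esp on
mkfs.fat -F 32 "${DISK}p1"

# ZFS partition in the middle, Solaris Reserved (8 MiB) ending 1 MiB before the end
parted -s "$DISK" mkpart illumos 513MiB -9MiB
parted -s "$DISK" mkpart reserved -9MiB -1MiB

# Tag partition 2 with the Solaris/illumos ZFS type GUID
sgdisk -t 2:6A898CC3-1DD2-11B2-99A6-080020736631 "$DISK"
```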
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T6c1d3abbcce146a4-Md3c9ee656cf9f76e023f646e
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 3578 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] Re: GPT Partitioning for Illumos/Solaris
2025-08-20 16:33 ` [discuss] " Atiq Rahman
@ 2025-08-20 16:41 ` bronkoo via illumos-discuss
[not found] ` <198d980ae49.1166dae1613934.6527363511160319029@zoho.com>
1 sibling, 0 replies; 20+ messages in thread
From: bronkoo via illumos-discuss @ 2025-08-20 16:41 UTC (permalink / raw)
To: illumos-discuss
[-- Attachment #1: Type: text/plain, Size: 1025 bytes --]
On Wednesday, 20 August 2025, at 6:33 PM, Atiq Rahman wrote:
> Have any of you ever tried the ZFS Boot Menu? I heard good things about it, thinking of trying it.
yes, on Arch Linux, several times; it helps close the gap to *beadm* on Solarish:
https://florianesser.ch/posts/20220714-arch-install-zbm/
user@arch ~ % mkdir -p /efi/EFI/zbm
user@arch ~ % wget https://get.zfsbootmenu.org/latest.EFI -O /efi/EFI/zbm/zfsbootmenu.EFI
user@arch ~ % efibootmgr --disk $DISK --part 1 --create --label "ZFSBootMenu" --loader '\EFI\zbm\zfsbootmenu.EFI' --unicode "spl_hostid=$(hostid) zbm.timeout=3 zbm.prefer=rpool zbm.import_policy=hostid" --verbose
user@arch ~ % zfs set org.zfsbootmenu:commandline="noresume init_on_alloc=0 rw spl.spl_hostid=$(hostid)" rpool/ROOT
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T6c1d3abbcce146a4-Mc4a4e504228f3c02ee52ed48
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 2234 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] Re: [oi-dev] [developer] Install into existing zfs pool on a GPT partition
2025-08-20 16:26 ` Atiq Rahman
@ 2025-08-20 18:30 ` John D Groenveld via illumos-discuss
0 siblings, 0 replies; 20+ messages in thread
From: John D Groenveld via illumos-discuss @ 2025-08-20 18:30 UTC (permalink / raw)
To: OpenIndiana Developer mailing list; +Cc: illumos-developer, illumos-discuss
In message <CABC65rM_GKWBm1bdccK5ukZ5r-W48Mb2467K0Oh8dVZ56uhVYw@mail.gmail.com>
, Atiq Rahman writes:
>I have an idea as a temporary workaround.
Workaround for what?
ISTR you had fixed your broken GPT on your ASUS ProArt laptop's
internal disk and successfully created a zpool for OI on that
partition.
I recommend you install OI on it via the "F5_InstallToExistingPool"
option.
<URL:https://docs.openindiana.org/handbook/getting-started/#install-openindiana-to-existing-zfs-pool>
After installation, add a swap zvol before you attempt to pkg(1) update.
>Say I install OI on an external SSD using the install in whole disk (GPT)
>option. Then I zfs send/receive the pool to a specific GPT partition in the
>internal disk. And, then I update paths / install bootloader update-archive
>(and update root path for BSD Loaders) to use the internal one so I don't
>have to carry the external disk anymore.
That likely works.
You might test in a VM on top of your Fedora before breaking your
laptop again.
You can also use beadm(8) to create a BE on the internal disk zpool
and then use zfs send/receive to obtain any other datasets from your
external disk before you boot to the internal OI BE.
<URL:https://illumos.org/man/8/beadm>
This definitely works:
Use zpool(8) attach to create a mirror of your external drive's OI
zpool with the internal drive OI partition and then
zpool detach the external drive partition after the resilver completes.
<URL:https://illumos.org/man/8/zpool>
The external drive zpool must be equal or less size than the internal
partition or the attach will fail.
John
groenveld@acm.org
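The attach/detach migration John describes could be sketched like so (the device names are placeholders; the resilver must complete before the detach):

```shell
# Attach the internal partition as a mirror of the external pool's device
zpool attach rpool c1t0d0s0 c2t00A075014881A463d0s6

# Watch until "resilvered" shows the mirror is complete
zpool status rpool

# Then drop the external device from the mirror
zpool detach rpool c1t0d0s0

# And install boot programs on the internal disk
bootadm install-bootloader -P rpool
```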
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T240e6222b19f3259-M8e7bb41f3674bd7f702ea533
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] Re: [oi-dev] [developer] Install into existing zfs pool on a GPT partition
2025-08-19 21:40 ` [discuss] Re: [oi-dev] [developer] Install into existing zfs pool on a " Atiq Rahman
2025-08-20 16:26 ` Atiq Rahman
@ 2025-08-20 18:38 ` Atiq Rahman
2025-08-20 18:52 ` John D Groenveld via illumos-discuss
1 sibling, 1 reply; 20+ messages in thread
From: Atiq Rahman @ 2025-08-20 18:38 UTC (permalink / raw)
To: Toomas Soome, OpenIndiana Developer mailing list
Cc: illumos-developer, illumos-discuss
[-- Attachment #1: Type: text/plain, Size: 21352 bytes --]
Hi John,
> Workaround for what?
After I installed OI via "F5_InstallToExistingPool" and booted into it
from the internal disk, it kept defaulting to maintenance mode, and it didn't
set the password I specified in the installer (as I mentioned in a previous
email). So, did it miss most of the steps for installation and generating
the filesystem, etc.?
For example, see the email below: I don't have /var/svc; /dev/dump keeps
throwing errors on dumpadm; and so on.
That's why I am reconsidering the installation.
Best,
Atiq
On Tue, Aug 19, 2025 at 2:40 PM Atiq Rahman <atiqcx@gmail.com> wrote:
> Hi Toomas,
> booted into live env:
>
> > zpool import -R /mnt rpool
> did that
>
> /mnt/var is pretty much empty.
> ----
> # ls -a /mnt/var
> .
> ..
> user
> ----
>
>
> > zfs create -V 8G -o volblocksize=4k -o primarycache=metadata rpool/swap
> did that.
>
> > make sure your /mnt/etc/vfstab has line
> here's vfstab
> ----
> # cat /mnt/etc/vfstab
> #device device mount FS fsck mount mount
> #to mount to fsck point type pass at boot options
> /devices - /devices devfs - no -
> /proc - /proc proc - no -
> ctfs - /system/contract ctfs - no -
> objfs - /system/object objfs - no -
> sharefs - /etc/dfs/sharetab sharefs - no -
> fd - /dev/fd fd - no -
> swap - /tmp tmpfs - yes -
> ----
>
> > zfs create -V size -o primarycache=metadata rpool/dump
> did that
>
> However, following throws error,
> > dumpadm -r /mnt -d /dev/zvol/dsk/rpool/dump
> dumpadm: failed to open /dev/dump: No such file or directory
>
> I also tried dumpadm also booting into the installed version from internal
> SSD, gives the same error.
> # dumpadm -d /dev/zvol/dsk/rpool/dump
> dumpadm: failed to open /dev/dump: No such file or directory
>
> Additionally, fs structure,
>
> ----
> # ls -a /mnt/
> .
> ..
> .cdrom
> bin
> boot
> dev
> devices
> etc
> export
> home
> kernel
> lib
> media
> mnt
> net
> opt
> platform
> proc
> reconfigure
> root
> rpool
> save
> sbin
> system
> tmp
> usr
> var
> ---
>
>
> Feels like the installer did not create/generate the file system properly?
> Should I reinstall?
>
> Thanks,
>
> Atiq
>
>
> On Mon, Aug 18, 2025 at 2:02 AM Toomas Soome via oi-dev <
> oi-dev@openindiana.org> wrote:
>
>>
>>
>> On 18. Aug 2025, at 11:36, Atiq Rahman <atiqcx@gmail.com> wrote:
>>
>> Hi,
>> Thanks so much for explaining the bits.
>>
>> > However, the system goes into maintenance mode after booting. The root
>> password I specified during installation wasn't set. It sets up the root user
>> with the default maintenance password.
>>
>> How do I make it run the remaining parts of the installer and complete it,
>> so I can at least get OI in text mode?
>>
>> Best!
>>
>> Atiq
>>
>>
>>
>> First of all, we would need to understand why it dropped to maintenance
>> mode. If you can’t get in as root (in maintenance mode), then I’d suggest
>> booting into the live environment from the install medium, then
>>
>> zpool import -R /mnt rpool — now you have your os instance available in
>> /mnt; from it you should see log files in /mnt/var/svc/log and most likely,
>> some of the system-* files have recorded the reason.
>>
>> The install to existing pool does not create swap and dump devices for you
>>
>> zfs create -V size -o volblocksize=4k -o primarycache=metadata
>> rpool/swap and make sure your /mnt/etc/vfstab has line:
>>
>>
>> /dev/zvol/dsk/rpool/swap - - swap - no -
>>
>> zfs create -V size -o primarycache=metadata rpool/dump
>> dumpadm -r /mnt -d */dev/zvol/dsk/rpool/dump*
>>
>> other than the above, it depends on what you find in the logs.
>>
>> rgds,
>> toomas
>>
>>
>> On Sun, Aug 17, 2025 at 2:50 AM Toomas Soome via oi-dev <
>> oi-dev@openindiana.org> wrote:
>>
>>> On 16. Aug 2025, at 08:31, Atiq Rahman <atiqcx@gmail.com> wrote:
>>> > bootadm update-archive -R /mntpointfor_rootfs will create it.
>>> This indeed created it. Now, I am able to boot into it without the USB
>>> stick.
>>>
>>> > the finalization steps are likely missing.
>>> Yep, more of the final steps were missed by the installer.
>>>
>>> Regarding the Install log (bottom of the email after this section).
>>> 1. set_partition_active: is it running steps for legacy MBR even though
>>> it is a UEFI system?
>>>
>>> it is likely the installer logic needs some updating.
>>>
>>> 2. is /usr/sbin/installboot a script? If yes, I plan to run it manually
>>> next time.
>>>
>>>
>>> no, it is a binary. You can still run it manually, or use bootadm install-bootloader
>>> (which does run installboot). Note that installboot assumes the
>>> existing boot programs are from illumos; if not, you will see some error
>>> (in this case, the multiboot data structure is not found and therefore the
>>> program is not “ours”).
>>>
>>> 3. EFI/Boot/bootia32.efi: why is it installing a 32-bit boot file? Mine is
>>> an amd64 system.
>>>
>>>
>>> Because bootia32.efi (loader32.efi) implements support for systems which
>>> start with 32-bit UEFI firmware but whose CPU can be switched to 64-bit mode.
>>> Granted, most systems do not need it, but one might also want to transfer
>>> this boot disk, so it is nice to have it prepared. And since the ESP does
>>> have spare space, it does not really hurt to have it installed. In the same
>>> way, we install both the UEFI boot programs in the ESP and the BIOS boot
>>> programs into the pmbr and the OS partition start; this way it really does
>>> not matter whether you use CSM or not, you can still boot.
>>>
>>> 4. Finally, why did _curses.error: endwin() returned ERR ?
>>>
>>>
>>> from man endwin:
>>>
>>> • *endwin* returns an error if
>>>
>>> • the terminal was not initialized, or
>>>
>>> • *endwin* is called more than once without updating the
>>> screen, or
>>>
>>> • *reset_shell_mode*(3X) returns an error.
>>>
>>> So some debugging is needed there.
>>>
>>> rgds,
>>> toomas
>>>
>>>
>>>
>>> Install log is attached here
>>> ------------------------
>>> 2025-08-15 16:00:35,533 - INFO : text-install:120 **** START ****
>>> 2025-08-15 16:00:36,166 - ERROR : ti_install_utils.py:428 Error
>>> occured during zpool call: no pools available to import
>>>
>>> 2025-08-15 16:00:38,146 - ERROR : ti_install_utils.py:428 Error
>>> occured during zpool call: no pools available to import
>>>
>>> 2025-08-15 09:11:38,777 - ERROR : install_utils.py:411
>>> be_do_installboot_walk: child 0 of 1 device c2t00A075014881A463d0s6
>>> 2025-08-15 09:11:38,927 - ERROR : install_utils.py:411 Command:
>>> "/usr/sbin/installboot -F -m -f -b /a/boot
>>> /dev/rdsk/c2t00A075014881A463d0s6"
>>> 2025-08-15 09:11:38,927 - ERROR : install_utils.py:411 Output:
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>>> written for /dev/rdsk/c2t00A075014881A463d0s6, 280 sectors starting at 1024
>>> (abs 1500001280)
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 /a/boot/pmbr is
>>> newer than one in /dev/rdsk/c2t00A075014881A463d0s6
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 stage1 written
>>> to slice 6 sector 0 (abs 1500000256)
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>>> written to /tmp/ibootBxW1oa/EFI/Boot/bootx64.efi
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 bootblock
>>> written to /tmp/ibootBxW1oa/EFI/Boot/bootia32.efi
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 /a/boot/pmbr is
>>> newer than one in /dev/rdsk/c2t00A075014881A463d0p0
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 stage1 written
>>> to slice 0 sector 0 (abs 0)
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Errors:
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Error reading
>>> bootblock from /tmp/ibootBxW1oa/EFI/Boot/bootia32.efi
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Unable to find
>>> multiboot header
>>> 2025-08-15 09:11:38,928 - ERROR : install_utils.py:411 Error reading
>>> bootblock from /tmp/ibootBxW1oa/EFI/Boot/bootx64.efi
>>> 2025-08-15 09:11:38,967 - ERROR : ti_install.py:680 One or more ICTs
>>> failed. See previous log messages
>>> 2025-08-15 09:12:54,907 - ERROR : text-install:254 Install Profile:
>>> Pool: rpool
>>> BE name: solaris
>>> Overwrite boot configuration: True
>>> NIC None:
>>> Type: none
>>> System Info:
>>> Hostname: solaris
>>> TZ: America - US - America/Los_Angeles
>>> Time Offset: 0:00:17.242116
>>> Keyboard: None
>>> Locale: en_US.UTF-8
>>> User Info(root):
>>> Real name: None
>>> Login name: root
>>> Is Role: False
>>> User Info():
>>> Real name:
>>> Login name:
>>> Is Role: False
>>> None
>>> 2025-08-15 09:12:54,910 - ERROR : text-install:255 Traceback (most
>>> recent call last):
>>> File "/usr/bin/text-install", line 247, in <module>
>>> cleanup_curses()
>>> File "/usr/bin/text-install", line 93, in cleanup_curses
>>> curses.endwin()
>>> _curses.error: endwin() returned ERR
>>>
>>> 2025-08-15 09:12:54,911 - INFO : text-install:99 **** END ****
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Atiq
>>>
>>>
>>> On Thu, Aug 14, 2025 at 1:54 AM Toomas Soome via oi-dev <
>>> oi-dev@openindiana.org> wrote:
>>>
>>>>
>>>>
>>>> On 14. Aug 2025, at 11:26, Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>
>>>> So my disk name came out to be a long one: "c2t00A075014881A463d0" and
>>>> I am creating a fresh new pool in s6 (7th slice).
>>>> I am trying to get away with my mid week sorcery!
>>>>
>>>> Don't judge me. I barely concocted a majestic `zpool create` command
>>>> after a few online searches (it's been a while; I don't remember any of
>>>> the arguments):
>>>>
>>>> $ sudo zpool create -o ashift=12 -O compression=lz4 -O atime=off -O
>>>> normalization=formD -O mountpoint=none -O canmount=off rpool
>>>> /dev/dsk/c2t00A075014881A463d0s6
>>>>
>>>> I hesitated a bit over the mount-restriction switches, wondering
>>>> whether they would prevent the *BSD Loader* from looking inside the
>>>> pool.
>>>>
>>>> Then, before running the text installer (PS: no GUI for me yet; iGPU:
>>>> Radeon, not supported; dGPU: Nvidia RTX 4060, whose driver unloads on
>>>> modeset), I took the ZFS pool offline,
>>>>
>>>> $ sudo zpool export rpool
>>>>
>>>> though I pondered whether that was a good idea before running the
>>>> installer. And then I traveled to light-blue OpenIndiana land.
>>>>
>>>> $ sudo /usr/bin/text-install
>>>>
>>>> I got away with a lot of firsts today, probably the first in a decade
>>>> (`zpool create` worked, and with help I treasure-hunted the
>>>> partition/slice I had created from another FOSS OS), so while the
>>>> installer was running I was getting almost overconfident.
>>>>
>>>> An error at the end of /tmp/install_log put a small dent in that.
>>>> Screenshot attached: one of the Python scripts invoked by
>>>> /sbin/install-finish, osol_install/ict.py, raises a TypeError, which
>>>> might have been caused by the NIC not being supported. I wondered for a
>>>> second whether the installer missed anything important by terminating
>>>> after that TypeError.
>>>>
>>>> Life goes on: I created an entry in the EFI boot manager pointing to
>>>> /Boot/bootx64.efi
>>>> As I try to boot, the BSD Loader complains that the rootfs module was
>>>> not found.
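For reference, if that boot entry was created from a Linux environment, the efibootmgr invocation might have looked roughly like this (the disk, ESP partition number, and label are assumptions, and the loader path should match wherever the installer actually placed bootx64.efi):

```shell
# Hypothetical example; adjust the disk (--disk), ESP partition
# number (--part), and loader path to the actual layout.
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label "OpenIndiana" --loader '\EFI\Boot\bootx64.efi'
```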
>>>>
>>>>
>>>> As your install did end up with errors, the finalization steps are
>>>> likely missing.
>>>>
>>>> bootadm update-archive -R /mntpointfor_rootfs will create it.
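Spelled out as a sketch against this thread's setup (the pool name rpool comes from the earlier `zpool create`; the boot environment name and the /a mountpoint are assumptions), the recovery from the live environment might look like:

```shell
# Assumed names: pool "rpool", boot environment "openindiana".
zpool import -f -R /a rpool        # import the pool under alternate root /a
beadm mount openindiana /a         # mount the freshly installed BE there
bootadm update-archive -R /a       # rebuild the boot archive in that root
beadm umount openindiana
zpool export rpool                 # release the pool before rebooting
```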
>>>>
>>>>
>>>> While running some beadm commands from the OI live image I also
>>>> noticed it said a few times that the boot menu config was missing. I
>>>> guess I need to mount /boot/efi
>>>>
>>>>
>>>> no, beadm manages boot environments, and one step there is maintaining
>>>> the BE menu list for the bootloader's boot-environments menu. beadm
>>>> create/destroy/activate will update this menu (it's located in <pool
>>>> dataset>/boot/menu.lst)
>>>>
>>>>
>>>> As I am still learning how to make this work, any comment / suggestion
>>>> is greatly appreciated.
>>>>
>>>> Next, I will follow this after above is resolved,
>>>>
>>>> per
>>>> https://docs.openindiana.org/handbook/getting-started/#install-openindiana-to-existing-zfs-pool
>>>> ,
>>>> > As there's possibly already another OS instance installed to the
>>>> selected ZFS pool, no additional users or filesystems (like `rpool/export`)
>>>> are created during installation. Swap and dump ZFS volumes also are not
>>>> created and should be added manually after installation. Only root user
>>>> will be available in created boot environment.
>>>>
>>>>
>>>> using an existing pool implies you need to take care of many things
>>>> yourself; IMO it leaves just a bit too much undone, and because it ends
>>>> up with things left for the operator to do, it is definitely not the
>>>> best option for someone attempting their first install. It really is an
>>>> “advanced mode”, for when you know what to do, and how, to get the
>>>> setup completed.
>>>>
>>>> I have not used the OI installer for some time, but I do believe you
>>>> should be able to point the installer at creating a pool on an existing
>>>> slice (not reusing an existing pool). Or, as I have suggested before,
>>>> just use a VM to get a first setup to learn from.
>>>>
>>>> rgds,
>>>> toomas
>>>>
>>>> which means our installation process (into an existing ZFS pool)
>>>> skipped a few steps of the usual installation, even though we started
>>>> with a fresh new ZFS pool and no other OS instance.
>>>>
>>>> I appreciate the help of everyone who chimed in (Joshua, Toomas, John
>>>> and others) in helping me discover the magic of installing into a
>>>> specific partition instead of using the whole disk.
>>>>
>>>>
>>>> Cheers!
>>>>
>>>> Atiq
>>>>
>>>>
>>>> On Wed, Aug 13, 2025 at 5:53 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>> > You can create your single illumos partition on OI as well, it does
>>>>> not have to be a whole disk setup.
>>>>>
>>>>> Yep, I would prefer to create the pool using an OI live image. So,
>>>>> for this purpose, I created a new partition with mkpart, using the
>>>>> latest parted from a Linux system,
>>>>>
>>>>> parted > mkpart illumos 767GB 100%
>>>>>
>>>>> and then set the partition type to Solaris by launching
>>>>>
>>>>> $ sudo gdisk /dev/nvme0n1
>>>>>
>>>>> (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
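For what it's worth, the mkpart and gdisk steps can likely be combined into a single sgdisk invocation (the partition number 4 is an assumption; the start position and type GUID are taken from the commands above, so verify both against the actual disk layout first):

```shell
# Inspect the current table, then create the illumos partition and
# set its Solaris/illumos type GUID in one step.
sgdisk -p /dev/nvme0n1
sgdisk --new=4:767G:0 \
       --typecode=4:6A85CF4D-1DD2-11B2-99A6-080020736631 \
       --change-name=4:illumos \
       /dev/nvme0n1
```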
>>>>>
>>>>> I booted using the OI live GUI image (flashed / created using the
>>>>> 07-23 live USB image from the test dir under hipster).
>>>>> Time to kill the switch!
>>>>> I see that parted can list all the partitions and format can list
>>>>> some. However, not all of them were listed under /dev/dsk or
>>>>> /dev/rdsk; I could only see p0 to p4 (I guess illumos skipped the
>>>>> Linux partitions).
>>>>>
>>>>> However, when I ran `zpool create zfs_main
>>>>> /dev/rdsk/c2t0xxxxxxxxxxd0p4` it reported "no such file or
>>>>> directory"! I tried various versions of `zpool create`; nothing would
>>>>> succeed!
>>>>>
>>>>> Wondering if this is some nvme storage support / enumeration issue.
>>>>>
>>>>> Atiq
>>>>>
>>>>>
>>>>> On Wed, Aug 13, 2025 at 12:18 AM Toomas Soome via illumos-discuss <
>>>>> discuss@lists.illumos.org> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 13. Aug 2025, at 09:29, Stephan Althaus via illumos-discuss <
>>>>>> discuss@lists.illumos.org> wrote:
>>>>>>
>>>>>> On 8/13/25 04:17, Atiq Rahman wrote:
>>>>>>
>>>>>> Copying this to illumos discussion as well if anyone else knows.
>>>>>>
>>>>>> ---------- Forwarded message ---------
>>>>>> From: Atiq Rahman <atiqcx@gmail.com>
>>>>>> Date: Tue, Aug 12, 2025 at 6:23 PM
>>>>>> Subject: Re: [OpenIndiana-discuss] Installation on a partition (UEFI
>>>>>> System)
>>>>>> To: Discussion list for OpenIndiana <
>>>>>> openindiana-discuss@openindiana.org>
>>>>>> Cc: John D Groenveld <groenveld@acm.org>
>>>>>>
>>>>>> Hi,
>>>>>> I tried the live USB text installer again. Right after launch, the
>>>>>> text installer shows the following as the second option at the
>>>>>> bottom of the screen:
>>>>>> *F5_InstallToExistingPool*
>>>>>>
>>>>>> Screenshot is below. If you zoom in, you will see the option at the
>>>>>> top of the blue background in the bottom part of the image.
>>>>>>
>>>>>> <text_install_screen.jpg>
>>>>>>
>>>>>> Can I partition and create a pool using FreeBSD as you mentioned,
>>>>>> and then use this option to install on that pool?
>>>>>>
>>>>>> *Note that my system is EFI Only (no legacy / no CSM).*
>>>>>>
>>>>>> Sincerely,
>>>>>>
>>>>>> Atiq
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 12, 2025 at 2:27 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>> Following up again to request confirmation regarding the "partition
>>>>>>> the disk" option in the installer.
>>>>>>>
>>>>>>> At present, it’s impractical for me to carry an external storage
>>>>>>> device solely for OpenIndiana. With other operating systems, I can safely
>>>>>>> install and test them on my internal SSD without risking my existing setup.
>>>>>>> Unfortunately, with OI, this specific limitation and uncertainty are
>>>>>>> creating unnecessary friction and confusion.
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> You can create your single illumos partition on OI as well; it does
>>>>>> not have to be a whole-disk setup.
>>>>>>
>>>>>>
>>>>>> Could you please confirm the current state of EFI support —
>>>>>>> specifically, whether it is possible to safely perform a custom-partition
>>>>>>> installation on an EFI-only system, using a FreeBSD live USB image with the
>>>>>>> -d option.
>>>>>>>
>>>>>>> Thank you,
>>>>>>> Atiq
>>>>>>>
>>>>>>
>>>>>>
>>>>>> yes, it is possible. However, if you intend to share the pool with
>>>>>> illumos, you need to know the specifics. The illumos GPT partition
>>>>>> type in FreeBSD is named apple-zfs (it's actually the UUID of the
>>>>>> illumos usr slice from the VTOC). The FreeBSD boot loader searches
>>>>>> for a freebsd-zfs partition to boot the FreeBSD system.
>>>>>>
>>>>>> In general, I’d suggest using virtualization to get a first glimpse
>>>>>> of the OS and learn its details, and only then attempting to build
>>>>>> complicated multi-OS / multi-boot setups.
>>>>>>
>>>>>> rgds,
>>>>>> toomas
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> On Mon, Aug 4, 2025 at 3:54 PM Atiq Rahman <atiqcx@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi John,
>>>>>>>> That's very helpful..
>>>>>>>>
>>>>>>>> So, even though I choose "Partition the disk (MBR)" in the text
>>>>>>>> installer, it should work fine on my UEFI system after I set up the
>>>>>>>> pool using a FreeBSD live image, right? I get confused because it
>>>>>>>> says "MBR" next to it, while my system is entirely GPT, with no
>>>>>>>> support for legacy / MBR booting at all (no CSM).
>>>>>>>>
>>>>>>>> Have a smooth Monday,
>>>>>>>>
>>>>>>>> Atiq
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jul 15, 2025 at 5:56 AM John D Groenveld via
>>>>>>>> openindiana-discuss <openindiana-discuss@openindiana.org> wrote:
>>>>>>>>
>>>>>>>>> In message <CABC65rOmBY6YmMSB=HQdXq=
>>>>>>>>> TDN7d1XR+K8m9-__RQPBzhJZeMQ@mail.gmail.com>
>>>>>>>>> , Atiq Rahman writes:
>>>>>>>>> >Most likely, I am gonna go with this. It's been a while since I
>>>>>>>>> last had a
>>>>>>>>> >Solaris machine. I have to figure out the zpool stuff.
>>>>>>>>>
>>>>>>>>> If you create the pool for OI under FreeBSD, you'll likely need to
>>>>>>>>> disable features with the -d option, as FreeBSD's ZFS is more
>>>>>>>>> feature-rich.
>>>>>>>>> <URL:
>>>>>>>>> https://man.freebsd.org/cgi/man.cgi?query=zpool-create&apropos=0&sektion=0&manpath=FreeBSD+14.3-RELEASE+and+Ports&arch=default&format=html
>>>>>>>>>
>>>>>>>>> Once you get OI installed on your OI pool, you can enable features:
>>>>>>>>> <URL:https://illumos.org/man/7/zpool-features>
>>>>>>>>>
>>>>>>>>> John
>>>>>>>>> groenveld@acm.org
>>>>>>>>>
>>>>>>>> Hello!
>>>>>>
>>>>>> You can install to an existing pool created with FreeBSD; be sure to
>>>>>> use the "-d" option, as in "zpool create -d <newpool> <partition>",
>>>>>> to disable additional ZFS features. See details here:
>>>>>>
>>>>>>
>>>>>> https://man.freebsd.org/cgi/man.cgi?query=zpool-create&sektion=8&apropos=0&manpath=FreeBSD+14.3-RELEASE+and+Ports
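A minimal sketch of the two sides (the pool name and GPT label are illustrative): create the pool from FreeBSD with feature flags disabled, then enable the supported features once OI is installed.

```shell
# On FreeBSD: -d disables all feature flags so the illumos loader
# and kernel can import the pool (names are examples).
zpool create -d rpool /dev/gpt/illumos

# Later, on the installed OI system: enable every feature this
# ZFS implementation supports.
zpool upgrade rpool
```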
>>>>>>
>>>>>> And an EFI partition should be there, too.
>>>>>>
>>>>>> HTH,
>>>>>>
>>>>>> Stephan
>>>>>>
>>>>>>
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T240e6222b19f3259-Mf13cec51ca5c474c6eec2154
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 44666 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] Re: [oi-dev] [developer] Install into existing zfs pool on a GPT partition
2025-08-20 18:38 ` Atiq Rahman
@ 2025-08-20 18:52 ` John D Groenveld via illumos-discuss
2025-08-21 7:01 ` [discuss] " Toomas Soome via illumos-discuss
0 siblings, 1 reply; 20+ messages in thread
From: John D Groenveld via illumos-discuss @ 2025-08-20 18:52 UTC (permalink / raw)
To: OpenIndiana Developer mailing list; +Cc: illumos-developer, illumos-discuss
In message <CABC65rM0=m__ZQbWVHzFLJCcnfFvBZXmxWiJ+HL6FjA73k=C_Q@mail.gmail.com>
, Atiq Rahman writes:
>After I installed OI via "F5_InstallToExistingPool" and booted into it
>from the internal disk, it kept defaulting to maintenance mode, and it
>didn't set the password I specified in the installer (as I mentioned in a
>previous email). So, did it miss most of the steps for installing and
>generating the filesystem, etc.?
I can't reproduce.
Try again and record all of your steps.
<URL:https://docs.openindiana.org/handbook/getting-started/#install-openindiana-to-existing-zfs-pool>
John
groenveld@acm.org
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T240e6222b19f3259-M63f70f7b3b9ca6b9ed2768a8
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [discuss] [oi-dev] [developer] Install into existing zfs pool on a GPT partition
2025-08-20 18:52 ` John D Groenveld via illumos-discuss
@ 2025-08-21 7:01 ` Toomas Soome via illumos-discuss
0 siblings, 0 replies; 20+ messages in thread
From: Toomas Soome via illumos-discuss @ 2025-08-21 7:01 UTC (permalink / raw)
To: illumos-discuss; +Cc: OpenIndiana Developer mailing list, illumos-developer
> On 20. Aug 2025, at 21:52, John D Groenveld via illumos-discuss <discuss@lists.illumos.org> wrote:
>
> In message <CABC65rM0=m__ZQbWVHzFLJCcnfFvBZXmxWiJ+HL6FjA73k=C_Q@mail.gmail.com>
> , Atiq Rahman writes:
>> After I installed OI via "F5_InstallToExistingPool" and I booted into it
>> from the internal disk it kept defaulting to maintenance mode and it didn't
>> set the password I specified on the installer (as I mentioned in previous
>> email). So, did it miss most of the steps for installation and generating
>> the filesystem etc?
>
> I can't reproduce.
> Try again and record all of your steps.
> <URL:https://docs.openindiana.org/handbook/getting-started/#install-openindiana-to-existing-zfs-pool>
>
tbh, the OI installer really should support using a GPT partition next to the whole-disk setup option (and perhaps the MBR-related setup should be hidden behind an “advanced” or “historical setup” page ;)
But this means work to improve the installer, and it is OI-specific, not related to illumos-gate.
rgds,
toomas
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T240e6222b19f3259-M0edfa8a0e0b09ebf4bdf2694
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] Re: GPT Partitioning for Illumos/Solaris
[not found] ` <198d980ae49.1166dae1613934.6527363511160319029@zoho.com>
@ 2025-08-27 21:21 ` Atiq Rahman
0 siblings, 0 replies; 20+ messages in thread
From: Atiq Rahman @ 2025-08-27 21:21 UTC (permalink / raw)
To: Eric J Bowman
Cc: OpenIndiana Developer mailing list, illumos-discuss,
illumos-developer
[-- Attachment #1: Type: text/plain, Size: 2906 bytes --]
> but only boots linux kernels
heard void linux maintains ZFS Boot Menu. And they only support linux
kernels (vmlinuz) for now, sadly!
Atiq
On Sat, Aug 23, 2025 at 5:35 PM Eric J Bowman <mellowmutt@zoho.com> wrote:
> I have ZFS Boot Menu in my 'standard' rEFInd+ options (I also launch it
> from netboot.xyz). Oddly, it doesn't see CachyOS, but it does see
> GhostBSD (not illumos ZFS, though), and it only boots Linux kernels.
> While I think it suffers from feature creep, I like it in concept. So
> I've stripped down a FreeBSD kernel to basically give me beadm and the
> ability to boot multiple illumos/FreeBSD installs on the same drive.
> What I'm driving at is using the illumos kernel (as an EFI app on
> FAT32), because the fastboot facility (not to be confused with FreeBSD
> fastboot) allows me to access bootable partitions after the NVMe driver
> loads; this lets me run NVMe in a PCIe slot on older systems without
> firmware NVMe support, BIOS or UEFI, and illumos fastboot can load any
> multiboot2-compliant kernel... supposedly.
>
> -Eric
>
>
>
> ---- On Wed, 20 Aug 2025 09:33:19 -0700 *Atiq Rahman <atiqcx@gmail.com
> <atiqcx@gmail.com>>* wrote ---
>
> > That's the gist of multiboot on UEFI, more work if you want more than
> one illumos or freebsd partition, but they will coexist peacefully.
>
> Have any of you ever tried the ZFS Boot Menu? I heard good things about
> it, thinking of trying it.
>
> Best!
>
> Atiq
>
>
> On Thu, Aug 14, 2025 at 8:25 PM Eric J Bowman <mellowmutt@zoho.com> wrote:
>
>
> >
> > parted > mkpart illumos 767GB 100%
> >
> > and then set the partition type to solaris launching,
> >
> > $ sudo gdisk /dev/nvme0n1
> >
> > (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
> >
>
>
> The "Solaris Root" partition type is relevant to UFS, not ZFS. On UEFI,
> set s0/p1 to be an ESP with the "boot, hidden, esp" flags and format it
> FAT32; your "Solaris Reserved" should be s8/p9, 8 MiB, ending -1MiB from
> the end of the disk. Your ZFS partition should be set to ZFS for
> illumos, FreeBSD ZFS for FreeBSD. Being bootable is a property of the
> ZFS filesystem, not its partition.
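As a sketch of that last point (the dataset name below is an example): the bootable filesystem is selected through the pool's bootfs property rather than through the partition table.

```shell
# Mark which dataset the loader should boot, then confirm it.
zpool set bootfs=rpool/ROOT/openindiana rpool
zpool get bootfs rpool
```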
>
> /EFI
> /EFI/OpenIndiana/bootx64.efi (copy of loader.efi)
> /EFI/boot
>
> I put rEFInd Plus in /EFI/boot, with a backup of its bootx64.efi in case
> it gets overwritten.
>
> /EFI/boot/drivers/x64_zfs.efi
>
> This driver will allow rEFInd to stub-load loader64.efi from your ZFS
> partition on most firmware; some will still need
> /EFI/OpenIndiana/bootx64.efi. I don't know if the string "OpenIndiana" is
> registered; this one is:
>
> /EFI/FreeBSD
>
>
>
>
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T6c1d3abbcce146a4-M97d1ba05d6455d844e6f2324
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 6271 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* [discuss] Re: GPT Partitioning for Illumos/Solaris
2025-08-17 10:06 ` Toomas Soome via illumos-discuss
@ 2025-09-04 5:22 ` Atiq Rahman
0 siblings, 0 replies; 20+ messages in thread
From: Atiq Rahman @ 2025-09-04 5:22 UTC (permalink / raw)
To: OpenIndiana Developer mailing list
Cc: illumos-discuss, Toomas Soome, Eric J Bowman,
Discussion list for OpenIndiana, illumos-developer
[-- Attachment #1: Type: text/plain, Size: 3018 bytes --]
Hi Toomas,
Gotcha, the 8MB partition is still needed for historical reasons.
Best!
Atiq
On Sun, Aug 17, 2025 at 3:06 AM Toomas Soome via oi-dev <
oi-dev@openindiana.org> wrote:
>
>
> On 17. Aug 2025, at 04:43, Atiq Rahman <atiqcx@gmail.com> wrote:
>
> Hi Eric,
> > The "Solaris Root" partition type is relevant to UFS, not ZFS. On UEFI,
> set s0/p1 to be an ESP, "boot, hidden, esp" flags and format it fat32
>
> Yep, I have done that quite a while ago. Linux is booting alright with
> that.
>
> > your "Solaris Reserved" should be s8/p9, 8 MiB, -1MiB from end of disk
> That's for legacy / MBR partitions. Mine is GPT. I don't need that or
> any other partition for Solaris. I only need one single ZFS / illumos
> partition, and that's working so far.
>
>
> It is created automatically on GPT, and it is used to store a fabricated
> disk identifier in case the disk does not provide one itself. It
> originates from Solaris, and since illumos is a fork of Solaris, it is
> still there. An MBR+VTOC setup uses space from ‘alternates’ for the same
> purpose.
>
> rgds,
> toomas
>
>
> On Thu, Aug 14, 2025 at 8:25 PM Eric J Bowman <mellowmutt@zoho.com> wrote:
>
>> >
>> > parted > mkpart illumos 767GB 100%
>> >
>> > and then set the partition type to solaris launching,
>> >
>> > $ sudo gdisk /dev/nvme0n1
>> >
>> > (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
>> >
>>
>>
>> The "Solaris Root" partition type is relevant to UFS, not ZFS. On UEFI,
>> set s0/p1 to be an ESP with the "boot, hidden, esp" flags and format it
>> FAT32; your "Solaris Reserved" should be s8/p9, 8 MiB, ending -1MiB
>> from the end of the disk. Your ZFS partition should be set to ZFS for
>> illumos, FreeBSD ZFS for FreeBSD. Being bootable is a property of the
>> ZFS filesystem, not its partition.
>>
>> /EFI
>> /EFI/OpenIndiana/bootx64.efi (copy of loader.efi)
>> /EFI/boot
>>
>> I put rEFInd Plus in /EFI/boot, with a backup of its bootx64.efi in case
>> it gets overwritten.
>>
>> /EFI/boot/drivers/x64_zfs.efi
>>
>> This driver will allow rEFInd to stub-load loader64.efi from your ZFS
>> partition on most firmware; some will still need
>> /EFI/OpenIndiana/bootx64.efi. I don't know if the string "OpenIndiana"
>> is registered; this one is:
>>
>> /EFI/FreeBSD
>>
>> That's the gist of multiboot on UEFI, more work if you want more than one
>> illumos or freebsd partition, but they will coexist peacefully.
>>
> *illumos <https://illumos.topicbox.com/latest>* / illumos-discuss / see
> discussions <https://illumos.topicbox.com/groups/discuss> + participants
> <https://illumos.topicbox.com/groups/discuss/members> + delivery options
> <https://illumos.topicbox.com/groups/discuss/subscription>
>
>
------------------------------------------
illumos: illumos-discuss
Permalink: https://illumos.topicbox.com/groups/discuss/T6c1d3abbcce146a4-M271df913c0b6744ebc4af6b1
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
[-- Attachment #2: Type: text/html, Size: 5804 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2025-09-04 5:23 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
[not found] <CABC65rMtCy52d2a9ZtY=sa9EyzOftABhOqc0j2RZM97vOpYv8w@mail.gmail.com>
[not found] ` <4bh0G92PCRzm174G@004.mia.mailroute.net>
[not found] ` <CABC65rOmBY6YmMSB=HQdXq=TDN7d1XR+K8m9-__RQPBzhJZeMQ@mail.gmail.com>
[not found] ` <4bhK1s1yDDzlgqV7@003.mia.mailroute.net>
[not found] ` <CABC65rOLNb=THkwWaA+GvghCt5jUbVvcCWJTGZpsWxi_5qCnKg@mail.gmail.com>
[not found] ` <CABC65rP8Um6YECDWbVQKFGuqvu5UaMRxzn_-VQEoM-Aq8x5t=g@mail.gmail.com>
[not found] ` <CABC65rNbiPwBosCJ-Pm2reMGW1OB1enr1=4pso1fyA_xTe8now@mail.gmail.com>
2025-08-13 2:17 ` [discuss] Fwd: [OpenIndiana-discuss] Installation on a partition (UEFI System) Atiq Rahman
2025-08-13 6:29 ` Stephan Althaus via illumos-discuss
2025-08-13 7:17 ` [discuss] " Toomas Soome via illumos-discuss
2025-08-14 0:53 ` [discuss] zpool create on specified GPT partition Atiq Rahman
2025-08-14 1:23 ` Joshua M. Clulow via illumos-discuss
2025-08-14 7:43 ` [OpenIndiana-discuss] " Atiq Rahman
2025-08-14 8:31 ` Toomas Soome via illumos-discuss
2025-08-16 5:50 ` [oi-dev] " Atiq Rahman
[not found] ` <CABC65rOztb9dAbdpC7xcWhun6Le9uO5NyNvSgkhr7kypNdMXBQ@mail.gmail.com>
[not found] ` <0ED5B354-9397-45D7-8B28-52D35D857BFB@me.com>
[not found] ` <CABC65rNp6CkH8ESN+Cru8k_Q+6OHYVqQ8j1eTYM=o-9kzTBooA@mail.gmail.com>
[not found] ` <CB99847C-9482-47CE-8F37-025DF0EB3B99@me.com>
[not found] ` <CABC65rPQkQNSrBsD9BwPSorFzgiJ-GqNLks+Bz09OuaHkstHGA@mail.gmail.com>
[not found] ` <2C6FD009-8737-489C-8B30-07BEA38832B3@me.com>
2025-08-19 21:40 ` [discuss] Re: [oi-dev] [developer] Install into existing zfs pool on a " Atiq Rahman
2025-08-20 16:26 ` Atiq Rahman
2025-08-20 18:30 ` John D Groenveld via illumos-discuss
2025-08-20 18:38 ` Atiq Rahman
2025-08-20 18:52 ` John D Groenveld via illumos-discuss
2025-08-21 7:01 ` [discuss] " Toomas Soome via illumos-discuss
[not found] ` <198abc29f87.10236f39f26274.6456493104881031520@zoho.com>
2025-08-17 1:43 ` [discuss] GPT Partitioning for Illumos/Solaris Atiq Rahman
2025-08-17 10:06 ` Toomas Soome via illumos-discuss
2025-09-04 5:22 ` [discuss] " Atiq Rahman
2025-08-20 16:33 ` [discuss] " Atiq Rahman
2025-08-20 16:41 ` [discuss] " bronkoo via illumos-discuss
[not found] ` <198d980ae49.1166dae1613934.6527363511160319029@zoho.com>
2025-08-27 21:21 ` Atiq Rahman
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).