* [9fans] linux vm appliance
@ 2025-06-13 17:14 ron minnich
2025-06-13 21:55 ` David Leimbach via 9fans
From: ron minnich @ 2025-06-13 17:14 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
In my IWP9 talk I mentioned the linux vm appliance we built for akaros when
I was at google, and I've got a very simple version working on 9front vmx
now, tested on my t420.
It needs a kernel, and a u-root (github.com/u-root/u-root) initramfs, with
my sidecore command (github.com/u-root/sidecore).
sidecore is a modified cpu client which provides to the cpu server, over 9p
or nfs, a union of the client namespace (as in cpu) and a cpio archive.
The cpio is typically a flattened docker container, created using a tool I
wrote in github.com/u-root/sidecore-images.
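For the curious, the flattening amounts to something like the following;
this is a hand-rolled sketch with a made-up image name, not the actual
sidecore-images invocation:
# flatten a container's layers into one tree, then pack it as a cpio
docker create --name flat my-py3-numpy-image
mkdir rootfs
docker export flat | tar xf - -C rootfs
(cd rootfs && find . | cpio -o -H newc > ../amd64-py3-numpi@latest.cpio)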
I did all this work while at Samsung, so we could have cpu on Windows.
Windows does not have things linux wants, like symlinks, device specials,
and so on; sidecore provides them via the cpio file.
So, to make this work, you need a kernel, an initramfs (usually compiled
into the kernel), and the sidecore command, which runs on Plan 9. I can
provide more instructions to interested parties, but overall, it's pretty
easy. What it lacks is polish.
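Roughly, building the guest pieces looks something like this. A sketch
only: I'm assuming the guest's cpud comes from github.com/u-root/cpu/cmds/cpud
(it may live in the sidecore repo instead), and exact flags may differ.
# build a u-root initramfs carrying the core tools plus cpud
go install github.com/u-root/u-root@latest
u-root -o initramfs.cpio core github.com/u-root/cpu/cmds/cpud
# then embed it in the kernel, e.g. CONFIG_INITRAMFS_SOURCE=initramfs.cpio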
There is an rc script, appliance, which does this:
vmx -c /fd/0,/fd/1 -M 1024M -n ether1 /usr/glenda/plan9cpulinux
Once that starts, in the guest, you need to do something like this:
ip addr add 192.168.0.187/24 dev eth0
ip link set dev eth0 up
cpud
Then the appliance is ready for use.
On the plan 9 side, there is a command, linux, which looks like this:
ramfs -S sidecore
mount -ac /srv/sidecore /
mkdir -p /home/glenda
bind /usr/glenda /home/glenda
SIDECORE_ARCH=amd64
SIDECORE_DISTRO=py3-numpi
SIDECORE_KEYFILE=/usr/glenda/.ssh/cpu_rsa
HOME=/home/glenda
PWD=/home/glenda
sidecore -sp 17010 '-9p=false' '-nfs=true' 192.168.0.187 $*
The ramfs setup is so that our home is /home/glenda, which linux distros
seem to want.
The various environment variables direct sidecore about where to get its
key, the architecture of the container, and the distro desired.
sidecore looks for a file of the form:
$home/sidecore-images/$(SIDECORE_ARCH)-$(SIDECORE_DISTRO)@$(SIDECORE_VERSION).cpio
So, a few things: this works across architectures, you can have any distro
you want, and you can have different versions; the default is latest.
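With the variables set as above, and no SIDECORE_VERSION, that presumably
resolves to:
$home/sidecore-images/amd64-py3-numpi@latest.cpio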
I've used this from osx, linux, and windows to risc-v, and now it works
from plan 9 to a vmx guest. Note, also, that the guest can be freebsd.
Now we get to the kind-of-crude part; sorry about this, but the plan 9
support is a little unbaked.
Part of the issue is ... ptys. Anyway ...
I run the rc script:
linux
# this gets you to a non-interactive shell with no prompt. Sorry.
# all the shells stat fd 0 and, if it is not a tty, assume they are
# non-interactive.
# the golang cpu sets up the namespace in /tmp/cpu, not /mnt/term.
# Step 1: get device mounted into /tmp/cpu
mount --bind /dev /tmp/cpu/dev
# Step 2: chroot into the namespace provided by the sidecore command
chroot /tmp/cpu bash -i
# Step 3: run python3
root@(none):/# python3 -c 'print("Hello!")'
python3 -c 'print("Hello!")'
# What is this stuff? I don't know.
2025/06/13 07:11:38 [ERROR] failing create to indicate lack of support for
'exclusive' mode.
2025/06/13 07:11:38 [ERROR] Error Creating: permission denied
2025/06/13 07:11:38 [ERROR] failing create to indicate lack of support for
'exclusive' mode.
2025/06/13 07:11:38 [ERROR] Error Creating: permission denied
2025/06/13 07:11:38 [ERROR] failing create to indicate lack of support for
'exclusive' mode.
2025/06/13 07:11:38 [ERROR] Error Creating: permission denied
# python output
Hello!
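The no-prompt business is just that fd 0 check; in shell terms, it is the
equivalent of:
# sketch of the interactivity test shells do at startup
if [ -t 0 ]; then
    echo 'fd 0 is a tty: interactive, print a prompt'
else
    echo 'fd 0 is not a tty: assume non-interactive'
fi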
I have a cpio container image that provides python3 and numpy.
The big picture here: you can have linux programs easily. It should be as
easy to run these programs in a vm as it is to run any command. You can,
e.g.,
du -a | linux wc # but why? :-)
and have that work.
My next step is to finish up the "VMs via libthread" work, then get back to
smp guests, but I thought this initial work might be of interest.
BTW, the golang root file system I use has been used in google and
bytedance firmware for 5+ years, and is load-bearing on a few million data
center nodes. The golang cpu/cpud is an integral part of google and ARM
confidential compute stacks. It's pretty real, and now it's on plan 9 too.
* Re: [9fans] linux vm appliance
2025-06-13 17:14 [9fans] linux vm appliance ron minnich
@ 2025-06-13 21:55 ` David Leimbach via 9fans
2025-06-14 2:35 ` ori
From: David Leimbach via 9fans @ 2025-06-13 21:55 UTC (permalink / raw)
To: 9fans; +Cc: Fans of the OS Plan 9 from Bell Labs
[-- Attachment #1: Type: text/html, Size: 6578 bytes --]
* Re: [9fans] linux vm appliance
2025-06-13 21:55 ` David Leimbach via 9fans
@ 2025-06-14 2:35 ` ori
From: ori @ 2025-06-14 2:35 UTC (permalink / raw)
To: 9fans; +Cc: 9fans
Equis is quite out of date and, AFAIK, fairly buggy, at least on amd64.
We got it building several years ago at a 9front mini-hackathon, but
IMO, running X on 9front is a dead end; the API surface area used is large,
and it's annoying to update. Better to keep it in its native environment
and import the draw device over the network.
Quoth David Leimbach via 9fans <9fans@9fans.net>:
> Very interesting.
> If we had an X server (we do, don’t we?) can I run a web browser through
> this?
> On Jun 13, 2025, at 10:59 AM, ron minnich <rminnich@gmail.com> wrote:
> [...]
* Re: [9fans] linux vm appliance
2025-06-15 22:52 ` ori
@ 2025-06-16 0:56 ` ron minnich
From: ron minnich @ 2025-06-16 0:56 UTC (permalink / raw)
To: 9fans
If the existing stuff can support an x11 server in the guest, then we are
good to go. I did not know it could.
On Sun, Jun 15, 2025 at 16:47 <ori@eigenstate.org> wrote:
> Hm, is there anything this gets us over the existing vmx video
> output options?
>
> The things I really want, which just using a virtual video card
> won't get us:
>
> - rootless, resizable windows
> - clipboard integration
> - reasonable performance
> - input following kbdfs settings
>
>
> [...]
* Re: [9fans] linux vm appliance
2025-06-15 19:16 ` ron minnich
@ 2025-06-15 22:52 ` ori
2025-06-16 0:56 ` ron minnich
From: ori @ 2025-06-15 22:52 UTC (permalink / raw)
To: 9fans
Hm, is there anything this gets us over the existing vmx video
output options?
The things I really want, which just using a virtual video card
won't get us:
- rootless, resizable windows
- clipboard integration
- reasonable performance
- input following kbdfs settings
Quoth ron minnich <rminnich@gmail.com>:
> This might be a helpful jumping-off point to see how kvmtool implemented
> guest graphics. It is pretty simple, but it was able to support windows.
>
> https://github.com/kvmtool/kvmtool/blob/master/ui/sdl.c
>
> [...]
* Re: [9fans] linux vm appliance
2025-06-15 14:12 ` Stuart Morrow
@ 2025-06-15 19:16 ` ron minnich
2025-06-15 22:52 ` ori
From: ron minnich @ 2025-06-15 19:16 UTC (permalink / raw)
To: 9fans
This might be a helpful jumping-off point to see how kvmtool implemented
guest graphics. It is pretty simple, but it was able to support windows.
https://github.com/kvmtool/kvmtool/blob/master/ui/sdl.c
On Sun, Jun 15, 2025 at 7:42 AM Stuart Morrow <morrow.stuart@gmail.com>
wrote:
> > I was actually thinking about writing one. It shouldn't be too hard, but
> then I remembered how much effort it'll be to support all the chaos.
>
> ssh -X and an "X server" that's just enough to bridge xclip to /dev/snarf
* Re: [9fans] linux vm appliance
[not found] ` <1fd01c98-2e8e-407a-ae63-955c9c4c2876@sirjofri.de>
@ 2025-06-15 14:12 ` Stuart Morrow
2025-06-15 19:16 ` ron minnich
From: Stuart Morrow @ 2025-06-15 14:12 UTC (permalink / raw)
To: 9fans
> I was actually thinking about writing one. It shouldn't be too hard, but then I remembered how much effort it'll be to support all the chaos.
ssh -X and an "X server" that's just enough to bridge xclip to /dev/snarf
end of thread
Thread overview: 7+ messages
2025-06-13 17:14 [9fans] linux vm appliance ron minnich
2025-06-13 21:55 ` David Leimbach via 9fans
2025-06-14 2:35 ` ori
[not found] <41421D638843A3043EB76519164DFE71@eigenstate.org>
[not found] ` <1fd01c98-2e8e-407a-ae63-955c9c4c2876@sirjofri.de>
2025-06-15 14:12 ` Stuart Morrow
2025-06-15 19:16 ` ron minnich
2025-06-15 22:52 ` ori
2025-06-16 0:56 ` ron minnich