* [9fans] Markdown/Markdeep in Edwood
@ 2026-02-07 23:00 Paul Lalonde
2026-02-08 5:08 ` [9fans] " penny
` (3 more replies)
0 siblings, 4 replies; 17+ messages in thread
From: Paul Lalonde @ 2026-02-07 23:00 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
I posted a week ago about the .md support and rich text I was getting
an LLM to build for me.
I've updated it significantly.
I still don't recommend reading the source, but it's now my
daily driver and I do most of my markdown work in it. It
supports editing reasonably well, acme-idiomatically, and lets you
type markdown annotations directly in place, interpreting them as they
become usable. And when the rendering gets too bad, a quick B2 on the Markdeep tag
pops you into the regular text win that's backing the rich text.
The rich text frame itself could probably be repurposed for other uses
(LSP comes to mind) as it merely tracks spans and does layout from
there, without knowledge of markdown itself.
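A markdown-agnostic frame of that kind reduces to tracking styled spans over plain text. Here is a minimal sketch in Go (edwood's language); every type and field name below is invented for illustration and not taken from edwood's actual source:

```go
package main

import "fmt"

// Style is an opaque style id (font, size, bold) resolved at layout time.
type Style int

// Span marks the half-open rune range [Start, End) with a style.
// All names here are hypothetical, not edwood's real types.
type Span struct {
	Start, End int
	Style      Style
}

// Frame holds plain text plus an ordered span list; layout walks
// the spans without any knowledge of markdown.
type Frame struct {
	Text  []rune
	Spans []Span
}

// StyleAt returns the style covering rune offset i, or 0 (default).
func (f *Frame) StyleAt(i int) Style {
	for _, s := range f.Spans {
		if i >= s.Start && i < s.End {
			return s.Style
		}
	}
	return 0
}

func main() {
	f := &Frame{Text: []rune("# Heading\nbody text")}
	f.Spans = []Span{{Start: 0, End: 9, Style: 2}} // heading style over "# Heading"
	fmt.Println(f.StyleAt(4), f.StyleAt(12))       // inside heading, then default body
}
```

An LSP client could feed the same structure: diagnostics become spans, and the frame never needs to know where they came from.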
I'll note that it uses the font you currently have, building sizes,
bold and italics using crap naming heuristics. I usually invoke my
edwood with '-f /mnt/font/GoRegular/16a/font -F
/mnt/font/GoMono/16a/font' which gives me lots. If I invoke without, I
still get decent rendering, but the fonts and sizes are more limited.
This is now in the mainline branch of my git repo:
https://github.com/paul-lalonde/edwood
Issues welcomed.
Paul
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-Mab01fbcad0229706892957ec
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
^ permalink raw reply [flat|nested] 17+ messages in thread
* [9fans] Re: Markdown/Markdeep in Edwood
2026-02-07 23:00 [9fans] Markdown/Markdeep in Edwood Paul Lalonde
@ 2026-02-08 5:08 ` penny
2026-02-08 9:16 ` Clout Tolstoy
2026-02-08 10:46 ` [9fans] " Ori Bernstein
` (2 subsequent siblings)
3 siblings, 1 reply; 17+ messages in thread
From: penny @ 2026-02-08 5:08 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 376 bytes --]
I just want to register a hopefully gentle, but public, acknowledgment: this is deeply uninteresting, and incredibly embarrassing for you and for us.
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-Md752ecd3ef37c57356f8dcc1
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Re: Markdown/Markdeep in Edwood
2026-02-08 5:08 ` [9fans] " penny
@ 2026-02-08 9:16 ` Clout Tolstoy
2026-02-08 11:06 ` hiro
0 siblings, 1 reply; 17+ messages in thread
From: Clout Tolstoy @ 2026-02-08 9:16 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 1191 bytes --]
If it comes down to how many rainforests an idea costs, I do believe
that weight should be accounted for. ...
What's the caloric input for a thought, and how does that scale for tokens or
consumption in relation to electricity?
I looked at the code, and could tell it was AI-generated because of the comment on
every new block/chunk. Something everyone could learn from.
On Sun, Feb 8, 2026, 12:21 AM <penny@limitedideas.org> wrote:
> I just want to register a hopefully gentle, but public, acknowledgment:
> this is deeply uninteresting, and incredibly embarrassing for you and for
> us.
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-Md293098cd5cedc131ee27399
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-07 23:00 [9fans] Markdown/Markdeep in Edwood Paul Lalonde
2026-02-08 5:08 ` [9fans] " penny
@ 2026-02-08 10:46 ` Ori Bernstein
2026-02-09 11:02 ` tlaronde
2026-02-16 21:10 ` [9fans] Markdown/Markdeep in Edwood Edouard Klein
2026-02-19 21:01 ` Shawn Rutledge
3 siblings, 1 reply; 17+ messages in thread
From: Ori Bernstein @ 2026-02-08 10:46 UTC (permalink / raw)
To: 9fans; +Cc: Paul Lalonde
The future is bright, because we're setting
it on fire.
Can we please take a step back, think about
what we value, and build towards a future
where we care about what we build, and its
long term results on the planet and society?
Paul, do you really want to live in the world
the musky smelling AI cultists are trying to
build?
Using these tools is a statement of values.
What do you value?
On Sat, 7 Feb 2026 15:00:55 -0800
Paul Lalonde <paul.a.lalonde@gmail.com> wrote:
> I posted a week ago about the .md support and rich text I was getting
> an LLM to build for me.
>
> I've updated it significantly.
>
> I still don't recommend reading the source, but it's now my
> daily driver and I do most of my markdown work in it. It
> supports editing reasonably well, acme-idiomatically, and lets you
> type markdown annotations directly in place, interpreting them as they
> become usable. And when the rendering gets too bad, a quick B2 on the Markdeep tag
> pops you into the regular text win that's backing the rich text.
>
> The rich text frame itself could probably be repurposed for other uses
> (LSP comes to mind) as it merely tracks spans and does layout from
> there, without knowledge of markdown itself.
>
> I'll note that it uses the font you currently have, building sizes,
> bold and italics using crap naming heuristics. I usually invoke my
> edwood with '-f /mnt/font/GoRegular/16a/font -F
> /mnt/font/GoMono/16a/font' which gives me lots. If I invoke without, I
> still get decent rendering, but the fonts and sizes are more limited.
>
> This is now in the mainline branch of my git repo:
> https://github.com/paul-lalonde/edwood
>
> Issues welcomed.
>
> Paul
--
Ori Bernstein <ori@eigenstate.org>
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-Mc13d17601fc365414d208851
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Re: Markdown/Markdeep in Edwood
2026-02-08 9:16 ` Clout Tolstoy
@ 2026-02-08 11:06 ` hiro
0 siblings, 0 replies; 17+ messages in thread
From: hiro @ 2026-02-08 11:06 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 1617 bytes --]
the possible negative consequences of shipping these files out there for
the whole world to ingest, without proper warning in the filename and file
headers that the files are automatically generated, are too high.
you have not even updated the README to warn people that the commits are
full of slop now.
On Sun, Feb 8, 2026 at 11:47 AM Clout Tolstoy <tolstoyclout@gmail.com>
wrote:
> If it comes down to how many rainforests an idea costs, I do believe
> that weight should be accounted for. ...
>
> What's the caloric input for a thought, and how does that scale for tokens or
> consumption in relation to electricity?
>
> I looked at the code, and could tell it was AI-generated because of the comment on
> every new block/chunk. Something everyone could learn from.
>
>
>
>
> On Sun, Feb 8, 2026, 12:21 AM <penny@limitedideas.org> wrote:
>
>> I just want to register a hopefully gentle, but public, acknowledgment:
>> this is deeply uninteresting, and incredibly embarrassing for you and for
>> us.
>>
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-M9c3cfe0a0d014ce64f36d97e
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-08 10:46 ` [9fans] " Ori Bernstein
@ 2026-02-09 11:02 ` tlaronde
2026-02-09 17:22 ` [9fans] "Maintenir" (was: Markdown/Markdeep in Edwood) sirjofri via 9fans
0 siblings, 1 reply; 17+ messages in thread
From: tlaronde @ 2026-02-09 11:02 UTC (permalink / raw)
To: 9fans
On Sun, Feb 08, 2026 at 05:46:53AM -0500, Ori Bernstein wrote:
> The future is bright, because we're setting
> it on fire.
>
> Can we please take a step back, think about
> what we value, and build towards a future
> where we care about what we build, and its
> long term results on the planet and society?
>
> Paul, do you really want to live in the world
> the musky smelling AI cultists are trying to
> build?
>
> Using these tools is a statement of values.
> What do you value?
>
I have a perhaps slightly aside worry (or a complementary worry): with
an increasing number of people using A.I.-generated code, how much
software will be compromised by the deliberate addition (by the ones
producing the A.I.) of backdoors in the code? And since the generated
code is like the code generated by GUI interface builders (lengthy,
convoluted, garrulous), how many people will spend time reviewing
what is produced? (My guess: either the ones using the "utility"
lack the skill to write another version---this is definitely
not the case with Paul---so they won't be able to review the code,
or they tried to save time on a side project they had no
time to dedicate to deepening, so they won't spend more time
reviewing the code than they would have spent writing it
ex nihilo. So A.I.-generated code will never be reviewed.)
If I extracted the needle from the haystack of TeXlive, it is _also_
because I don't trust code so massive that verification is simply
out of reach for a single individual. "Maintenir": be able to hold in
one hand. And for software, I think it has to be handled with an iron
fist in a glove of lead---the lead is heavy, so you are able to strike
heavier, and since it is malleable you don't have to restrain yourself
worrying about causing damage to the iron fist.
> On Sat, 7 Feb 2026 15:00:55 -0800
> Paul Lalonde <paul.a.lalonde@gmail.com> wrote:
>
> > I posted a week ago about the .md support and rich text I was getting
> > an LLM to build for me.
> >
> > I've updated it significantly.
> >
> > I still don't recommend reading the source, but it's now my
> > daily driver and I do most of my markdown work in it. It
> > supports editing reasonably well, acme-idiomatically, and lets you
> > type markdown annotations directly in place, interpreting them as they
> > become usable. And when the rendering gets too bad, a quick B2 on the Markdeep tag
> > pops you into the regular text win that's backing the rich text.
> >
> > The rich text frame itself could probably be repurposed for other uses
> > (LSP comes to mind) as it merely tracks spans and does layout from
> > there, without knowledge of markdown itself.
> >
> > I'll note that it uses the font you currently have, building sizes,
> > bold and italics using crap naming heuristics. I usually invoke my
> > edwood with '-f /mnt/font/GoRegular/16a/font -F
> > /mnt/font/GoMono/16a/font' which gives me lots. If I invoke without, I
> > still get decent rendering, but the fonts and sizes are more limited.
> >
> > This is now in the mainline branch of my git repo:
> > https://github.com/paul-lalonde/edwood
> >
> > Issues welcomed.
> >
> > Paul
>
>
> --
> Ori Bernstein <ori@eigenstate.org>
--
Thierry Laronde <tlaronde +AT+ kergis +dot+ com>
http://www.kergis.com/
http://kertex.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-M896307c584f46f27d7590bf8
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] "Maintenir" (was: Markdown/Markdeep in Edwood)
2026-02-09 11:02 ` tlaronde
@ 2026-02-09 17:22 ` sirjofri via 9fans
0 siblings, 0 replies; 17+ messages in thread
From: sirjofri via 9fans @ 2026-02-09 17:22 UTC (permalink / raw)
To: 9fans
Hi,
Maintenir is a really interesting word, and I never really thought about this meaning of it: being able/willing to hold something in your hand. It reflects pretty well one of my biggest criticisms of open source software.
Open source software is often described as "the software you can trust," because anybody can look at it and influence its development. However, in the sense of "maintenir," is that really true? In my opinion, not necessarily.
For small, sane software like Plan 9 and lots of other projects it is true. Anybody can take a look at it and, more importantly, hold it in their hand. For large, complex, often corporate software, I don't think it is. If you need a huge team of a few hundred paid full-time people to understand and maintain the software, I don't think it's safe to assume that software can be trusted just because it is open source.
Sure, it enables a lot of trust, but if nobody who's independent can[1] "hold the software in their hand," you have to trust the company who created the software and invests lots of dollars into maintenance. So who benefits from open source software like that?
That same trust and maintenance question can be applied to AI generated software, which is the topic that we're coming from. I just wanted to contribute a few thoughts about the fact that we can raise the same questions for non-AI code.
If you ask which software I mean: there are multiple examples. Without knowing the specifics of the code bases, I'm thinking of things like Chromium (the web engine) or Blender. There are probably many more, but these two come to mind immediately: Chromium as a (mostly) corporate product of Google (Android AOSP also comes to mind), and Blender as led by its foundation, which serves no corporate benefit, though I doubt it would be maintainable like that without the foundation.
These are just a few thoughts that fit into the ongoing conversation. I don't want this to become too offtopic though. I love how maintainable Plan 9 is in that sense, and how easy it is to deep dive into kernel code and make impactful adjustments without sacrificing your whole life to it. This is what I think can enable real open source trust.
sirjofri
[1] or is able/willing to.
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3e4b119380795905-Mafb8c8728b1052172023313b
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-07 23:00 [9fans] Markdown/Markdeep in Edwood Paul Lalonde
2026-02-08 5:08 ` [9fans] " penny
2026-02-08 10:46 ` [9fans] " Ori Bernstein
@ 2026-02-16 21:10 ` Edouard Klein
2026-02-19 18:23 ` hiro
` (3 more replies)
2026-02-19 21:01 ` Shawn Rutledge
3 siblings, 4 replies; 17+ messages in thread
From: Edouard Klein @ 2026-02-16 21:10 UTC (permalink / raw)
To: 9fans
Hi all,
I, for one, think this initiative is interesting. When I tried acme, the
lack of syntax coloring was a big hindrance, and I'm probably not the only
one. This could lead to more adoption.
Now, LLM companies are mostly evil, and LLM-generated code is mostly
shit, but with proper quality gates it can be OK. These tools are here
to stay, and I think that Paul's method of seeing the *.md files as the
source and the LLM process as a kind of preprocessor is the best way to
go about it.
I know that this community values the craft and prides itself on code
quality. I use it as an example to strive for when I teach computer
science, systems design, or programming. But compilers were once seen as
LLMs are seen now (minus the copyright infringement and the ecological
cost). Have you seen the output of the Go compiler for Hello World? Yet
Go is an OK language for this community.
There are ways to run open models on one's own hardware, and when doing
so I use less electricity than I use to heat my oven when I cook. We
can avoid the ethical pitfalls and learn how to put these tools to good use.
I think Paul's approach of forking, being upfront about the LLM use, and
giving the prompts is a good standard to set for experimenting with these tools.
I see in your repo that this is going to be submitted to IWP9 :) I'm
sorry I won't be able to be there; the discussions are going to be
lively!
Looking forward to seeing where this is going.
Cheers,
Edouard.
Paul Lalonde <paul.a.lalonde@gmail.com> writes:
> I posted a week ago about the .md support and rich text I was getting
> an LLM to build for me.
>
> I've updated it significantly.
>
> I still don't recommend reading the source, but it's now my
> daily driver and I do most of my markdown work in it. It
> supports editing reasonably well, acme-idiomatically, and lets you
> type markdown annotations directly in place, interpreting them as they
> become usable. And when the rendering gets too bad, a quick B2 on the Markdeep tag
> pops you into the regular text win that's backing the rich text.
>
> The rich text frame itself could probably be repurposed for other uses
> (LSP comes to mind) as it merely tracks spans and does layout from
> there, without knowledge of markdown itself.
>
> I'll note that it uses the font you currently have, building sizes,
> bold and italics using crap naming heuristics. I usually invoke my
> edwood with '-f /mnt/font/GoRegular/16a/font -F
> /mnt/font/GoMono/16a/font' which gives me lots. If I invoke without, I
> still get decent rendering, but the fonts and sizes are more limited.
>
> This is now in the mainline branch of my git repo:
> https://github.com/paul-lalonde/edwood
>
> Issues welcomed.
>
> Paul
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-M7b9f2bb01074fca40d780a9a
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-16 21:10 ` [9fans] Markdown/Markdeep in Edwood Edouard Klein
@ 2026-02-19 18:23 ` hiro
2026-02-19 18:58 ` sirjofri via 9fans
` (2 subsequent siblings)
3 siblings, 0 replies; 17+ messages in thread
From: hiro @ 2026-02-19 18:23 UTC (permalink / raw)
To: 9fans
"I think that Paul's method of seeing the *.md files as the
source and the LLM process as a kind of preprocessor is the best way to
go about it."
agreed, adding the markdown makes it better than average.
the biggest problem was that without clear warnings somebody might be
tricked into reading that output.
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-M6f07194a86e60952986a06ce
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-16 21:10 ` [9fans] Markdown/Markdeep in Edwood Edouard Klein
2026-02-19 18:23 ` hiro
@ 2026-02-19 18:58 ` sirjofri via 9fans
2026-02-20 0:39 ` witheld
2026-02-20 3:06 ` red
3 siblings, 0 replies; 17+ messages in thread
From: sirjofri via 9fans @ 2026-02-19 18:58 UTC (permalink / raw)
To: 9fans
Hi,
my personal two cents about AI, LLM, and their place in our community.
19.02.2026 18:46:46 Edouard Klein <edou@rdklein.fr>:
> I, for one, think this initiative is interesting. When I tried acme, the
> lack of syntax coloring was a big hindrance, I'm probably not the only
> one. This could lead to more adoption.
Actually, its absence taught me to READ code, and not focus too much on writing. But that's just a personal story; it can be different for anyone else.
I personally prefer some slight context-aware highlighting nowadays, e.g. unused variables/includes, the symbol I'm currently hovering over, or parentheses/brackets/braces, to name a few. That's why my Rider IDE mostly looks like acme (thanks to whoever built the theme!).
> Now, LLM companies are mostly evil, and LLM-generated code is mostly
> shit, but with proper quality gates it can be OK. These tools are here
> to stay, and I think that Paul's method of seeing the *.md files as the
> source and the LLM process as a kind of preprocessor is the best way to
> go about it.
In the end, the result counts. And if someone wants to invest the time to clean up after the parrot, sure, why not. The technology is interesting and sometimes really helpful.
I think the ongoing experiment Paul does is valid, also under the premise that it'll be cleaned up before any integration attempt, but that's up to the maintainers then.
In that regard, I think it would be unethical to deliver AI slop for integration, just like any (non-AI) slop. And it's unethical to expect maintainers to look at AI generated code.
> I know that this community values the craft and prides itself on code
> quality. I use it as an example to strive for when I teach computer
> science, systems design, or programming, but compilers were once seen as
> LLMs are seen now (minus the copyright infringement and the ecological
> cost). Have you seen the output of the Go compiler for Hello World? Yet
> Go is an OK language for this community.
Personally, I never had a good experience with go, the package. The language may be fine, I don't really know, but any software I wanted to install fetched many dependencies and eventually corrupted my filesystem. That was on cwfs, mostly, and was a few years ago. Probably just not a good first experience.
> There are ways to run open models on one's own hardware, and when doing
> so I use less electricity than I use to heat up my oven when I cook. We
> can avoid the ethical pitfalls, and learn how to put them to good use.
That's one way to put it. However, in my current experiments, my computer isn't powerful enough for really useful models at appropriate speeds. My GPU has "just" 8GB memory, and the computer has "just" 32GB. It's ok for batch processing, but for live it's just too slow, at least in my current setup.
That being said, it's easy to say "just run open models locally," but at current hardware prices the cost is more than just the power consumption.
> I think Paul's approach of forking, being forward with the LLM use, and
> giving the prompts is a good standard to set to experiment with these.
Yes, definitely. Also sharing it with the community in its state, including the disclaimer "don't look at it, it's AI crap".
> I see in your repo that this is going to be submitted to IWP9 :) I'm
> sorry I won't be able to be there, the discussions are going to be
> lively !
I just stated my own thoughts, ignoring ethics and copyright. What you said about that is true, I just wanted to focus on the technical side.
I also do think that AI research shouldn't stop at "the ninth wall," but we should take the time to think about it from different angles, including us as humans. I think we have the luxury to go slowly, we don't have to rush things here. In my opinion, lots of computing is rushed nowadays anyways, and some things shouldn't have developed that fast.
Thanks for reading so far,
Have fun!
sirjofri
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-Mdbe454d8fa4deb5bdbe76eea
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-07 23:00 [9fans] Markdown/Markdeep in Edwood Paul Lalonde
` (2 preceding siblings ...)
2026-02-16 21:10 ` [9fans] Markdown/Markdeep in Edwood Edouard Klein
@ 2026-02-19 21:01 ` Shawn Rutledge
2026-02-22 17:36 ` Paul Lalonde
3 siblings, 1 reply; 17+ messages in thread
From: Shawn Rutledge @ 2026-02-19 21:01 UTC (permalink / raw)
To: 9fans
> On Feb 8, 2026, at 00:00, Paul Lalonde <paul.a.lalonde@gmail.com> wrote:
>
> I posted a week ago about the .md support and rich text I was getting
> an LLM to build for me.
>
> I've updated it significantly.
>
> I still don't recommend reading the source, but it's now my
> daily driver and I do most of my markdown work in it. It
> supports editing reasonably well, acme-idiomatically, and lets you
> type markdown annotations directly in place, interpreting them as they
> become usable. And when the rendering gets too bad, a quick B2 on the Markdeep tag
> pops you into the regular text win that's backing the rich text.
>
> The rich text frame itself could probably be repurposed for other uses
> (LSP comes to mind) as it merely tracks spans and does layout from
> there, without knowledge of markdown itself.
>
> I'll note that it uses the font you currently have, building sizes,
> bold and italics using crap naming heuristics. I usually invoke my
> edwood with '-f /mnt/font/GoRegular/16a/font -F
> /mnt/font/GoMono/16a/font' which gives me lots. If I invoke without, I
> still get decent rendering, but the fonts and sizes are more limited.
>
> This is now in the mainline branch of my git repo:
> https://github.com/paul-lalonde/edwood
>
> Issues welcomed.
I tried it out on plan9port on Linux a couple of days ago.
The spans idea is interesting: similar to what I was presenting informally at the last IWP9, but implemented differently and from the other end. I’ve got a data source: a stream of messages goes into a ring buffer as text plus spans, and the file server offers a text file and a spans file.
If you want spans, you have to open the spans file first and then the text file: then there is supposed to be a guarantee that the address of the first span (which is line number and column in my case) is in sync with the first line of the text file, and you can read forward from there indefinitely. The line number only increases from there on: but if you close both files and reopen them, the line numbering might start over again, because the ring buffer no longer contains all the text that you read before. That way line numbers don’t grow too big: even though the stream might be infinite, you probably won’t read it for that long.
It would be pretty easy for me to add to my “mtail” utility a feature to output text and spans into edwood at this point (as opposed to switching to graphics mode and drawing the text in the window). As long as it’s ok with me appending text and spans at the same time...
You’re following the SSV convention: it has serialization overhead, it’s not self-describing like CSV with a header, and not as potentially nice-looking as TSV. I’m not so fond of that convention, but at least it’s sort of human-readable. I want to work on standardizing binary formats that are identical in memory and in files, and perhaps often self-describing too (so that one tool can read them all), but I need to do more work before writing a paper on it. In the meantime I’ve used binary spans, 64 bits per span, divided bitwise into row, column and a numeric style that needs separate lookup (with a vague idea that ranges of lines could have different style tables). So they take little memory, but it’s a hack until I figure out the details of making it self-describing (which is the sort of thing I have done before, but it will be different this time).
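[The 64-bit packing described above can be sketched as follows; the field widths here (32-bit row, 20-bit column, 12-bit style) are an editor's guess for illustration, since the message doesn't give the actual split:]

```go
package main

import "fmt"

// Bitwise split of a 64-bit span: row | column | style id.
// The widths are assumed, not taken from the original code.
const (
	colBits   = 20
	styleBits = 12
)

func pack(row, col, style uint64) uint64 {
	return row<<(colBits+styleBits) | col<<styleBits | style
}

func unpack(s uint64) (row, col, style uint64) {
	return s >> (colBits + styleBits),
		(s >> styleBits) & (1<<colBits - 1),
		s & (1<<styleBits - 1)
}

func main() {
	s := pack(70000, 42, 7) // row 70000, column 42, style table entry 7
	r, c, st := unpack(s)
	fmt.Println(r, c, st) // 70000 42 7
}
```

The style field stays small because it is only an index into a separate lookup table, matching the "numeric style that needs separate lookup" idea.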
But why do you prefer to use rune index instead of row and column?
You have fewer chances of sync problems by making spans write-only, I suppose. But could there be a race condition, if the user is editing furiously (or tracking changes to a file that changes often) and an LSP or some parser process is marking spans at the same time? If you can avoid the race condition when writing two files at the same time, maybe you can offer the ability to read the spans too, and solve the race the same way?
I suspect this potential race condition is the reason people always used some means of multiplexing data and metadata (codes and escapes, SGML, markdown, …). You could store your rich text in two files, plain text and separate spans file, but does anybody ever do it? And this must be directly related to why rio and acme windows don’t have rich text: multiplexing markup or control codes always has its own issues (whereas on other systems every terminal emulates an ASR-33, VT-100 etc., and editors are built on top of browsers these days).
Likewise in memory you have the choice of keeping text in the span structs (so text is no longer contiguous) or paying the overhead of keeping a parallel set of spans in sync during editing in the main text buffer. To multiplex, or not...
Maybe a rope would work better than a gap buffer, so that you can associate spans with nodes in the tree, and compute offsets lazily? I was thinking of trying that at some point, maybe by hacking on an existing nice rope-based editor, since I don’t think it’s the easiest thing to sit down and write a correct implementation of from scratch. (Suggestions welcome. I found Xi; that’s in Rust.)
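[The rope idea above can be sketched minimally: internal nodes cache the rune count of their left subtree so indexing descends without scanning, and a per-leaf style shows one place spans could live. All names are invented for illustration; a real rope also needs balancing, concatenation, and splitting, which are the hard parts:]

```go
package main

import "fmt"

// Style is an opaque style id; illustrative only.
type Style int

// Rope is a minimal rope node: leaves carry text and a style,
// internal nodes cache the rune count of the left subtree so
// Index can descend without scanning the text.
type Rope struct {
	left, right *Rope
	weight      int    // runes in left subtree (internal nodes only)
	text        []rune // leaf payload
	style       Style
}

// Index returns the rune at offset i and the style of its leaf.
func (r *Rope) Index(i int) (rune, Style) {
	if r.left == nil { // leaf
		return r.text[i], r.style
	}
	if i < r.weight {
		return r.left.Index(i)
	}
	return r.right.Index(i - r.weight)
}

func main() {
	// "# Head" styled as a heading leaf, "body" as default text.
	r := &Rope{
		left:   &Rope{text: []rune("# Head"), style: 2},
		right:  &Rope{text: []rune("body")},
		weight: 6,
	}
	ch, st := r.Index(8) // offset 8 falls in the right leaf
	fmt.Printf("%c %d\n", ch, st)
}
```

Because offsets are computed on the way down, editing one leaf only invalidates weights on its path to the root, which is what makes the lazy-offset idea attractive.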
Another thing I want next is a text widget that has this span-formatting feature, that I can use inside larger GUIs (perhaps even acme-like GUIs), and that a window system can use for each window. So, you know, precedents matter. The structure of the spans file could get to be a convention before we are sure whether it’s the best we can do. And we should be rigorous about how the race condition can always be avoided.
When you used an LLM of course you got results much faster than I could get them by trying to first stop procrastinating and then refactor existing C code by hand, and keep it elegant. So how much of this knee-jerk reaction against it here is disgust or jealousy or feeling risk of being usurped, is something only each person knows. I know that the results I’ve gotten from LLMs so far are not as good as I can write myself, but it’s improving; so maybe getting good results is a matter of having real QC processes (at least write the tests first, which I don’t normally do), maybe write prompts that emphasize how important it is to keep the code small and in the same style as other examples (thus filling up the context window with examples), insist on reviewing every patch hunk-by-hunk (which LLMs tend to frustrate by rewriting too much), and so on. Maybe it’s a bit like supervising a junior programmer, except you don’t (yet) have the chance to change its training, only to create guardrails for it. Maybe continuous learning will be developed soon. So I don’t think it’s wise to rule out ever being able to seriously use them. (Thanks Edouard for breaking the ice)
I also try offline LLMs but run into limits with speed decreasing as the context window fills up.
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-M36df07cc8e3ea8b6ec22abf2
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-16 21:10 ` [9fans] Markdown/Markdeep in Edwood Edouard Klein
2026-02-19 18:23 ` hiro
2026-02-19 18:58 ` sirjofri via 9fans
@ 2026-02-20 0:39 ` witheld
2026-02-20 11:55 ` hiro
` (2 more replies)
2026-02-20 3:06 ` red
3 siblings, 3 replies; 17+ messages in thread
From: witheld @ 2026-02-20 0:39 UTC (permalink / raw)
To: 9fans
On 2026-02-16 16:10, Edouard Klein wrote:
> Hi all,
>
> I, for one, think this initiative is interesting. When I tried acme,
> the
> lack of syntax coloring was a big hindrance, I'm probably not the only
> one. This could lead to more adoption.
Oh true, adoption! My kingdom for adoption, let us embed a web runtime,
an extensions marketplace, and so on! Adoption being the goal, clearly
everyone involved in Plan 9 has been a moron until now.
> compilers were once seen as LLMs are seen now (minus the copyright
> infringement and the ecological cost). Have you seen the output of the
> Go
> compiler for Hello world ? Yet
> go is an OK language for this community.
There are zero parallels here, none, zilch. The Go compiler was written
by humans and produces assembly with intention. And unlike LLMs, which
are black boxes of plagiarism, the way the assembly was produced can be
examined.
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-M561ea68f029634dbdf3cfeb9
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-16 21:10 ` [9fans] Markdown/Markdeep in Edwood Edouard Klein
` (2 preceding siblings ...)
2026-02-20 0:39 ` witheld
@ 2026-02-20 3:06 ` red
3 siblings, 0 replies; 17+ messages in thread
From: red @ 2026-02-20 3:06 UTC (permalink / raw)
To: edou; +Cc: 9fans
the problem i have with LLMs is that,
historically, humans have written code an order
of magnitude slower than they can read it.
LLMs completely invert that.
that in itself might not look like a problem,
but we are now being forced to look through
LLM-generated code, being disingenuously presented
as serious code, in a world where most of it
already _sucks_. this is outright disrespectful.
LLM-generated code is empty. when a human
writes code, you can ask them what they were
thinking. they had a theory behind the problem,
they made tradeoffs, some were wrong, but the
reasoning is there to interrogate.
you can't ask an LLM what it was thinking, because
the reasoning doesn't exist. so who's responsible
for the code it writes? nobody? well, scale that
up and all we will get is: codebases that still
compile, still run, but that are way beyond
humanity's collective ability to understand.
now, when it's being used upfront like this, it
certainly earns my respect; because it's being
used _honestly_ as a means of prototyping and
previewing a feature that might or might not
be worth it, especially when we are literally
being told to _not_ read the generated slop.
what does concern me is us getting too
comfortable with these prototypes and not
cleaning them up and reimplementing them properly,
with reason.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-20 0:39 ` witheld
@ 2026-02-20 11:55 ` hiro
2026-02-20 12:56 ` Silvan Jegen
2026-02-21 1:03 ` Clout Tolstoy
2 siblings, 0 replies; 17+ messages in thread
From: hiro @ 2026-02-20 11:55 UTC (permalink / raw)
To: 9fans
isn't it anyway more efficient to get rid of the naive heuristics
that lead to the different syntax colors and instead let an LLM
generate more meaningful and understandable emojis?
On Fri, Feb 20, 2026 at 3:03 AM <witheld@limitedideas.org> wrote:
>
> On 2026-02-16 16:10, Edouard Klein wrote:
> > Hi all,
> >
> > I, for one, think this initiative is interesting. When I tried acme,
> > the
> > lack of syntax coloring was a big hindrance, I'm probably not the only
> > one. This could lead to more adoption.
>
> Oh true, adoption! My kingdom for adoption, let us embed a web runtime,
> an extensions marketplace, and so on! Adoption being the goal, clearly
> everyone involved in Plan 9 has been a moron until now.
>
> > compilers were once seen as LLMs are seen now (minus the copyright
> > infringement and the ecological cost). Have you seen the output of the
> > Go
> > compiler for Hello world ? Yet
> > go is an OK language for this community.
>
> There are zero parallels here, none, zilch. The Go compiler was written
> by humans and produces assembly with intention. And unlike LLMs, which
> are black boxes of plagiarism, the way the assembly was produced can be
> examined.
>
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-M19280a48ab3f5a557cee47e8
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-20 0:39 ` witheld
2026-02-20 11:55 ` hiro
@ 2026-02-20 12:56 ` Silvan Jegen
2026-02-21 1:03 ` Clout Tolstoy
2 siblings, 0 replies; 17+ messages in thread
From: Silvan Jegen @ 2026-02-20 12:56 UTC (permalink / raw)
To: 9fans
witheld@limitedideas.org wrote:
> On 2026-02-16 16:10, Edouard Klein wrote:
> > [...]
> >
> > compilers were once seen as LLMs are seen now (minus the copyright
> > infringement and the ecological cost). Have you seen the output of the
> > Go
> > compiler for Hello world ? Yet
> > go is an OK language for this community.
>
> There are zero parallels here, none, zilch. The Go compiler was written
> by humans and produces assembly with intention. And unlike LLMs, which
> are black boxes of plagiarism, the way the assembly was produced can be
> examined.
The output of a compiler I would also expect to be deterministic ...
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-20 0:39 ` witheld
2026-02-20 11:55 ` hiro
2026-02-20 12:56 ` Silvan Jegen
@ 2026-02-21 1:03 ` Clout Tolstoy
2 siblings, 0 replies; 17+ messages in thread
From: Clout Tolstoy @ 2026-02-21 1:03 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 2909 bytes --]
We could look at how deepseek (an open source model) was created using
aftermarket mods and hacked firmware for Nvidia rtx cards. You could, at
least in China, modify an rtx 4090 to support 96gb of vram, but that doesn't
solve the problem of price, or whether the price is worth it. Most ai
companies will probably die or get bought because they're expanding too
fast (no llm company is turning a profit on its llm model). Running llms
locally is probably the future for most people.
An issue I have is that they really are expanding too fast for what use
they have. Another issue, locally: they are trying to displace farmers
in wetlands with data centers. The people in my city don't eat data.
A silver lining I see in their use is that nothing has disrupted global
class structures like llms since perhaps the printing press; before that
it was probably written language, math, maybe art. Middle to upper
management has more to lose than the engineer who actually understands
the domain.
Back to the actual topic:
Getting llms to reproduce code from the same prompt is a step in the right
direction. (How many systems offer fully reproducible code?) I see acme as
a good editor to work with llm code because you take a seat as the editor
and have to copy/paste your way to success.
I'm beginning to see this project as an attempt to "vibe" code with a
reproducible means. Like it or not, that's what our future is starting to
appear as.
Perhaps this is just a rant of a madman that wants AI but also actually
needs rainforests to exist.
Happy hacking,
Clout
On Thu, Feb 19, 2026, 6:03 PM <witheld@limitedideas.org> wrote:
> On 2026-02-16 16:10, Edouard Klein wrote:
> > Hi all,
> >
> > I, for one, think this initiative is interesting. When I tried acme,
> > the
> > lack of syntax coloring was a big hindrance, I'm probably not the only
> > one. This could lead to more adoption.
>
> Oh true, adoption! My kingdom for adoption, let us embed a web runtime,
> an extensions marketplace, and so on! Adoption being the goal, clearly
> everyone involved in Plan 9 has been a moron until now.
>
> > compilers were once seen as LLMs are seen now (minus the copyright
> > infringement and the ecological cost). Have you seen the output of the
> > Go
> > compiler for Hello world ? Yet
> > go is an OK language for this community.
>
> There are zero parallels here, none, zilch. The Go compiler was written
> by humans and produces assembly with intention. And unlike LLMs, which
> are black boxes of plagiarism, the way the assembly was produced can be
> examined.
>
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-M80c34fde8b3f146d8ae2e8cd
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
[-- Attachment #2: Type: text/html, Size: 4801 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [9fans] Markdown/Markdeep in Edwood
2026-02-19 21:01 ` Shawn Rutledge
@ 2026-02-22 17:36 ` Paul Lalonde
0 siblings, 0 replies; 17+ messages in thread
From: Paul Lalonde @ 2026-02-22 17:36 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 11325 bytes --]
Thanks for the long reply, Shawn. I'll attempt to do it justice below.
On Thu, Feb 19, 2026 at 4:13 PM Shawn Rutledge <lists@ecloud.org> wrote:
> I tried it out on plan9port on Linux a couple of days ago.
>
I hope it worked well enough - it's still lacking polish in a few areas:
too many re-draws, expensive inserts as files get longer, and interaction
polish in Markdown mode. These are getting better as I use it more and
force fixes through my pipeline and LLM. For me this has been my primary
exploration ground for repeatable processes for LLM tooling. I can't
pretend it's a compiler if I can't reproduce the expected behavior.
Undefined Behavior is a boundless source of "amusement" with these machines.
> The spans idea is interesting: similar to what I was presenting informally
> at the last IWP9, but implemented differently and from the other end.
Yes, I absolutely took some inspiration from our conversation. I've used
annotated spans in a few more places over the years, and expect to put them
to use some more.
> I’ve got a data source: a stream of messages go into a ring buffer as text
> plus spans, and the file server offers a text file and a spans file.
I had thought of doing this, but kept getting hung up on editing complexity
- the interleaving of the streams led to representation questions that in
the end I didn't want to address. I think the balance of putting the span
generation in an exterior tool has been useful at reducing implementation
complexity.
> It would be pretty easy for me to add to my “mtail” utility a feature to
> output text and spans into edwood at this point (as opposed to switching to
> graphics mode and drawing the text in the window). As long as it’s ok with
> me appending text and spans at the same time...
>
Yes, you can write to body and to spans independently, but don't expect
spans to reshape to match text insertions particularly well. That space is
fraught with edge cases. But for straight appending, it works just fine.
> You’re following the SSV convention: it has serialization overhead, it’s
> not self-describing like CSV with a header, not as potentially nice-looking
> as TSV. I’m not so fond of that convention, but at least it’s sortof human
> readable. I want to work on standardizing binary formats that are
> identical in memory and in files, and perhaps often self-describing too (so
> that one tool can read them all), but need to do more work before writing a
> paper on it.
Mostly I picked the format to kind of match what was already in use in
other parts of acme. Generate strings, and put up with the parsing
overhead. The real cost is in the re-painting, which I'll get to a bit
further down this message.
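As a concrete illustration of the kind of whitespace-separated spans line discussed above, here is a small Go sketch. The three-field `start end style` layout and the rune-offset convention are my assumptions for illustration, not Edwood's documented format:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Span marks a style over a half-open rune range [Start, End).
// The "start end style" field layout is an assumed convention.
type Span struct {
	Start, End int // rune offsets into the body text
	Style      string
}

// ParseSpan parses one whitespace-separated spans line.
func ParseSpan(line string) (Span, error) {
	f := strings.Fields(line)
	if len(f) != 3 {
		return Span{}, fmt.Errorf("want 3 fields, got %d", len(f))
	}
	start, err := strconv.Atoi(f[0])
	if err != nil {
		return Span{}, err
	}
	end, err := strconv.Atoi(f[1])
	if err != nil {
		return Span{}, err
	}
	return Span{Start: start, End: end, Style: f[2]}, nil
}

// String emits the same textual form, so spans round-trip cleanly.
func (s Span) String() string {
	return fmt.Sprintf("%d %d %s", s.Start, s.End, s.Style)
}

func main() {
	s, err := ParseSpan("0 14 bold")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // → 0 14 bold
}
```

The parsing overhead Paul mentions is exactly this `Fields`/`Atoi` work on every line; cheap next to repainting, as he notes below.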
> But why do you prefer to use rune index instead of row and column?
>
I use the rune index because I think of the spans as per-rune markup
instead of as screen-space markup. I wasn't thinking of newlines as
special, except as interpretation as "early line break". I do have a
pretty significant amount of code handling a "source map" taking screen-box
indices back to source text indices (to elide markdown notation, deal with
hidden text, etc); I'm not sure those would be easier if I was thinking
about lines.
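Since Edwood is Go and Go strings are byte-indexed, rune-offset spans imply a conversion step somewhere near the source map. A toy helper of my own, not Edwood code:

```go
package main

import "fmt"

// runeToByte converts a rune offset into s to the corresponding
// byte offset, since Go strings are indexed by byte, not by rune.
func runeToByte(s string, runeIdx int) int {
	for i := range s { // range yields byte offsets at rune boundaries
		if runeIdx == 0 {
			return i
		}
		runeIdx--
	}
	return len(s) // offsets past the end clamp to the end
}

func main() {
	s := "héllo" // 'é' occupies 2 bytes in UTF-8
	fmt.Println(runeToByte(s, 2)) // → 3: rune 2 ('l') starts at byte 3
}
```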
One of the issues that has to be adapted for is the dreaded first and last
line problem. When the window was a view over a span of lines, and tag
fonts were the same as the window fonts, it was easy to size windows to a
multiple of the height of a line and always start with a fully-exposed first
line and a fully-exposed last line, perhaps with some padding space. That
fails in a world of increasingly arbitrary box sizes being laid out. I'm
still experimenting with different policies, but handling cases like an
inline image taller than the screen focuses the mind. What should the
scroll bar show? What should the granularity of scrolling be? When I say
it's not ready for prime time due to interaction issues, these are the key
ones.
> You have fewer chances of sync problems by making spans write-only, I
> suppose.
Yes, exactly the reason.
> But could there be a race condition, if the user is editing furiously (or
> tracking changes to a file that changes often) and an LSP or some parser
> process is marking spans at the same time?
And there *is* such a race. I treat it as allowed, knowing that the
styling tool will re-work on observing the text has changed.
> If you can avoid the race condition when writing two files at the same
> time, maybe you can offer the ability to read the spans too, and solve the
> race the same way?
>
I mostly made spans write-only so that tooling wouldn't rely on the span
states in Edwood. It seemed like a good place to decouple the systems.
> Likewise in memory you have the choice of keeping text in the span structs
> (so text is no longer contiguous) or paying the overhead of keeping a
> parallel set of spans in sync during editing in the main text buffer. To
> multiplex, or not...
>
The text remains in Edwood's text implementation - I honestly don't know
where we wound up on representation, but it does a good job of managing
insertions and changes in large files. My spans implementation *does
not*.
> Maybe a rope would work better than a gap buffer, so that you can associate
> spans with nodes in the tree, and compute offsets lazily?
Yes, it certainly would. I chose the gap buffer to have a trivial
implementation while working out the surrounding problems. When I'm
satisfied enough that the interactions are right and that the editor
remains stable enough, I intend to switch the representation. But that
will only happen one day when I'm feeling too much pain editing the bottom
of a large Markdown file; right now I'm willing to switch to Plain mode
when I do operations that might be too painful in the styled modes.
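For reference, the gap-buffer idea is trivial enough to sketch in a few lines: a toy byte-based version of the general technique, not Edwood's actual buffer:

```go
package main

import "fmt"

// GapBuffer keeps free space (the "gap") at the edit point, so
// repeated inserts there are cheap; moving the cursor shifts text
// across the gap, which is the expensive operation.
type GapBuffer struct {
	pre, post []byte // text before and after the gap
}

// MoveTo places the gap (the cursor) at byte offset pos.
func (g *GapBuffer) MoveTo(pos int) {
	for len(g.pre) > pos { // shift text from pre to post
		n := len(g.pre) - 1
		g.post = append([]byte{g.pre[n]}, g.post...)
		g.pre = g.pre[:n]
	}
	for len(g.pre) < pos && len(g.post) > 0 { // shift back
		g.pre = append(g.pre, g.post[0])
		g.post = g.post[1:]
	}
}

// Insert writes s at the cursor.
func (g *GapBuffer) Insert(s string) {
	g.pre = append(g.pre, s...)
}

// String reassembles the full text around the gap.
func (g *GapBuffer) String() string {
	return string(g.pre) + string(g.post)
}

func main() {
	var g GapBuffer
	g.Insert("hello world")
	g.MoveTo(5)
	g.Insert(",")
	fmt.Println(g.String()) // → hello, world
}
```

The pain Paul describes at the bottom of a large file is visible here: editing far from the gap means shuffling everything in between, which a rope avoids by splitting the text across tree nodes.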
> I was thinking of trying that at some point, maybe by hacking on an
> existing nice rope-based editor, since I don’t think it’s the easiest thing
> to sit down and write a correct implementation of from scratch.
> (Suggestions welcome. I found Xi; that’s in Rust.)
>
Yes.
> Another thing I want next is a text widget that has this span-formatting
> feature, that I can use inside larger GUIs (perhaps even acme-like GUIs),
> and that a window system can use for each window. So, you know, precedents
> matter. The structure of the spans file could get to be a convention
> before we are sure whether it’s the best we can do. And we should be
> rigorous about how the race condition can always be avoided.
>
I think this implementation gives us some information in that direction.
And I'm happy to continue the discussion with code artifacts.
> When you used an LLM of course you got results much faster than I could
> get them by trying to first stop procrastinating and then refactor existing
> C code by hand, and keep it elegant.
I've made a few changes to acme over the years (I did the first expanding
tags implementation, before Russ rewrote it to keep it elegant - it might
be that elegance is not my strong suit).
I had bounced off this at least 3 times in the last 20 years. That was
part of the motivation for Edwood: the C version of acme is a product of
its era, with vast amounts of global state and generally poor separation of
concerns and poor encapsulation. That caused me to drop the effort the
first time. The second time led to Edwood, though most of the credit there
goes to Rob Kroeger. I was just the sloppy programmer who pushed for
quick-and-dirty-but-working, so that the real work could happen from an
operating editor - that's where the much better text handling, undo
handling, etc, came from, that gave me such an easier time now doing this
version. The third time just had me bounce for lack of time, which I think
matches almost everyone's experience trying significant changes to
something like acme. As soon as you touch the interaction model it gets
very hard.
> So how much of this knee-jerk reaction against it here is disgust or
> jealousy or feeling risk of being usurped, is something only each person
> knows.
I think it's entirely fair to be disdainful/distrustful of the LLMs. There
are important questions about the place for these tools in our
practice, both professional and hobby.
For my part, the tools have shown their value - they work, and I expect we
will have them able to produce reliable code with a minimum of undefined
behavior in the next year. The tooling is moving very fast.
There are many externalized costs, which is where I grapple with the ethics
of the technology. I'll leave that to discussions over a beverage.
> I know that the results I’ve gotten from LLMs so far are not as good as I
> can write myself, but it’s improving; so maybe getting good results is a
> matter of having real QC processes (at least write the tests first, which I
> don’t normally do), maybe write prompts that emphasize how important it is
> to keep the code small and in the same style as other examples (thus
> filling up the context window with examples), insist on reviewing every
> patch hunk-by-hunk (which LLMs tend to frustrate by rewriting too much),
> and so on. Maybe it’s a bit like supervising a junior programmer, except
> you don’t (yet) have the chance to change its training, only to create
> guardrails for it. Maybe continuous learning will be developed soon. So I
> don’t think it’s wise to rule out ever being able to seriously use them.
> (Thanks Edouard for breaking the ice)
>
Yes, it's a lot like supervising a junior programmer, or a team of them.
Though lately, they've been becoming more senior. But the biggest gap is
that the focus of the tooling is on the code output, rather than the
iteration of the specification. I want to be able to throw away the code
and have it produce a new implementation from the specification.
Today, I'm getting close, but it's specification plus tests. The tests
that come from the debug loop are particularly important, as they capture
things that were undefined behavior and that have now become defined. If I
could effectively and reliably scrape that context from the tests back into
the specification I'd believe that we had an implementation machine and
debugger for specifications.
> I also try offline LLMs but run into limits with speed decreasing as the
> context window fills up.
>
Every important piece of work with LLMs I'm seeing today is about context
management. This post shows some of the potential of very explicit context
management: https://blog.can.ac/2026/02/12/the-harness-problem/. By
enriching code listings with "memorable"/arbitrary context hooks, even much
"dumber" LLMs are able to become significantly better coders by becoming
better at diff merging, effectively. The harness around the LLM is
critically important to managing context.
------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T3af2222bdb4b9c14-Mb59f7ce58a945b0d88646110
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
[-- Attachment #2: Type: text/html, Size: 15095 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2026-02-22 21:25 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-07 23:00 [9fans] Markdown/Markdeep in Edwood Paul Lalonde
2026-02-08 5:08 ` [9fans] " penny
2026-02-08 9:16 ` Clout Tolstoy
2026-02-08 11:06 ` hiro
2026-02-08 10:46 ` [9fans] " Ori Bernstein
2026-02-09 11:02 ` tlaronde
2026-02-09 17:22 ` [9fans] "Maintenir" (was: Markdown/Markdeep in Edwood) sirjofri via 9fans
2026-02-16 21:10 ` [9fans] Markdown/Markdeep in Edwood Edouard Klein
2026-02-19 18:23 ` hiro
2026-02-19 18:58 ` sirjofri via 9fans
2026-02-20 0:39 ` witheld
2026-02-20 11:55 ` hiro
2026-02-20 12:56 ` Silvan Jegen
2026-02-21 1:03 ` Clout Tolstoy
2026-02-20 3:06 ` red
2026-02-19 21:01 ` Shawn Rutledge
2026-02-22 17:36 ` Paul Lalonde
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).