The Unix Heritage Society mailing list
From: segaloco via TUHS <tuhs@tuhs.org>
To: Paul Ruizendaal <pnr@planet.nl>
Cc: "tuhs@tuhs.org" <tuhs@tuhs.org>
Subject: [TUHS] Re: Origins of the frame buffer device
Date: Mon, 06 Mar 2023 23:20:22 +0000
Message-ID: <4nTxKwDpHfGp_UnVOUXR9bqHYa4KaRjXKfM_-KyDU-AQFFwNnN8vcsq85l8_bVWwyzKP8dcO6kUikL0t4LXlrGDDrrSVgZrcCTnpEQGfaFA=@protonmail.com>
In-Reply-To: <73309724-1F69-49D4-B54B-63DD298CBD27@planet.nl>

I don't know if this is what Rob was getting at, but I've also been thinking about the relative homogenization of the display technology field in recent years.  Massively parallel vector engines calculating vertices, geometry, tessellation, and pixel color after rasterization are pretty much the only game in town these days.  Granted, we're finding new and exciting ways to *use* the data these technologies generate, but whether the pixels land on a surface inside a VR headset, on a flat-panel monitor in front of my face, or on a little bundle of plastic, glass, and silicon in my pocket, the general behavior is still the same: I give the card a bunch of vertices and data, it converts that data into a bunch of parallel jobs, and those eventually land as individual pixels on my screen.
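
Just to pin down what I mean by that last sentence, here is a toy, CPU-only sketch of the vertices-to-pixels step (entirely my own strawman in C, nothing to do with how any particular driver or GPU actually works): hand it three screen-space vertices and it decides, pixel by pixel, whether each pixel is covered.  The point is just that a GPU runs essentially this same per-pixel test across thousands of cores at once.

    /* Toy CPU version of "vertices in, pixels out".  Illustrative only. */
    #include <stdio.h>

    #define W 24
    #define H 12

    /* signed area test: which side of edge (a,b) does point c fall on */
    static int edge(int ax, int ay, int bx, int by, int cx, int cy)
    {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    int main(void)
    {
        /* three "vertices" already projected to screen coordinates */
        int x0 = 2, y0 = 1, x1 = 21, y1 = 4, x2 = 8, y2 = 10;
        char fb[H][W];

        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                /* a pixel is covered if it lies on the same side of all
                 * three edges; this is the per-pixel job the hardware
                 * farms out in parallel */
                int w0 = edge(x1, y1, x2, y2, x, y);
                int w1 = edge(x2, y2, x0, y0, x, y);
                int w2 = edge(x0, y0, x1, y1, x, y);
                int inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                             (w0 <= 0 && w1 <= 0 && w2 <= 0);
                fb[y][x] = inside ? '#' : '.';
            }

        for (int y = 0; y < H; y++)
            printf("%.*s\n", W, fb[y]);
        return 0;
    }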

One area I am actually interested to see develop, though, is hologram-type stuff.  VR is one thing: you're still creating what is ultimately a 2D image composed of individual samples (pixels) that capture the state of a hypothetical 3D environment maintained in your application.  With AR you're a little closer, basing depth on the physical environment around you, but I wouldn't be surprised if that depth is still just put in a depth buffer, pixels are masked accordingly, and the application pastes a 2D rasterized image of the content over the camera display every 1/60th of a second (or whatever absurd refresh rate we're using now).  A hologram, however, must maintain the concept of 3D space, at least well enough that if multiple objects were projected, they wouldn't simply disappear from view because one was "behind" another from a single viewpoint, the way the painter's algorithm and most depth culling in 3D rendering work.  Although I guess I just answered my own quandary, in that one could simply disable depth culling and display the results before the rasterizer, there are still implications beyond just sampling this 3D "scene" into a 2D matrix of pixels.
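
Just so the depth-buffer hand-waving above is concrete, here is roughly the mechanism I mean, a made-up sketch rather than anyone's actual implementation: a per-pixel depth value sits next to the color, and a sample only gets written if it is nearer than what's already there.

    /* Made-up depth-buffer sketch, not any real renderer's code. */
    #include <float.h>
    #include <stdint.h>

    #define W 640
    #define H 480

    static uint32_t color[H][W];
    static float    depth[H][W];

    void clear(void)
    {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                color[y][x] = 0;
                depth[y][x] = FLT_MAX;   /* "infinitely far away" */
            }
    }

    /* Write a sample only if it is closer than the current occupant.
     * Without a test like this you would have to sort everything and
     * draw back to front (the painter's algorithm) to get the same
     * result for opaque geometry. */
    void plot(int x, int y, float z, uint32_t rgba)
    {
        if (x < 0 || x >= W || y < 0 || y >= H)
            return;
        if (z < depth[y][x]) {
            depth[y][x] = z;
            color[y][x] = rgba;
        }
    }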

Another area I wish there were more development in, though it's probably too late, is 2D acceleration a la '80s arcade hardware.  Galaxian is allegedly the first, at least as held in lore, to provide a chip in which the scrolling background was driven by a constantly updating coordinate into a scrollplane and the individual moving graphics were implemented as hardware sprites.  I haven't researched scrollplane 2D acceleration much further back than that, but I personally see it as a still very valid and useful technology, one that has been overlooked in the wake of 3D graphics and GPUs.  Given that many, many computer users don't really tax the 3D capabilities of their machines, they'd probably benefit from the lower cost and complexity of accelerating the movement of windows and visual content with this kind of hardware instead of with GPUs built largely around vertex arrays.  Granted, I don't know how a modern windowing system leverages GPU technology to accelerate its windows, but if it's using vertex arrays and that sort of thing, that's potentially a lot of extra work turning flat 2D content into vertices to feed a GPU, and then, on the GPU, back into pixels, when the 2D graphics already exist and could easily be moved around by coordinate using a scrolling/spriting chip.  Granted, the "thesis" behind my thoughts on 2D scrolling vs. GPU efficiency is half-baked (a toy sketch of what I mean follows below), but it's something I think about every so often.
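
Here's the kind of thing I have in mind, a toy model of a Galaxian-style compositor.  All the names and sizes are invented, and the real chips did this in silicon as the beam scanned out rather than in C, but the idea is that scrolling the background or moving a sprite is just rewriting a couple of coordinate registers: no vertex data, no copying the pixels around.

    /* Toy scrollplane + sprite compositor.  Invented layout, for
     * illustration only: the "chip" builds each pixel on the fly from
     * a tile map offset by scroll registers, with sprites overlaid by
     * coordinate compare. */
    #include <stdint.h>

    #define TILE     8                   /* 8x8 pixel tiles          */
    #define MAP_W    64                  /* tile map is 64x64 tiles  */
    #define NSPRITES 8

    struct sprite { int x, y, w, h; const uint8_t *pixels; };

    static uint8_t tiles[256][TILE][TILE];   /* tile pixel data, filled elsewhere */
    static uint8_t tilemap[MAP_W * MAP_W];   /* which tile goes where             */
    static struct sprite sprites[NSPRITES];
    static int scroll_x, scroll_y;           /* the "registers"                   */

    /* what the video chip does for every pixel as it scans out */
    uint8_t pixel_at(int sx, int sy)
    {
        /* sprites have priority over the background */
        for (int i = 0; i < NSPRITES; i++) {
            const struct sprite *s = &sprites[i];
            if (sx >= s->x && sx < s->x + s->w &&
                sy >= s->y && sy < s->y + s->h) {
                uint8_t p = s->pixels[(sy - s->y) * s->w + (sx - s->x)];
                if (p)                   /* 0 = transparent */
                    return p;
            }
        }
        /* background: index into the tile map through the scroll regs */
        int bx = (sx + scroll_x) & (MAP_W * TILE - 1);
        int by = (sy + scroll_y) & (MAP_W * TILE - 1);
        int t  = tilemap[(by / TILE) * MAP_W + (bx / TILE)];
        return tiles[t][by % TILE][bx % TILE];
    }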

Anywho, not sure if that's what you meant by things seeming to have rusted a bit, but that's my general line of thought when the state of graphics/display technology comes up.  Anything that is just another variation on putting a buffer of vertex properties through a 2k+-core vector engine to spit out a bunch of pixels isn't really pushing into new territory.  I don't care how many more triangles it can do, nor how many of those pixels there are, and certainly not how close those pixels are to my eyeballs; it's still the same technology keeping up with Moore's Law.  That all said, GPGPU, on the other hand, is such an interesting field, and I hope to learn more in the coming years and start working compute shading into projects in earnest...

- Matt G.

------- Original Message -------
On Monday, March 6th, 2023 at 2:47 PM, Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:


> I had read that paper and I have just read it again. I am not sure I understand the point you are making.
> 
> My takeaway from the paper -- reading beyond its 1969 vista -- was that it made two points:
> 
> 1. We put ever more compute power behind our displays
> 
> 2. The compute power tends to grow by first adding special-purpose hardware for some core tasks, which then develops into a generic processor, and the process repeats
> 
> From this one could infer a third point: we always tend to want displays that are at the high end of what is economically feasible, even if that requires an architectural wart.
> 
> More narrowly it concludes that the compute power behind the display is better organised as being general to the system than to be dedicated to the display.
> 
> The Wikipedia page with a history of GPUs since 1970 (https://en.wikipedia.org/wiki/Graphics_processing_unit) shows the wheel rolling decade after decade.
> 
> However, the above observations are probably not what you wanted to direct my attention to ...
> 
> 
> > On 6 Mar 2023, at 09:57, Rob Pike <robpike@gmail.com> wrote:
> > 
> > I would think you have read Sutherland's "wheel of reincarnation" paper, but if you haven't, please do. What fascinates me today is that it seems for about a decade now the bearings on that wheel have rusted solid, and no one seems interested in lubricating them to get it going again.
> > 
> > -rob
> > 
> > On Mon, Mar 6, 2023 at 7:52 PM Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:
> > Thanks for this.
> > 
> > My question was unclear: I wasn't thinking of the hardware, but of the software abstraction, i.e. the device files living in /dev
> > 
> > I’ve now read through SunOS man pages and it would seem that the /dev/fb file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite the same though, as initially it seems to have been tied to the kernel part of the SunWindows software. My understanding of the latter is still limited though. The later Linux usage is designed around mmap() and I am not sure when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD, but was not implemented at that time). Maybe at the time of the Sun-1 and Sun-2 it worked differently.
> > 
> > The frame buffer hardware is exposed differently in Plan9. Here there are device files (initially /dev/bit/screen and /dev/bit/bitblt) but these are not designed around mmap(), which does not exist on Plan9 by design. It later develops into the /dev/draw/... files. However, my understanding of graphics in Plan9 is also still limited.
> > 
> > All in all, finding a conceptually clean but still performant way to expose the frame buffer (and acceleration) hardware seems to have been a hard problem. Arguably it still is.
> > 
> > > On 5 Mar 2023, at 19:25, Kenneth Goodwin <kennethgoodwin56@gmail.com> wrote:
> > > 
> > > The first frame buffers from Evans and Sutherland were at University of Utah, DOD SITES and NYIT CGL as I recall.
> > > 
> > > Circa 1974 to 1978.
> > > 
> > > They were 19 inch RETMA racks.
> > > Took three to get decent RGB.
> > > 
> > > 8 bits per pixel per FB.
> > > 
> > > On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:
> > > I am confused on the history of the frame buffer device.
> > > 
> > > On Linux, it seems that /dev/fbdev originated in 1999 from work done by Martin Schaller and Geert Uytterhoeven (and some input from Fabrice Bellard?).
> > > 
> > > However, it would seem at first glance that early SunOS also had a frame buffer device (/dev/cgoneX, /dev/bwoneX, etc.) which was similar in nature (a character device that could be mmap’ed to give access to the hardware frame buffer, with ioctls to probe and configure the hardware). Is that correct, or were these entirely different in nature?
> > > 
> > > Paul
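
P.S. For what it's worth, the Linux descendant of what Paul describes above (a character device you mmap(), plus ioctls to probe and configure the mode) still looks roughly like this.  A minimal sketch only, assuming a /dev/fb0 node and a 32-bits-per-pixel mode:

    /* Minimal Linux fbdev sketch: probe the mode with ioctls, mmap the
     * frame buffer, scribble on it.  Error handling and pixel-format
     * checks are pared down for brevity. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("/dev/fb0"); return 1; }

        /* ioctls to probe the hardware, much as on the old Sun devices */
        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
            perror("FBIOGET_*SCREENINFO");
            return 1;
        }

        /* mmap() the frame buffer itself into our address space */
        size_t len = (size_t)fix.line_length * var.yres_virtual;
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        /* scribble a small square, assuming 32 bpp (real code should check
         * var.bits_per_pixel and the red/green/blue field offsets) */
        for (unsigned y = 0; y < 64 && y < var.yres; y++)
            for (unsigned x = 0; x < 64 && x < var.xres; x++)
                *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x0000ff00;

        munmap(fb, len);
        close(fd);
        return 0;
    }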

