The Unix Heritage Society mailing list
* [TUHS] Origins of the frame buffer device
@ 2023-03-05 15:01 Paul Ruizendaal via TUHS
  2023-03-05 17:29 ` [TUHS] " Grant Taylor via TUHS
                   ` (2 more replies)
  0 siblings, 3 replies; 40+ messages in thread
From: Paul Ruizendaal via TUHS @ 2023-03-05 15:01 UTC (permalink / raw)
  To: tuhs

I am confused about the history of the frame buffer device.

On Linux, it seems that /dev/fbdev originated in 1999 from work done by Martin Schaller and Geert Uytterhoeven (and some input from Fabrice Bellard?).

However, it would seem at first glance that early SunOS also had a frame buffer device (/dev/cgoneX, /dev/bwoneX, etc.) which was similar in nature (a character device that could be mmap’ed to give access to the hardware frame buffer, and ioctl’s to probe and configure the hardware). Is that correct, or were these entirely different in nature?
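
For concreteness, the Linux end of that comparison boils down to something like the minimal sketch below (the ioctl names and structs are the standard ones from <linux/fb.h>; error handling omitted, and the pixel write assumes a 32-bit mode):

#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    struct fb_var_screeninfo var;   /* resolution, bits per pixel, ... */
    struct fb_fix_screeninfo fix;   /* line length, total memory, ... */

    ioctl(fd, FBIOGET_VSCREENINFO, &var);
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);

    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);

    /* paint one pixel at (10,10); assumes a 32-bit-per-pixel mode */
    long off = 10 * fix.line_length + 10 * (var.bits_per_pixel / 8);
    *(uint32_t *)(fb + off) = 0xFFFFFFFF;

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}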

Paul



* [TUHS] Re: Origins of the frame buffer device
  2023-03-05 15:01 [TUHS] Origins of the frame buffer device Paul Ruizendaal via TUHS
@ 2023-03-05 17:29 ` Grant Taylor via TUHS
  2023-03-05 18:25 ` Kenneth Goodwin
  2023-03-07  1:54 ` Kenneth Goodwin
  2 siblings, 0 replies; 40+ messages in thread
From: Grant Taylor via TUHS @ 2023-03-05 17:29 UTC (permalink / raw)
  To: tuhs


On 3/5/23 8:01 AM, Paul Ruizendaal via TUHS wrote:
> However, it would seem at first glance that early SunOS also had a 
> frame buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar 
> in nature (a character device that could be mmap’ed to give access 
> to the hardware frame buffer, and ioctl’s to probe and configure the 
> hardware). Is that correct, or were these entirely different in nature?

My limited understanding is that PC compatibles, and thus Linux, with 
their VGA et al. cards had /text/ character support that other systems 
did not have.  As such, PCs (and hence Linux) /didn't/ /need/ frame 
buffer support in the beginning.

Conversely, all systems that didn't have such cards /did/ /need/ frame 
buffer support from the start.

I consider Linux's frame buffer to be a latecomer compared to other 
systems.



-- 
Grant. . . .
unix || die




* [TUHS] Re: Origins of the frame buffer device
  2023-03-05 15:01 [TUHS] Origins of the frame buffer device Paul Ruizendaal via TUHS
  2023-03-05 17:29 ` [TUHS] " Grant Taylor via TUHS
@ 2023-03-05 18:25 ` Kenneth Goodwin
  2023-03-06  8:51   ` Paul Ruizendaal via TUHS
  2023-03-07  1:54 ` Kenneth Goodwin
  2 siblings, 1 reply; 40+ messages in thread
From: Kenneth Goodwin @ 2023-03-05 18:25 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: The Eunuchs Hysterical Society


The first frame buffers from Evans and Sutherland were at the University of
Utah, DoD sites, and NYIT CGL, as I recall.

Circa 1974 to 1978.

They were 19 inch RETMA racks.
Took three to get decent RGB.

8 bits per pixel per FB.

On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
wrote:

> I am confused on the history of the frame buffer device.
>
> On Linux, it seems that /dev/fbdev originated in 1999 from work done by
> Martin Schaller and  Geert Uytterhoeven (and some input from Fabrice
> Bellard?).
>
> However, it would seem at first glance that early SunOS also had a frame
> buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in nature
> (a character device that could be mmap’ed to give access to the hardware
> frame buffer, and ioctl’s to probe and configure the hardware). Is that
> correct, or were these entirely different in nature?
>
> Paul
>
>



* [TUHS] Re: Origins of the frame buffer device
  2023-03-05 18:25 ` Kenneth Goodwin
@ 2023-03-06  8:51   ` Paul Ruizendaal via TUHS
  2023-03-06  8:57     ` Rob Pike
                       ` (2 more replies)
  0 siblings, 3 replies; 40+ messages in thread
From: Paul Ruizendaal via TUHS @ 2023-03-06  8:51 UTC (permalink / raw)
  To: tuhs

Thanks for this.

My question was unclear: I wasn't thinking of the hardware, but of the software abstraction, i.e. the device files living in /dev.

I’ve now read through SunOS man pages and it would seem that the /dev/fb file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite the same though, as initially it seems to have been tied to the kernel part of the SunWindows software. My understanding of the latter is still limited though. The later Linux usage is designed around mmap() and I am not sure when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD, but was not implemented at that time). Maybe at the time of the Sun-1 and Sun-2 it worked differently.
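
For reference, my reading of the SunOS fbio(4S) man page suggests the idiom looked roughly like the sketch below (written with the later SunOS 4.x mmap() calling convention; the header, ioctl and field names are taken from that man page, error handling omitted):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <sun/fbio.h>

int main(void)
{
    int fd = open("/dev/fb", O_RDWR);
    struct fbtype fb;

    ioctl(fd, FBIOGTYPE, &fb);          /* probe whatever board is installed */
    printf("%d x %d, depth %d, %d bytes\n",
           fb.fb_width, fb.fb_height, fb.fb_depth, fb.fb_size);

    /* map the pixel memory; device mappings had to be MAP_SHARED */
    char *pixels = mmap(0, fb.fb_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    /* ... the window system draws by storing into pixels[] ... */

    munmap(pixels, fb.fb_size);
    close(fd);
    return 0;
}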

The frame buffer hardware is exposed differently in Plan9. Here there are device files (initially /dev/bit/screen and /dev/bit/bitblt) but these are not designed around mmap(), which does not exist on Plan9 by design. It later develops into the /dev/draw/... files. However, my understanding of graphics in Plan9 is also still limited.

All in all, finding a conceptually clean but still performant way to expose the frame buffer (and acceleration) hardware seems to have been a hard problem. Arguably it still is.



> On 5 Mar 2023, at 19:25, Kenneth Goodwin <kennethgoodwin56@gmail.com> wrote:
> 
> The first frame buffers from Evans and Sutherland were at University of Utah, DOD SITES and NYIT CGL as I recall.
> 
> Circa 1974 to 1978.
> 
> They were 19 inch RETMA racks.
> Took three to get decent RGB.
> 
> 8 bits per pixel per FB.
> 
> On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:
> I am confused on the history of the frame buffer device.
> 
> On Linux, it seems that /dev/fbdev originated in 1999 from work done by  Martin Schaller and  Geert Uytterhoeven (and some input from Fabrice Bellard?).
> 
> However, it would seem at first glance that early SunOS also had a frame buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in nature (a character device that could be mmap’ed to give access to the hardware frame buffer, and ioctl’s to probe and configure the hardware). Is that correct, or were these entirely different in nature?
> 
> Paul
> 



* [TUHS] Re: Origins of the frame buffer device
  2023-03-06  8:51   ` Paul Ruizendaal via TUHS
@ 2023-03-06  8:57     ` Rob Pike
  2023-03-06 11:09       ` Henry Bent
  2023-03-06 22:47       ` Paul Ruizendaal via TUHS
  2023-03-07  1:24     ` Kenneth Goodwin
  2023-03-08  3:07     ` Rob Gingell
  2 siblings, 2 replies; 40+ messages in thread
From: Rob Pike @ 2023-03-06  8:57 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: tuhs


I would think you have read Sutherland's "wheel of reincarnation" paper,
but if you haven't, please do. What fascinates me today is that it seems
for about a decade now the bearings on that wheel have rusted solid, and no
one seems interested in lubricating them to get it going again.

-rob


On Mon, Mar 6, 2023 at 7:52 PM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
wrote:

> Thanks for this.
>
> My question was unclear: I wasn't thinking of the hardware, but of the
> software abstraction, i.e. the device files living in /dev
>
> I’ve now read through SunOS man pages and it would seem that the /dev/fb
> file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite
> the same though, as initially it seems to have been tied to the kernel part
> of the SunWindows software. My understanding of the latter is still limited
> though. The later Linux usage is designed around mmap() and I am not sure
> when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD,
> but was not implemented at that time). Maybe at the time of the Sun-1 and
> Sun-2 it worked differently.
>
> The frame buffer hardware is exposed differently in Plan9. Here there are
> device files (initially /dev/bit/screen and /dev/bit/bitblt) but these are
> not designed around mmap(), which does not exist on Plan9 by design. It
> later develops into the /dev/draw/... files. However, my understanding of
> graphics in Plan9 is also still limited.
>
> All in all, finding a conceptually clean but still performant way to
> expose the frame buffer (and acceleration) hardware seems to have been a
> hard problem. Arguably it still is.
>
>
>
> > On 5 Mar 2023, at 19:25, Kenneth Goodwin <kennethgoodwin56@gmail.com>
> wrote:
> >
> > The first frame buffers from Evans and Sutherland were at University of
> Utah, DOD SITES and NYIT CGL as I recall.
> >
> > Circa 1974 to 1978.
> >
> > They were 19 inch RETMA racks.
> > Took three to get decent RGB.
> >
> > 8 bits per pixel per FB.
> >
> > On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
> wrote:
> > I am confused on the history of the frame buffer device.
> >
> > On Linux, it seems that /dev/fbdev originated in 1999 from work done by
> Martin Schaller and  Geert Uytterhoeven (and some input from Fabrice
> Bellard?).
> >
> > However, it would seem at first glance that early SunOS also had a frame
> buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in nature
> (a character device that could be mmap’ed to give access to the hardware
> frame buffer, and ioctl’s to probe and configure the hardware). Is that
> correct, or were these entirely different in nature?
> >
> > Paul
> >
>
>



* [TUHS] Re: Origins of the frame buffer device
  2023-03-06  8:57     ` Rob Pike
@ 2023-03-06 11:09       ` Henry Bent
  2023-03-06 16:02         ` Theodore Ts'o
  2023-03-06 22:47       ` Paul Ruizendaal via TUHS
  1 sibling, 1 reply; 40+ messages in thread
From: Henry Bent @ 2023-03-06 11:09 UTC (permalink / raw)
  To: tuhs


The paper Rob refers to is https://dl.acm.org/doi/10.1145/363347.363368

-Henry

On Mon, 6 Mar 2023 at 03:57, Rob Pike <robpike@gmail.com> wrote:

> I would think you have read Sutherland's "wheel of reincarnation" paper,
> but if you haven't, please do. What fascinates me today is that it seems
> for about a decade now the bearings on that wheel have rusted solid, and no
> one seems interested in lubricating them to get it going again.
>
> -rob
>
>
> On Mon, Mar 6, 2023 at 7:52 PM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
> wrote:
>
>> Thanks for this.
>>
>> My question was unclear: I wasn't thinking of the hardware, but of the
>> software abstraction, i.e. the device files living in /dev
>>
>> I’ve now read through SunOS man pages and it would seem that the /dev/fb
>> file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite
>> the same though, as initially it seems to have been tied to the kernel part
>> of the SunWindows software. My understanding of the latter is still limited
>> though. The later Linux usage is designed around mmap() and I am not sure
>> when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD,
>> but was not implemented at that time). Maybe at the time of the Sun-1 and
>> Sun-2 it worked differently.
>>
>> The frame buffer hardware is exposed differently in Plan9. Here there are
>> device files (initially /dev/bit/screen and /dev/bit/bitblt) but these are
>> not designed around mmap(), which does not exist on Plan9 by design. It
>> later develops into the /dev/draw/... files. However, my understanding of
>> graphics in Plan9 is also still limited.
>>
>> All in all, finding a conceptually clean but still performant way to
>> expose the frame buffer (and acceleration) hardware seems to have been a
>> hard problem. Arguably it still is.
>>
>>
>>
>> > On 5 Mar 2023, at 19:25, Kenneth Goodwin <kennethgoodwin56@gmail.com>
>> wrote:
>> >
>> > The first frame buffers from Evans and Sutherland were at University of
>> Utah, DOD SITES and NYIT CGL as I recall.
>> >
>> > Circa 1974 to 1978.
>> >
>> > They were 19 inch RETMA racks.
>> > Took three to get decent RGB.
>> >
>> > 8 bits per pixel per FB.
>> >
>> > On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
>> wrote:
>> > I am confused on the history of the frame buffer device.
>> >
>> > On Linux, it seems that /dev/fbdev originated in 1999 from work done
>> by  Martin Schaller and  Geert Uytterhoeven (and some input from Fabrice
>> Bellard?).
>> >
>> > However, it would seem at first glance that early SunOS also had a
>> frame buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in
>> nature (a character device that could be mmap’ed to give access to the
>> hardware frame buffer, and ioctl’s to probe and configure the hardware). Is
>> that correct, or were these entirely different in nature?
>> >
>> > Paul
>> >
>>
>>



* [TUHS] Re: Origins of the frame buffer device
  2023-03-06 11:09       ` Henry Bent
@ 2023-03-06 16:02         ` Theodore Ts'o
  0 siblings, 0 replies; 40+ messages in thread
From: Theodore Ts'o @ 2023-03-06 16:02 UTC (permalink / raw)
  To: Henry Bent; +Cc: tuhs

On Mon, Mar 06, 2023 at 06:09:22AM -0500, Henry Bent wrote:
> The paper Rob refers to is https://dl.acm.org/doi/10.1145/363347.363368

A PDF of the paper is available here:

http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland-design-of-display-processors.pdf



* [TUHS] Re: Origins of the frame buffer device
  2023-03-06  8:57     ` Rob Pike
  2023-03-06 11:09       ` Henry Bent
@ 2023-03-06 22:47       ` Paul Ruizendaal via TUHS
  2023-03-06 23:10         ` Rob Pike
  2023-03-06 23:20         ` segaloco via TUHS
  1 sibling, 2 replies; 40+ messages in thread
From: Paul Ruizendaal via TUHS @ 2023-03-06 22:47 UTC (permalink / raw)
  To: Rob Pike; +Cc: tuhs

I had read that paper and I have just read it again. I am not sure I understand the point you are making.

My takeaway from the paper -- reading beyond its 1969 vista -- was that it made two points:

1. We put ever more compute power behind our displays

2. The compute power tends to grow by first adding special-purpose hardware for some core tasks, which then develops into a generic processor, and the process repeats.

From this one could infer a third point: we always tend to want displays that are at the high end of what is economically feasible, even if that requires an architectural wart.

More narrowly it concludes that the compute power behind the display is better organised as being general to the system than to be dedicated to the display.

The Wikipedia page with a history of GPUs since 1970 (https://en.wikipedia.org/wiki/Graphics_processing_unit) shows the wheel rolling decade after decade.

However, the above observations are probably not what you wanted to direct my attention to ...



> On 6 Mar 2023, at 09:57, Rob Pike <robpike@gmail.com> wrote:
> 
> I would think you have read Sutherland's "wheel of reincarnation" paper, but if you haven't, please do. What fascinates me today is that it seems for about a decade now the bearings on that wheel have rusted solid, and no one seems interested in lubricating them to get it going again.
> 
> -rob
> 
> 
> On Mon, Mar 6, 2023 at 7:52 PM Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:
> Thanks for this.
> 
> My question was unclear: I wasn't thinking of the hardware, but of the software abstraction, i.e. the device files living in /dev
> 
> I’ve now read through SunOS man pages and it would seem that the /dev/fb file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite the same though, as initially it seems to have been tied to the kernel part of the SunWindows software. My understanding of the latter is still limited though. The later Linux usage is designed around mmap() and I am not sure when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD, but was not implemented at that time). Maybe at the time of the Sun-1 and Sun-2 it worked differently.
> 
> The frame buffer hardware is exposed differently in Plan9. Here there are device files (initially /dev/bit/screen and /dev/bit/bitblt) but these are not designed around mmap(), which does not exist on Plan9 by design. It later develops into the /dev/draw/... files. However, my understanding of graphics in Plan9 is also still limited.
> 
> All in all, finding a conceptually clean but still performant way to expose the frame buffer (and acceleration) hardware seems to have been a hard problem. Arguably it still is.
> 
> 
> 
> > On 5 Mar 2023, at 19:25, Kenneth Goodwin <kennethgoodwin56@gmail.com> wrote:
> > 
> > The first frame buffers from Evans and Sutherland were at University of Utah, DOD SITES and NYIT CGL as I recall.
> > 
> > Circa 1974 to 1978.
> > 
> > They were 19 inch RETMA racks.
> > Took three to get decent RGB.
> > 
> > 8 bits per pixel per FB.
> > 
> > On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:
> > I am confused on the history of the frame buffer device.
> > 
> > On Linux, it seems that /dev/fbdev originated in 1999 from work done by  Martin Schaller and  Geert Uytterhoeven (and some input from Fabrice Bellard?).
> > 
> > However, it would seem at first glance that early SunOS also had a frame buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in nature (a character device that could be mmap’ed to give access to the hardware frame buffer, and ioctl’s to probe and configure the hardware). Is that correct, or were these entirely different in nature?
> > 
> > Paul
> > 
> 



* [TUHS] Re: Origins of the frame buffer device
  2023-03-06 22:47       ` Paul Ruizendaal via TUHS
@ 2023-03-06 23:10         ` Rob Pike
  2023-03-08 12:53           ` Paul Ruizendaal
  2023-03-06 23:20         ` segaloco via TUHS
  1 sibling, 1 reply; 40+ messages in thread
From: Rob Pike @ 2023-03-06 23:10 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: tuhs


Through the 1970s and 1980s, there was continuous invention in how
graphics, computing, and software interacted. With the dominance of the PC
and the VGA card, we now have a largely fixed architecture for graphics in
personal computers. There is some playing around with the software model,
but basically we have a powerful, privately managed external "card" that
does the graphics, and we talk to it with one particular mechanism. During
my work on Plan 9, it got harder over time to try new things because the
list of ways to talk to the card was shrinking, and other than the blessed
ones, slower and slower.

For example, a shared-memory frame buffer, which I interpret as the
mechanism that started this conversation, is now a historical artifact.
Back when, it was a thing to play with.

As observed by many others, there is far more grunt today in the graphics
card than in the CPU, which in Sutherland's timeline would mean it was time
to push that power back to the CPU. But no.

Not a value judgement, just an observation. The wheel has stopped turning,
mostly.

-rob


On Tue, Mar 7, 2023 at 9:47 AM Paul Ruizendaal <pnr@planet.nl> wrote:

> I had read that paper and I have just read it again. I am not sure I
> understand the point you are making.
>
> My take away from the paper -- reading beyond its 1969 vista -- was that
> it made two points:
>
> 1. We put ever more compute power behind our displays
>
> 2. The compute power tends to grow by first adding special purpose
> hardware for some core tasks, that then develops into a generic processor
> and the process repeats
>
> From this one could imply a third point: we always tend to want displays
> that are at the high end of what is economically feasible, even if that
> requires an architectural wart.
>
> More narrowly it concludes that the compute power behind the display is
> better organised as being general to the system than to be dedicated to the
> display.
>
> The wikipedia page with a history of GPU’s since 1970 (
> https://en.wikipedia.org/wiki/Graphics_processing_unit) shows the wheel
> rolling decade after decade.
>
> However, the above observations are probably not what you wanted to direct
> my attention to ...
>
>
>
> > On 6 Mar 2023, at 09:57, Rob Pike <robpike@gmail.com> wrote:
> >
> > I would think you have read Sutherland's "wheel of reincarnation" paper,
> but if you haven't, please do. What fascinates me today is that it seems
> for about a decade now the bearings on that wheel have rusted solid, and no
> one seems interested in lubricating them to get it going again.
> >
> > -rob
> >
> >
> > On Mon, Mar 6, 2023 at 7:52 PM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
> wrote:
> > Thanks for this.
> >
> > My question was unclear: I wasn't thinking of the hardware, but of the
> software abstraction, i.e. the device files living in /dev
> >
> > I’ve now read through SunOS man pages and it would seem that the /dev/fb
> file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite
> the same though, as initially it seems to have been tied to the kernel part
> of the SunWindows software. My understanding of the latter is still limited
> though. The later Linux usage is designed around mmap() and I am not sure
> when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD,
> but was not implemented at that time). Maybe at the time of the Sun-1 and
> Sun-2 it worked differently.
> >
> > The frame buffer hardware is exposed differently in Plan9. Here there
> are device files (initially /dev/bit/screen and /dev/bit/bitblt) but these
> are not designed around mmap(), which does not exist on Plan9 by design. It
> later develops into the /dev/draw/... files. However, my understanding of
> graphics in Plan9 is also still limited.
> >
> > All in all, finding a conceptually clean but still performant way to
> expose the frame buffer (and acceleration) hardware seems to have been a
> hard problem. Arguably it still is.
> >
> >
> >
> > > On 5 Mar 2023, at 19:25, Kenneth Goodwin <kennethgoodwin56@gmail.com>
> wrote:
> > >
> > > The first frame buffers from Evans and Sutherland were at University
> of Utah, DOD SITES and NYIT CGL as I recall.
> > >
> > > Circa 1974 to 1978.
> > >
> > > They were 19 inch RETMA racks.
> > > Took three to get decent RGB.
> > >
> > > 8 bits per pixel per FB.
> > >
> > > On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
> wrote:
> > > I am confused on the history of the frame buffer device.
> > >
> > > On Linux, it seems that /dev/fbdev originated in 1999 from work done
> by  Martin Schaller and  Geert Uytterhoeven (and some input from Fabrice
> Bellard?).
> > >
> > > However, it would seem at first glance that early SunOS also had a
> frame buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in
> nature (a character device that could be mmap’ed to give access to the
> hardware frame buffer, and ioctl’s to probe and configure the hardware). Is
> that correct, or were these entirely different in nature?
> > >
> > > Paul
> > >
> >
>
>



* [TUHS] Re: Origins of the frame buffer device
  2023-03-06 22:47       ` Paul Ruizendaal via TUHS
  2023-03-06 23:10         ` Rob Pike
@ 2023-03-06 23:20         ` segaloco via TUHS
  1 sibling, 0 replies; 40+ messages in thread
From: segaloco via TUHS @ 2023-03-06 23:20 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: tuhs

I don't know if this is what Rob was getting at, but I've also been thinking about the relative homogenization of the display technologies field in recent years.  Massively-parallel vector engines calculating vertices, geometry, tessellation, and pixel color after rasterization are pretty much the only game in town these days.  Granted, we're finding new and exciting ways to *use* the data these technologies generate, but put pixels on a surface inside a VR headset, on a flat-panel monitor in front of my face, or on a little bundle of plastic, glass, and silicon in my pocket, and the general behavior is still the same: I'm going to give the card a bunch of vertices and data, it's going to then convert that data into a bunch of parallel jobs that eventually land as individual pixels on my screen.

One area I am actually interested to see how it develops, though, is hologram-type stuff.  VR is one thing: you're still creating what is ultimately a 2D image composed of individual samples (pixels) that capture a sampling of the state of a hypothetical 3D environment maintained in your application.  With AR, you're a little closer, basing depth on the physical environment around you, but I wouldn't be surprised if that same depth is still just put in a depth buffer, pixels are masked accordingly, and the application pastes a 2D rasterized image of the content over the camera display every 1/60th of a second (or whatever absurd refresh rate we're using now).  However, a hologram must maintain the concept of 3D space, at least well enough that if multiple objects were projected, they wouldn't simply disappear from view because one was "behind" another from just one viewpoint, the way the painter's algorithm for most depth culling in 3D stuff works.  Although I guess I just answered my own quandary, that one would just disable depth culling and display the results before the rasterizer, but still, there are implications beyond just sampling this 3D "scene" into a 2D matrix of pixels.

Another area that I wish there was more development in, but it's probably too late, is 2D acceleration ala 80's arcade hardware.  Galaxian is allegedly the first, at least as held in lore, to provide a chip in which the scrolling background was based on a constantly updating coordinate into a scrollplane and the individual moving graphics were implemented as hardware sprites.  I haven't researched much further back on scrollplane 2D acceleration, but I personally see it as still a very valid and useful technology, one that has been overlooked in the wake of 3D graphics and GPUs.  Given that many, many computer users don't really tax the 3D capabilities of their machine, they'd probably benefit from the lower costs and complexity involved with accelerating the movement of windows and visual content via this kind of hardware instead of GPUs based largely on vertex arrays.  Granted, I don't know how a modern windowing system leverages GPU technology to provide for window acceleration, but if it's using vertex arrays and that sort of thing, that's potentially a lot of extra processing of flat geometric stuff into vertices to feed a GPU, then in the GPU back into pixels, when the 2D graphics already exist and could easily be moved around by coordinate using a scrolling/spriting chip.  Granted the "thesis" on my thoughts on 2D scrolling vs GPU efficiency is half-baked, it's something I think about every so often.

Anywho, not sure if that's what you meant by things seeming to have rusted a bit, but that's my general line of thought when the state of graphics/display technology comes up.  Anything that is just another variation on putting a buffer of vertex properties on a 2k+-core vector engine to spit out a bunch of pixels isn't really pushing into new territory.  I don't care how many more triangles it can do nor how many of those pixels there are, and certainly not how close those pixels are to my eyeballs, it's still the same technology just keeping up with Moore's Law.  That all said, GPGPU on the other hand is such an interesting field and I hope to learn more in the coming years and start working compute shading into projects in earnest...

- Matt G.

------- Original Message -------
On Monday, March 6th, 2023 at 2:47 PM, Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:


> I had read that paper and I have just read it again. I am not sure I understand the point you are making.
> 
> My take away from the paper -- reading beyond its 1969 vista -- was that it made two points:
> 
> 1. We put ever more compute power behind our displays
> 
> 2. The compute power tends to grow by first adding special purpose hardware for some core tasks, that then develops into a generic processor and the process repeats
> 
> From this one could imply a third point: we always tend to want displays that are at the high end of what is economically feasible, even if that requires an architectural wart.
> 
> More narrowly it concludes that the compute power behind the display is better organised as being general to the system than to be dedicated to the display.
> 
> The wikipedia page with a history of GPU’s since 1970 (https://en.wikipedia.org/wiki/Graphics_processing_unit) shows the wheel rolling decade after decade.
> 
> However, the above observations are probably not what you wanted to direct my attention to ...
> 
> 
> > On 6 Mar 2023, at 09:57, Rob Pike robpike@gmail.com wrote:
> > 
> > I would think you have read Sutherland's "wheel of reincarnation" paper, but if you haven't, please do. What fascinates me today is that it seems for about a decade now the bearings on that wheel have rusted solid, and no one seems interested in lubricating them to get it going again.
> > 
> > -rob
> > 
> > On Mon, Mar 6, 2023 at 7:52 PM Paul Ruizendaal via TUHS tuhs@tuhs.org wrote:
> > Thanks for this.
> > 
> > My question was unclear: I wasn't thinking of the hardware, but of the software abstraction, i.e. the device files living in /dev
> > 
> > I’ve now read through SunOS man pages and it would seem that the /dev/fb file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite the same though, as initially it seems to have been tied to the kernel part of the SunWindows software. My understanding of the latter is still limited though. The later Linux usage is designed around mmap() and I am not sure when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD, but was not implemented at that time). Maybe at the time of the Sun-1 and Sun-2 it worked differently.
> > 
> > The frame buffer hardware is exposed differently in Plan9. Here there are device files (initially /dev/bit/screen and /dev/bit/bitblt) but these are not designed around mmap(), which does not exist on Plan9 by design. It later develops into the /dev/draw/... files. However, my understanding of graphics in Plan9 is also still limited.
> > 
> > All in all, finding a conceptually clean but still performant way to expose the frame buffer (and acceleration) hardware seems to have been a hard problem. Arguably it still is.
> > 
> > > On 5 Mar 2023, at 19:25, Kenneth Goodwin kennethgoodwin56@gmail.com wrote:
> > > 
> > > The first frame buffers from Evans and Sutherland were at University of Utah, DOD SITES and NYIT CGL as I recall.
> > > 
> > > Circa 1974 to 1978.
> > > 
> > > They were 19 inch RETMA racks.
> > > Took three to get decent RGB.
> > > 
> > > 8 bits per pixel per FB.
> > > 
> > > On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS tuhs@tuhs.org wrote:
> > > I am confused on the history of the frame buffer device.
> > > 
> > > On Linux, it seems that /dev/fbdev originated in 1999 from work done by Martin Schaller and Geert Uytterhoeven (and some input from Fabrice Bellard?).
> > > 
> > > However, it would seem at first glance that early SunOS also had a frame buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in nature (a character device that could be mmap’ed to give access to the hardware frame buffer, and ioctl’s to probe and configure the hardware). Is that correct, or were these entirely different in nature?
> > > 
> > > Paul


* [TUHS] Re: Origins of the frame buffer device
  2023-03-06  8:51   ` Paul Ruizendaal via TUHS
  2023-03-06  8:57     ` Rob Pike
@ 2023-03-07  1:24     ` Kenneth Goodwin
  2023-03-08  3:07     ` Rob Gingell
  2 siblings, 0 replies; 40+ messages in thread
From: Kenneth Goodwin @ 2023-03-07  1:24 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: The Eunuchs Hysterical Society


As I recall the device file names on our systems at NYIT CGL were simply

/dev/fb0
/dev/fb1
Etc.

There might have been a raw device as well, as well as a control channel
device to the PDP-11/04.

One path to the frame buffer memory and a separate path to the 11/04.

But TD would remember better than I.

On Mon, Mar 6, 2023, 3:52 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:

> Thanks for this.
>
> My question was unclear: I wasn't thinking of the hardware, but of the
> software abstraction, i.e. the device files living in /dev
>
> I’ve now read through SunOS man pages and it would seem that the /dev/fb
> file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite
> the same though, as initially it seems to have been tied to the kernel part
> of the SunWindows software. My understanding of the latter is still limited
> though. The later Linux usage is designed around mmap() and I am not sure
> when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD,
> but was not implemented at that time). Maybe at the time of the Sun-1 and
> Sun-2 it worked differently.
>
> The frame buffer hardware is exposed differently in Plan9. Here there are
> device files (initially /dev/bit/screen and /dev/bit/bitblt) but these are
> not designed around mmap(), which does not exist on Plan9 by design. It
> later develops into the /dev/draw/... files. However, my understanding of
> graphics in Plan9 is also still limited.
>
> All in all, finding a conceptually clean but still performant way to
> expose the frame buffer (and acceleration) hardware seems to have been a
> hard problem. Arguably it still is.
>
>
>
> > On 5 Mar 2023, at 19:25, Kenneth Goodwin <kennethgoodwin56@gmail.com>
> wrote:
> >
> > The first frame buffers from Evans and Sutherland were at University of
> Utah, DOD SITES and NYIT CGL as I recall.
> >
> > Circa 1974 to 1978.
> >
> > They were 19 inch RETMA racks.
> > Took three to get decent RGB.
> >
> > 8 bits per pixel per FB.
> >
> > On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
> wrote:
> > I am confused on the history of the frame buffer device.
> >
> > On Linux, it seems that /dev/fbdev originated in 1999 from work done by
> Martin Schaller and  Geert Uytterhoeven (and some input from Fabrice
> Bellard?).
> >
> > However, it would seem at first glance that early SunOS also had a frame
> buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in nature
> (a character device that could be mmap’ed to give access to the hardware
> frame buffer, and ioctl’s to probe and configure the hardware). Is that
> correct, or were these entirely different in nature?
> >
> > Paul
> >
>
>



* [TUHS] Re: Origins of the frame buffer device
  2023-03-05 15:01 [TUHS] Origins of the frame buffer device Paul Ruizendaal via TUHS
  2023-03-05 17:29 ` [TUHS] " Grant Taylor via TUHS
  2023-03-05 18:25 ` Kenneth Goodwin
@ 2023-03-07  1:54 ` Kenneth Goodwin
  2 siblings, 0 replies; 40+ messages in thread
From: Kenneth Goodwin @ 2023-03-07  1:54 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: The Eunuchs Hysterical Society


The NYIT setup had multiple Barcovision color CRT monitors connected to the
frame buffers via multiple coax video cables.

I presume through some sort of video switch hardware.

8 bits per pixel (unsigned char).
The numbers stored in the pixel frame buffer memory were used to index a
color map that held the actual RGB or HSV values for each color.

Cycling the color map array values was a cheap animation trick that Alex
Schure had a particular fondness for.
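
Purely as an illustration, the same color-map cycling trick expressed
against the much later Linux fbdev ioctls (FBIOGETCMAP / FBIOPUTCMAP
from <linux/fb.h>) rather than the NYIT hardware would look roughly
like this sketch:

#include <fcntl.h>
#include <linux/fb.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define N 256   /* entries in an 8-bit color map */

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    unsigned short r[N], g[N], b[N], t;
    struct fb_cmap cmap = { 0, N, r, g, b, NULL };

    ioctl(fd, FBIOGETCMAP, &cmap);       /* fetch the current palette */
    for (;;) {
        /* rotate every channel by one slot and write it back */
        t = r[0]; memmove(r, r + 1, (N - 1) * sizeof r[0]); r[N - 1] = t;
        t = g[0]; memmove(g, g + 1, (N - 1) * sizeof g[0]); g[N - 1] = t;
        t = b[0]; memmove(b, b + 1, (N - 1) * sizeof b[0]); b[N - 1] = t;
        ioctl(fd, FBIOPUTCMAP, &cmap);
        usleep(50000);                   /* ~20 steps per second */
    }
}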

On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org>
wrote:

> I am confused on the history of the frame buffer device.
>
> On Linux, it seems that /dev/fbdev originated in 1999 from work done by
> Martin Schaller and  Geert Uytterhoeven (and some input from Fabrice
> Bellard?).
>
> However, it would seem at first glance that early SunOS also had a frame
> buffer device (/dev/cgoneX. /dev/bwoneX, etc.) which was similar in nature
> (a character device that could be mmap’ed to give access to the hardware
> frame buffer, and ioctl’s to probe and configure the hardware). Is that
> correct, or were these entirely different in nature?
>
> Paul
>
>



* [TUHS] Re: Origins of the frame buffer device
  2023-03-06  8:51   ` Paul Ruizendaal via TUHS
  2023-03-06  8:57     ` Rob Pike
  2023-03-07  1:24     ` Kenneth Goodwin
@ 2023-03-08  3:07     ` Rob Gingell
  2023-03-08 12:51       ` Paul Ruizendaal via TUHS
  2 siblings, 1 reply; 40+ messages in thread
From: Rob Gingell @ 2023-03-08  3:07 UTC (permalink / raw)
  To: Paul Ruizendaal, tuhs

On 3/6/23 12:51 AM, Paul Ruizendaal via TUHS wrote:
> I’ve now read through SunOS man pages and it would seem that the /dev/fb file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite the same though, as initially it seems to have been tied to the kernel part of the SunWindows software. 

/dev/fb was a basic device-independent interface that indirected to the 
hardware that was actually installed for operations like open(), 
ioctl(), and mmap(). Through those the window systems figured out what 
they were actually working with. The (4S) man pages had entries for each 
of the frame buffers and graphics processors if you're able to find them.

SunView did have some kernel support, but /dev/fb and the support of 
mmap() were more generic and used by the non-kernel parts of the window 
systems (as well as diagnostics).

Sun had quite a collection of frame buffers through the years, including 
oscillating usage of graphics accelerators that might be viewed as 
wobbles in the wheel rather than true rotation per Sutherland's paper.

(Cultural note: the Computer History Museum published a 2-part oral 
history with Ivan Sutherland, and its home page currently features the 
interviews. The first part can be found at https://youtu.be/Bjr7qLeyvlw. 
Each part is several hours long.)

On 3/6/23 12:51 AM, Paul Ruizendaal via TUHS wrote:
> The later Linux usage is designed around mmap() and I am not sure when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD, but was not implemented at that time). Maybe at the time of the Sun-1 and Sun-2 it worked differently.

mmap() was in SunOS starting (I believe) with the BSD-derived SunOS 1.x. 
Prior to 4.0 it could be used with character devices that supported 
being mapped into an address space: frame buffers, graphics 
accelerators, and the bus address spaces of VME or Multibus being 
possible sources. There might have been others that I'm not recalling now.

The implementation had a number of restrictions relative to the BSD 
specification. All mappings had to be MAP_SHARED. The caller had to 
specify (page-aligned) addresses within its data segment as the process 
address space had the traditional text, data, and stack segments vs. 
being available as a vector of pages.
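
To make that restriction concrete, the usage idiom would have looked 
roughly like the sketch below, rewritten here with today's mmap() flags 
(MAP_FIXED standing in for the old rule that the caller supply a 
page-aligned address in the data segment); the device name and size are 
illustrative only, not taken from any historical source.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define FB_BYTES (1024 * 1024)   /* hypothetical 1 MiB frame buffer */

/* page-aligned placeholder carved out of the data segment
   (assuming 4 KiB pages on the illustrative host) */
static char fb_hole[FB_BYTES] __attribute__((aligned(4096)));

int main(void)
{
    int fd = open("/dev/fb", O_RDWR);

    /* replace the placeholder pages with a shared mapping of the device */
    char *fb = mmap(fb_hole, FB_BYTES, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_FIXED, fd, 0);

    /* ... draw by storing into fb[0 .. FB_BYTES-1] ... */

    munmap(fb, FB_BYTES);
    close(fd);
    return 0;
}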


* [TUHS] Re: Origins of the frame buffer device
  2023-03-08  3:07     ` Rob Gingell
@ 2023-03-08 12:51       ` Paul Ruizendaal via TUHS
  2023-03-08 13:05         ` Warner Losh
  2023-03-08 13:17         ` Arno Griffioen via TUHS
  0 siblings, 2 replies; 40+ messages in thread
From: Paul Ruizendaal via TUHS @ 2023-03-08 12:51 UTC (permalink / raw)
  To: tuhs


> On 8 Mar 2023, at 04:07, Rob Gingell <gingell@computer.org> wrote:
> 
> On 3/6/23 12:51 AM, Paul Ruizendaal via TUHS wrote:
>> I’ve now read through SunOS man pages and it would seem that the /dev/fb file was indeed similar to /dev/fbdev on Linux 15 years later. Not quite the same though, as initially it seems to have been tied to the kernel part of the SunWindows software. 
> 
> /dev/fb was a basic device-independent interface that indirected to the hardware that was actually installed for operations like open(), ioctl(), and mmap(). Through those the window systems figured out what they were actually working with. The (4S) man pages had entries for each of the frame buffers and graphics processors if you're able to find them.
> 
> Sunview did have some kernel support but /dev/fb and the support of mmap() were more generic and used by the non-kernel parts of the window systems (as well as diagnostics).
> 
> Sun had quite a collection of frame buffers through the years, including oscillating usage of graphics accelerators that might be viewed as wobbles in the wheel rather than true rotation per Sutherland's paper.
> 
> (Cultural note: the Computer History Museum published a 2-part oral history with Ivan Sutherland, and its home page currently features the interviews. The first part can be found at https://youtu.be/Bjr7qLeyvlw. Each part is several hours long.)
> 
> On 3/6/23 12:51 AM, Paul Ruizendaal via TUHS wrote:
>> The later Linux usage is designed around mmap() and I am not sure when that arrived in SunOS (the mmap call exists in the manpages of 4.2BSD, but was not implemented at that time). Maybe at the time of the Sun-1 and Sun-2 it worked differently.
> 
> mmap() was in SunOS starting (I believe) with the BSD-derived SunOS 1.x. Prior to 4.0 it could be used with character devices that supported being mapped into an address space: frame buffers, graphics accelerators, and the bus address spaces of VME or Multibus being possible sources. There might have been others that I'm not recalling now.
> 
> The implementation had a number of restrictions from the BSD specification. All mappings had to be MAP_SHARED. The caller had to specify (page-aligned) addresses within its data segment as the process address space had the traditional text, data, and stack segments vs. being available as a vector of pages.

Thank you for those clarifications.

The man pages for SunOS 1.1 (March ’84) are available on Bitsavers and your comments helped me along. Your memory is correct: the man pages for SunOS 1.1 describe it that way, and this description matches with the mmap kernel code in 4.2BSD (it is ifdef'ed out there; I had always assumed it was non-functional, but now it seems to have been a Sun special -- unfortunately there is no matching driver in the 4.2BSD tree).

As far as I can tell, in SunOS 1.1 mmap only worked on the frame buffer device. Its mmap implementation mapped out the normal memory pages for a data area and then mapped the frame buffer into that area; I suppose this means that the data area of graphics programs contained a large array as a placeholder for this space.

With that knowledge it is I think safe to say that SunOS had an mmap’ed frame buffer device file as its access abstraction from its earliest days and that this design did not really change throughout the lifetime of SunOS (it is still there in version 4.1.3).

It is then surprising that Linux did not come up with /dev/fbdev until 1999, especially in view of the importance of X to the early Linux developers. Maybe the reason was that the headache of video drivers was delegated to the XFree community.
 







* [TUHS] Re: Origins of the frame buffer device
  2023-03-06 23:10         ` Rob Pike
@ 2023-03-08 12:53           ` Paul Ruizendaal
  2023-03-08 14:23             ` Dan Cross
  0 siblings, 1 reply; 40+ messages in thread
From: Paul Ruizendaal @ 2023-03-08 12:53 UTC (permalink / raw)
  To: tuhs


> There is some playing around with the software model, but basically we have a powerful, privately managed external "card" that does the graphics, and we talk to it with one particular mechanism. During my work on Plan 9, it got harder over time to try new things because the list of ways to talk to the card was shrinking, and other than the blessed ones, slower and slower.
> 
> For example, a shared-memory frame buffer, which I interpret as the mechanism that started this conversation, is now a historical artifact. Back when, it was a thing to play with.

Ah, now it is clear.

This also has a relation to the point about what constitutes a "workstation" and one of the comments was that it needs to have "integrated graphics". What is integrated in this historical context -- is it a shared memory frame buffer, is it a shared CPU, is it physically in the same box or just an integrated user experience? It seems to me that it is not easy to delineate. Consider a Vax connected to a Tek raster scan display, a Vax connected to a Blit, Clem’s Magnolia and a networked Sun-1. Which ones are workstations? If shared memory is the key only Clem’s Magnolia and the Sun-1 qualify. If it is a shared CPU only a standalone Sun-1 qualifies, but its CPU would be heavily taxed when doing graphics, so standalone graphics was maybe not a normal use case. For now my rule of thumb is that it means (in a 1980’s-1990’s context) a high-bandwidth path between the compute side and display side, with enough total combined power to drive both the workload and the display.

This is also what underlies the question about the /dev/fbdev etc. devices. In Unix, exposing the screen as a file seems a logical choice, although with challenges in how to make this efficient.

SunOS 1.1 had a /dev/fb device that accessed the shared memory frame buffer. It seems that Sun made 4.2BSD mmap work specifically for this device only; only later did this become more generic. The 1999 Linux /dev/fbdev device seems to have more or less copied this SunOS design, providing the illusion of a shared memory frame buffer even when that was not always the underlying hardware reality. Access to acceleration was through display commands issued via ioctl.

Early Plan9 has the /dev/bit/screen device, which is read-only. Writing to the screen appears to have been through display commands written to /dev/bit/bitblt. Later this was refined into the /dev/draw/data and /dev/draw/ctl devices. No mmap of course, and it was possible to remotely mount these devices. Considering these abstractions, did it matter all that much how the screen hardware was organised?

The 1986 Whitechapel MG-1 / Oriel window system used a very different approach: applications drew to private window bitmaps in main memory, which the kernel composited to an otherwise user-inaccessible frame buffer upon request. In a way it resembles the current approach. I’m surprised this was workable with the hardware speeds and memory sizes of the era. However, maybe it did not work well at all, as they went out of business after just a few years.



* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 12:51       ` Paul Ruizendaal via TUHS
@ 2023-03-08 13:05         ` Warner Losh
  2023-03-08 13:17         ` Arno Griffioen via TUHS
  1 sibling, 0 replies; 40+ messages in thread
From: Warner Losh @ 2023-03-08 13:05 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: The Eunuchs Hysterical Society


On Wed, Mar 8, 2023, 4:51 AM Paul Ruizendaal via TUHS <tuhs@tuhs.org> wrote:

> It is then surprising that Linux did not come up with /dev/fbdev until
> 1999, especially in view of the importance of X to the early Linux
> developers. Maybe the reason was that the headache of video drivers was
> delegated to the XFree community.
>

Mapping /dev/mem was good enough for SVGA cards. No surprise there: just be
lazy.
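
Roughly this sketch (the legacy VGA graphics window sits at physical
0xA0000, 64 KiB; needs root, and modern kernels may refuse):

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);

    /* map the 64 KiB VGA graphics window at physical 0xA0000 */
    uint8_t *vga = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0xA0000);

    vga[0] = 0x0F;      /* poke the first byte of the video window */

    munmap(vga, 0x10000);
    close(fd);
    return 0;
}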

Warner

>



* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 12:51       ` Paul Ruizendaal via TUHS
  2023-03-08 13:05         ` Warner Losh
@ 2023-03-08 13:17         ` Arno Griffioen via TUHS
  1 sibling, 0 replies; 40+ messages in thread
From: Arno Griffioen via TUHS @ 2023-03-08 13:17 UTC (permalink / raw)
  To: tuhs

On Wed, Mar 08, 2023 at 01:51:10PM +0100, Paul Ruizendaal via TUHS wrote:
> It is then surprising that Linux did not come up with /dev/fbdev until 1999, 
> especially in view of the importance of X to the early Linux developers. 

The PC/intel 'birthplace' of Linux had the advantage that the video hardware
always had an ASCII mode, so from that viewpoint there was no need for 
any fb interface initially to get things going.

If I remember correctly, things like SCO Unix also did not use any fb-style
interface for their X11.

> Maybe the reason was that the headache of video drivers was delegated to 
> the XFree community.

Correct. There was definitely a sense that keeping all the graphics bits out 
of the kernel and just letting Xfree open /dev/mem and do its magic was the 
preferred way. It also meant that Xfree wasn't waiting on kernel bits and 
vice versa, and could keep going in parallel.

Unlike today where there's only a few cards/manufacturers left, we had 
hundreds of similar-but-not-quite-the-same PC videocards on the market,
so it was an understandable sentiment as change was fast and frequent
on the Xfree end.

The FBdev idea really started brewing because of the non-Intel Linux ports 
that did not have the luxury of a built-in ASCII-capable 'console', so they 
needed to get something going there (after initial porting with serial output).

Something was needed 'in kernel' even for a plain console on the framebuffer, 
and, to keep it from falling apart, some sort of general interface for a 
generic (even un-accelerated) X11 as well.

Most of the non-PC framebuffer hardware was too 'obscure' for Xfree to spend 
much time on as well.

The Linux/M68k port, with Geert being very active here, really pushed to 
get a more generalised display setup going that would work on the wildly 
different framebuffer hardware between all the m68k platforms but basically 
make use of one simple X server that didn't need to know that much 
about the hardware.

Ultimately this led to the general FBdev project for all platforms.

							Bye, Arno.


* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 12:53           ` Paul Ruizendaal
@ 2023-03-08 14:23             ` Dan Cross
  2023-03-08 15:06               ` Paul Ruizendaal
                                 ` (2 more replies)
  0 siblings, 3 replies; 40+ messages in thread
From: Dan Cross @ 2023-03-08 14:23 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: tuhs

On Wed, Mar 8, 2023 at 7:53 AM Paul Ruizendaal <pnr@planet.nl> wrote:
> This also has a relation to the point about what constitutes a "workstation" and one of the comments was that it needs to have "integrated graphics". What is integrated in this historical context -- is it a shared memory frame buffer, is it a shared CPU, is it physically in the same box or just an integrated user experience? It seems to me that it is not easy to delineate. Consider a Vax connected to a Tek raster scan display, a Vax connected to a Blit, Clem’s Magnolia and a networked Sun-1. Which ones are workstations? If shared memory is the key only Clem’s Magnolia and the Sun-1 qualify. If it is a shared CPU only a standalone Sun-1 qualifies, but its CPU would be heavily taxed when doing graphics, so standalone graphics was maybe not a normal use case. For now my rule of thumb is that it means (in a 1980’s-1990’s context) a high-bandwidth path between the compute side and display side, with enough total combined power to drive both the workload and the display.

I wouldn't try to be too rigid in your terms here. The term
"workstation" was probably never well-defined; it had more of an
intuitive connotation of a machine that was more powerful than
something you could get on the consumer market (like a PC or 8-bit
microcomputer), but wasn't a minicomputer or mainframe/supercomputer.

By the early 90s this was understood to mean a single-user machine in
a desktop or deskside form factor with a graphics display, and a more
advanced operating system than something you'd get on a consumer-grade
machine.  But the term probably predated that. Generally, workstations
were machines marketed towards science/engineering/technology
applications, and so intended for a person doing "work", as opposed to
home computing or large scale business data-processing.

Would a Tek 4014 connected to a VAX count? Maybe? I mean, bicycles
have saddles, but we also put saddles on horses, so sure, in the late
70s or early 80s, why not. By the 1990s, maybe not so much.

        - Dan C.


* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 14:23             ` Dan Cross
@ 2023-03-08 15:06               ` Paul Ruizendaal
  2023-03-08 19:35                 ` Dan Cross
  2023-03-08 16:55               ` Theodore Ts'o
  2023-03-08 17:45               ` Clem Cole
  2 siblings, 1 reply; 40+ messages in thread
From: Paul Ruizendaal @ 2023-03-08 15:06 UTC (permalink / raw)
  To: tuhs


> On 8 Mar 2023, at 15:23, Dan Cross <crossd@gmail.com> wrote:
> 
> On Wed, Mar 8, 2023 at 7:53 AM Paul Ruizendaal <pnr@planet.nl> wrote:
>> This also has a relation to the point about what constitutes a "workstation" and one of the comments was that it needs to have "integrated graphics". What is integrated in this historical context -- is it a shared memory frame buffer, is it a shared CPU, is it physically in the same box or just an integrated user experience? It seems to me that it is not easy to delineate. Consider a Vax connected to a Tek raster scan display, a Vax connected to a Blit, Clem’s Magnolia and a networked Sun-1. Which ones are workstations? If shared memory is the key only Clem’s Magnolia and the Sun-1 qualify. If it is a shared CPU only a standalone Sun-1 qualifies, but its CPU would be heavily taxed when doing graphics, so standalone graphics was maybe not a normal use case. For now my rule of thumb is that it means (in a 1980’s-1990’s context) a high-bandwidth path between the compute side and display side, with enough total combined power to drive both the workload and the display.
> 
> I wouldn't try to be too rigid in your terms here. The term
> "workstation" was probably never well-defined; it had more of an
> intuitive connotation of a machine that was more powerful than
> something you could get on the consumer market (like a PC or 8-bit
> microcomputer), but wasn't a minicomputer or mainframe/supercomputer.

Yes, I got a bit carried away there. The point I was trying to make was in context of the wheel of reincarnation, though: if the system is a Vax and a Blit, we could conceptually think of the Blit as an accelerated graphics card for the Vax, having made one full revolution. If this is nonsense, why is the Magnolia different?





* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 14:23             ` Dan Cross
  2023-03-08 15:06               ` Paul Ruizendaal
@ 2023-03-08 16:55               ` Theodore Ts'o
  2023-03-08 17:46                 ` Clem Cole
  2023-03-08 17:45               ` Clem Cole
  2 siblings, 1 reply; 40+ messages in thread
From: Theodore Ts'o @ 2023-03-08 16:55 UTC (permalink / raw)
  To: Dan Cross; +Cc: Paul Ruizendaal, tuhs

On Wed, Mar 08, 2023 at 09:23:56AM -0500, Dan Cross wrote:
> 
> By the early 90s this was understood to mean a single-user machine in
> a desktop or deskside form factor with a graphics display, and a more
> advanced operating system than something you'd get on a consumer-grade
> machine.  But the term probably predated that. Generally, workstations
> were machines marketed towards science/engineering/technology
> applications, and so intended for a person doing "work", as opposed to
> home computing or large scale business data-processing.

It's perhaps interesting to look at the history of A/UX.  In 1988
Apple released a version of Unix based on SVR2.  It was massively
criticized for being command-line only (no X windows or any other kind
of GUI).

A year later, they came out with a version with X Windows, which made
it roughly comparable to a low-end Sun Workstation, at a price of $9k.
Given that in addition to the 3M's (1 MIPS, 1 Meg of Memory, and 1
Megapixel graphics display), it also adhered to the "4th M", costing
roughly $10k, or a "Megapenny".  :-)

It seems to me that in the late 80's / early 90's, it was pretty clear
what people were expecting if you wanted something more than a "toy"
(err..., sorry, a "home") computer.  And Apple wanted to get into that
game and play, too.  Of course, they later decided this wasn't a place
where they really could compete, especially since in 1990 Sun released
a low-end Workstation for $5k (the SPARCstation SLC).

And by 1992, you could get a very credible Linux home machine with X
Windows, for about $2k.  It's kind of amazing how quickly a personal
Workstation became quite affordable, even for a graduate student or a
new college grad (like me!), out of their own personal checking
account.

  	  	      	      	   		- Ted

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 14:23             ` Dan Cross
  2023-03-08 15:06               ` Paul Ruizendaal
  2023-03-08 16:55               ` Theodore Ts'o
@ 2023-03-08 17:45               ` Clem Cole
  2023-03-08 18:12                 ` segaloco via TUHS
  2023-03-09 14:42                 ` Paul Winalski
  2 siblings, 2 replies; 40+ messages in thread
From: Clem Cole @ 2023-03-08 17:45 UTC (permalink / raw)
  To: Dan Cross; +Cc: Paul Ruizendaal, tuhs

[-- Attachment #1: Type: text/plain, Size: 3452 bytes --]

On Wed, Mar 8, 2023 at 9:24 AM Dan Cross <crossd@gmail.com> wrote:

> I wouldn't try to be too rigid in your terms here. The term
> "workstation" was probably never well-defined
>
I agree.


> By the early 90s this was understood to mean a single-user machine in
> a desktop or deskside form factor with a graphics display, and a more
> advanced operating system than something you'd get on a consumer-grade
> machine.  But the term probably predated that.
>
Definitely.

>
> Would a Tek 4014 connected to a VAX count?
>
And herein lies the issue.  The term was taken from the
engineering/architecture style definition of the 50s/60s - where someone
had a desk/table/bench and an *area to do 'work'*.

With CTSS/Multics et al. and the birth of interactive computing, the term
was used to define an area (usually in a shared computer terminal room). By
the time of the Tek 4014 and ME-CAD in particular, you often saw darkened
rooms where one or two Tek 4000 series terminals might be attached to a
large (more capable) computer - be it a PDP-10, an IBM, or later Vaxen.  At
this point, everything is shared - because the computer is shared - only
the terminal itself is single-user, but this was called a 'workstation' at
that time, *as the place where you did work*.

Fast forward to the first personal (mini) computer - *a.k.a.* the Xerox
Alto.

These were intended to be single-user computer systems, and the CPU was not
a shared resource like a time-shared system. Next, we see the MIT LISP
machine and the PascAlto [*a.k.a.* the 3-Rivers Perq] -- same thing. BTW: I
also just looked at my copy of the CMU SPICE (Scientific Personal
Integrated Computing Environment) proposal. In none of these does the term
workstation show up (be used) *to describe the computer itself* - *i.e.,*
the term is still only used in the context of the place/area you do work.
All of these use the term *personal computer* to describe the device being
used in that place.

We also start to see the birth of firms like Apollo, Masscomp, and later
VLSI Systems (later renamed Sun Microsystems), which also build personal
computers that can perform the same computing tasks as 32-bit minicomputers
such as the Vax.

Fast forward to the IBM release of the IBM 5150 Personal Computer based on
an Intel 8088 - which is decidedly a much less capable computer than what
is being sold by the folks using Vaxen, M68000s, or the Zilog Z8000.  While
this system can be a fine replacement for a 'word processor' and even run
the business-friendly 'VisiCalc' - it is not suited for the CAD-style work
that is running on minicomputers.  But ... IBM usurps the term 'Personal
Computer' to describe their new product (and make it sound a bit more than
what it really was). But now you have a problem in the market at large.

Marketing folks at places like 3-Rivers, Apollo, and the like need a new
term to describe the capabilities of the computer in their more expensive
products, to differentiate them from the new IBM product and explain their
value for that extra cost -> i.e. they were not selling personal computers,
but complete and much more capable systems that integrated into a network,
had raster graphics, *etc.*, and could perform tasks that the IBM PC was
unable to. So they took the term from how the product was being used -> *to
create a place to do work, to be the device that allowed you to do (real)
work*.

[-- Attachment #2: Type: text/html, Size: 8368 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 16:55               ` Theodore Ts'o
@ 2023-03-08 17:46                 ` Clem Cole
  0 siblings, 0 replies; 40+ messages in thread
From: Clem Cole @ 2023-03-08 17:46 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Paul Ruizendaal, tuhs

[-- Attachment #1: Type: text/plain, Size: 529 bytes --]

On Wed, Mar 8, 2023 at 11:55 AM Theodore Ts'o <tytso@mit.edu> wrote:

> A year later, they came out with a version with X Windows, which made
> it roughly comparable to a low-end Sun Workstation, at a price of $9k.
> Given that in addition to the 3M's (1 MIPS, 1 Meg of Memory, and 1
> Megapixel graphics display), it also adhered to the "4th M", costing
> roughly $10k, or a "Megapenny".  :-)
>
CMU's SPICE proposal was for a 3M machine at under $5K by 1985. But you are
right, $10-15K was typical in those days.

[-- Attachment #2: Type: text/html, Size: 1331 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 17:45               ` Clem Cole
@ 2023-03-08 18:12                 ` segaloco via TUHS
  2023-03-08 18:21                   ` Larry McVoy
  2023-03-09 14:42                 ` Paul Winalski
  1 sibling, 1 reply; 40+ messages in thread
From: segaloco via TUHS @ 2023-03-08 18:12 UTC (permalink / raw)
  To: Clem Cole; +Cc: Paul Ruizendaal, tuhs

[-- Attachment #1: Type: text/plain, Size: 4197 bytes --]

I've always liked the "Workbench" terminology that AT&T used, it summons to mind a more communal environment than the idea of a "workstation" does. The former makes me think of my time in the lab "at the bench" with my coworkers, whereas the latter makes me think of my current remote programming job where my break room chat is limited to the cat.

It's a shame we all have multiple timesharing systems in our homes, pockets, etc. these days but a dearth of communal computing environments. That said, it's understandable given how many bad actors there are out there, a computing utility would be a frequent target...curse the crackers for ruining that along with the term hacking....

- Matt G.
------- Original Message -------
On Wednesday, March 8th, 2023 at 9:45 AM, Clem Cole <clemc@ccc.com> wrote:

> On Wed, Mar 8, 2023 at 9:24 AM Dan Cross <crossd@gmail.com> wrote:
>
>> I wouldn't try to be too rigid in your terms here. The term
>> "workstation" was probably never well-defined
>
> I agree.
>
>> By the early 90s this was understood to mean a single-user machine in
>> a desktop or deskside form factor with a graphics display, and a more
>> advanced operating system than something you'd get on a consumer-grade
>> machine. But the term probably predated that.
>
> Definitely.
>
>> Would a Tek 4014 connected to a VAX count?
>
> And herein lies the issue. The term was taken from the engineering/architecture style definition of the 50s/60s - where someone had a desk/table/bench and an area to do 'work'.
>
> With CTSS/Multics et al. and the birth of interactive computing, the term was used to define an area (usually in a shared computer terminal room). By the time of the Tek 4014 and ME-CAD in particular, you often saw darkened rooms where one or two Tek 4000 series terminals might be attached to a large (more capable) computer - be it a PDP-10, an IBM, or later Vaxen. At this point, everything is shared - because the computer is shared - only the terminal itself is single-user, but this was called a 'workstation' at that time, as the place where you did work.
>
> Fast forward to the first personal (mini) computer - a.k.a. the Xerox Alto.
>
> These were intended to be single-user computer systems, and the CPU was not a shared resource like a time-shared system. Next, we see the MIT LISP machine and the PascAlto [a.k.a. the 3-Rivers Perq] -- same thing. BTW: I also just looked at my copy of the CMU SPICE (Scientific Personal Integrated Computing Environment) proposal.  In none of these does the term workstation show up (be used) to describe the computer itself - i.e., the term is still only used in the context of the place/area you do work. All of these use the term personal computer to describe the device being used in that place.
>
> We also start to see the birth of firms like Apollo, Masscomp, and later VLSI Systems (later renamed Sun Microsystems), which also build personal computers that can perform the same computing tasks as 32-bit minicomputers such as the Vax.
>
> Fast forward to the IBM release of the IBM 5150 Personal Computer based on an Intel 8088 - which is decidedly a much less capable computer than what is being sold by the folks using Vaxen, M68000s, or the Zilog Z8000. While this system can be a fine replacement for a 'word processor' and even run the business-friendly 'VisiCalc' - it is not suited for the CAD-style work that is running on minicomputers. But ... IBM usurps the term 'Personal Computer' to describe their new product (and make it sound a bit more than what it really was). But now you have a problem in the market at large.
>
> Marketing folks at places like 3-Rivers, Apollo, and the like need a new term to describe the capabilities of the computer in their more expensive products, to differentiate them from the new IBM product and explain their value for that extra cost -> i.e. they were not selling personal computers, but complete and much more capable systems that integrated into a network, had raster graphics, etc., and could perform tasks that the IBM PC was unable to. So they took the term from how the product was being used -> to create a place to do work, to be the device that allowed you to do (real) work.

[-- Attachment #2: Type: text/html, Size: 9483 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 18:12                 ` segaloco via TUHS
@ 2023-03-08 18:21                   ` Larry McVoy
  2023-03-08 18:43                     ` Kenneth Goodwin
                                       ` (2 more replies)
  0 siblings, 3 replies; 40+ messages in thread
From: Larry McVoy @ 2023-03-08 18:21 UTC (permalink / raw)
  To: segaloco; +Cc: Paul Ruizendaal, tuhs

I really miss terminal rooms.  I learned so much looking over the
shoulders of more experienced people.  Wait, you can run a paragraph of
text through fmt(1)?  How the heck did you do that?

I don't really miss it for me, I'm retired, but I think kids learning 
are sort of at a disadvantage.  Not sure there is a youtube video that
teaches you to say:

:map , !}fmt<CR>
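(i.e., pressing ',' in command mode filters the lines from the cursor to the
end of the paragraph through fmt(1) and replaces them with the reflowed text)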

On Wed, Mar 08, 2023 at 06:12:48PM +0000, segaloco via TUHS wrote:
> I've always liked the "Workbench" terminology that AT&T used, it summons to mind a more communal environment than the idea of a "workstation" does. The former makes me think of my time in the lab "at the bench" with my coworkers, whereas the latter makes me think of my current remote programming job where my break room chat is limited to the cat.
> 
> It's a shame we all have multiple timesharing systems in our homes, pockets, etc. these days but a dearth of communal computing environments. That said, it's understandable given how many bad actors there are out there, a computing utility would be a frequent target...curse the crackers for ruining that along with the term hacking....
> 
> - Matt G.
> ------- Original Message -------
> On Wednesday, March 8th, 2023 at 9:45 AM, Clem Cole <clemc@ccc.com> wrote:
> 
> > On Wed, Mar 8, 2023 at 9:24 AM Dan Cross <crossd@gmail.com> wrote:
> >
> >> I wouldn't try to be too rigid in your terms here. The term
> >> "workstation" was probably never well-defined
> >
> > I agree.
> >
> >> By the early 90s this was understood to mean a single-user machine in
> >> a desktop or deskside form factor with a graphics display, and a more
> >> advanced operating system than something you'd get on a consumer-grade
> >> machine. But the term probably predated that.
> >
> > Definitely.
> >
> >> Would a Tek 4014 connected to a VAX count?
> >
> > And herein lies the issue. The term was taken from the engineering/architecture style definition of the 50s/60s - where someone had a desk/table/bench and an area to do 'work'.
> >
> > With CTSS/Multics et al. and the birth of interactive computing, the term was used to define an area (usually in a shared computer terminal room). By the time of the Tek 4014 and ME-CAD in particular, you often saw darkened rooms where one or two Tek 4000 series terminals might be attached to a large (more capable) computer - be it a PDP-10, an IBM, or later Vaxen. At this point, everything is shared - because the computer is shared - only the terminal itself is single-user, but this was called a 'workstation' at that time, as the place where you did work.
> >
> > Fast forward to the first personal (mini) computer - a.k.a. the Xerox Alto.
> >
> > These were intended to be single-user computer systems, and the CPU was not a shared resource like a time-shared system. Next, we see the MIT LISP machine and the PascAlto [a.k.a. the 3-Rivers Perq] -- same thing. BTW: I also just looked at my copy of the CMU SPICE (Scientific Personal Integrated Computing Environment) proposal.  In none of these does the term workstation show up (be used) to describe the computer itself - i.e., the term is still only used in the context of the place/area you do work. All of these use the term personal computer to describe the device being used in that place.
> >
> > We also start to see the birth of firms like Apollo, Masscomp, and later VLSI Systems (later renamed Sun Microsystems), which also build personal computers that can perform the same computing tasks as 32-bit minicomputers such as the Vax.
> >
> > Fast forward to the IBM release of the IBM 5150 Personal Computer based on an Intel 8088 - which is decidedly a much less capable computer than what is being sold by the folks using Vaxen, M68000s, or the Zilog Z8000. While this system can be a fine replacement for a 'word processor' and even run the business-friendly 'VisiCalc' - it is not suited for the CAD-style work that is running on minicomputers. But ... IBM usurps the term 'Personal Computer' to describe their new product (and make it sound a bit more than what it really was). But now you have a problem in the market at large.
> >
> > Marketing folks at places like 3-Rivers, Apollo, and the like need a new term to describe the capabilities of the computer in their more expensive products, to differentiate them from the new IBM product and explain their value for that extra cost -> i.e. they were not selling personal computers, but complete and much more capable systems that integrated into a network, had raster graphics, etc., and could perform tasks that the IBM PC was unable to. So they took the term from how the product was being used -> to create a place to do work, to be the device that allowed you to do (real) work.

-- 
---
Larry McVoy           Retired to fishing          http://www.mcvoy.com/lm/boat

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 18:21                   ` Larry McVoy
@ 2023-03-08 18:43                     ` Kenneth Goodwin
  2023-03-08 18:45                     ` Steffen Nurpmeso
  2023-03-08 22:44                     ` Clem Cole
  2 siblings, 0 replies; 40+ messages in thread
From: Kenneth Goodwin @ 2023-03-08 18:43 UTC (permalink / raw)
  To: Larry McVoy; +Cc: segaloco, Paul Ruizendaal, The Eunuchs Hysterical Society

[-- Attachment #1: Type: text/plain, Size: 1330 bytes --]

On Wed, Mar 8, 2023, 1:21 PM Larry McVoy <lm@mcvoy.com> wrote:

> I really miss terminal rooms.  I learned so much looking over the
> shoulders of more experienced people.  Wait, you can run a paragraph of
> text through fmt(1)?  How the heck did you do that?
>
> I don't really miss it for me, I'm retired, but I think kids learning
> are sort of at a disadvantage.  Not sure there is a youtube video that
> teaches you to say:
>
> :map , !}fmt<CR>
>

You could all create those videos:
How-to's.
The histories.
Evolution.
Working with the Legends of Computer Science.

Insights into working on the R&D side.

Give them an insight into the problems you faced and how you approached
finding a solution. Teaches critical thinking, how to approach the
impossible problem, from a personal perspective.
The emotional side of great research.
The eureka moments in your lives.
Teach, inspire.

The olde Geezer Geeks Educational Video Series for Young Minds and New
Hackers.




> On Wed, Mar 08, 2023 at 06:12:48PM +0000, segaloco via TUHS wrote:
> > I've always liked the "Workbench" terminology that AT&T used, it summons
> to mind a more communal environment than the idea of a "workstation" does.
> The former makes me think of my time in the lab.....
> --
> ---
> Larry McVoy           Retired to fishing
> http://www.mcvoy.com/lm/boat
>

[-- Attachment #2: Type: text/html, Size: 2407 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 18:21                   ` Larry McVoy
  2023-03-08 18:43                     ` Kenneth Goodwin
@ 2023-03-08 18:45                     ` Steffen Nurpmeso
  2023-03-08 22:44                     ` Clem Cole
  2 siblings, 0 replies; 40+ messages in thread
From: Steffen Nurpmeso @ 2023-03-08 18:45 UTC (permalink / raw)
  To: Larry McVoy; +Cc: segaloco, Paul Ruizendaal, tuhs

Larry McVoy wrote in
 <20230308182120.GL5993@mcvoy.com>:
 |I really miss terminal rooms.  I learned so much looking over the
 |shoulders of more experienced people.  Wait, you can run a paragraph of
 |text through fmt(1)?  How the heck did you do that?
 |
 |I don't really miss it for me, I'm retired, but I think kids learning 
 |are sort of at a disadvantage.  Not sure there is a youtube video that
 |teaches you to say:
 |
 |:map , !}fmt<CR>

vim can do this built-in (sufficiently for me).

  map     <F11>   <Esc>{gq}
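  " (i.e. F11 leaves insert mode, jumps to the start of the paragraph with {,
  " and reformats through to its end with vim's built-in gq formatter)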

Rarely, (GNU) fmt (only the latter) can really do better

  map     <F12>M  <Esc>:'{,'}!fmt 64 66<CR>
  map     <F12>m  <Esc>:'{,'}!fmt --uniform-spacing --width=66 --goal=64<CR>

I have forgotten the cryptic meaning - Don't ask me no question,
I know a little, All i can do is write about it, Am i losin',
I never dreamed That smell (sweet home alabama).
I must now get explicitly rid of things in [ci]?map however (like
<C-A>   <Nop>, C-X, C-E etc) to be a free bird.  'Just a Simple
Man.  But enough Lynyrd Skynyrd southern US!  (Sorry!)

For decades I have wanted to check out vile, but never made it there for
long.  (Now that I have switched back to tabulator indentation, maybe.
It can now do spaces, on the other hand, I think.  Comments
auto-joining back over lines in vim's C mode is so nice, then
again.)

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 15:06               ` Paul Ruizendaal
@ 2023-03-08 19:35                 ` Dan Cross
  0 siblings, 0 replies; 40+ messages in thread
From: Dan Cross @ 2023-03-08 19:35 UTC (permalink / raw)
  To: Paul Ruizendaal; +Cc: tuhs

On Wed, Mar 8, 2023 at 10:06 AM Paul Ruizendaal <pnr@planet.nl> wrote:
> > On 8 Mar 2023, at 15:23, Dan Cross <crossd@gmail.com> wrote:
> > I wouldn't try to be too rigid in your terms here. The term
> > "workstation" was probably never well-defined; it had more of an
> > intuitive connotation of a machine that was more powerful than
> > something you could get on the consumer market (like a PC or 8-bit
> > microcomputer), but wasn't a minicomputer or mainframe/supercomputer.
>
> Yes, I got a bit carried away there. The point I was trying to make was in context of the wheel of reincarnation, though: if the system is a Vax and a Blit, we could conceptually think of the Blit as an accelerated graphics card for the Vax, having made one full revolution. If this is nonsense, why is the Magnolia different?

I don't think it's nonsense. I do think it is reaching a bit to draw
an analogy that was probably unconsidered at the time, but that's ok.
I think the study of history turns up such things all the time.

        - Dan C.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 18:21                   ` Larry McVoy
  2023-03-08 18:43                     ` Kenneth Goodwin
  2023-03-08 18:45                     ` Steffen Nurpmeso
@ 2023-03-08 22:44                     ` Clem Cole
  2 siblings, 0 replies; 40+ messages in thread
From: Clem Cole @ 2023-03-08 22:44 UTC (permalink / raw)
  To: Larry McVoy; +Cc: segaloco, Paul Ruizendaal, tuhs

[-- Attachment #1: Type: text/plain, Size: 963 bytes --]

On Wed, Mar 8, 2023 at 1:21 PM Larry McVoy <lm@mcvoy.com> wrote:

> I really miss terminal rooms.  I learned so much looking over the
> shoulders of more experienced people.
>
Amen, bro. Funny, Guy Almes and I were chatting about this last week WRT
some CMU history.

I've observed that the theory could be learned in lectures and reading
books, but the art is taught to an apprentice by a master - carefully
looking over her/his shoulder, trying it yourself, being self-critical of
your work, as well as taking advice/critical observations from others --
then little by little learning to appreciate what works. That was all so
much easier in a terminal room.

Or as I sometimes have said to students: "Hacking is done in the privacy of
your office between you and your computer terminal. Programming may start
in private, but needs the public transparency of the other eyes and
thoughts if you want it to be useful for a long time."

[-- Attachment #2: Type: text/html, Size: 2191 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08 17:45               ` Clem Cole
  2023-03-08 18:12                 ` segaloco via TUHS
@ 2023-03-09 14:42                 ` Paul Winalski
  1 sibling, 0 replies; 40+ messages in thread
From: Paul Winalski @ 2023-03-09 14:42 UTC (permalink / raw)
  To: Clem Cole; +Cc: Paul Ruizendaal, tuhs

On 3/8/23, Clem Cole <clemc@ccc.com> wrote:
(regarding the issue of the definition of "workstation")
> And herein lies the issue.  The term was taken from the
> engineering/architecture style definition of the 50s/60s - where someone
> had a desk/table/bench and *area to do 'work'*.

Correct.  According to etymologyonline.com, the term "workstation"
dates from 1950, and in the computer sense from 1972.  It is a place
(station) where one does one's work.  In a jeweler's shop the bench
where watches are cleaned and repaired could be called the
workstation.

Since the early 1980s when I first encountered them, I've always
regarded the distinction between a workstation, a PC, and a word
processor to be a matter of how the machine is used rather than the
hardware itself.  All three (workstation, PC, WP) are single-user
computing devices.  The distinction is that PCs are for non-business
use and word processors are limited-function devices.  A computer
workstation is a single-user, general-purpose computer used for
business or technical purposes.

>> Would a Tek 4014 connected to a VAX count?

If the VAX were only being used by one person at a time (i.e., not a
timesharing system), then I would say yes.  IMO the PDP-1 was often
used as a workstation.

-Paul W.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-09 23:24   ` emanuel stiebler
@ 2023-03-10  1:44     ` Lawrence Stewart
  0 siblings, 0 replies; 40+ messages in thread
From: Lawrence Stewart @ 2023-03-10  1:44 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society

Noel Chiappa wrote:
>>>> The first frame buffers from Evans and Sutherland were at University
>>>> of Utah, DOD SITES and NYIT CGL as I recall.  Circa 1974 to 1978.
>>> 

Other historical datapoints:

I was at MIT ’72-’76 in the Architecture Machine Group (mostly because I was intimidated by the reception desk at LCS).

There were a series of frame buffers during that time, including an 8-bit color mapped one I built in early 1976 for a senior thesis (a color printer).  

I don’t think anyone used them for programming or a GUI.  Instead they were image output devices.  Programming was mostly Imlacs or Tek or ASR33s or IBM 2741s.

The weirdest display: somehow we had a wacky prototype 24-bit machine with a vector display.  One of the scientists loaded a series of short vectors to turn it into a 128x128 raster display.  The thing filled 3 or 4 racks.

-L


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-08  5:43 ` Lars Brinkhoff
@ 2023-03-09 23:24   ` emanuel stiebler
  2023-03-10  1:44     ` Lawrence Stewart
  0 siblings, 1 reply; 40+ messages in thread
From: emanuel stiebler @ 2023-03-09 23:24 UTC (permalink / raw)
  To: Lars Brinkhoff, Noel Chiappa; +Cc: tuhs

On 2023-03-08 00:43, Lars Brinkhoff wrote:
> Noel Chiappa wrote:
>>> The first frame buffers from Evans and Sutherland were at University
>>> of Utah, DOD SITES and NYIT CGL as I recall.  Circa 1974 to 1978.
>>
>> Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
>> PDP-10's; '74-'78 sounds like an interim period.)
> 
> The Picture System from 1974 was based on a PDP-11/05.  It looks like
> vector graphics rather than a frame buffer though.
> 
> http://archive.computerhistory.org/resources/text/Evans_Sutherland/EvansSutherland.3D.1974.102646288.pdf

If the drawing speed is measured in inches, probably a vector screen ...

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-05 18:52 Noel Chiappa
  2023-03-05 20:43 ` Rob Pike
  2023-03-07  1:21 ` Kenneth Goodwin
@ 2023-03-08  5:43 ` Lars Brinkhoff
  2023-03-09 23:24   ` emanuel stiebler
  2 siblings, 1 reply; 40+ messages in thread
From: Lars Brinkhoff @ 2023-03-08  5:43 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: tuhs

Noel Chiappa wrote:
>> The first frame buffers from Evans and Sutherland were at University
>> of Utah, DOD SITES and NYIT CGL as I recall.  Circa 1974 to 1978.
>
> Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
> PDP-10's; '74-'78 sounds like an interim period.)

The Picture System from 1974 was based on a PDP-11/05.  It looks like
vector graphics rather than a frame buffer though.

http://archive.computerhistory.org/resources/text/Evans_Sutherland/EvansSutherland.3D.1974.102646288.pdf

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-06 23:24 ` Larry McVoy
  2023-03-07 12:08   ` arnold
@ 2023-03-07 16:42   ` Theodore Ts'o
  1 sibling, 0 replies; 40+ messages in thread
From: Theodore Ts'o @ 2023-03-07 16:42 UTC (permalink / raw)
  To: Larry McVoy; +Cc: coff

(Moving to COFF)

On Mon, Mar 06, 2023 at 03:24:29PM -0800, Larry McVoy wrote:
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.
> 
> If it's not about power then I don't get it, there are tons of transistors
> waiting to be used, they could easily plunk down a bunch of GPUs on the
> same die so why not?  Maybe the dev timelines are completely different
> (I suspect not, I'm just grabbing at straws).

Other potential reasons:

1) Moving functionality off-CPU also allows for those devices to have
their own specialized video memory that might be faster (SDRAM) or
dual-ported (VRAM) without having to add that complexity to the more
general system DRAM and/or the CPU's Northbridge.

2) In some cases, having an off-chip co-processor may not need any
access to the system memory at all.  An example of this is the "bump
in the wire" in-line crypto engine (ICE) which is located between the
Southbridge and the eMMC/UFS flash storage device.  If you are using an
Android device, it's likely to have an ICE.  The big advantage is that
it avoids needing to have a bounce buffer on the write path, where the
file system encryption layer has to copy-and-encrypt data from the
page cache to a bounce buffer, and then the encrypted block gets
DMA'ed to the storage device.

3) From an architectural perspective, not all use cases need various
co-processors, whether it is doing cryptography, or running some
kind of machine-learning module, or image manipulation to simulate
bokeh, or create HDR images, etc.  While RISC-V does have the concept
of instruction set extensions, which can be developed without getting
permission from the "owners" of the core CPU ISA (e.g., ARM, Intel,
etc.), it's a lot more convenient for someone who doesn't need to bend
the knee to ARM, Inc. (or their new corporate overlords) or Intel, to
simply put that extension outside the core ISA.

(More recently, there is an interesting lawsuit about whether it's
"allowed" to put a 3rd party co-processor on the same SOC without
paying $$$$$ to the corporate overlord, which may make this point moot
--- although it might cause people to simply switch to another ISA
that doesn't have this kind of lawsuit-happy rent-seeking....)

In any case, if you don't need to play Quake with 240 frames per
second, then there's no point putting the GPU in the core CPU
architecture, and it may turn out that the kind of co-processor which
is optimized for running ML models is different, and it is often
easier to make changes to the programming model for a GPU, compared to
making changes to a CPU's ISA.

						- Ted

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-06 23:24 ` Larry McVoy
@ 2023-03-07 12:08   ` arnold
  2023-03-07 16:42   ` Theodore Ts'o
  1 sibling, 0 replies; 40+ messages in thread
From: arnold @ 2023-03-07 12:08 UTC (permalink / raw)
  To: norman, lm; +Cc: tuhs

Larry McVoy <lm@mcvoy.com> wrote:

> It's funny because back in my day those GPUs would have been called vector
> processors, at least I think they would.  It seems like somewhere along
> the way, vector processors became a dirty word but GPUs are fine.
>
> Color me confused.

I think vector processing is used for things like

	for (i = 0; i < SOME_CONSTANT; i++)
		a[i] = b[i] + c[i];

that is, vectorizing general-purpose code. GPUs are pretty specialized
SIMD machines which sort of happen to be useful for certain kinds
of parallelizable general computations, like password cracking.

Today there are both standardized and proprietary ways of programming
them.
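
A minimal sketch (standard CUDA, just for illustration): the same
element-wise loop written the GPU way makes the parallelism explicit in the
programming model - one thread per a[i] - rather than something a
vectorizing compiler discovers in sequential code.

	// add.cu - each thread computes one element of a[] = b[] + c[]
	#include <cuda_runtime.h>

	#define SOME_CONSTANT 1024

	__global__ void add(const float *b, const float *c, float *a, int n)
	{
		int i = blockIdx.x * blockDim.x + threadIdx.x;

		if (i < n)
			a[i] = b[i] + c[i];
	}

	int main(void)
	{
		size_t bytes = SOME_CONSTANT * sizeof(float);
		float *a, *b, *c;

		/* unified memory keeps the sketch short; explicit
		   cudaMalloc/cudaMemcpy would also work */
		cudaMallocManaged(&a, bytes);
		cudaMallocManaged(&b, bytes);
		cudaMallocManaged(&c, bytes);
		for (int i = 0; i < SOME_CONSTANT; i++) {
			b[i] = 1.0f;
			c[i] = 2.0f;
		}

		/* launch enough 256-thread blocks to cover all elements */
		add<<<(SOME_CONSTANT + 255) / 256, 256>>>(b, c, a, SOME_CONSTANT);
		cudaDeviceSynchronize();

		cudaFree(a); cudaFree(b); cudaFree(c);
		return 0;
	}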

> About the only reason I can see to keep things divided between the CPU
> and the GPU is battery power, or power consumption in general.  From
> what little I know, it seems like GPUs are pretty power thirsty so 
> maybe they keep them as optional devices so people who don't need them
> don't pay the power budget.
>
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.

You're on target, not just in the GPU world but also in the CPU
world.  Modern Intel CPUs have a lot of circuits for turning power
consumption up and down dynamically.

Modern-day CPU development is much harder than we software types generally
realize. I worked at Intel as a software guy (bad juju there, let me
tell you!) and learned a lot about it, from the outside. For a given
x86 microarchitecture, from planning until it's in the box in the store
is like a 5+ year journey. These days maybe even more, as I left
Intel 7.5 years ago.

Arnold

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-05 18:52 Noel Chiappa
  2023-03-05 20:43 ` Rob Pike
@ 2023-03-07  1:21 ` Kenneth Goodwin
  2023-03-08  5:43 ` Lars Brinkhoff
  2 siblings, 0 replies; 40+ messages in thread
From: Kenneth Goodwin @ 2023-03-07  1:21 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: The Eunuchs Hysterical Society

[-- Attachment #1: Type: text/plain, Size: 902 bytes --]

A PDP-11/04 as a device controller.

As I recall, they used a DEC parallel interface to connect the frame
buffer controller to the host systems. Tom Duff would be the subject matter
expert on the actual hardware. (If you can get his attention and drag him
out of retirement for a bit.) This was pre-networks.

There was a second company that entered the market a few years later. The
name is unfortunately in external offline storage...

We have come a LONG WAY since those days..

On Sun, Mar 5, 2023, 1:52 PM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:

>     > From: Kenneth Goodwin
>
>     > The first frame buffers from Evans and Sutherland were at University
> of
>     > Utah, DOD SITES and NYIT CGL as I recall.
>     > Circa 1974 to 1978.
>
> Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
> PDP-10's; '74-'78 sounds like an interim period.)
>
>           Noel
>
>

[-- Attachment #2: Type: text/html, Size: 1388 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-06 23:16 Norman Wilson
@ 2023-03-06 23:24 ` Larry McVoy
  2023-03-07 12:08   ` arnold
  2023-03-07 16:42   ` Theodore Ts'o
  0 siblings, 2 replies; 40+ messages in thread
From: Larry McVoy @ 2023-03-06 23:24 UTC (permalink / raw)
  To: Norman Wilson; +Cc: tuhs


On Mon, Mar 06, 2023 at 06:16:19PM -0500, Norman Wilson wrote:
> Rob Pike:
> 
>   As observed by many others, there is far more grunt today in the graphics
>   card than the CPU, which in Sutherland's timeline would mean it was time to
>   push that power back to the CPU. But no.
> 
> ====
> 
> Indeed.  Instead we are evolving ways to use graphics cards to
> do general-purpose computation, and assembling systems that have
> many graphics cards not to do graphics but to crunch numbers.
> 
> My current responsibilities include running a small stable of
> those, because certain computer-science courses consider it
> important that students learn to use them.
> 
> I sometimes wonder when someone will think of adding secondary
> storage and memory management and network interfaces to GPUs,
> and push to run Windows on them.

It's funny because back in my day those GPUs would have been called vector
processors, at least I think they would.  It seems like somewhere along
the way, vector processors became a dirty word but GPUs are fine.

Color me confused.

About the only reason I can see to keep things divided between the CPU
and the GPU is battery power, or power consumption in general.  From
what little I know, it seems like GPUs are pretty power thirsty so 
maybe they keep them as optional devices so people who don't need them
don't pay the power budget.

But even that seems suspect, I would think they could put some logic
in there that just doesn't feed power to the GPU if you aren't using
it but maybe that's harder than I think.

If it's not about power then I don't get it, there are tons of transistors
waiting to be used, they could easily plunk down a bunch of GPUs on the
same die so why not?  Maybe the dev timelines are completely different
(I suspect not, I'm just grabbing at straws).

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
@ 2023-03-06 23:16 Norman Wilson
  2023-03-06 23:24 ` Larry McVoy
  0 siblings, 1 reply; 40+ messages in thread
From: Norman Wilson @ 2023-03-06 23:16 UTC (permalink / raw)
  To: tuhs

Rob Pike:

  As observed by many others, there is far more grunt today in the graphics
  card than the CPU, which in Sutherland's timeline would mean it was time to
  push that power back to the CPU. But no.

====

Indeed.  Instead we are evolving ways to use graphics cards to
do general-purpose computation, and assembling systems that have
many graphics cards not to do graphics but to crunch numbers.

My current responsibilities include running a small stable of
those, because certain computer-science courses consider it
important that students learn to use them.

I sometimes wonder when someone will think of adding secondary
storage and memory management and network interfaces to GPUs,
and push to run Windows on them.

Norman Wilson
Toronto ON

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-05 20:43 ` Rob Pike
@ 2023-03-06 10:43   ` Jonathan Gray
  0 siblings, 0 replies; 40+ messages in thread
From: Jonathan Gray @ 2023-03-06 10:43 UTC (permalink / raw)
  To: Rob Pike; +Cc: Noel Chiappa, tuhs

A 256x256 frame buffer at the University of Toronto is mentioned in

Ronald M. Baecker
Interactive graphics as a vehicle for the enhancement of human creativity
Proceedings of the 5th Canadian Man-Computer Communications Conference:
Calgary, Alberta, Canada, 26 - 27 May 1977
https://graphicsinterface.org/wp-content/uploads/cmccc1977-10.pdf
https://graphicsinterface.org/proceedings/cmccc1977/cmccc1977-10/

USA/Canada Visit, August 1977 by Bob Hopgood
http://www.chilton-computing.org.uk/acd/literature/reports/p002.htm

some discussion on early frame buffers in

Ronald Baecker
Digital video display systems and dynamic graphics
ACM SIGGRAPH August 1979
https://dl.acm.org/doi/pdf/10.1145/965103.807424
https://dl.acm.org/doi/10.1145/965103.807424

On Mon, Mar 06, 2023 at 07:43:48AM +1100, Rob Pike wrote:
> There was a Three Rivers Graphic Wonder (a vector display) on the
> University of Toronto PDP-11/45 in 1975, maybe 1974. There wasn't a frame
> buffer until Dave Tennenhouse built one around 1978 - not sure of that
> date, maybe a little later.
> 256x256, 8 bits per pixel, or 65 kilobytes, the full data address space of
> the PDP-11.
> 
> -rob
> 
> 
> On Mon, Mar 6, 2023 at 5:52 AM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
> 
> >     > From: Kenneth Goodwin
> >
> >     > The first frame buffers from Evans and Sutherland were at University
> > of
> >     > Utah, DOD SITES and NYIT CGL as I recall.
> >     > Circa 1974 to 1978.
> >
> > Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
> > PDP-10's; '74-'78 sounds like an interim period.)
> >
> >           Noel
> >
> >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
  2023-03-05 18:52 Noel Chiappa
@ 2023-03-05 20:43 ` Rob Pike
  2023-03-06 10:43   ` Jonathan Gray
  2023-03-07  1:21 ` Kenneth Goodwin
  2023-03-08  5:43 ` Lars Brinkhoff
  2 siblings, 1 reply; 40+ messages in thread
From: Rob Pike @ 2023-03-05 20:43 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: tuhs

[-- Attachment #1: Type: text/plain, Size: 783 bytes --]

There was a Three Rivers Graphic Wonder (a vector display) on the
University of Toronto PDP-11/45 in 1975, maybe 1974. There wasn't a frame
buffer until Dave Tennenhouse built one around 1978 - not sure of that
date, maybe a little later.
256x256, 8 bits per pixel, or 65 kilobytes, the full data address space of
the PDP-11.

-rob


On Mon, Mar 6, 2023 at 5:52 AM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:

>     > From: Kenneth Goodwin
>
>     > The first frame buffers from Evans and Sutherland were at University
> of
>     > Utah, DOD SITES and NYIT CGL as I recall.
>     > Circa 1974 to 1978.
>
> Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
> PDP-10's; '74-'78 sounds like an interim period.)
>
>           Noel
>
>

[-- Attachment #2: Type: text/html, Size: 1545 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [TUHS] Re: Origins of the frame buffer device
@ 2023-03-05 18:52 Noel Chiappa
  2023-03-05 20:43 ` Rob Pike
                   ` (2 more replies)
  0 siblings, 3 replies; 40+ messages in thread
From: Noel Chiappa @ 2023-03-05 18:52 UTC (permalink / raw)
  To: tuhs

    > From: Kenneth Goodwin

    > The first frame buffers from Evans and Sutherland were at University of
    > Utah, DOD SITES and NYIT CGL as I recall.
    > Circa 1974 to 1978.

Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
PDP-10's; '74-'78 sounds like an interim period.)

	  Noel


^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2023-03-10  1:45 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-05 15:01 [TUHS] Origins of the frame buffer device Paul Ruizendaal via TUHS
2023-03-05 17:29 ` [TUHS] " Grant Taylor via TUHS
2023-03-05 18:25 ` Kenneth Goodwin
2023-03-06  8:51   ` Paul Ruizendaal via TUHS
2023-03-06  8:57     ` Rob Pike
2023-03-06 11:09       ` Henry Bent
2023-03-06 16:02         ` Theodore Ts'o
2023-03-06 22:47       ` Paul Ruizendaal via TUHS
2023-03-06 23:10         ` Rob Pike
2023-03-08 12:53           ` Paul Ruizendaal
2023-03-08 14:23             ` Dan Cross
2023-03-08 15:06               ` Paul Ruizendaal
2023-03-08 19:35                 ` Dan Cross
2023-03-08 16:55               ` Theodore Ts'o
2023-03-08 17:46                 ` Clem Cole
2023-03-08 17:45               ` Clem Cole
2023-03-08 18:12                 ` segaloco via TUHS
2023-03-08 18:21                   ` Larry McVoy
2023-03-08 18:43                     ` Kenneth Goodwin
2023-03-08 18:45                     ` Steffen Nurpmeso
2023-03-08 22:44                     ` Clem Cole
2023-03-09 14:42                 ` Paul Winalski
2023-03-06 23:20         ` segaloco via TUHS
2023-03-07  1:24     ` Kenneth Goodwin
2023-03-08  3:07     ` Rob Gingell
2023-03-08 12:51       ` Paul Ruizendaal via TUHS
2023-03-08 13:05         ` Warner Losh
2023-03-08 13:17         ` Arno Griffioen via TUHS
2023-03-07  1:54 ` Kenneth Goodwin
2023-03-05 18:52 Noel Chiappa
2023-03-05 20:43 ` Rob Pike
2023-03-06 10:43   ` Jonathan Gray
2023-03-07  1:21 ` Kenneth Goodwin
2023-03-08  5:43 ` Lars Brinkhoff
2023-03-09 23:24   ` emanuel stiebler
2023-03-10  1:44     ` Lawrence Stewart
2023-03-06 23:16 Norman Wilson
2023-03-06 23:24 ` Larry McVoy
2023-03-07 12:08   ` arnold
2023-03-07 16:42   ` Theodore Ts'o

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).