Computer Old Farts Forum
* [COFF] Re: [TUHS] Re: Origins of the frame buffer device
@ 2023-03-29 23:07   ` Rob Gingell
From: Rob Gingell @ 2023-03-29 23:07 UTC (permalink / raw)
  To: Lars Brinkhoff, Noel Chiappa, Larry McVoy, segaloco; +Cc: Paul Ruizendaal, coff

[Redirected to COFF for some anecdotal E&S-related history and non-UNIX 
terminal room nostalgia.]

On 3/7/23 9:43 PM, Lars Brinkhoff wrote:
> Noel Chiappa wrote:
>>> The first frame buffers from Evans and Sutherland were at University
>>> of Utah, DOD SITES and NYIT CGL as I recall.  Circa 1974 to 1978.
>>
>> Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
>> PDP-10's; '74-'78 sounds like an interim period.)
> 
> The Picture System from 1974 was based on a PDP-11/05.  It looks like
> vector graphics rather than a frame buffer though.
> 
> http://archive.computerhistory.org/resources/text/Evans_Sutherland/EvansSutherland.3D.1974.102646288.pdf

E&S LDS-1s used PDP-10s as their host systems. LDS-2s could at least in 
principle use several different hosts (including spanning a range of 
word sizes, e.g., a SEL-840 with 24 bit words or a 16 bit PDP-11.)

The Line Drawing Systems drove calligraphic displays. No frame buffers. 
The early Picture Systems (like the brochure referenced by Lars) also 
drove calligraphic displays but did sport a line segment "refresh 
buffer" so that screen refreshes weren't dependent on the whole 
pipeline's performance.

At least one heavily customized LDS-2 (described further below) produced 
raster output by 1974 (and likely earlier in design and testing) and had 
a buffer for raster refresh that exhibited some of what we think of as 
frame buffer functionality, fitting the time frame Noel referenced for 
other E&S products.

On 3/8/23 10:21 AM, Larry McVoy wrote:
> I really miss terminal rooms.  I learned so much looking over the
> shoulders of more experienced people.

Completely agree. They were the "playground learning" that educated, 
built craft and community, and occasionally bestowed humility.

Although it completely predates frame buffer technology, the PDP-10 
terminal room of the research computing environment at CWRU in the 1970s 
was especially remarkable as well as personally influential. All of 
its terminals and displays were calligraphic graphics devices (though 
later a few Datapoint CRTs appeared). There was an LDS-1 hosted on the 
PDP-10 and later an LDS-2 (which was co-located but not part of the 
PDP-10 environment).

The chair of the department, Edward (Ted) Glaser, had been recruited 
from MIT in 1968 and was heavily influential in guiding the graphics 
orientation of the facilities, and later, in the design of the 
customized LDS-2. This was especially remarkable as he had been blind 
since age 8. He had a comprehensive vision of systems and thinking about them 
that influenced a lot about the department's programs and research.

When I arrived in 1972, I only had a fleeting overlap with the LDS-1 to 
experience some of its games (color wheel lorgnettes and carrier 
landings!). The PDP-10 was being modified for TENEX and the LDS-1 was 
being decommissioned. I recall a tablet and button box for LDS-1 input 
devices.

The room was kept dimly lit with the overhead lighting off and only the 
glow of the displays and small wattage desk lamps. It shared the raised 
floor environment with the PDP-10 machine room (though was walled off 
from it) and so had a "quiet-loud" aura from all the white noise. The 
white noise cocooned you but permitted conversation and interaction with 
others that didn't usually disturb the uninvolved.

The luxury terminals were IMLAC PDS-1s. There was a detachable switch 
and indicator console that could be swapped between them for debugging 
or if you simply liked having the blinking lights in view. When not in 
use for real work the IMLACs would run Space War, much to the detriment 
of IMLAC keyboards. They could handle pretty complex displays, up to a 
screen full of dense text, before flicker might set in. Light pens 
provided pointing input.

The bulk of the terminals were an array of DEC VT02s. Storage tube 
displays (so no animation possible), but with joysticks for pointing and 
interacting. There were never many VT02s made and we always believed we 
had the largest single collection of them.

None of these had character generators. The LDS-1 and the IMLACs drew 
their own characters programmatically. A PDP-8/I drove the VT02s and 
stroked all the characters, at about 2400 baud; when the 8 got busy 
you could perceive the drawing of the characters like a scribe on 
speed. If you stood back to take in the room you could also watch the 
PDP-8 making its rounds, the screens brightening momentarily as the 
characters/images were drawn. I was told that CWRU wrote the software 
for the PDP-8 and gave it to DEC, in return DEC gave CWRU $1 and the 
biggest line printer they sold. (The line printer did upper and lower 
case, and the University archivists swooned when presented with theses 
printed on it -- RUNOFF being akin to magic in a typewriter primitive 
world.)

Until the Datapoint terminals arrived all the devices in the room either 
were computers themselves or front-ended by one. Although I only saw it 
happen once, the LDS-1, with its rather intimate connection to the -10, 
was particularly attuned to the status of TOPS-10 and would flash 
"CRASH" before users could tell that something was wrong vs. just being 
slow.

(We would later run TOPS-10 for amusement. The system had 128K words in 
total: 4 MA10 16K bays and 1 MD10 64K bay. TENEX needed a minimum of 80K 
to "operate" though it'd be misleading to describe that configuration as 
"running". If we lost the MD10 bay that meant no TENEX so we had a 
DECtape-swapping configuration of TOPS-10 for such moments because, 
well, a PDP-10 with 8 DECtapes twirling is pretty humorously theatrical.)

All the displays (even the later Datapoints) had green or blue-green 
phosphors. This had the side effect that after several hours of 
staring at them, anything white looked pink. The effect was especially 
pronounced in winter: Cleveland being Cleveland, it wasn't unusual to 
leave and find a large deposit of seemingly psychedelic snow that 
hadn't been there when you went in.

The LDS-2 arrived in the winter of 1973-4. It was a highly modified 
LDS-2 that produced raster graphics and shaded images in real-time. It 
was the first system to do that and was called the Case Shaded Graphics 
System (SGS). (E&S called it the Halftone System as it wouldn't do color 
in real-time. In addition to a black & white raster display, it had a 
35mm movie camera, a Polaroid camera, and an RGB filter that would 
triple-expose each frame and so in a small way retained the charm of the 
lorgnettes used on the LDS-1 to make color happen but not in real-time.) 
It was hosted by a PDP-11/40 running RT-11.

Declining memory prices helped enable the innovations in the SGS as it 
incorporated more memory components than the previous calligraphic 
systems. The graphics pipeline was extended such that after translation 
and clipping there was a Y-sort box that ordered the polygons from top 
to bottom for raster scanning, followed by a Visible Surface Processor 
that separated hither from yon, and finally a Gouraud Shader that 
delivered the final image to a monitor or one of the cameras. Physically 
the system was 5 or maybe 6 bays long not including the 11/40 bay.
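The staged pipeline described above can be sketched in miniature. The
following is a loose, hypothetical Python illustration of the three
raster stages (the Y-sort box, visible surface determination, and
Gouraud interpolation); every name and data layout here is invented
for illustration, since the real system implemented these stages in
hardware:

```python
# Toy sketch of the SGS raster stages. Polygons are lists of (x, y)
# vertices; fragments are (x, y, z, shade) tuples.

def y_sort(polygons):
    """Order polygons top-to-bottom by their topmost vertex so they
    can be consumed in raster-scan order (the 'Y-sort box')."""
    return sorted(polygons, key=lambda poly: min(v[1] for v in poly))

def visible_surface(fragments):
    """Separate 'hither from yon': for each pixel, keep only the
    fragment nearest the viewer (smallest depth z)."""
    nearest = {}
    for x, y, z, shade in fragments:
        if (x, y) not in nearest or z < nearest[(x, y)][0]:
            nearest[(x, y)] = (z, shade)
    return {xy: shade for xy, (z, shade) in nearest.items()}

def gouraud(i0, i1, t):
    """Linearly interpolate intensity between two vertex
    intensities -- the core of Gouraud shading."""
    return i0 + (i1 - i0) * t
```

Each stage consumes the previous one's output, mirroring how the
hardware boxes were chained after translation and clipping.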

The SGS had some teething problems after its delivery. Ivan Sutherland 
even came to Cleveland to work on it though he has claimed his main 
memory of that is the gunfire he heard from the Howard Johnson's hotel 
next to campus. The University was encircled by several distressed 
communities at the time. A "bullet hole through glass" decal appeared on 
the window of the SGS's camera bay to commemorate his experience.

The SGS configuration was unique but a number of its elements were 
incorporated into later Picture Systems. It's my impression that the LDS 
systems were pretty "one off" and the Picture Systems became the 
(relative) "volume, off the shelf" product from E&S. (I'd love to read a 
history of all the things E&S did in that era.)

By 1975-6 the SGS was being used by projects ranging from SST stress 
analyses to mathematicians producing videos of theoretical concepts. The 
exaggerated images of stresses on aircraft structures got pretty widely 
distributed and referenced at the time. The SGS was more of a production 
system used by other departments and entities rather than computer 
graphics research as such, in some ways its (engineering) research 
utility was achieved by its having existed. One student, Ben Jones, 
created an extended ALGOL-60 to allow programming in something other 
than the assembly language.

As the SGS came online in 1975 the PDP-10 was being decommissioned and 
the calligraphic technologies associated with it vanished along with it. 
A few years later a couple of Teraks appeared, and by the end of 
the 1970s frame buffers as we generally think of them were economically 
practical. That along with other processing improvements rendered the 
SGS obsolete, and so it was decommissioned in 1980 and donated to the 
Computer History Museum where I imagine it sits in storage next to a 
LINC-8 or the Ark of the Covenant or something.

One of the SGS's bays, containing the LDS-2 Channel Control (the 
front of the pipeline, an LDS program interpreter running out of the 
host's memory) and the PDP-11 interface, is visible via this link:

https://www.computerhistory.org/collections/catalog/102691213

The bezels on the E&S bays were cosmetically like the DEC ones of the 
same era. They were all smoked glass so the blinking lights were visible 
but had to be raised if you wanted to see the identifying legends for them.


* [COFF] Re: [TUHS] Re: Origins of the frame buffer device
@ 2023-03-07 16:42   ` Theodore Ts'o
From: Theodore Ts'o @ 2023-03-07 16:42 UTC (permalink / raw)
  To: Larry McVoy; +Cc: Norman Wilson, coff

(Moving to COFF)

On Mon, Mar 06, 2023 at 03:24:29PM -0800, Larry McVoy wrote:
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.
> 
> If it's not about power then I don't get it, there are tons of transistors
> waiting to be used, they could easily plunk down a bunch of GPUs on the
> same die so why not?  Maybe the dev timelines are completely different
> (I suspect not, I'm just grabbing at straws).

Other potential reasons:

1) Moving functionality off-CPU also allows for those devices to have
their own specialized video memory that might be faster (SDRAM) or
dual-ported (VRAM) without having to add that complexity to the more
general system DRAM and/or the CPU's Northbridge.

2) In some cases, an off-chip co-processor may not need any access to
the system memory at all.  An example of this is the "bump in the
wire" in-line crypto engine (ICE), which is located between the
Southbridge and the eMMC/UFS flash storage device.  If you are using
an Android device, it's likely to have an ICE.  The big advantage is
that it avoids needing a bounce buffer on the write path, where the
file system encryption layer has to copy-and-encrypt data from the
page cache into a bounce buffer, after which the encrypted block gets
DMA'ed to the storage device.
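The contrast can be sketched roughly as follows. This is illustrative
pseudologic only, with entirely made-up function names (real kernels
do this with DMA descriptors and crypto hardware, not Python lists):

```python
# Contrast: software copy-and-encrypt write path vs. an in-line
# crypto engine (ICE). All names here are invented for illustration.

STORAGE = []  # stands in for the flash device

def dma_to_storage(block):
    """Pretend DMA transfer to the storage device."""
    STORAGE.append(bytes(block))

def toy_encrypt(data):
    """Stand-in cipher (XOR with a constant); a real ICE would
    typically use something like AES-XTS."""
    return bytes(b ^ 0x5A for b in data)

def write_software_crypto(page_cache_block):
    """Without ICE: copy-and-encrypt into a bounce buffer, then
    DMA the bounce buffer."""
    bounce = toy_encrypt(page_cache_block)  # the extra copy
    dma_to_storage(bounce)

def write_with_ice(page_cache_block):
    """With ICE: no bounce buffer; the block is DMA'ed as-is and
    the in-line engine encrypts it on its way to the flash."""
    dma_to_storage(page_cache_block)  # encryption happens "in the wire"
```

The software path pays for an extra copy of every block it writes;
the ICE path hands the page-cache data straight to the hardware.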

3) From an architectural perspective, not all use cases need various
co-processors, whether for doing cryptography, or running some kind
of machine-learning model, or image manipulation to simulate bokeh,
or create HDR images, etc.  While RISC-V does have the concept of
instruction set extensions, which can be developed without getting
permission from the "owners" of the core CPU ISA (e.g., ARM, Intel,
etc.), it's a lot more convenient for someone who doesn't need to
bend the knee to ARM, Inc. (or their new corporate overlords) or
Intel, to simply put that extension outside the core ISA.

(More recently, there is an interesting lawsuit about whether it's
"allowed" to put a 3rd party co-processor on the same SOC without
paying $$$$$ to the corporate overlord, which may make this point moot
--- although it might cause people to simply switch to another ISA
that doesn't have this kind of lawsuit-happy rent-seeking....)

In any case, if you don't need to play Quake with 240 frames per
second, then there's no point putting the GPU in the core CPU
architecture, and it may turn out that the kind of co-processor which
is optimized for running ML models is different, and it is often
easier to make changes to the programming model for a GPU, compared to
making changes to a CPU's ISA.

						- Ted

