* [TUHS] Has this been discussed on-list? How Unix changed Software.
@ 2022-09-05 23:48 steve jenkin
  2022-09-06 16:09 ` [TUHS] " Marc Donner
  2022-09-07 12:53 ` [TUHS] STDIN/OUT vs APIs [was: How Unix changed Software] Brian Zick
  0 siblings, 2 replies; 14+ messages in thread

From: steve jenkin @ 2022-09-05 23:48 UTC (permalink / raw)
To: TUHS

I’ve been looking at this question for a time and thought it could’ve appeared on the TUHS list - but I don’t have an idea of the search terms to use on the list.
Perhaps someone could suggest some to me.

As a starting point, below is what John Lions wrote on a similar topic in 1978. Conspicuously, “Security” is missing, though “Reliability & Maintenance” would encompass the idea.

With hindsight, I’d suggest (Research) Unix took a very strong stance on “Technical Debt” - it was small, clean & efficient, even elegant. And ‘shipped’ with zero known bugs.

It didn’t just bring the Unix kernel to many architectures; the same tools were applied to create what we now call “Open Source” in User land:

- Multi-platform / portable
    - the very act of porting software to diverse architectures uncovered new classes of bugs and implicit assumptions. Big- & Little-endian were irrelevant or unknown Before Unix.
- full source
- compatibility layers via
- written in common, well-known, well-supported languages [ solving the maintenance & update problem ]
- standard, portable “toolchains”
    - shell, make, compiler, library tools for system linker, documentation & doc reading tools
- distribution systems including test builds, issue / fault reporting & tracking

An emergent property is “Good Security”, both by Design and by (mostly) error-free implementations.

In the Epoch Before Unix (which started when exactly?), there was a lot of Shared Software, but very little that could be mechanically ported to another architecture.
Tools like QED and ROFF were reimplemented on multiple platforms, not ‘ported’ in current lingo.
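[ The byte-order point above can be made concrete with a small C sketch - my illustration, not part of the thread. Code that peeks at the bytes of a word bakes in the machine's byte order; written on one machine and never ported, the assumption goes unnoticed until the program is recompiled on the other kind of machine. ]

```c
#include <stdint.h>

/* Illustrative only: returns 1 on a little-endian machine, 0 on a
 * big-endian one.  A program that instead hard-coded one byte
 * order -- e.g. by assuming bytes[0] is always the low-order byte --
 * would compile cleanly everywhere but silently misbehave when
 * ported, which is exactly the class of implicit assumption that
 * porting Unix to diverse architectures flushed out. */
int is_little_endian(void)
{
    uint32_t word = 0x01020304;
    const unsigned char *bytes = (const unsigned char *)&word;
    return bytes[0] == 0x04;   /* low-order byte stored first? */
}
```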
There are still large, complex FORTRAN libraries shared as source.

There’s an important distinction between “Open” and “Free”: cost & availability.

We’ve gone on to have broadband near-universally available, with easy-to-use Internet collaboration tools - e.g. “git”, “mercurial” and “Subversion”, successors to CVS.

The Unix-created Open Source concept broke Vendor Lock-in & erased most “Silos”.
The BSD TCP/IP stack, and Berkeley sockets library, were sponsored by DARPA, and made freely available to vendors as source code.
Similarly, important tools for SMTP and DNS were freely available as Source Code, both speeding the implementation of Internet services and providing “out of the box” protocol / function compatibility.

The best tools, or even just adequate ones, became only a download & install away for all coding shops, showing up a lot of poor code developed by in-house “experts” and radically trimming many project schedules.

While the Unix “Software Tools” approach - mediated by the STDOUT / STDIN interface, not APIs - was new & radical, and for many classes of problems provided a definitive solution, I’d not include it in a list of “Open Source” features.

It assumes a “command line” and process pipelines, which aren’t relevant to very large post-Unix program classes: Graphical Apps and Web / Internet services.

regards
steve jenkin

==============

Lions, J., "An operating system case study"
ACM SIGOPS Operating Systems Review, July 1978, ACM SIGOPS Oper. Syst. Rev. 12(3): 46-53 (1978)

2. Some Comments on UNIX
------------------------

There is no space here to describe the technical features of UNIX in detail (see Ritchie and Thompson, 1974; also Kernighan and Plauger, 1976), nor to document its performance characteristics, which we have found to be very satisfactory.

The following general comments do bear upon the present discussion:

(a) Cost.
UNIX is distributed for "academic and educational purposes" to educational institutions by the Western Electric Company for only a nominal fee, and may be implemented effectively on hardware configurations costing less than $50,000.

(b) Reliability and Maintenance.
Since no support of any kind is provided by Western Electric, each installation is potentially on its own for software maintenance.
UNIX would not have prospered if it were not almost completely error-free and easy to use.
There are few disappointments and no unpleasant surprises.

(c) Conciseness.
The PDP-11 architecture places a strong limitation on the size of the resident operating system nucleus.
As Ritchie and Thompson (1974) observe, "the size constraint has encouraged not only economy but a certain elegance of design".
The nucleus provides support services and basic management of processes, files and other resources.
Many important system functions are carried out by utility programs.
Perhaps the most important of these is the command language interpreter, known as the "shell".
(Modification of this program could alter, even drastically, the interface between the system and the user.)

(d) Source Code.
UNIX is written almost entirely in a high level language called "C" which is derived from BCPL and which is well matched to the PDP-11.
It provides record and pointer types, has well developed control structures, and is consistent with modern ideas on structured programming.
(For the curious, the paper by Kernighan (1975) indirectly indicates the flavour of "C" and exemplifies one type of utility program contained in UNIX.)
Something less than 10,000 lines of code are needed to describe the resident nucleus.

pg 47

(e) Amenability.
Changes can be made to UNIX with little difficulty.
A new system can be instituted by recompiling one or more files (at an average of 20 to 30 seconds per file), relinking the file containing the nucleus (another 30 seconds or so), and rebooting using the new file.
In simple cases the whole process need take no more than a few minutes.

(f) Intrinsic Interest.
UNIX contains a number of features which make it interesting in its own right:
the run-time support for the general tree structured file system is particularly efficient;
the use of a reserved set of file names smooths the concepts of device independence;
multiple processes (three or four per user is average) are used in a way which in most systems is regarded as totally extravagant (this leads to considerable simplification of the system/user interface);
and the interactive intent of the system has resulted in an unusually rich set of text editing and formatting programs.

(g) Limitations.
There are few limitations which are of concern to us.
The PDP-11 architecture limits program size, and this for example frustrated an initial attempt to transfer Pascal P onto the 11/40.
Perhaps the greatest weakness of UNIX as it is presently distributed (and this is not fundamental!) is in the area where other systems usually claim to be strong: support for "bread and butter" items such as Fortran and Basic.

(h) Documentation.
The entire official UNIX documentation, including tutorial material, runs to less than 500 pages.
By some standards this is incredibly meagre, but it does mean that a student can carry his own copy in his brief case.

Features of the documentation include:
- an unconventional arrangement of material (unsettling at first, but really very convenient);
- a terse, enigmatic style, with much information conveyed by innuendo;
- a permuted KWIC index.

Most importantly perhaps, UNIX encourages the programmer to document his work.
There is a very full set of programs for editing and formatting text.
The extent to which this has been developed can be gauged from the paper by Kernighan and Cherry (1975).
==============

--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA

mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin

^ permalink raw reply [flat|nested] 14+ messages in thread
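[ A concrete sketch of the “Software Tools” style the message above contrasts with APIs - my illustration, not part of the thread. A tool that speaks only STDIN/STDOUT needs no interface agreed with its neighbours: the pipeline is the interface. A minimal C filter: ]

```c
#include <ctype.h>
#include <stdio.h>

/* A minimal "software tools" filter: copy one stream to another,
 * upper-casing as it goes.  Because such a tool reads standard
 * input and writes standard output, it composes with any other
 * tool via a pipe, with no private API between them. */
void upper_stream(FILE *in, FILE *out)
{
    int c;
    while ((c = fgetc(in)) != EOF)
        fputc(toupper(c), out);
}
```

[ Called on stdin/stdout from a main() and compiled as, say, `upper` (an illustrative name), it would drop straight into a pipeline such as `ls | upper | sort` - the composition the STDOUT/STDIN interface makes possible. ]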
* [TUHS] Re: Has this been discussed on-list? How Unix changed Software.
  2022-09-05 23:48 [TUHS] Has this been discussed on-list? How Unix changed Software steve jenkin
@ 2022-09-06 16:09 ` Marc Donner
  2022-09-07 4:00 ` steve jenkin
  2022-09-07 5:15 ` steve jenkin
  2022-09-07 12:53 ` [TUHS] STDIN/OUT vs APIs [was: How Unix changed Software] Brian Zick
  1 sibling, 2 replies; 14+ messages in thread

From: Marc Donner @ 2022-09-06 16:09 UTC (permalink / raw)
To: TUHS

Having spent many formative years at IBM Research throwing (metaphorical) bombs at mainframe systems and infiltrating the place with BSD, I have thought about this question on and off.

I would like to augment your comments with a couple of extra observations:

UNIX was built for a particular set of users - document writers and programmers.

Before UNIX the systems were industrial in design and scope. Sort of like MVS was a locomotive - designed to be used for hauling heavy freight (acres of data entry clerks answering 800 numbers and entering transactions). UNIX was more like cars and trucks - designed for use by knowledge workers.

When I was a grad student I hung out with some remarkable programmers. We all observed that learning to program was impossible without a body of code to read, study, and learn from. The best places to learn programming in the 70s and 80s were places like MIT, Berkeley, Bell Labs, and IBM Research ... places with an established culture of sharing code internally and large repositories of code to read.

By the mid-1980s the Microsoft folks established the notion that software was economically valuable. People stopped giving away source code (IBM's change in strategy was called OCO - "Object Code Only") and it totally shocked the software developer community by destroying the jobs for programmers at user sites.
Combine that with the mid-1980s recession and the first layoffs that programmers had ever seen and we saw the first horrified realization that the social contract between programmers and employers did not actually exist.

We, the programmer community, woke up and committed ourselves as much as ever we could to non-proprietary languages and tools, putting our shoulders to the OSS movement and hence to UNIX and the layer of tools built on top of it.

Of course it helped to have some brilliant engineers like Ken, Dennis, Doug, Rob, Michael, Stu (and and and) and brilliant writers like Brian so that the thing (UNIX) had intellectual integrity and scope.

It took UNIX twenty to thirty years, but the economic logic of our approach put an end to efforts to totally dominate the tech world.

=====
nygeek.net
mindthegapdialogs.com/home <https://www.mindthegapdialogs.com/home>

On Mon, Sep 5, 2022 at 7:49 PM steve jenkin <sjenkin@canb.auug.org.au> wrote:
> [snip - full quote of the original message]
* [TUHS] Re: Has this been discussed on-list? How Unix changed Software.
  2022-09-06 16:09 ` [TUHS] " Marc Donner
@ 2022-09-07 4:00 ` steve jenkin
  2022-09-07 14:58 ` John Cowan
  2022-09-07 17:13 ` Paul Winalski
  2022-09-07 5:15 ` steve jenkin
  1 sibling, 2 replies; 14+ messages in thread

From: steve jenkin @ 2022-09-07 4:00 UTC (permalink / raw)
To: TUHS

Marc,

the first I.T. Recession in Australia occurred in 1991.
It was the first economic recession where corporates couldn’t easily save money by “automating” - all the low-hanging fruit - like Inventory, Payroll & Accounting - had been computerised, at least by companies that’d survive.

Thanks for mentioning the IBM OCO - I’d left mainframes by then.
Your insight about the ‘social contract’ rings true - I’d never heard that before.

Since that first recession, the regard managers have for I.T. / Computing staff - embodied in wages & conditions - has declined markedly, outside businesses where software & systems are their core business.

The hype and over-expenditure on Y2K, then the Dot Crash, resulted in a 5-year I.T. recession in Australia - and a very jaded attitude towards I.T. and its budgets within the Corporates I know.
The deskilling and mediocre work of programmers and support staff alike doesn’t seem to improve whole-of-enterprise productivity.

Your summation of the Professional response to the dissolution of the ‘social contract’ is very insightful.
It explains the rapid rise and proliferation of OSS in the 1990s.

stevej

> On 7 Sep 2022, at 02:09, Marc Donner <marc.donner@gmail.com> wrote:
>
> By the mid-1980s the Microsoft folks established the notion that software was economically valuable. People stopped giving away source code (IBM's change in strategy was called OCO - "Object Code Only") and it totally shocked the software developer community by destroying the jobs for programmers at user sites.
> Combine that with the mid-1980s recession and the first layoffs that programmers had ever seen and we saw the first horrified realization that the social contract between programmers and employers did not actually exist.
>
> We, the programmer community, woke up and committed ourselves as much as ever we could to non-proprietary languages and tools, putting our shoulders to the OSS movement and hence to UNIX and the layer of tools built on top of it.

--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA

mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
* [TUHS] Re: Has this been discussed on-list? How Unix changed Software.
  2022-09-07 4:00 ` steve jenkin
@ 2022-09-07 14:58 ` John Cowan
  2022-09-07 17:13 ` Paul Winalski
  1 sibling, 0 replies; 14+ messages in thread

From: John Cowan @ 2022-09-07 14:58 UTC (permalink / raw)
To: steve jenkin; +Cc: TUHS

On Wed, Sep 7, 2022 at 12:01 AM steve jenkin <sjenkin@canb.auug.org.au> wrote:

> Since that first recession, the regard managers have for I.T. / Computing
> staff - embodied in wages & conditions - has declined markedly outside
> business where software & systems are their business.
>

Once it became clear that we are not miracle workers, we were degraded to the status of (interchangeable) clerks.
* [TUHS] Re: Has this been discussed on-list? How Unix changed Software.
  2022-09-07 4:00 ` steve jenkin
  2022-09-07 14:58 ` John Cowan
@ 2022-09-07 17:13 ` Paul Winalski
  2022-09-08 14:12 ` Paul Winalski
  1 sibling, 1 reply; 14+ messages in thread

From: Paul Winalski @ 2022-09-07 17:13 UTC (permalink / raw)
To: steve jenkin; +Cc: TUHS

> On 7 Sep 2022, at 02:09, Marc Donner <marc.donner@gmail.com> wrote:
>
> By the mid-1980s the Microsoft folks established the notion that software
> was economically valuable. People stopped giving away source code (IBM's
> change in strategy was called OCO - "Object Code Only") and it totally
> shocked the software developer community by destroying the jobs for
> programmers at user sites. Combine that with the mid-1980s recession and
> the first layoffs that programmers had ever seen and we saw the first
> horrified realization that the social contract between programmers and
> employers did not actually exist.

Microsoft was a late-comer to the software-as-a-product game. Back in the 1970s IBM was forced into a consent decree to unbundle its software--OS, SW development tools, utilities--from its hardware. Originally IBM only leased its computers and the OSes, software development toolchain, and various products such as the sort utility were provided for free, along with the source code for them. IBM was later forced to sell its machines as well as lease them, and the unbundling of software was the last step in the progression.

There were already third party software vendors in the IBM mainframe world in the 1970s. If you were at all serious about sorting you bought SyncSort instead of using the freebie IBM sort utility. In the research world there were pay-for statistical packages such as SPSS and BMDP. And third-party database products.

IBM decided to make lemons out of lemonade and discovered to their delight that they now could make money by selling the software that they used to just give away.
Naturally if you're making customers pay for software you don't want the liability risk of having them tinker with it, so you don't provide the sources anymore.

Software has a radically different business model from hardware--there's a big initial development cost but manufacturing comes essentially for free. Management types used to the hardware-centric world initially had a lot of trouble seeing software as a revenue source. DEC never quite got the hang of it, and even today Intel doesn't understand software.

By the time Microsoft came along selling software for profit was well-established. I remember when I saw PC software for sale for the first time being astonished that people would actually pay for copies of simple game programs such as "Hunt the Wumpus" or "Adventure".

-Paul W.
* [TUHS] Re: Has this been discussed on-list? How Unix changed Software.
  2022-09-07 17:13 ` Paul Winalski
@ 2022-09-08 14:12 ` Paul Winalski
  0 siblings, 0 replies; 14+ messages in thread

From: Paul Winalski @ 2022-09-08 14:12 UTC (permalink / raw)
To: steve jenkin; +Cc: TUHS

On 9/7/22, Paul Winalski <paul.winalski@gmail.com> wrote:
>
> IBM decided to make lemons out of lemonade and discovered to their
> delight that they now could make money by selling the software that
> they used to just give away. Naturally if you're making customers pay
> for software you don't want the liability risk of having them tinker
> with it, so you don't provide the sources anymore.

I of course meant "make lemonade out of lemons".

-Paul W. [wiping face off egg :-) ]
* [TUHS] Re: Has this been discussed on-list? How Unix changed Software.
  2022-09-06 16:09 ` [TUHS] " Marc Donner
  2022-09-07 4:00 ` steve jenkin
@ 2022-09-07 5:15 ` steve jenkin
  2022-09-07 13:20 ` Dan Cross
  1 sibling, 1 reply; 14+ messages in thread

From: steve jenkin @ 2022-09-07 5:15 UTC (permalink / raw)
To: TUHS; +Cc: Marc Donner

> On 7 Sep 2022, at 02:09, Marc Donner <marc.donner@gmail.com> wrote:
> [snip - Marc’s message quoted in full]

Marc,

Good observations. Thank you.

I’ve never heard anyone mention that “reading large codebases” was the best way to learn programming.
Absolutely my experience as well.

If Professional Programmers aren’t doing “Programming in the Large” to provide critical services for others, then what work are they doing?

In its first 10 years (1974-84), the future of Unix was uncertain.
The formation of SUN in 1982 and other Unix-only vendors made Unix a commercial alternative, complete with support and a help number.

At UNSW, there was a significant political battle over Unix.
The manager of CSU (central Computing Services Unit) resigned over Unix. His 35 staff later supported Unix across the Uni.
If he’d won the battle, it’s very likely all Unix at UNSW would’ve been expunged, stopping the networking work with Sydney University, shutting down the Unix kernel course & dramatically slowing the spread of Unix in Australia.
Robert Elz at Melbourne Uni was later an important contributor to IP protocols and DNS.
In the 1984 BSTJ issue on Unix, there’s no mention of SUN (1982) & SunOS, but they do note “100,000 licenses” had been shipped, up from the 300 internal & ~600 total licenses mentioned in the 1978 BSTJ.
While still not “cannot fail” status, Unix’s future was becoming more certain.

Today, there are 2-4 billion active smartphones and tablets - almost all of which run Unix variants - Android and iOS.
[ I’m sure other ‘platforms’ exist, but I haven’t followed the market closely ]
There are an estimated 200M-250M “Personal Computers” in active use - 10% of all active devices.
Even if all run Microsoft and not Chromebooks, MS-Windows is now a minor player.

I’ve no idea how big the fleet of servers powering “The Cloud” in Datacentres is, but inferring from power consumption, it’s measured in millions.
Only MS-Azure provides Windows instances, and even then it runs on top of a hypervisor, not bare metal.
Is MS Hyper-V derived from a Unix variant? If not, it is certainly influenced by VMware & Xen, which were.

To a first approximation, 90%+ of ‘computers’ now run a Unix variant.
[ disregarding the larger fleet of embedded devices, especially in cars ]

As you say, UNIX & its variants broke the monopoly / lock-in of software by hardware vendors.
The timing of Unix displacing hardware-enforced “Software Silos” wasn’t accidental.
[ A notable beneficiary of breaking Silos is Oracle - their early promise was database “portability”. ]
It falls directly out of “Moore’s Law” and “Bell’s Law of Computer Classes”.

The PDP-11 ‘regressed’ to 16-bits compared to the IBM 360’s 32-bits: Bell’s Law in action - a new, much cheaper, lower-performance “class” appearing each decade.
In 1977, UNSW acquired a PDP-11/70 for teaching that was 1/10th the price of the IBM 360/50 that’d been purchased in 1966. [ the 11/70 was in service in April 1978 - hardware delays ]
This 11/70 provided at least the same raw performance as the 360/50, but had ~50 2400-baud terminals attached, not cards + printer.
It was much more effective for learning / teaching and provided much higher “useful throughput” than the IBM batch system.
Certainly with VDU’s, much less paper was wasted :)

DEC and others first leveraged the cheaper, faster silicon transistors to build bipolar discrete-part machines: e.g. the PDP-7 co-opted for “Space Travel” by Ken in 1969.
DTL became TTL, digital IC’s grew larger, cheaper, faster - with “Mini computer” manufacturers rolling out new models at ever better price-points, more rapidly than ’traditional’ mainframe vendors.
Minicomputers adopted “chipsets” to implement the CPU, leading in a few years to single-chip “Microprocessors”, often with co-processors for expensive operations, like floating point.

The invention of RISC led to a whole new class of mini-computers with single-chip CPU’s and a new class of system: the Graphical Workstation [ SUN & SGI ] - not quite Bell’s lowest-performance class, one with significant new non-CPU capabilities.
Without UNIX, there couldn’t have been a RISC revolution, because there’d have been no quality software for vendors to pick up: kernel, tools, Graphical UI and 3rd-party Software on these platforms.

The “Dot Boom” that ended in 2000 was only possible because of high-performance UNIX servers for web, storage & database.
e-Bay started with Oracle on SUN servers. A solid, dependable system design.

Google didn’t invent Internet Search, but they did come up with the Internet-scale Data Centre, creating highly available systems using low-cost, “less reliable” commodity hardware.
Is this a new Bell’s Law Class? It’s more a system architecture and operational arrangement, implemented almost entirely in software.

Amazon leveraged their expertise and design of Internet-scale Datacentres into a massive “Cloud” business - not bundled into its own products, but ‘rentable’ by customers by the hour.
Netflix, when it changed from mailing DVD’s to streaming, based its business on renting Amazon servers, storage & bandwidth.
We now have yet-cheaper computing services, available with zero Capital outlay and scalable to unprecedented sizes. It follows Bell’s Law, while extending it.

In 2007, when Apple redefined the Smartphone - using ARM (Acorn RISC Machine) and a variant of Unix - they created a new class of computing device. The device was designed to “Just Work” - near-zero admin, self-configuring, and a highly reliable O/S, UI & Apps. Critically, Apple never tried to maximise the “utilisation” of the CPU & its resources - they put in fast CPU’s & aggressively managed power consumption to extend battery life.

The Mainframe-era economic model was inverted with the desktop computer - minimise wasted User time, not Computer time. The Smartphone took this “people first” approach to a new level.

For me, Apple’s most important invention - on top of the “Just Works” platform - was the App Store. It builds on “The Cloud” and Internet services, providing an almost direct Software Vendor to Client channel, using a secure & verified distribution system with embedded payments.

Modern smartphone / tablet system design, based around “Sandboxes” and a stringent control layer, seems to contain “malevolent” Apps well enough (no security is perfect, but “Good Enough” seems attainable). Without the App Store and Sandboxed Apps, we couldn’t have 2B-4B smartphones. We know from the MS-Windows PC & Server ecosystem [ and PHP / WordPress ] that “Bad Actors” will organise and actively exploit system vulnerabilities, making large fleets of exploitable devices unusable - either because resources are co-opted and the device is unresponsive, or because it’s compromised and can’t be trusted.

Ironically, Moore’s Law couldn’t have proceeded as long and as quickly as it has since 1965 without the availability of Software to turn raw Silicon + Watts into functional, useful systems. Intel now owes a lot more business to Unix and its variants than to MS-Windows.
It’s not unreasonable IMHO to say that Unix and its variants “Changed the World” and are now the most prevalent O/S on the planet.

=======

Sorry for the long piece - I know that TUHS is not the forum for observations like these, not confined to Early Unix. I’d have moved this to COFF, but I’ve not been able to get onto that list so far.

regards
steve

--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin

^ permalink raw reply [flat|nested] 14+ messages in thread
* [TUHS] Re: Has this been discussed on-list? How Unix changed Software.
2022-09-07 5:15 ` steve jenkin @ 2022-09-07 13:20 ` Dan Cross 2022-09-07 13:52 ` Steve Nickolas
0 siblings, 1 reply; 14+ messages in thread
From: Dan Cross @ 2022-09-07 13:20 UTC (permalink / raw)
To: steve jenkin; +Cc: TUHS, Marc Donner

On Wed, Sep 7, 2022 at 1:16 AM steve jenkin <sjenkin@canb.auug.org.au> wrote:
> There’s an estimated 200M-250M “Personal Computers” in active use - 10% of all active devices. Even if all run Microsoft and not Chromebooks, MS-Windows is now a minor player.

This is true, but in some sense has always been true, in that the number of embedded devices using microcontrollers and so forth has always dwarfed the number of desktop PC-class computers in the world (or, rather, has done so since the late 70s or so, I'd imagine). However, to call Windows a minor player is to ignore the importance of the applications that run on those desktop machines (and laptops, etc). The PC might be on the decline in absolute market terms, but it seems obvious that for specialist applications we're going to need something resembling it for the foreseeable future.

> I’ve no idea how big the fleet of servers powering “The Cloud” in Datacentres is, but inferring from power consumption, it’s measured in millions.

Aggregated across all the major players, I'd put the number at tens of millions, possibly in excess of 100 million at the high end of the estimate range.

> Only MS-Azure provide Windows instances and then it runs on top of a hypervisor, not bare metal. Is MS Hyper-V derived from a Unix variant? If not, is certainly influenced by VMware & Xen which were.

First, a factual correction: at least Google's cloud, and I'm fairly certain AWS, can run Windows, not just Azure. Second, I have it directly from folks who worked on Hyper-V that it is neither based on Unix/Linux, nor strongly influenced by either VMWare or Xen.
It was an entirely in-house project at MSFT that probably includes more DNA strands from Windows than Unix et al. Some of the stories about clashes with Cutler to change the Windows startup sequence to accommodate Hyper-V are interesting (short version: Cutler, in typical fashion, didn't want to, and early versions of Hyper-V basically booted under Windows, then "took over" the running machine so that Windows resumed, but suddenly in a VM where it had not been before).

> To a first approximation, 90%+ of ‘computers' now run a Unix variant. [ disregarding the larger fleet of embedded devices, especially in cars ]
> As you say, UNIX & its variants broke the monopoly / lock-in of software by hardware vendors.

The above notwithstanding, I absolutely believe these numbers. Unix and its derivatives (most notably Linux) have become the de-facto platform for modern computation, as you conclude.

- Dan C.
* [TUHS] Re: Has this been discussed on-list? How Unix changed Software.
2022-09-07 13:20 ` Dan Cross @ 2022-09-07 13:52 ` Steve Nickolas
0 siblings, 0 replies; 14+ messages in thread
From: Steve Nickolas @ 2022-09-07 13:52 UTC (permalink / raw)
To: TUHS

On Wed, 7 Sep 2022, Dan Cross wrote:
> Second, I have it directly from folks who worked on Hyper-V that it is
> neither based on Unix/Linux, nor strongly influenced by either VMWare
> or Xen. It was an entirely in-house project at MSFT that probably
> includes more DNA strands from Windows than Unix et al. Some of the
> stories about clashes with Cutler to change the Windows startup
> sequence to accommodate Hyper-V are interesting (short version:
> Cutler, in typical fashion, didn't want to and early versions of
> Hyper-V basically booted under windows, then "took over" the running
> machine so that Windows resumed, but suddenly in a VM where it had not
> been before).

That sounds like what Compaq's CEMM (a.k.a. EMM386) did with MS-DOS.

-uso.
* [TUHS] STDIN/OUT vs APIs [was: How Unix changed Software]
2022-09-05 23:48 [TUHS] Has this been discussed on-list? How Unix changed Software steve jenkin 2022-09-06 16:09 ` [TUHS] " Marc Donner @ 2022-09-07 12:53 ` Brian Zick 2022-09-07 13:19 ` [TUHS] " John Cowan
1 sibling, 1 reply; 14+ messages in thread
From: Brian Zick @ 2022-09-07 12:53 UTC (permalink / raw)
To: tuhs

On Mon, Sep 5, 2022, at 6:48 PM, steve jenkin wrote:
> While the Unix “Software Tools” approach - mediated by the STDOUT /
> STDIN interface, not API’s - was new & radical, and for many classes of
> problems, provided a definitive solution,
> I’d not include it in a list of “Open Source” features.

I am very curious to hear more about the implications and practical benefits of this from folks who have thoroughly exploreded it. In my experience I’ve mainly used APIs, and I’m having trouble imagining the other approach for anything other than text-processing.

B
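[ Editor's note: a concrete way to see the contrast Brian asks about. In the "Software Tools" style the only interface between programs is a byte stream on STDIN/STDOUT, so composition happens in the shell, not through a linked-in library API. A minimal sketch - the word-frequency pipeline is illustrative, not taken from the original post: ]

```shell
# Count distinct words, most frequent first. Each stage is a separate
# program that reads STDIN and writes STDOUT, knowing nothing of its
# neighbours - the shell does all the composition.
printf 'the cat sat on the mat\n' |
  tr ' ' '\n' |   # one word per line
  sort |          # bring identical words together
  uniq -c |       # count each run of identical lines
  sort -rn        # most frequent first ("2 the" tops the list)
```

An API-mediated equivalent would require every stage to be compiled against a shared library and data structure; here the "contract" is just lines of text - which is also why, as Steve noted, the approach generalises poorly to graphical and long-lived networked programs.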
* [TUHS] Re: STDIN/OUT vs APIs [was: How Unix changed Software]
2022-09-07 12:53 ` [TUHS] STDIN/OUT vs APIs [was: How Unix changed Software] Brian Zick @ 2022-09-07 13:19 ` John Cowan 2022-09-07 15:39 ` Joe
0 siblings, 1 reply; 14+ messages in thread
From: John Cowan @ 2022-09-07 13:19 UTC (permalink / raw)
To: Brian Zick; +Cc: tuhs

On Wed, Sep 7, 2022 at 8:54 AM Brian Zick <brian@zick.io> wrote:
> I am very curious to hear more about the implications and practical
> benefits of this from folks that have thoroughly explored it. In my
> experience I’ve mainly used APIs, and I’m having trouble imagining the
> other approach other than for text-processing.

See <https://jpaulm.github.io/fbp/> for an explanation of Flow-Based Programming, a realization of the same pipelining idea, but extended to arbitrary directed graphs with multiple input and output ports. It was developed in complete ignorance of Unix pipelines except by bare and misleading report, and in entirely distinct application domains, yet with exact convergence. "It steam-engines when it comes steam-engine time."
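[ Editor's note: John's point that FBP extends the pipeline idea "to arbitrary directed graphs with multiple input and output ports" can be sketched even in the shell - named pipes (FIFOs) let one producer feed two independent consumers, a minimal fan-out graph rather than a straight line. A sketch only: real FBP gives each component multiple named ports and its own scheduler. ]

```shell
# One producer, two consumers: a tiny directed graph, not a linear pipeline.
fifo=/tmp/branch.$$
mkfifo "$fifo"

# Consumer 1 (sum) runs in the background, fed by tee through the FIFO;
# Consumer 2 (count) reads the FIFO directly in the foreground.
seq 1 10 | tee "$fifo" | awk '{s += $1} END {print "sum", s}' &
awk '{n++} END {print "count", n}' < "$fifo"

wait          # let the background branch finish
rm "$fifo"    # prints "count 10" and "sum 55"; output order may vary
```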
* [TUHS] Re: STDIN/OUT vs APIs [was: How Unix changed Software]
2022-09-07 13:19 ` [TUHS] " John Cowan @ 2022-09-07 15:39 ` Joe 2022-09-07 15:43 ` John Cowan
0 siblings, 1 reply; 14+ messages in thread
From: Joe @ 2022-09-07 15:39 UTC (permalink / raw)
To: tuhs

On 9/7/22 15:19, John Cowan wrote:
> On Wed, Sep 7, 2022 at 8:54 AM Brian Zick <brian@zick.io> wrote:
>> I am very curious to hear more about the implications and practical
>> benefits of this from folks that have thoroughly explored it. In my
>> experience I’ve mainly used APIs, and I’m having trouble imagining the
>> other approach other than for text-processing.
>
> See <https://jpaulm.github.io/fbp/> for an explanation of Flow-Based
> Programming, a realization of the same pipelining idea, but extended to
> arbitrary directed graphs with multiple input and output ports. It was
> developed in complete ignorance of Unix pipelines except by bare and
> misleading report, and in entirely distinct application domains, yet with
> exact convergence. "It steam-engines when it comes steam-engine time."

That page reminds me of CMS/Pipelines, a beautiful Rexx-like language:

https://en.wikipedia.org/wiki/CMS_Pipelines
* [TUHS] Re: STDIN/OUT vs APIs [was: How Unix changed Software]
2022-09-07 15:39 ` Joe @ 2022-09-07 15:43 ` John Cowan 2022-09-07 16:01 ` Charles H Sauer (he/him)
0 siblings, 1 reply; 14+ messages in thread
From: John Cowan @ 2022-09-07 15:43 UTC (permalink / raw)
To: Joe; +Cc: tuhs

On Wed, Sep 7, 2022 at 11:39 AM Joe <joe@celo.io> wrote:
> That page reminds me of CMS/Pipelines, a beautiful Rexx-like language:
>
> https://en.wikipedia.org/wiki/CMS_Pipelines

In that case, however, it was a matter of what anthros call "stimulus diffusion". The CMS people read the Unix BSTJ issue, thought "We should do that", and wrote it.
* [TUHS] Re: STDIN/OUT vs APIs [was: How Unix changed Software]
2022-09-07 15:43 ` John Cowan @ 2022-09-07 16:01 ` Charles H Sauer (he/him)
0 siblings, 0 replies; 14+ messages in thread
From: Charles H Sauer (he/him) @ 2022-09-07 16:01 UTC (permalink / raw)
To: tuhs

On 9/7/2022 10:43 AM, John Cowan wrote:
> On Wed, Sep 7, 2022 at 11:39 AM Joe <joe@celo.io> wrote:
>
>     That page reminds me of CMS/Pipelines, a beautiful Rexx-like language:
>
>     https://en.wikipedia.org/wiki/CMS_Pipelines
>
> In that case, however, it was a matter of what anthros call "stimulus
> diffusion". The CMS people read the Unix BSTJ issue, thought "We should
> do that", and wrote it.

That last sentence is literally true, based on page 63 of http://www.leeandmelindavarian.com/Melinda/25paper.pdf, and I can easily imagine Peter Capek doing as cited on that page. However, the general acceptance by "CMS people" was belated/begrudging, as CMS Pipelines weren't available outside IBM to any customers until 1986, and not available in the U.S. until 1989.

--
voice: +1.512.784.7526
e-mail: sauer@technologists.com
fax: +1.512.346.5240
Web: https://technologists.com/sauer/
Facebook/Google/Twitter: CharlesHSauer
end of thread, other threads:[~2022-09-08 14:14 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-05 23:48 [TUHS] Has this been discussed on-list? How Unix changed Software steve jenkin
2022-09-06 16:09 ` [TUHS] " Marc Donner
2022-09-07 4:00 ` steve jenkin
2022-09-07 14:58 ` John Cowan
2022-09-07 17:13 ` Paul Winalski
2022-09-08 14:12 ` Paul Winalski
2022-09-07 5:15 ` steve jenkin
2022-09-07 13:20 ` Dan Cross
2022-09-07 13:52 ` Steve Nickolas
2022-09-07 12:53 ` [TUHS] STDIN/OUT vs APIs [was: How Unix changed Software] Brian Zick
2022-09-07 13:19 ` [TUHS] " John Cowan
2022-09-07 15:39 ` Joe
2022-09-07 15:43 ` John Cowan
2022-09-07 16:01 ` Charles H Sauer (he/him)
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).