Computer Old Farts Forum
* [COFF] Re: Terminology query - 'system process'?
@ 2023-12-20 20:35 Noel Chiappa
  0 siblings, 0 replies; 22+ messages in thread
From: Noel Chiappa @ 2023-12-20 20:35 UTC (permalink / raw)
  To: coff; +Cc: jnc

    > the first DEC machine with an IC processor was the -11/20, in 1970

Clem has reminded me that the first was the PDP-8/I-L (the second was a
cost-reduced version of the -I), from 1968. The later, and much more common,
PDP-8/E-F-M were contemporaneous with the -11/20.

Oh well, only two years; doesn't really affect my main point. Just about
'blink and you'll miss them'!

	Noel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-20 19:31 Noel Chiappa
  2023-12-20 20:29 ` Paul Winalski
@ 2023-12-31  3:51 ` steve jenkin
  1 sibling, 0 replies; 22+ messages in thread
From: steve jenkin @ 2023-12-31  3:51 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: coff

Noel,

Adding a little to your observation on how fast CMOS microprocessors took over.

It wasn’t just DEC and IBM who were in financial trouble in 1992:
 - the whole US minicomputer industry had been hit.

DEC did well just to survive, albeit only for another 5 years before being bought by Compaq (which itself later merged with HP).

steve j

> On 21 Dec 2023, at 06:31, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
> 
> I'm impressed, in retrospect, with how quickly the world went from processors
> built with transistors, through processors built out of discrete ICs, to
> microprocessors. To give an example; the first DEC machine with an IC
> processor was the -11/20, in 1970 (the KI10 was 1972); starting with the
> LSI-11, in 1975, DEC started using microprocessors; the last PDP-11 with a
> CPU made out of discrete ICs was the -11/44, in 1979. All -11's produced
> after that used microprocessors.
> 
> So just 10 years... Wow.
> 
> Noel

============

<http://archive.computerhistory.org/resources/access/text/2018/08/102740418-05-01-acc.pdf> [ 1,070 pg ]
<https://gordonbell.azurewebsites.net/Digital/Minicomputers_The_DEC_aka_Digital_Story.ppt> [ power point ]

The Birth and Passing of Minicomputers: From A Digital Equipment Corp. (DEC) Perspective
Gordon Bell
11 October 2006

(intro slide)

(DEC) 1957-1998. 41 yrs., 4 generations: transistor, IC, VLSI, clusters - winner take all
How computer classes form...and die.
Not dealing with technology = change = disruption

pg 21

91 Minicomputer companies 1984
by 1990 only 4 survived. (DG, DEC, HP, IBM)

============

--
Steve Jenkin, IT Systems and Design 
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA

mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin



* [COFF] Re: Terminology query - 'system process'?
  2023-12-20 19:31 Noel Chiappa
@ 2023-12-20 20:29 ` Paul Winalski
  2023-12-31  3:51 ` steve jenkin
  1 sibling, 0 replies; 22+ messages in thread
From: Paul Winalski @ 2023-12-20 20:29 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: coff

On 12/20/23, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>
> To give an example; the first DEC machine with an IC
> processor was the -11/20, in 1970 (the KI10 was 1972); starting with the
> LSI-11, in 1975, DEC started using microprocessors; the last PDP-11 with a
> CPU made out of discrete ICs was the -11/44, in 1979. All -11's produced
> after that used microprocessors.

The VAX-11/780, 11/750, and 11/730 were all implemented using
7400-series discrete, gate-level TTL integrated circuits.  The planned
follow-on series was Venus (ECL gate level; originally planned to be
the 11/790 but after many delays released as the VAX 8600), Gemini (a
two-board VAX implementation; cancelled), and Scorpio (a chip set
eventually released as the VAX 8200).  Superstar, released as the
VAX-11/785, is a re-implementation of the 11/780 using faster TTL.
The floating point accelerator board for the 11/785 was implemented
using Fast Fairchild TTL.  The first microprocessor implementation of
the VAX architecture was the MicroVAX-I.  It, and all later VAX
processors, implemented only the MicroVAX subset of the VAX
architecture in hardware and firmware.  The instructions left out were
the character string instructions (except for MOVC), decimal
arithmetic, H-floating point, octaword, and a few obscure, little-used
instructions such as EDITPC and CRC.  The missing instructions were
simulated by the OS.  These instructions were originally dropped from
the architecture because there wasn't enough real estate on a chip to
hold the microcode for them.  It's interesting that they continued to
be simulated in macrocode even after several process shrink cycles
made it feasible to move them to microcode.

I wrote a distributed (computation could be done in parallel over a
DECnet LAN or WAN) Mandelbrot Set computation and display program for
VAX/VMS.  It was implemented in PL/I.  The actual display was
incredibly sluggish, and I went in search of the performance problem.
It turned out to be in the Cartesian-to-screen coordinate translation
subroutine.  The program did its computations in VAX D-float double
precision floating point.  The window was 800x800 pixels and this was
divvied up into 32x32-pixel cells for distributed, parallel
computation.  The expression "800/32" occurred in the coordinate
conversion program.  In PL/I language data type semantics, this
expression is "fixed decimal(3,0) divided by fixed decimal(2,0)".
This expression was being multiplied by a VAX D-float (float decimal
in PL/I) and this mixture of fixed and float triggered one of the more
bizarre of PL/I's baroque implicit data type conversions.  First, the
D-float values were converted to full-precision (15-digit) packed
decimal by calling one of the Fortran RTL's subroutines.  All the
arithmetic in the routine was done in full-precision packed decimal.
The result was then converted back to D-float (again by a Fortran RTL
call).  There were effectively only two instructions in the whole
subroutine that weren't simulated in macrocode, and one of those was
the RET instruction at the end of the routine!  I changed the
offending expression to "800E0/32E0" and got a 100X
speedup--everything was now being done in (hardware) D-float.

-Paul W.


* [COFF] Re: Terminology query - 'system process'?
@ 2023-12-20 19:31 Noel Chiappa
  2023-12-20 20:29 ` Paul Winalski
  2023-12-31  3:51 ` steve jenkin
  0 siblings, 2 replies; 22+ messages in thread
From: Noel Chiappa @ 2023-12-20 19:31 UTC (permalink / raw)
  To: coff; +Cc: jnc

    > From: Derek Fawcus

    > How early does that have to be? MP/M-1.0 (1979 spec) mentions this, as
    > "Resident System Processes" ... It was a bank-switching, multiuser,
    > multitasking system for a Z80/8080.

Anything with a microprocessor is, by definition, late! :-)

I'm impressed, in retrospect, with how quickly the world went from processors
built with transistors, through processors built out of discrete ICs, to
microprocessors. To give an example; the first DEC machine with an IC
processor was the -11/20, in 1970 (the KI10 was 1972); starting with the
LSI-11, in 1975, DEC started using microprocessors; the last PDP-11 with a
CPU made out of discrete ICs was the -11/44, in 1979. All -11's produced
after that used microprocessors.

So just 10 years... Wow.

	Noel


* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 23:29 Noel Chiappa
                   ` (2 preceding siblings ...)
  2023-12-15 13:43 ` Dan Cross
@ 2023-12-19 13:54 ` Derek Fawcus via COFF
  3 siblings, 0 replies; 22+ messages in thread
From: Derek Fawcus via COFF @ 2023-12-19 13:54 UTC (permalink / raw)
  To: coff

On Thu, Dec 14, 2023 at 06:29:35PM -0500, Noel Chiappa wrote:
> Interestingly, other early systems don't seem to have thought of this structuring technique.

How early does that have to be?  MP/M-1.0 (1979 spec) mentions this,
as "Resident System Processes"

    http://www.bitsavers.org/pdf/digitalResearch/mpm_I/MPM_1.0_Specification_Aug79.pdf

It was a bank-switching, multiuser, multitasking system for a Z80/8080.
It mentions 5 such processes.

Later versions, and the 8086 version still had them. The MP/M-86 docs mention
'Terminal Message', Clock, Echo and 'System Status' processes.  I believe the
first was spawned one per console.

(Some of the internal structures suggest it was intended to support swapping,
 but I don't know if that was implemented in terms of disk swapping)

DF



* [COFF] Re: Terminology query - 'system process'?
  2023-12-16 19:21       ` Paul Winalski
@ 2023-12-16 19:44         ` Paul Winalski
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Winalski @ 2023-12-16 19:44 UTC (permalink / raw)
  To: coff

IBM's OS/360 did not have the modern "process" concept using virtual
memory to implement a thread of control with its own, separate address
space. Instead it had the concept of threads of control associated
with contiguous segments of physical memory called "partitions".  You
had either a pre-defined set of partitions (OS/MFT, multiprogramming
with a fixed number of tasks) or the OS allocated partitions
on-the-fly as needed for the current mix of jobs (OS/MVT,
multiprogramming with a variable number of tasks).  OS/VS1, OS/VS2
SVS, and DOS/VS for System/370 operated in the same way, except there
was a single virtual address space, usually much larger than physical
memory, that was partitioned up.  DOS/360 ran one job at a time.
DOS/VS had up to 5 partitions:  BG (background), and P1-P4.
Scheduling in DOS/VS was strictly preemptive, in the order P1, P2, P3,
P4, BG.  P1 got control whenever it was ready to run.  If P1 was
stalled, P2 was scheduled, then P3 through BG, which only got to run
whenever the higher-priority jobs were stalled.  The most
sophisticated (and resource-hogging) version of OS/VS was OS/VS MVS
(multiple virtual storages) which implemented the modern concept of
each partition (process) getting its own, separate 0-based address
space.

Some of these partitions might be allocated to privileged tasks, most
notably the spooling system such as HASP (Houston Automatic Spooling
Priority), which was developed by IBM contractors working at NASA's
Houston space facility in the mid-1960s.  HASP provided both spooling
and remote job entry services and ran at least partly in partition
(i.e., user process) context.  DOS/360 had a kernel (supervisor, in
IBM-speak) enhancement called POWER that provided spooling capability.
DOS/VS had POWER/VS, which ran in a separate partition (typically P1,
the highest priority).

VAX/VMS (and its successor OpenVMS) had a few privileged user-mode
processes to perform system tasks.  Two of these processes were OPCOM
(provides communication with the operator at the operator's console)
and JOB CONTROL (provides spooling and batch job services).  I think
OPCOM runs entirely in user mode.  JOB CONTROL may have some routines
that execute in kernel mode.  These system processes are similar to
daemons in Unix.

-Paul W.


* [COFF] Re: Terminology query - 'system process'?
  2023-12-16  2:04     ` Greg 'groggy' Lehey
@ 2023-12-16 19:21       ` Paul Winalski
  2023-12-16 19:44         ` Paul Winalski
  0 siblings, 1 reply; 22+ messages in thread
From: Paul Winalski @ 2023-12-16 19:21 UTC (permalink / raw)
  To: Greg 'groggy' Lehey; +Cc: coff

On 12/15/23, Greg 'groggy' Lehey <grog@lemis.com> wrote:
>
> At least in assembler (I never programmed HLLs under MVS), by
> convention R13 pointed to the save area.  From memory, subroutine
> calls worked like:
>
>   LA    15,SUBR        load address of subroutine
>   BALR  14,15          call subroutine, storing address in R14
>
> The subroutine then starts with
>
>   STM  14,12,12(13)    save registers 14 to 12 (wraparound) in old save area
>   LA   14,SAVE         load address of our save area
>   ST   14,8(13)	       save in linkage of old save area
>   LR   13,14           and point to our save areas
>
> Returning from the subroutine was then
>
>   L    13,4(13)        restore old save area
>   LM   14,12,12(13)    restore the other registers
>   BR   14              and return to the caller
>
> Clearly this example isn't recursive, since it uses a static save
> area.  But with dynamic allocation it could be recursive.

Yes, that was the most common calling convention in S/360/370, and
the one that was used if you were implementing a subroutine package
for general use.  It has the advantage that the (caller-allocated)
register save area has room for all of the registers and so there is
no need to change the caller code if the callee is changed to use an
additional register.  It also makes it very convenient to implement
unwinding from an exception handler.  But it does burn 60 bytes for
the register save area and if you're programming for a S/360 model 25
with only 32K of user-available memory that can be significant.

Those writing their own assembly code typically cut corners on this
convention in order to reduce the memory footprint and the execution
time spent saving/restoring registers.

There's been long debate by ABI and compiler designers over the
relative merits of assigning the duties of allocating the register
save area  (RSA) and saving/restoring registers to either the caller
or the callee.  The IBM convention has the caller allocate the RSA and
the callee save and restore the register contents.  One can also have
a convention where the caller allocates an RSA and saves/restores the
registers it is actively using.  Or a convention where the callee
allocates the RSA and saves/restores the registers it has modified.
Each convention has its merits and demerits.

The IBM PL/I compiler for OS and OS/VS (but not DOS and DOS/VS) had
three routine declaration attributes to assist in optimization of
routine calls.  Absent any other information, the compiler must assume
the worst--that the subroutine call may modify any of the global
variables, and that it may be recursive.  IBM PL/I had a RECURSIVE
attribute to flag routines that are recursive.  It also had two
attributes--USES and SETS--to describe the global variables that are
either used by (USES) or changed by (SETS) the routine.  Global
variables not in the USES list did not have to be spilled before the
call.  Similarly, global variables not in the SETS list did not have
to be re-loaded after the call.

IBM dropped USES and SETS from the PL/I language with the S/370
compilers.  USES and SETS were something of a maintenance nightmare
for application programmers.  They were very error-prone.  If you
didn't keep the USES and SETS declarations up-to-date when you
modified a routine you could introduce all manner of subtle stale data
bugs.  On the compiler writers' side, data flow analysis wasn't yet
advanced enough to make good use of the USES and SETS information
anyway.  Modern compilers perform interprocedural analysis when they
can and derive accurate global variable data flow information on their
own.

-Paul W.


* [COFF] Re: Terminology query - 'system process'?
  2023-12-15 17:51   ` Paul Winalski
  2023-12-15 18:08     ` Warner Losh
@ 2023-12-16  2:04     ` Greg 'groggy' Lehey
  2023-12-16 19:21       ` Paul Winalski
  1 sibling, 1 reply; 22+ messages in thread
From: Greg 'groggy' Lehey @ 2023-12-16  2:04 UTC (permalink / raw)
  To: Paul Winalski; +Cc: coff


On Friday, 15 December 2023 at 12:51:47 -0500, Paul Winalski wrote:
> The usual programming convention for IBM S/360/370 operating systems
> (OS/360, OS/VS, TOS and DOS/360, DOS/VS) did not involve use of a
> stack at all, unless one was writing a routine involving recursive
> calls, and that was rare.  Addressing for both program and data was
> done using a base register + offset.  PL/I is the only IBM HLL I know
> that explicitly supported recursion.  I don't know how they
> implemented automatic variables assigned to memory in recursive
> routines.  It might have been a linked list rather than a stack.

Yes, the 360 architecture doesn't have a hardware stack.  Subroutine
calls worked with something like a linked list: registers were saved
in a "save area", and the save areas were linked together.

At least in assembler (I never programmed HLLs under MVS), by
convention R13 pointed to the save area.  From memory, subroutine
calls worked like:

  LA    15,SUBR        load address of subroutine
  BALR  14,15          call subroutine, storing address in R14

The subroutine then starts with

  STM  14,12,12(13)    save registers 14 to 12 (wraparound) in old save area
  LA   14,SAVE         load address of our save area
  ST   14,8(13)	       save in linkage of old save area
  LR   13,14           and point to our save areas

Returning from the subroutine was then

  L    13,4(13)        restore old save area
  LM   14,12,12(13)    restore the other registers
  BR   14              and return to the caller

Clearly this example isn't recursive, since it uses a static save
area.  But with dynamic allocation it could be recursive.

> I remember when I first went from the IBM world and started
> programming VAX/VMS, I thought it was really weird to burn an entire
> register just for a process stack.

Heh.  Only one register?

/370 was an experience for me, one I never wanted to repeat.

Greg
--
Sent from my desktop computer.
Finger grog@lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed.  If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php



* [COFF] Re: Terminology query - 'system process'?
  2023-12-15  6:24 ` Lars Brinkhoff
@ 2023-12-15 18:30   ` Stuff Received
  0 siblings, 0 replies; 22+ messages in thread
From: Stuff Received @ 2023-12-15 18:30 UTC (permalink / raw)
  To: coff

On 2023-12-15 01:24, Lars Brinkhoff wrote:
> For the record, daring to stray outside Unix, ITS originally had a
> "system job", and later split off a "core job".  Both running in monitor
> mode.  Job means the same thing as process here, and monitor same as
> kernel.
> 
> --- >8 --- cut and stop reading here --- >8 --- off topic --- >8 ---

 From https://www.tuhs.org/cgi-bin/mailman/listinfo/coff

"The Computer Old Farts Forum provides a place for people to discuss the 
history of computers and their future."

It seems that you are well within bounds.

S.


* [COFF] Re: Terminology query - 'system process'?
  2023-12-15 17:51   ` Paul Winalski
@ 2023-12-15 18:08     ` Warner Losh
  2023-12-16  2:04     ` Greg 'groggy' Lehey
  1 sibling, 0 replies; 22+ messages in thread
From: Warner Losh @ 2023-12-15 18:08 UTC (permalink / raw)
  To: Paul Winalski; +Cc: coff


On Fri, Dec 15, 2023 at 10:51 AM Paul Winalski <paul.winalski@gmail.com>
wrote:

> For me, the term "system process" means either:
>
> o A conventional, but perhaps privileged user-mode process that
> performs a system function.  An example would be the output side of a
> spooling system, or an operator communications process.
>
> o A process, or at least an address space + execution thread, that
> runs in privileged mode on the hardware and whose address space is in
> the resident kernel.
>
> Do Unix system processes participate in time-sliced scheduling the way
> that user processes do?
>

Yes. At least on FreeBSD they do. They are just processes that get
scheduled. They may have different priorities, etc, but all that factors
in, and those priorities allow them to compete with and/or preempt already
running processes depending on a number of things. The only thing
special about kernel-only threads/processes is that they are optimized
knowing they never have a userland associated with them...


> On 12/14/23, Bakul Shah <bakul@iitbombay.org> wrote:
> >
> > Exactly! If blocking was not required, you can do the work in an
> > interrupt handler. If blocking is required, you can't just use the
> > stack of a random process (while in supervisor mode) unless you
> > are doing some work specifically on its behalf.
> >
> >> Interestingly, other early systems don't seem to have thought of this
> >> structuring technique.
> >
> > I suspect IBM operating systems probably did use them. At least TSO
> > must have. Once you start *accounting* (and charging) for cpu time,
> > this idea must fall out naturally. You don't want to charge a process
> > for kernel time used for an unrelated work!
>
> The usual programming convention for IBM S/360/370 operating systems
> (OS/360, OS/VS, TOS and DOS/360, DOS/VS) did not involve use of a
> stack at all, unless one was writing a routine involving recursive
> calls, and that was rare.  Addressing for both program and data was
> done using a base register + offset.  PL/I is the only IBM HLL I know
> that explicitly supported recursion.  I don't know how they
> implemented automatic variables assigned to memory in recursive
> routines.  It might have been a linked list rather than a stack.
>
> I remember when I first went from the IBM world and started
> programming VAX/VMS, I thought it was really weird to burn an entire
> register just for a process stack.
>
> > There was a race condition in V7 swapping code. Once a colleague and I
> > spent two weeks of 16 hour debugging days!
>
> I had a race condition in some multithread code I wrote.  I couldn't
> find the bug.  I even resorted to getting machine code listings of
> the whole program and marking the critical and non-critical sections
> with green and red markers.  I eventually threw all of the code out
> and rewrote it from scratch.  The second version didn't have the race
> condition.
>

The award for my 'longest bug chased' is at around 3-4 years. We had
a product, based on an arm9 CPU (so armv4), that would sometimes
hang. Well, individual threads in it would hang waiting for a lock, and so
weird aspects of the program stopped working in unusual ways. But the
root cause was a stuck lock, or missed wakeup. It took months to recreate
this problem. I tried all manner of debugging: to accelerate it reoccurring
(no luck), to audit all locks/unlocks/wakeups to make sure there were no
leaks or subtle mismatches (there weren't, despite a 100MB log file). It
went on and on. I rewrote all the locking / sleeping / etc code, but also
no dice. Then one day, by chance, I was talking to someone who asked me
about atomic operations. I blew them off at first, but then realized the
atomic ops weren't implemented in hardware, but in software with the
support of the kernel (there were no CPU-level atomic ops). Within an hour
of realizing this and auditing the code path, I had a fix to a race that
was trivial to discover once you looked at the code closely. My friend
also found the same race at about the same time I was finishing up my fix
(and found another race in my fix; go pair programming). With the
corrected fix, the weird hanging went away, only to be reported once
again... in a unit that hadn't been updated with the patch!

tl;dr: you never know what the root cause might be in weird, racy
situations.

Warner

[-- Attachment #2: Type: text/html, Size: 5467 bytes --]


* [COFF] Re: Terminology query - 'system process'?
  2023-12-15  1:15 ` Bakul Shah
@ 2023-12-15 17:51   ` Paul Winalski
  2023-12-15 18:08     ` Warner Losh
  2023-12-16  2:04     ` Greg 'groggy' Lehey
  0 siblings, 2 replies; 22+ messages in thread
From: Paul Winalski @ 2023-12-15 17:51 UTC (permalink / raw)
  To: coff

For me, the term "system process" means either:

o A conventional, but perhaps privileged user-mode process that
performs a system function.  An example would be the output side of a
spooling system, or an operator communications process.

o A process, or at least an address space + execution thread, that
runs in privileged mode on the hardware and whose address space is in
the resident kernel.

Do Unix system processes participate in time-sliced scheduling the way
that user processes do?

On 12/14/23, Bakul Shah <bakul@iitbombay.org> wrote:
>
> Exactly! If blocking was not required, you can do the work in an
> interrupt handler. If blocking is required, you can't just use the
> stack of a random process (while in supervisor mode) unless you
> are doing some work specifically on its behalf.
>
>> Interestingly, other early systems don't seem to have thought of this
>> structuring technique.
>
> I suspect IBM operating systems probably did use them. At least TSO
> must have. Once you start *accounting* (and charging) for cpu time,
> this idea must fall out naturally. You don't want to charge a process
> for kernel time used for an unrelated work!

The usual programming convention for IBM S/360/370 operating systems
(OS/360, OS/VS, TOS and DOS/360, DOS/VS) did not involve use of a
stack at all, unless one was writing a routine involving recursive
calls, and that was rare.  Addressing for both program and data was
done using a base register + offset.  PL/I is the only IBM HLL I know
that explicitly supported recursion.  I don't know how they
implemented automatic variables assigned to memory in recursive
routines.  It might have been a linked list rather than a stack.

I remember when I first went from the IBM world and started
programming VAX/VMS, I thought it was really weird to burn an entire
register just for a process stack.

> There was a race condition in V7 swapping code. Once a colleague and I
> spent two weeks of 16 hour debugging days!

I had a race condition in some multithread code I wrote.  I couldn't
find the bug.  I even resorted to getting machine code listings of
the whole program and marking the critical and non-critical sections
with green and red markers.  I eventually threw all of the code out
and rewrote it from scratch.  The second version didn't have the race
condition.

-Paul W.


* [COFF] Re: Terminology query - 'system process'?
  2023-12-15 14:20   ` Dan Cross
  2023-12-15 16:25     ` Warner Losh
@ 2023-12-15 17:13     ` Bakul Shah
  1 sibling, 0 replies; 22+ messages in thread
From: Bakul Shah @ 2023-12-15 17:13 UTC (permalink / raw)
  To: Dan Cross; +Cc: Noel Chiappa, coff

On Dec 15, 2023, at 6:21 AM, Dan Cross <crossd@gmail.com> wrote:
> 
> I remember reading a paper on the design of NFS (it may have been the
> BSD paper) and there was a note about how the NFS server process ran
> mostly in the kernel; user code created it, but pretty much all it did
> was invoke a system call that implemented the server. That was kind of
> neat.

At Valid Logic Systems I prototyped a relatively simple network filesystem.
Here there was no user code. There was one “agent” kernel thread per remote
system accessing local filesystem + a few more. The agent thread acted on
behalf of a remote system and maintained a session as long as at least one
local file/dir was referenced from that system. There were complications as it
was not a stateless design. I had to add code to detect when the remote
server/client died or rebooted and return ENXIO / clear out old state.


* [COFF] Re: Terminology query - 'system process'?
  2023-12-15 14:20   ` Dan Cross
@ 2023-12-15 16:25     ` Warner Losh
  2023-12-15 17:13     ` Bakul Shah
  1 sibling, 0 replies; 22+ messages in thread
From: Warner Losh @ 2023-12-15 16:25 UTC (permalink / raw)
  To: Dan Cross; +Cc: Noel Chiappa, coff


On Fri, Dec 15, 2023 at 7:21 AM Dan Cross <crossd@gmail.com> wrote:

> On Thu, Dec 14, 2023 at 5:17 PM Clem Cole <clemc@ccc.com> wrote:
> > I don't know of a standard name.   We used to call the kernel processes
> or kernel threads also.
>
> I've heard all combinations of (system|kernel) (thread|task|process),
> all of which mean more or less the same thing: something the kernel
> can schedule and run that doesn't have a userspace component, reusing
> the basic concurrency primitives in the kernel for its own internal
> purposes. I'm not sure I've heard "kernel daemon" before, but
> intuitively I'd lump it into the same category unless I was told
> otherwise (as Noel mentioned, of course Berkeley had the "pageout
> daemon" which ran only in the kernel).
>

FreeBSD (and likely others) have extended this to allow kernel
threads that we loosely call daemons as well. FreeBSD has APIs
for creating kernel-only threads and processes...


> > For instance, in the original Masscomp EFS code, we had a handful of
> processes that got forked after the pager using kernel code.  Since the
> basic UNIX read/write from the user space scheme is synchronous, the
> premade pool of kernel processes was dispatched as needed when we listened
> for asynchronous remote requests for I/O. This is similar to the fact that
> asynchronous devices from serial or network interfaces need a pool of
> memory to stuff things into since you never know ahead of time when it will
> come.
> > ᐧ
>
> I remember reading a paper on the design of NFS (it may have been the
> BSD paper) and there was a note about how the NFS server process ran
> mostly in the kernel; user code created it, but pretty much all it did
> was invoke a system call that implemented the server. That was kind of
> neat.


I recall discussions about this with the kernel people at Solbourne, who
were bringing up SunOS 4.0 on Solbourne hardware. nfsd was little more
than an N-way fork followed by the system call. It provided a process
context to sleep in, which couldn't be created in the kernel at the time.
I've not gone to the available SunOS sources to confirm this is what's
going on.

I'd thought, though, that this was the second nfsd implementation. The
first one would decode the requests off the wire and schedule the I/O.
It was only when there were issues with this approach that it moved into
the kernel. This was mostly a context-switch thing, but as more security
measures were added to the system that root couldn't bypass, NFS needed
to move into the kernel so it could bypass them. See getfh and similar
system calls. I'm not sure how much of this was done in SunOS; I'm only
familiar with the post-4.4BSD work...

Warner


>         - Dan C.
>
> > On Thu, Dec 14, 2023 at 4:48 PM Noel Chiappa <jnc@mercury.lcs.mit.edu>
> wrote:
> >> So Lars Brinkhoff and I were chatting about daemons:
> >>
> >>   https://gunkies.org/wiki/Talk:Daemon
> >>
> >> and I pointed out that in addition to 'standard' daemons (e.g. the
> printer
> >> spooler daemon, email daemon, etc, etc) there are some other things
> that are
> >> daemon-like, but are fundamentally different in major ways (explained
> later
> >> below). I dubbed them 'system processes', but I'm wondering if ayone
> knows if
> >> there is a standard term for them? (Or, failing that, if they have a
> >> suggestion for a better name?)
> >>
> >>
> >> Early UNIX is one of the first systems to have one (process 0, the
> "scheduling (swapping)
> >> process"), but the CACM "The UNIX Time-Sharing System" paper:
> >>
> >>   https://people.eecs.berkeley.edu/~brewer/cs262/unix.pdf
> >>
> >> doesn't even mention it, so no guidance there. Berkeley UNIX also has
> one,
> >> mentioned in "Design and Implementation of the Berkeley Virtual Memory
> >> Extensions to the UNIX Operating System":
> >>
> >>   http://roguelife.org/~fujita/COOKIES/HISTORY/3BSD/design.pdf
> >>
> >> where it is called the "pageout daemon".("During system initialization,
> just
> >> before the init process is created, the bootstrapping code creates
> process 2
> >> which is known as the pageout daemon. It is this process that ..
> writ[es]
> >> back modified pages. The process leaves its normal dormant state upon
> being
> >> waken up due to the memory free list size dropping below an upper
> >> threshold.") However, I think there are good reasons to dis-favour the
> term
> >> 'daemon' for them.
> >>
> >>
> >> For one thing, typical daemons look (to the kernel) just like 'normal'
> >> processes: their object code is kept in a file, and is loaded into the
> >> daemon's process when it starts, using the same mechanism that 'normal'
> >> processes use for loading their code; daemons are often started long
> after
> >> the kernel itself is started, and there is usually not a special
> mechanism in
> >> the kernel to start daemons (on early UNIXes, /etc/rc is run by the
> 'init'
> >> process, not the kernel); daemons interact with the kernel through
> system
> >> calls, just like 'ordinary' processes; the daemon's process runs in
> 'user'
> >> CPU mode (using the same standard memory mapping mechanisms, just like
> >> blah-blah).
> >>
> >> 'System processes' do none of these things: their object code is linked
> into
> >> the monolithic kernel, and is thus loaded by the bootstrap; the kernel
> >> contains special provision for starting the system process, which start
> as
> >> the kernel is starting; they don't do system calls, just call kernel
> routines
> >> directly; they run in kernel mode, using the same memory mapping as the
> >> kernel itself; etc, etc.
> >>
> >> Another important point is that system processes are highly intertwined
> with
> >> the operation of the kernel; without the system process(es) operating
> >> correctly, the operation of the system will quickly grind to a halt.
> The loss
> >> of 'ordinary' daemons is usually not fatal; if the email daemon dies, the
> >> system will keep running indefinitely. Not so, for the swapping
> process, or
> >> the pageout daemon
> >>
> >>
> >> Anyway, is there a standard term for these things? If not, a better
> name than
> >> 'system process'?
> >>
> >>         Noel
>

[-- Attachment #2: Type: text/html, Size: 8210 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 22:09 ` Clem Cole
@ 2023-12-15 14:20   ` Dan Cross
  2023-12-15 16:25     ` Warner Losh
  2023-12-15 17:13     ` Bakul Shah
  0 siblings, 2 replies; 22+ messages in thread
From: Dan Cross @ 2023-12-15 14:20 UTC (permalink / raw)
  To: Clem Cole; +Cc: Noel Chiappa, coff

On Thu, Dec 14, 2023 at 5:17 PM Clem Cole <clemc@ccc.com> wrote:
> I don't know of a standard name.   We used to call them kernel processes or kernel threads also.

I've heard all combinations of (system|kernel) (thread|task|process),
all of which mean more or less the same thing: something the kernel
can schedule and run that doesn't have a userspace component, reusing
the basic concurrency primitives in the kernel for its own internal
purposes. I'm not sure I've heard "kernel daemon" before, but
intuitively I'd lump it into the same category unless I was told
otherwise (as Noel mentioned, of course Berkeley had the "pageout
daemon" which ran only in the kernel).

> For instance, in the original Masscomp EFS code, we had a handful of processes that got forked after the pager using kernel code.  Since the basic UNIX read/write from the user space scheme is synchronous, the premade pool of kernel processes was dispatched as needed when we listened for asynchronous remote requests for I/O. This is similar to the fact that asynchronous devices from serial or network interfaces need a pool of memory to stuff things into since you never know ahead of time when it will come.

I remember reading a paper on the design of NFS (it may have been the
BSD paper) and there was a note about how the NFS server process ran
mostly in the kernel; user code created it, but pretty much all it did
was invoke a system call that implemented the server. That was kind of
neat.

        - Dan C.

> On Thu, Dec 14, 2023 at 4:48 PM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>> So Lars Brinkhoff and I were chatting about daemons:
>>
>>   https://gunkies.org/wiki/Talk:Daemon
>>
>> and I pointed out that in addition to 'standard' daemons (e.g. the printer
>> spooler daemon, email daemon, etc, etc) there are some other things that are
>> daemon-like, but are fundamentally different in major ways (explained later
>> below). I dubbed them 'system processes', but I'm wondering if anyone knows if
>> there is a standard term for them? (Or, failing that, if they have a
>> suggestion for a better name?)
>>
>>
>> Early UNIX is one of the first systems to have one (process 0, the "scheduling (swapping)
>> process"), but the CACM "The UNIX Time-Sharing System" paper:
>>
>>   https://people.eecs.berkeley.edu/~brewer/cs262/unix.pdf
>>
>> doesn't even mention it, so no guidance there. Berkeley UNIX also has one,
>> mentioned in "Design and Implementation of the Berkeley Virtual Memory
>> Extensions to the UNIX Operating System":
>>
>>   http://roguelife.org/~fujita/COOKIES/HISTORY/3BSD/design.pdf
>>
>> where it is called the "pageout daemon".("During system initialization, just
>> before the init process is created, the bootstrapping code creates process 2
>> which is known as the pageout daemon. It is this process that .. writ[es]
>> back modified pages. The process leaves its normal dormant state upon being
>> waken up due to the memory free list size dropping below an upper
>> threshold.") However, I think there are good reasons to dis-favour the term
>> 'daemon' for them.
>>
>>
>> For one thing, typical daemons look (to the kernel) just like 'normal'
>> processes: their object code is kept in a file, and is loaded into the
>> daemon's process when it starts, using the same mechanism that 'normal'
>> processes use for loading their code; daemons are often started long after
>> the kernel itself is started, and there is usually not a special mechanism in
>> the kernel to start daemons (on early UNIXes, /etc/rc is run by the 'init'
>> process, not the kernel); daemons interact with the kernel through system
>> calls, just like 'ordinary' processes; the daemon's process runs in 'user'
>> CPU mode (using the same standard memory mapping mechanisms, just like
>> blah-blah).
>>
>> 'System processes' do none of these things: their object code is linked into
>> the monolithic kernel, and is thus loaded by the bootstrap; the kernel
>> contains special provision for starting the system process, which start as
>> the kernel is starting; they don't do system calls, just call kernel routines
>> directly; they run in kernel mode, using the same memory mapping as the
>> kernel itself; etc, etc.
>>
>> Another important point is that system processes are highly intertwined with
>> the operation of the kernel; without the system process(es) operating
>> correctly, the operation of the system will quickly grind to a halt. The loss
>> of 'ordinary' daemons is usually not fatal; if the email daemon dies, the
>> system will keep running indefinitely. Not so, for the swapping process, or
>> the pageout daemon
>>
>>
>> Anyway, is there a standard term for these things? If not, a better name than
>> 'system process'?
>>
>>         Noel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 23:29 Noel Chiappa
  2023-12-14 23:54 ` Larry McVoy
  2023-12-15  1:15 ` Bakul Shah
@ 2023-12-15 13:43 ` Dan Cross
  2023-12-19 13:54 ` Derek Fawcus via COFF
  3 siblings, 0 replies; 22+ messages in thread
From: Dan Cross @ 2023-12-15 13:43 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: coff

On Thu, Dec 14, 2023 at 7:07 PM Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
>     > Now I'd probably call them kernel threads as they don't have a separate
>     > address space.
>
> Makes sense. One query about stacks, and blocking, there. Do kernel threads,
> in general, have per-thread stacks; so that they can block (and later resume
> exactly where they were when they blocked)?
>
> That was the thing that, I think, made kernel processes really attractive as
> a kernel structuring tool; you get code like this (from V6):
>
>         swap(rp->p_addr, a, rp->p_size, B_READ);
>         mfree(swapmap, (rp->p_size+7)/8, rp->p_addr);
>
> The call to swap() blocks until the I/O operation is complete, whereupon that
> call returns, and away one goes. Very clean and simple code.

Assuming we're talking about Unix, yes, each process has two stacks:
one for userspace, one in the kernel.

The way I've always thought about it, every process has two parts: the
userspace part, and a matching thread in the kernel. When Unix is
running, it is always running in the context of _some_ process (modulo
early boot, before any processes have been created, of course).
Furthermore, when the process is running in user mode, the kernel
stack is empty. When a process traps into the kernel, it's running on
the kernel stack for the corresponding kthread.

Processes may enter the kernel in one of two ways: directly, by
invoking a system call, or indirectly, by taking an interrupt. In the
latter case, the kernel simply runs the interrupt handler within the
context of whatever process happened to be running when the interrupt
occurred. In both cases, one usually says that the process is either
"running in userspace" (ie, normal execution of whatever program is
running in the process) or "running in the kernel" (that is, the
kernel is executing in the context of that process).

Note that this affects behavior around blocking operations.
Traditionally, Unix device drivers had a notion of an "upper half" and
a "lower half." The upper half is the code that is invoked on behalf
of a process requesting services from the kernel via some system call;
the lower half is the code that runs in response to an interrupt for
the corresponding device. Since it's impossible in general to know
what process is running when an interrupt fires, it was important not
to perform operations that would cause the current process to be
unscheduled in an interrupt handler; hence the old adage, "don't sleep
in the bottom half of a device driver" (where sleep here means sleep
as in "sleep and wakeup", a la a condition variable, not "sleep for
some amount of time"): you would block some random process, which may
never be woken up again!

An interesting aside here is signals. We think of them as an
asynchronous mechanism for interrupting a process, but their delivery
must be coordinated by the kernel; in particular, if I send a signal
to a process that is running in userspace, it (typically) won't be
delivered right away; rather, it will be delivered the next time the
process is scheduled to run, as the process must enter the kernel
before delivery can be effected. Signal delivery is a synthetic event,
unlike the delivery of a hardware interrupt, and the upcall happens in
userspace.

> Use of a kernel process probably makes the BSD pageout daemon code fairly
> straightforward, too (well, as straightforward as anything done by Berzerkly
> was :-).
>
>
> Interestingly, other early systems don't seem to have thought of this
> structuring technique. I assumed that Multics used a similar technique to
> write 'dirty' pages out, to maintain a free list. However, when I looked in
> the Multics Storage System Program Logic Manual:
>
>   http://www.bitsavers.org/pdf/honeywell/large_systems/multics/AN61A_storageSysPLM_Sep78.pdf
>
> Multics just writes dirty pages as part of the page fault code: "This
> starting of writes is performed by the subroutine claim_mod_core in
> page_fault. This subroutine is invoked at the end of every page fault." (pg.
> 8-36, pg. 166 of the PDF.) (Which also increases the real-time delay to
> complete dealing with a page fault.)

Note that this says, "starting of writes." Presumably, the writes
themselves were asynchronous; this just initiates the operations. It
certainly adds latency to the page fault handler, but not as much as
waiting for the operations to complete!

> It makes sense to have a kernel process do this; having the page fault code
> do it just makes that code more complicated. (The code in V6 to swap
> processes in and out is beautifully simple.) But it's apparently only obvious
> in retrospect (like many brilliant ideas :-).

I can kinda sorta see a method in the madness of the Multics approach.
If you think that page faults are relatively rare, and initiating IO
is relatively cheap but still more expensive than executing "normal"
instructions, then it makes some sense that you might want to amortize
the cost of that by piggybacking one on the other. Of course, that's
just speculation and I don't really have a sense for how well that
worked out in Multics (which I have played around with and read about,
but still seems largely mysterious to me). In the Unix model, you've
got scheduling latency to deal with to run the pageout daemon; of
course, that all happened as part of a context switch, and in early
Unix there was no demand paging (and so I suppose page faults were
considered fatal).

That said, using threads as an organizational metaphor for structured
concurrency in the kernel is wonderful compared to many of the
alternatives (hand-coded state machines, for example).

        - Dan C.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 21:48 [COFF] " Noel Chiappa
  2023-12-14 22:06 ` [COFF] " Bakul Shah
  2023-12-14 22:09 ` Clem Cole
@ 2023-12-15  6:24 ` Lars Brinkhoff
  2023-12-15 18:30   ` Stuff Received
  2 siblings, 1 reply; 22+ messages in thread
From: Lars Brinkhoff @ 2023-12-15  6:24 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: coff

For the record, daring to stray outside Unix, ITS originally had a
"system job", and later split off a "core job".  Both running in monitor
mode.  Job means the same thing as process here, and monitor same as
kernel.

--- >8 --- cut and stop reading here --- >8 --- off topic --- >8 ---

In ITS terminology, "demon" means a (user space) process that is started
on demand by the system.  This could be due to an external event (say, a
network connection), or another process indicating a need for a service.
The demon may go away right after it has done its job, or linger around
for a while.

There's a separate term "dragon" which means a continuously running
background process.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 23:29 Noel Chiappa
  2023-12-14 23:54 ` Larry McVoy
@ 2023-12-15  1:15 ` Bakul Shah
  2023-12-15 17:51   ` Paul Winalski
  2023-12-15 13:43 ` Dan Cross
  2023-12-19 13:54 ` Derek Fawcus via COFF
  3 siblings, 1 reply; 22+ messages in thread
From: Bakul Shah @ 2023-12-15  1:15 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: coff

On Dec 14, 2023, at 3:29 PM, Noel Chiappa <jnc@mercury.lcs.mit.edu> wrote:
> 
>> From: Bakul Shah
> 
>> Now I'd probably call them kernel threads as they don't have a separate
>> address space.
> 
> Makes sense. One query about stacks, and blocking, there. Do kernel threads,
> in general, have per-thread stacks; so that they can block (and later resume
> exactly where they were when they blocked)?

Exactly! If blocking is not required, you can do the work in an
interrupt handler. If blocking is required, you can't just use the
stack of a random process (while in supervisor mode) unless you
are doing some work specifically on its behalf.

> Interestingly, other early systems don't seem to have thought of this
> structuring technique.

I suspect IBM operating systems probably did use them. At least TSO
must have. Once you start *accounting* (and charging) for cpu time,
this idea must fall out naturally. You don't want to charge a process
for kernel time used for unrelated work!

*Accounting* is an interesting thing to think about. In a microkernel
where most of the work is done by user mode services, how do you keep
track of time and resources used by a process (or user)? This can
matter for things like latency, which may be part of your service level
agreement (SLA). This should also have a bearing on modern network
based services.

> It makes sense to have a kernel process do this; having the page fault code
> do it just makes that code more complicated. (The code in V6 to swap
> processes in and out is beautifully simple.) But it's apparently only obvious
> in retrospect (like many brilliant ideas :-).

There was a race condition in the V7 swapping code. Once a colleague and I
spent two weeks of 16-hour debugging days! We were encrypting before
swapping out and decrypting before swapping in. This changed timing enough
that the bug manifested itself (in the end about 2-3 times a day when the
system was running 5 or 6 kernel builds in parallel!). This is when I
really understood "You are not expected to understand this." :-)


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 23:29 Noel Chiappa
@ 2023-12-14 23:54 ` Larry McVoy
  2023-12-15  1:15 ` Bakul Shah
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 22+ messages in thread
From: Larry McVoy @ 2023-12-14 23:54 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: coff

On Thu, Dec 14, 2023 at 06:29:35PM -0500, Noel Chiappa wrote:
>     > From: Bakul Shah
> 
>     > Now I'd probably call them kernel threads as they don't have a separate
>     > address space.
> 
> Makes sense. One query about stacks, and blocking, there. Do kernel threads,
> in general, have per-thread stacks; so that they can block (and later resume
> exactly where they were when they blocked)?

Yep, threads have stacks, not sure how they could work without them.
Which reminds me of some Solaris insanity.  Back when I was at Sun, they
were threading the VM and I/O system.  The kernel was pretty bloated
and a stack was 2 8K pages.  The I/O people wanted to allocate a kernel
thread *per page* that was being sent to disk/network.  I pointed out
that this means if all of memory wants to head to disk, your dirty page
cache is 1/3 of memory because the other 2/3s were thread stacks.  They
ignored me, implemented it, and it was a miserable failure and they had
to start over.  They just didn't believe in basic math.

> Use of a kernel process probably makes the BSD pageout daemon code fairly
> straightforward, too (well, as straightforward as anything done by Berzerkly
> was :-).

I have immense respect for the BSD pageout daemon code.  When I did UFS
clustering to make I/O go at the platter speed (rather than 1/2 the
platter speed), it caused a big problem because the pageout daemon
could not keep up with UFS: UFS used up page cache much faster than the
pageout daemon could free pages.

I wrote somewhere around 13 different pageout daemons in an attempt to do
better than the BSD one.  And, in certain cases, I did do better.  All of
them did at least a little better.  But none of them did better in all
cases.  

That BSD code was subtly awesome, I was at the top of my game and couldn't
beat it.  I ended up implementing a "free behind" in UFS, I'd watch the
variable that controlled kicking the pageout code into running, and I'd
start freeing behind when it was getting close to running (but enough 
ahead that the pageout daemon wouldn't wake up).  It's a really gross
hack but it was the best that I could come up with.

--lm

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
@ 2023-12-14 23:29 Noel Chiappa
  2023-12-14 23:54 ` Larry McVoy
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Noel Chiappa @ 2023-12-14 23:29 UTC (permalink / raw)
  To: coff; +Cc: jnc

    > From: Bakul Shah

    > Now I'd probably call them kernel threads as they don't have a separate
    > address space.

Makes sense. One query about stacks, and blocking, there. Do kernel threads,
in general, have per-thread stacks; so that they can block (and later resume
exactly where they were when they blocked)?

That was the thing that, I think, made kernel processes really attractive as
a kernel structuring tool; you get code like this (from V6):

	swap(rp->p_addr, a, rp->p_size, B_READ);
	mfree(swapmap, (rp->p_size+7)/8, rp->p_addr);

The call to swap() blocks until the I/O operation is complete, whereupon that
call returns, and away one goes. Very clean and simple code.

Use of a kernel process probably makes the BSD pageout daemon code fairly
straightforward, too (well, as straightforward as anything done by Berzerkly
was :-).


Interestingly, other early systems don't seem to have thought of this
structuring technique. I assumed that Multics used a similar technique to
write 'dirty' pages out, to maintain a free list. However, when I looked in
the Multics Storage System Program Logic Manual:

  http://www.bitsavers.org/pdf/honeywell/large_systems/multics/AN61A_storageSysPLM_Sep78.pdf

Multics just writes dirty pages as part of the page fault code: "This
starting of writes is performed by the subroutine claim_mod_core in
page_fault. This subroutine is invoked at the end of every page fault." (pg.
8-36, pg. 166 of the PDF.) (Which also increases the real-time delay to
complete dealing with a page fault.)

It makes sense to have a kernel process do this; having the page fault code
do it just makes that code more complicated. (The code in V6 to swap
processes in and out is beautifully simple.) But it's apparently only obvious
in retrospect (like many brilliant ideas :-).

	Noel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 22:06 ` [COFF] " Bakul Shah
@ 2023-12-14 22:12   ` Warner Losh
  0 siblings, 0 replies; 22+ messages in thread
From: Warner Losh @ 2023-12-14 22:12 UTC (permalink / raw)
  To: Bakul Shah; +Cc: jnc, coff

[-- Attachment #1: Type: text/plain, Size: 4261 bytes --]

When shutting down, FreeBSD refers to them as:
kern/kern_shutdown.c: printf("Waiting (max %d seconds) for system process
`%s' to stop... ",
kern/kern_shutdown.c: printf("Waiting (max %d seconds) for system thread
`%s' to stop... ",

However, a number of places, including the swap daemon, still refer to
things as this daemon or that daemon (the page daemon being top of the
list, but there's also the buf daemon, the vmdaemon that handles
swapping (as opposed to paging), the update daemon, etc.).

Warner

On Thu, Dec 14, 2023 at 3:06 PM Bakul Shah <bakul@iitbombay.org> wrote:

> I remember calling them kernel processes as they had no code running in
> user mode. Not sure now of the year but sometime in ‘80s. Now I’d probably
> call them kernel threads as they don’t have a separate address space.
>
> > On Dec 14, 2023, at 1:48 PM, jnc@mercury.lcs.mit.edu wrote:
> >
> > So Lars Brinkhoff and I were chatting about daemons:
> >
> >  https://gunkies.org/wiki/Talk:Daemon
> >
> > and I pointed out that in addition to 'standard' daemons (e.g. the
> printer
> > spooler daemon, email daemon, etc, etc) there are some other things that
> are
> > daemon-like, but are fundamentally different in major ways (explained
> later
> > below). I dubbed them 'system processes', but I'm wondering if anyone
> knows if
> > there is a standard term for them? (Or, failing that, if they have a
> > suggestion for a better name?)
> >
> >
> > Early UNIX is one of the first systems to have one (process 0, the
> "scheduling (swapping)
> > process"), but the CACM "The UNIX Time-Sharing System" paper:
> >
> >  https://people.eecs.berkeley.edu/~brewer/cs262/unix.pdf
> >
> > doesn't even mention it, so no guidance there. Berkeley UNIX also has
> one,
> > mentioned in "Design and Implementation of the Berkeley Virtual Memory
> > Extensions to the UNIX Operating System":
> >
> >  http://roguelife.org/~fujita/COOKIES/HISTORY/3BSD/design.pdf
> >
> > where it is called the "pageout daemon".("During system initialization,
> just
> > before the init process is created, the bootstrapping code creates
> process 2
> > which is known as the pageout daemon. It is this process that .. writ[es]
> > back modified pages. The process leaves its normal dormant state upon
> being
> > waken up due to the memory free list size dropping below an upper
> > threshold.") However, I think there are good reasons to dis-favour the
> term
> > 'daemon' for them.
> >
> >
> > For one thing, typical daemons look (to the kernel) just like 'normal'
> > processes: their object code is kept in a file, and is loaded into the
> > daemon's process when it starts, using the same mechanism that 'normal'
> > processes use for loading their code; daemons are often started long
> after
> > the kernel itself is started, and there is usually not a special
> mechanism in
> > the kernel to start daemons (on early UNIXes, /etc/rc is run by the
> 'init'
> > process, not the kernel); daemons interact with the kernel through system
> > calls, just like 'ordinary' processes; the daemon's process runs in
> 'user'
> > CPU mode (using the same standard memory mapping mechanisms, just like
> > blah-blah).
> >
> > 'System processes' do none of these things: their object code is linked
> into
> > the monolithic kernel, and is thus loaded by the bootstrap; the kernel
> > contains special provision for starting the system process, which start
> as
> > the kernel is starting; they don't do system calls, just call kernel
> routines
> > directly; they run in kernel mode, using the same memory mapping as the
> > kernel itself; etc, etc.
> >
> > Another important point is that system processes are highly intertwined
> with
> > the operation of the kernel; without the system process(es) operating
> > correctly, the operation of the system will quickly grind to a halt. The
> loss
> > of 'ordinary' daemons is usually not fatal; if the email daemon dies, the
> > system will keep running indefinitely. Not so, for the swapping process,
> or
> > the pageout daemon
> >
> >
> > Anyway, is there a standard term for these things? If not, a better name
> than
> > 'system process'?
> >
> >    Noel
>

[-- Attachment #2: Type: text/html, Size: 5426 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 21:48 [COFF] " Noel Chiappa
  2023-12-14 22:06 ` [COFF] " Bakul Shah
@ 2023-12-14 22:09 ` Clem Cole
  2023-12-15 14:20   ` Dan Cross
  2023-12-15  6:24 ` Lars Brinkhoff
  2 siblings, 1 reply; 22+ messages in thread
From: Clem Cole @ 2023-12-14 22:09 UTC (permalink / raw)
  To: Noel Chiappa; +Cc: coff

[-- Attachment #1: Type: text/plain, Size: 3929 bytes --]

I don't know of a standard name.  We used to call them kernel processes
or kernel threads also.  For instance, in the original Masscomp EFS
code, we had a handful of processes that got forked after the pager,
using kernel code.  Since the basic UNIX read/write-from-user-space
scheme is synchronous, the premade pool of kernel processes was
dispatched as needed when we listened for asynchronous remote requests
for I/O. This is similar to the fact that asynchronous devices such as
serial or network interfaces need a pool of memory to stuff things
into, since you never know ahead of time when data will arrive.

On Thu, Dec 14, 2023 at 4:48 PM Noel Chiappa <jnc@mercury.lcs.mit.edu>
wrote:

> So Lars Brinkhoff and I were chatting about daemons:
>
>   https://gunkies.org/wiki/Talk:Daemon
>
> and I pointed out that in addition to 'standard' daemons (e.g. the printer
> spooler daemon, email daemon, etc, etc) there are some other things that
> are
> daemon-like, but are fundamentally different in major ways (explained later
> below). I dubbed them 'system processes', but I'm wondering if anyone knows
> if
> there is a standard term for them? (Or, failing that, if they have a
> suggestion for a better name?)
>
>
> Early UNIX is one of the first systems to have one (process 0, the
> "scheduling (swapping)
> process"), but the CACM "The UNIX Time-Sharing System" paper:
>
>   https://people.eecs.berkeley.edu/~brewer/cs262/unix.pdf
>
> doesn't even mention it, so no guidance there. Berkeley UNIX also has one,
> mentioned in "Design and Implementation of the Berkeley Virtual Memory
> Extensions to the UNIX Operating System":
>
>   http://roguelife.org/~fujita/COOKIES/HISTORY/3BSD/design.pdf
>
> where it is called the "pageout daemon".("During system initialization,
> just
> before the init process is created, the bootstrapping code creates process
> 2
> which is known as the pageout daemon. It is this process that .. writ[es]
> back modified pages. The process leaves its normal dormant state upon being
> waken up due to the memory free list size dropping below an upper
> threshold.") However, I think there are good reasons to dis-favour the term
> 'daemon' for them.
>
>
> For one thing, typical daemons look (to the kernel) just like 'normal'
> processes: their object code is kept in a file, and is loaded into the
> daemon's process when it starts, using the same mechanism that 'normal'
> processes use for loading their code; daemons are often started long after
> the kernel itself is started, and there is usually not a special mechanism
> in
> the kernel to start daemons (on early UNIXes, /etc/rc is run by the 'init'
> process, not the kernel); daemons interact with the kernel through system
> calls, just like 'ordinary' processes; the daemon's process runs in 'user'
> CPU mode (using the same standard memory mapping mechanisms, just like
> blah-blah).
>
> 'System processes' do none of these things: their object code is linked
> into
> the monolithic kernel, and is thus loaded by the bootstrap; the kernel
> contains special provision for starting the system process, which start as
> the kernel is starting; they don't do system calls, just call kernel
> routines
> directly; they run in kernel mode, using the same memory mapping as the
> kernel itself; etc, etc.
>
> Another important point is that system processes are highly intertwined
> with
> the operation of the kernel; without the system process(es) operating
> correctly, the operation of the system will quickly grind to a halt. The
> loss
> of 'ordinary' daemons is usually not fatal; if the email daemon dies, the
> system will keep running indefinitely. Not so, for the swapping process, or
> the pageout daemon
>
>
> Anyway, is there a standard term for these things? If not, a better name
> than
> 'system process'?
>
>         Noel
>

[-- Attachment #2: Type: text/html, Size: 5141 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [COFF] Re: Terminology query - 'system process'?
  2023-12-14 21:48 [COFF] " Noel Chiappa
@ 2023-12-14 22:06 ` Bakul Shah
  2023-12-14 22:12   ` Warner Losh
  2023-12-14 22:09 ` Clem Cole
  2023-12-15  6:24 ` Lars Brinkhoff
  2 siblings, 1 reply; 22+ messages in thread
From: Bakul Shah @ 2023-12-14 22:06 UTC (permalink / raw)
  To: jnc; +Cc: coff

I remember calling them kernel processes as they had no code running in user mode. Not sure now of the year, but sometime in the ‘80s. Now I’d probably call them kernel threads as they don’t have a separate address space.

> On Dec 14, 2023, at 1:48 PM, jnc@mercury.lcs.mit.edu wrote:
> 
> So Lars Brinkhoff and I were chatting about daemons:
> 
>  https://gunkies.org/wiki/Talk:Daemon
> 
> and I pointed out that in addition to 'standard' daemons (e.g. the printer
> spooler daemon, email daemon, etc, etc) there are some other things that are
> daemon-like, but are fundamentally different in major ways (explained later
> below). I dubbed them 'system processes', but I'm wondering if anyone knows if
> there is a standard term for them? (Or, failing that, if they have a
> suggestion for a better name?)
> 
> 
> Early UNIX is one of the first systems to have one (process 0, the "scheduling (swapping)
> process"), but the CACM "The UNIX Time-Sharing System" paper:
> 
>  https://people.eecs.berkeley.edu/~brewer/cs262/unix.pdf
> 
> doesn't even mention it, so no guidance there. Berkeley UNIX also has one,
> mentioned in "Design and Implementation of the Berkeley Virtual Memory
> Extensions to the UNIX Operating System":
> 
>  http://roguelife.org/~fujita/COOKIES/HISTORY/3BSD/design.pdf
> 
> where it is called the "pageout daemon". ("During system initialization, just
> before the init process is created, the bootstrapping code creates process 2
> which is known as the pageout daemon. It is this process that .. writ[es]
> back modified pages. The process leaves its normal dormant state upon being
> waken up due to the memory free list size dropping below an upper
> threshold.") However, I think there are good reasons to dis-favour the term
> 'daemon' for them.
> 
> 
> For one thing, typical daemons look (to the kernel) just like 'normal'
> processes: their object code is kept in a file, and is loaded into the
> daemon's process when it starts, using the same mechanism that 'normal'
> processes use for loading their code; daemons are often started long after
> the kernel itself is started, and there is usually not a special mechanism in
> the kernel to start daemons (on early UNIXes, /etc/rc is run by the 'init'
> process, not the kernel); daemons interact with the kernel through system
> calls, just like 'ordinary' processes; the daemon's process runs in 'user'
> CPU mode (using the same standard memory mapping mechanisms, just like
> blah-blah).
> 
> 'System processes' do none of these things: their object code is linked into
> the monolithic kernel, and is thus loaded by the bootstrap; the kernel
> contains special provision for starting the system processes, which start as
> the kernel is starting; they don't do system calls, just call kernel routines
> directly; they run in kernel mode, using the same memory mapping as the
> kernel itself; etc, etc.
> 
> Another important point is that system processes are highly intertwined with
> the operation of the kernel; without the system process(es) operating
> correctly, the operation of the system will quickly grind to a halt. The loss
> of 'ordinary' daemons is usually not fatal; if the email daemon dies, the
> system will keep running indefinitely. Not so, for the swapping process, or
> the pageout daemon.
> 
> 
> Anyway, is there a standard term for these things? If not, a better name than
> 'system process'?
> 
>    Noel

^ permalink raw reply	[flat|nested] 22+ messages in thread
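[Editorial aside: the distinction Noel draws above — a daemon is a separate program loaded from a file into its own process, while a "system process" is code linked into the kernel itself, sharing its address space and calling its routines directly — can be caricatured in user space. The sketch below is purely an analogy, not kernel code: the main program stands in for the kernel, a thread stands in for the pageout daemon, and all names (`pageout_worker`, `free_list`) are invented for illustration.]

```python
import threading

free_list = []                 # shared state, akin to the kernel's free-page list
wakeup = threading.Event()

def pageout_worker():
    # A user-space caricature of a "system process": code linked into this
    # very program, sharing its address space and touching its data
    # structures directly -- no exec() of a separate object file, and no
    # system-call boundary between it and the "kernel" (the main program).
    wakeup.wait()              # dormant until memory pressure "wakes" it
    free_list.append("reclaimed page")

t = threading.Thread(target=pageout_worker)
t.start()
wakeup.set()                   # simulate the free list dropping below threshold
t.join()
print(free_list)
```

A conventional daemon, by contrast, would be a separate executable started via fork()/exec() from /etc/rc — which is why Bakul's term "kernel thread" fits: the defining property is the shared address space, not the daemon-like behaviour.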

end of thread, other threads:[~2023-12-31  3:51 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-20 20:35 [COFF] Re: Terminology query - 'system process'? Noel Chiappa
  -- strict thread matches above, loose matches on Subject: below --
2023-12-20 19:31 Noel Chiappa
2023-12-20 20:29 ` Paul Winalski
2023-12-31  3:51 ` steve jenkin
2023-12-14 23:29 Noel Chiappa
2023-12-14 23:54 ` Larry McVoy
2023-12-15  1:15 ` Bakul Shah
2023-12-15 17:51   ` Paul Winalski
2023-12-15 18:08     ` Warner Losh
2023-12-16  2:04     ` Greg 'groggy' Lehey
2023-12-16 19:21       ` Paul Winalski
2023-12-16 19:44         ` Paul Winalski
2023-12-15 13:43 ` Dan Cross
2023-12-19 13:54 ` Derek Fawcus via COFF
2023-12-14 21:48 [COFF] " Noel Chiappa
2023-12-14 22:06 ` [COFF] " Bakul Shah
2023-12-14 22:12   ` Warner Losh
2023-12-14 22:09 ` Clem Cole
2023-12-15 14:20   ` Dan Cross
2023-12-15 16:25     ` Warner Losh
2023-12-15 17:13     ` Bakul Shah
2023-12-15  6:24 ` Lars Brinkhoff
2023-12-15 18:30   ` Stuff Received

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).