The Unix Heritage Society mailing list
* [TUHS] Quotas - did anyone ever use them?
@ 2019-05-30 13:49 David
  2019-05-30 14:23 ` Diomidis Spinellis
                   ` (13 more replies)
  0 siblings, 14 replies; 32+ messages in thread
From: David @ 2019-05-30 13:49 UTC (permalink / raw)
  To: The Eunuchs Hysterical Society

I think it was BSD 4.1 that added quotas to the disk system, and I was just wondering if anyone ever used them, in academia or industry. As a user and an admin I never used this and, while I thought it was interesting, just figured that the users would sort it out amongst themselves. Which they mostly did.

So, anyone ever use this feature?

	David


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
@ 2019-05-30 14:23 ` Diomidis Spinellis
  2019-05-30 14:26 ` Dan Cross
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Diomidis Spinellis @ 2019-05-30 14:23 UTC (permalink / raw)
  To: The Eunuchs Hysterical Society

When I was a university student in the 1980s, we had a cluster of Sun 3/50s using 
a Gould computer as a file server over NFS.  The system administrators 
had quotas enabled (5MB per student) to prevent students gobbling up 
disk space.

Here's an email I received when I used more disk space than what was 
allowed.

> From: Tim [...] <[...]@doc.ic.ac.uk>
> Date: Tue, 13 Feb 90 10:54:18 GMT
> X-Mailer: Mail User's Shell (7.0.3 12/22/89)
> To: zmact61@doc.ic.ac.uk
> Subject: quota
> Message-ID:  <9002131054.aa20550@tgould.doc.ic.ac.uk>
> Status: ORr
> 
> D Spinellis
> 
> I see you have taken advantage of a small admin. oversight when
> no quotas were set for your class.
> 
> On Friday 16th I shall be setting your disk quota to 5Mb, and your 
> inode quota to 500.   Please take appropriate action.
> 
> Disk quotas for (no account) (uid 1461):
> Filesystem     usage  quota  limit    timeleft  files  quota  limit    timeleft
> /home/gould/teach
>                18545  20000  21000               1602   1700   1750            
> /home/gould/staff
>                    0    200    250                  0     40     50      


Diomidis

On 30-May-19 16:49, David wrote:
> I think it was BSD 4.1 that added quotas to the disk system, and I was just wondering if anyone ever used them, in academia or industry. As a user and an admin I never used this and, while I thought it was interesting, just figured that the users would sort it out amongst themselves. Which they mostly did.
> 
> So, anyone ever use this feature?
> 
> 	David
> 
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
  2019-05-30 14:23 ` Diomidis Spinellis
@ 2019-05-30 14:26 ` Dan Cross
  2019-05-30 14:27 ` Rico Pajarola
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Dan Cross @ 2019-05-30 14:26 UTC (permalink / raw)
  To: david; +Cc: The Eunuchs Hysterical Society

[-- Attachment #1: Type: text/plain, Size: 968 bytes --]

On Thu, May 30, 2019 at 9:58 AM David <david@kdbarto.org> wrote:

> I think it was BSD 4.1 that added quotas to the disk system, and I was
> just wondering if anyone ever used them, in academia or industry. As a user
> and an admin I never used this and, while I thought it was interesting,
> just figured that the users would sort it out amongst themselves. Which
> they mostly did.
>
> So, anyone ever use this feature?
>

Oh yes. I used them in multiple places; in fact, on a public access Unix
system that I still (!!) am active on (well, not really...grex.org) we use
them, mostly to try to limit the effects of abuse.

Quotas were very useful in time shared and network environments, where you
didn't quite trust all of the users. For example, university networks where
undergrads were given accounts for course work, but could be otherwise
mischievous, and you didn't want them interfering with research happening
on the same system/network.

        - Dan C.

[-- Attachment #2: Type: text/html, Size: 1372 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
  2019-05-30 14:23 ` Diomidis Spinellis
  2019-05-30 14:26 ` Dan Cross
@ 2019-05-30 14:27 ` Rico Pajarola
  2019-05-31  7:16   ` George Ross
  2019-05-30 14:29 ` Robert Brockway
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Rico Pajarola @ 2019-05-30 14:27 UTC (permalink / raw)
  To: david; +Cc: The Eunuchs Hysterical Society

[-- Attachment #1: Type: text/plain, Size: 793 bytes --]

Every university that allowed "general" student access used disk quotas. I
remember a 1MB file quota on the system I had access to. And of course we
found all kinds of tricks to get around those quotas (e.g. "giving away"
large files to someone with leftover quota), which is why on many systems
you're not allowed to chown files if you're not root.
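
The giveaway went roughly like this (on systems where chown(2) was not
yet restricted to root):

	% chmod 666 bigfile        # keep it readable/writable after giving it away
	% chown friend bigfile     # now charged against friend's quota, not yours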

On Thu, May 30, 2019 at 3:58 PM David <david@kdbarto.org> wrote:

> I think it was BSD 4.1 that added quotas to the disk system, and I was
> just wondering if anyone ever used them, in academia or industry. As a user
> and an admin I never used this and, while I thought it was interesting,
> just figured that the users would sort it out amongst themselves. Which
> they mostly did.
>
> So, anyone ever use this feature?
>
>         David
>
>

[-- Attachment #2: Type: text/html, Size: 1113 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (2 preceding siblings ...)
  2019-05-30 14:27 ` Rico Pajarola
@ 2019-05-30 14:29 ` Robert Brockway
  2019-05-30 14:34 ` Theodore Ts'o
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Robert Brockway @ 2019-05-30 14:29 UTC (permalink / raw)
  To: David; +Cc: The Eunuchs Hysterical Society

On Thu, 30 May 2019, David wrote:

> I think it was BSD 4.1 that added quotas to the disk system, and I was 
> just wondering if anyone ever used them, in academia or industry. As a 
> user and an admin I never used this and, while I thought it was 
> interesting, just figured that the users would sort it out amongst 
> themselves. Which they mostly did.
>
> So, anyone ever use this feature?

As an undergrad in the 90s I was most definitely subject to quotas on 
university systems.  I believe Unix filesystem quotas have been widely 
used.

FWIW I have quotas enabled on my home fileserver (without enforcement) 
just so I can see user utilisation at a glance.
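
Roughly like this (device name and numbers illustrative; output trimmed):

	# repquota /home
	*** Report for user quotas on device /dev/sdb1
	                        Block limits                File limits
	User            used    soft    hard  grace    used  soft  hard  grace
	rob      --   848392       0       0           1213     0     0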

Cheers,

Rob

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (3 preceding siblings ...)
  2019-05-30 14:29 ` Robert Brockway
@ 2019-05-30 14:34 ` Theodore Ts'o
  2019-05-30 14:48   ` John P. Linderman
  2019-05-30 14:55 ` Andreas Kusalananda Kähäri
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Theodore Ts'o @ 2019-05-30 14:34 UTC (permalink / raw)
  To: David; +Cc: The Eunuchs Hysterical Society

On Thu, May 30, 2019 at 06:49:05AM -0700, David wrote:
> I think it was BSD 4.1 that added quotas to the disk system, and I
> was just wondering if anyone ever used them, in academia or
> industry. As a user and an admin I never used this and, while I
> thought it was interesting, just figured that the users would sort
> it out amongst themselves. Which they mostly did.
> 
> So, anyone ever use this feature?

Back when MIT Project Athena was using Vax/750's as time sharing
machines (before the advent of the MicroVax II workstations and AFS),
students were assigned to a Vax 750, and were given a quota of I think
a megabyte, at least initially.  It could be increased by applying to
the user accounts office.  Given that there were roughly 4,000
undergraduates sharing 5 or 6 Vax/750's, it was somewhat understandable...

	       	       	      		    - Ted

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 14:34 ` Theodore Ts'o
@ 2019-05-30 14:48   ` John P. Linderman
  2019-05-30 14:57     ` Jim Geist
  0 siblings, 1 reply; 32+ messages in thread
From: John P. Linderman @ 2019-05-30 14:48 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: The Eunuchs Hysterical Society

[-- Attachment #1: Type: text/plain, Size: 1360 bytes --]

We used them in an AT&T Labs research environment. The intent was less to
prevent users from selfishly grabbing (then semi-precious) disk space than
to prevent accidents from adversely affecting the user community at large.
If you *knew* you were going to need honking amounts of disk, the sysadmins
would raise the quota (probably on a partition dedicated to such
activities).

On Thu, May 30, 2019 at 10:40 AM Theodore Ts'o <tytso@mit.edu> wrote:

> On Thu, May 30, 2019 at 06:49:05AM -0700, David wrote:
> > I think it was BSD 4.1 that added quotas to the disk system, and I
> > was just wondering if anyone ever used them, in academia or
> > industry. As a user and an admin I never used this and, while I
> > thought it was interesting, just figured that the users would sort
> > it out amongst themselves. Which they mostly did.
> >
> > So, anyone ever use this feature?
>
> Back when MIT Project Athena was using Vax/750's as time sharing
> machines (before the advent of the MicroVax II workstations and AFS),
> students were assigned to a Vax 750, and were given a quota of I think
> a megabyte, at least initially.  It could be increased by applying to
> the user accounts office.  Given that there were roughly 4,000
> undergraduates sharing 5 or 6 Vax/750's, it was somewhat understandable...
>
>                                             - Ted
>

[-- Attachment #2: Type: text/html, Size: 1808 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (4 preceding siblings ...)
  2019-05-30 14:34 ` Theodore Ts'o
@ 2019-05-30 14:55 ` Andreas Kusalananda Kähäri
  2019-05-30 15:00 ` KatolaZ
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Andreas Kusalananda Kähäri @ 2019-05-30 14:55 UTC (permalink / raw)
  To: David; +Cc: The Eunuchs Hysterical Society

On Thu, May 30, 2019 at 06:49:05AM -0700, David wrote:
> I think it was BSD 4.1 that added quotas to the disk system, and I was just wondering if anyone ever used them, in academia or industry. As a user and an admin I never used this and, while I thought it was interesting, just figured that the users would sort it out amongst themselves. Which they mostly did.
> 
> So, anyone ever use this feature?
> 
> 	David

At my current workplace, we use quotas on shared systems (Linux
compute clusters and multi-user interactive login nodes).  When I was
at university (early 1990's), the SunOS and Solaris systems at the
department had the home directories mounted over NFS, and there were
quotas in effect on the file server.

On VM systems that I set up privately, where /home is not on a separate
partition, I use quotas for my own account (because sometimes I want to
use a really tiny disk, and limiting the size of /home is easier with
quotas than through partitioning off the correct size on the first try).

I have access to a shared non-work related OpenBSD and Linux system
which does *not* use quotas, and it makes me slightly nervous because
I don't actually know how much disk space I'm *allowed* to use without
being told off by a human operator.  It would have been better if they
had had quotas enabled and then made it easy to ask for more space
through a simple email to the admins.
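
For the own-account case above, a sketch with the Linux quota tools
(username and sizes arbitrary; assumes the filesystem is mounted with
usrquota):

	# setquota -u andreas 2000000 2200000 0 0 /
	#   ^ ~2 GB soft, ~2.2 GB hard (1K blocks); no inode limits
	# quota -u andreas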

-- 
Kusalananda
Sweden

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 14:48   ` John P. Linderman
@ 2019-05-30 14:57     ` Jim Geist
  0 siblings, 0 replies; 32+ messages in thread
From: Jim Geist @ 2019-05-30 14:57 UTC (permalink / raw)
  Cc: The Eunuchs Hysterical Society

[-- Attachment #1: Type: text/plain, Size: 1529 bytes --]

I have an account on a school system that uses them.

On Thu, May 30, 2019 at 10:49 AM John P. Linderman <jpl.jpl@gmail.com>
wrote:

> We used them in an AT&T Labs research environment. The intent was less to
> prevent users from selfishly grabbing (then semi-precious) disk space than
> to prevent accidents from adversely affecting the user community at large.
> If you *knew* you were going to need honking amounts of disk, the
> sysadmins would raise the quota (probably on a partition dedicated to such
> activities).
>
> On Thu, May 30, 2019 at 10:40 AM Theodore Ts'o <tytso@mit.edu> wrote:
>
>> On Thu, May 30, 2019 at 06:49:05AM -0700, David wrote:
>> > I think it was BSD 4.1 that added quotas to the disk system, and I
>> > was just wondering if anyone ever used them, in academia or
>> > industry. As a user and an admin I never used this and, while I
>> > thought it was interesting, just figured that the users would sort
>> > it out amongst themselves. Which they mostly did.
>> >
>> > So, anyone ever use this feature?
>>
>> Back when MIT Project Athena was using Vax/750's as time sharing
>> machines (before the advent of the MicroVax II workstations and AFS),
>> students were assigned to a Vax 750, and were given a quota of I think
>> a megabyte, at least initially.  It could be increased by applying to
>> the user accounts office.  Given that there were roughly 4,000
>> undergraduates sharing 5 or 6 Vax/750's, it was somewhat understandable...
>>
>>                                             - Ted
>>
>

[-- Attachment #2: Type: text/html, Size: 2238 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (5 preceding siblings ...)
  2019-05-30 14:55 ` Andreas Kusalananda Kähäri
@ 2019-05-30 15:00 ` KatolaZ
  2019-05-30 17:28 ` Grant Taylor via TUHS
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: KatolaZ @ 2019-05-30 15:00 UTC (permalink / raw)
  To: tuhs

[-- Attachment #1: Type: text/plain, Size: 1543 bytes --]

On Thu, May 30, 2019 at 06:49:05AM -0700, David wrote:
> I think it was BSD 4.1 that added quotas to the disk system, and I was just wondering if anyone ever used them, in academia or industry. As a user and an admin I never used this and, while I thought it was interesting, just figured that the users would sort it out amongst themselves. Which they mostly did.
> 
> So, anyone ever use this feature?
> 

As others have said already, disk (or better, filesystem) quotas have
been used widely in any environment with more than a few users. I
remember a 5MB quota at uni when I was an undergrad, and I definitely
remember when it was increased to 10MB :)

Filesystem quotas are currently used extensively in large computing
facilities (clusters and distributed computing systems of different
sort), and in virtually all pubnix systems (we have been using it for
"medialab" at freaknet for more than 20 years now...).

Over the years I have learnt that if unix has something that I think
is useless, then almost surely I have not bumped into the use case,
and the use case is normally important enough to explain why that
feature was made part of unix ;)

My2Cents

KatolaZ

-- 
[ ~.,_  Enzo Nicosia aka KatolaZ - Devuan -- Freaknet Medialab  ]  
[     "+.  katolaz [at] freaknet.org --- katolaz [at] yahoo.it  ]
[       @)   http://kalos.mine.nu ---  Devuan GNU + Linux User  ]
[     @@)  http://maths.qmul.ac.uk/~vnicosia --  GPG: 0B5F062F  ] 
[ (@@@)  Twitter: @KatolaZ - skype: katolaz -- github: KatolaZ  ]

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (6 preceding siblings ...)
  2019-05-30 15:00 ` KatolaZ
@ 2019-05-30 17:28 ` Grant Taylor via TUHS
  2019-05-30 19:19 ` Michael Kjörling
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Grant Taylor via TUHS @ 2019-05-30 17:28 UTC (permalink / raw)
  To: tuhs

[-- Attachment #1: Type: text/plain, Size: 911 bytes --]

On 5/30/19 7:49 AM, David wrote:
> So, anyone ever use this feature?

Yes.  I have been on both sides of disk / file system quotas, 
occasionally at the same time.

I remember them being applied when I was in school.

I applied them to mail servers I administered (50 MB quota).

I think I even used a combination of soft (50 MB) and hard (100 MB) 
quotas.  I really liked the soft/hard pair because someone could go over 
their quota for up to 7 days (?) until they hit the hard quota.  This 
gave some buffer for busy weekends and vacation time.
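
With the standard quota tools the pair is set per user, roughly like
this (numbers illustrative; edquota opens the entry in an editor):

	# edquota -u someuser
	Disk quotas for user someuser (uid 1004):
	  Filesystem   blocks    soft     hard    inodes   soft   hard
	  /dev/sda3    61440     51200    102400  1520     0      0

	# edquota -t    # sets the grace period; 7 days is the common default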

I also applied a quota to myself (and the other admins) specifically so 
that we couldn't accidentally fill the file system and DoS our users.  I 
think I gave us a 1 or 2 GB quota.  I do remember raising it once or 
twice for specific things.  It did save our collected back sides 
multiple times too.



-- 
Grant. . . .
unix || die


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4008 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (7 preceding siblings ...)
  2019-05-30 17:28 ` Grant Taylor via TUHS
@ 2019-05-30 19:19 ` Michael Kjörling
  2019-05-30 20:42 ` Warner Losh
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Michael Kjörling @ 2019-05-30 19:19 UTC (permalink / raw)
  To: tuhs

On 30 May 2019 06:49 -0700, from david@kdbarto.org (David):
> I think it was BSD 4.1 that added quotas to the disk system, and I
> was just wondering if anyone ever used them, in academia or
> industry. As a user and an admin I never used this and, while I
> thought it was interesting, just figured that the users would sort
> it out amongst themselves. Which they mostly did.
> 
> So, anyone ever use this feature?

Don't forget probably every ISP under the sun.

My first Internet account in 1995 (or possibly 1996) came with *nix
shell access (please don't ask me what variant; I used it mostly to
run pine, and occasionally pico, and it was dial-up which was charged
per minute by the phone company) and a 4 MB quota, which you could pay
to have increased. That quota covered everything in your $HOME; as I
recall, including e-mail, and definitely including ~/public_html.

These days, it seems that with the exception of _really_ cheap
accounts, web host quotas are big enough that they for all intents and
purposes might as well not be there, even with today's bloated
content. Back then, even 4 MB for everything felt on the tight side,
and you certainly had to think about what you put there.

-- 
Michael Kjörling • https://michael.kjorling.se • michael@kjorling.se
  “The most dangerous thought that you can have as a creative person
              is to think you know what you’re doing.” (Bret Victor)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (8 preceding siblings ...)
  2019-05-30 19:19 ` Michael Kjörling
@ 2019-05-30 20:42 ` Warner Losh
  2019-05-30 22:23   ` George Michaelson
  2019-05-31  1:36 ` alan
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Warner Losh @ 2019-05-30 20:42 UTC (permalink / raw)
  To: david; +Cc: The Eunuchs Hysterical Society

[-- Attachment #1: Type: text/plain, Size: 756 bytes --]

On Thu, May 30, 2019 at 7:58 AM David <david@kdbarto.org> wrote:

> I think it was BSD 4.1 that added quotas to the disk system, and I was
> just wondering if anyone ever used them, in academia or industry. As a user
> and an admin I never used this and, while I thought it was interesting,
> just figured that the users would sort it out amongst themselves. Which
> they mostly did.

So, anyone ever use this feature?


The PDP-11 running RSTS/E 7.2 we had in high school had a quota of 100
blocks. It was almost enough to store three BASIC programs...

At college, our VAX running 4.3BSD used the quota system, but it was weird
in ways I can't recall now.... Maybe I couldn't logout when I was over
quota, but I worked around that with telnet...

Warner

[-- Attachment #2: Type: text/html, Size: 1262 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 20:42 ` Warner Losh
@ 2019-05-30 22:23   ` George Michaelson
  0 siblings, 0 replies; 32+ messages in thread
From: George Michaelson @ 2019-05-30 22:23 UTC (permalink / raw)
  To: The Eunuchs Hysterical Society

Unless it's been replaced, Robert Elz of Melbourne Uni wrote this. He's
still puttering along in Southern Thailand, doing his thing. (They had
the Australian dictionary of Aboriginal words, and every host at
Melbourne Uni started with MU, hence munnari.oz.au.) The same operations
unit wrote one of the first Apple File System codebases for BSD too.

Robert had a bizarre theory about body heat. He wore a jersey in
Hawaii mid-summer on the principle that the insulation kept the external
heat OUT.

He's also a cricket tragic: B&W TV by the console; in season, you had
to stop talking while the bowler was running and could engage during
the advert breaks.

(and yes, I used them in uni systems admin to constrain the wild
behaviours of the userbase. Imagine letting all those pristine ones
and zeros be .. tainted by data)

-G

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (9 preceding siblings ...)
  2019-05-30 20:42 ` Warner Losh
@ 2019-05-31  1:36 ` alan
  2019-05-31 19:07 ` Pete Wright
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: alan @ 2019-05-31  1:36 UTC (permalink / raw)
  To: The Eunuchs Hysterical Society


My story is similar to most others here.  University of Arkansas in the 
early 90s enabled user disk quotas on the public access UNIX system - a 
SPARCserver 2000 with 10 CPUs and gobs of disk.  Each quota limit was 
far higher than (total disk space) / (# user accounts).  But each of the 
17K students at the time was automatically provisioned an account during 
registration.  So it became a bit necessary to impose a limit - even if 
one that 98% of users would never come close to.

The dedicated college of engineering UNIX system was also a similarly 
equipped SPARCserver 2000 and did not enforce any quotas.  Unless the 
sys-admin yelling in the hallway at the povray guy eating 100% x 10 CPUs 
was technically a throttle / quota system.  He was efficient too.  
Zero'd in on the lab and workstation number by IP address and would be 
over your shoulder in less than 10K ms if you were acting a fool!

-Alan



On 2019-05-30 09:49, David wrote:
> I think it was BSD 4.1 that added quotas to the disk system, and I was
> just wondering if anyone ever used them, in academia or industry. As a
> user and an admin I never used this and, while I thought it was
> interesting, just figured that the users would sort it out amongst
> themselves. Which they mostly did.
> 
> So, anyone ever use this feature?
> 
> 	David

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 14:27 ` Rico Pajarola
@ 2019-05-31  7:16   ` George Ross
  0 siblings, 0 replies; 32+ messages in thread
From: George Ross @ 2019-05-31  7:16 UTC (permalink / raw)
  To: Rico Pajarola; +Cc: The Eunuchs Hysterical Society

>  And of course we
> found all kinds of tricks to get around those quotas (e.g. "giving away"
> large files to someone with leftover quota), which is why on many systems
> you're not allowed to chown files if you're not root.

When I took over running a labful of SunOS machines that were used by our 
CS students, one of the first things I did was turn off a few things in the 
system call vector for UIDs higher than a "staff" cutoff.  That included 
things like chown, chmod and setting the umask.  As well as making it 
harder for them to share solutions, it also had the advantage of making 
other systems more tempting targets...
-- 
George D M Ross MSc PhD CEng MBCS CITP
University of Edinburgh, School of Informatics,
Appleton Tower, 11 Crichton Street, Edinburgh, Scotland, EH8 9LE
Mail: gdmr@inf.ed.ac.uk   Voice: 0131 650 5147 
PGP: 1024D/AD758CC5  B91E D430 1E0D 5883 EF6A  426C B676 5C2B AD75 8CC5

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (10 preceding siblings ...)
  2019-05-31  1:36 ` alan
@ 2019-05-31 19:07 ` Pete Wright
  2019-05-31 20:43   ` Grant Taylor via TUHS
  2019-06-01  0:30 ` reed
  2019-06-04 13:50 ` Tony Finch
  13 siblings, 1 reply; 32+ messages in thread
From: Pete Wright @ 2019-05-31 19:07 UTC (permalink / raw)
  To: david, The Eunuchs Hysterical Society



On 5/30/19 6:49 AM, David wrote:
> I think it was BSD 4.1 that added quotas to the disk system, and I was just wondering if anyone ever used them, in academia or industry. As a user and an admin I never used this and, while I thought it was interesting, just figured that the users would sort it out amongst themselves. Which they mostly did.
>
> So, anyone ever use this feature?

Lots of interesting insights/stories on this thread, so I figured I'd throw 
my hat in the ring and share a business anecdote...

For quite a while I worked in the special effects/animation industry, 
where fortunately (for me) unix has a long and interesting history. One 
secret about the VFX world is that it's tremendously expensive with 
relatively little financial upside.  In my experience it is the studios 
who get most of the residual income from a blockbuster feature.  Also, a 
crew for an AAA feature requires lots of human power, computers and 
storage.  My shop frequently had 3-5 features in full production mode at 
the same time, so we were redlining all of our systems 24/7.

So aside from the cost of maintaining a large renderfarm, unix/linux 3d 
workstations, editing bays etc., we also had an enormous NetApp footprint 
to support all these systems.  Now artists love creating lots and lots 
of high resolution images, and if they had their way there would be 
unlimited storage so they'd never have to archive a shot to tape in the 
event they need to reference it later.  But obviously that's not 
reasonable from a financial perspective.

Our solution was to make heavy use of storage quotas in our environment, 
and then leverage quotas to provide per-department billing.  An 
individual user was given something like 1GB by default (here they had 
their mailbox, source-code, scripts etc), and the show they were booked 
on was then given an allocation of, say, 1TB of storage.  The show would 
then carve out this allocation on a per-shot basis.  This allowed us as 
an organization to keep pretty detailed records on our costs, which 
unfortunately isn't something I've seen replicated well at lots of 
startups flush with cash these days.

I was briefly on the team responsible for managing these quotas for a 
show and it was seriously an around the clock operation to keep our 
disks from filling up.  One of the tricks was to figure out how much 
space a rendered sequence of images would consume, factor in the 
time-to-render a frame and attempt to line up your backup jobs to free 
up enough space so the render nodes could write out the images to NFS.
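
As a back-of-the-envelope illustration (all numbers made up, not our
real figures):

	% frames=240 mb_per_frame=12 secs_per_frame=300
	% echo "$((frames * mb_per_frame)) MB for the sequence"
	2880 MB for the sequence
	% echo "$((frames * secs_per_frame / 3600)) hours until that space is needed"
	20 hours until that space is needed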

-pete

-- 

Pete Wright
pete@nomadlogic.org
@nomadlogicLA


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31 19:07 ` Pete Wright
@ 2019-05-31 20:43   ` Grant Taylor via TUHS
  2019-05-31 20:59     ` Pete Wright
  0 siblings, 1 reply; 32+ messages in thread
From: Grant Taylor via TUHS @ 2019-05-31 20:43 UTC (permalink / raw)
  To: tuhs

[-- Attachment #1: Type: text/plain, Size: 972 bytes --]

On 5/31/19 1:07 PM, Pete Wright wrote:
> An individual user was given something like 1GB by default (here they 
> had their mailbox, source-code, scripts etc), and the show they were 
> booked on was then given an allocation of, say, 1TB of storage.

It sounds like you are talking about group quotas in addition to each 
individual user's /user/ quota.  Is that correct?  Or was this more an 
imposed file system limit in lieu of group quotas?

> I was briefly on the team responsible for managing these quotas for a 
> show and it was seriously an around the clock operation to keep our disks 
> from filling up.  One of the tricks was to figure out how much space a 
> rendered sequence of images would consume, factor in the time-to-render 
> a frame and attempt to line up your backup jobs to free up enough space 
> so the render nodes could write out the images to NFS.

Intriguing.

Thank you for sharing.



-- 
Grant. . . .
unix || die


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4008 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31 20:43   ` Grant Taylor via TUHS
@ 2019-05-31 20:59     ` Pete Wright
  0 siblings, 0 replies; 32+ messages in thread
From: Pete Wright @ 2019-05-31 20:59 UTC (permalink / raw)
  To: Grant Taylor, tuhs



On 5/31/19 1:43 PM, Grant Taylor via TUHS wrote:
> On 5/31/19 1:07 PM, Pete Wright wrote:
>> An individual user was given something like 1GB by default (here they 
>> had their mailbox, source-code, scripts etc), and the show they were 
>> booked on was then given an allocation of, say, 1TB of storage.
>
> It sounds like you are talking about group quotas in addition to each 
> individual user's /user/ quota.  Is that correct?  Or was this more an 
> imposed file system limit in lieu of group quotas?

It was a mix: $HOME was tied to a specific UID.

For show data we leveraged per-volume quotas.  Our directory structure 
was set up in such a way that a path would be a series of symlinks 
pointing to specific NFS volumes.  So /shot/sequence/render, for example, 
could reference multiple volumes: /shot/sequence would live on a 
different filer than "render".  We could also move around where "render" 
would live and not have to worry about creating orphan paths and such.  
This was a common practice to manage hotspots, or for maint windows etc.
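
Something like this (hostnames and volume paths made up):

	% ls -l /shot/seq010
	... /shot/seq010 -> /net/filer02/vol/vol7/shows/seq010
	% ls -l /shot/seq010/render
	... /shot/seq010/render -> /net/filer05/vol/vol3/render/seq010

Repointing the "render" symlink at a volume with free space moved new 
writes there without breaking any existing paths.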

The net result was that we used a mix of volume quotas that NetApp managed, 
in addition to higher-level shot/show quotas which were calculated out of 
band by some in-house services.


-pete

-- 
Pete Wright
pete@nomadlogic.org
@nomadlogicLA


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (11 preceding siblings ...)
  2019-05-31 19:07 ` Pete Wright
@ 2019-06-01  0:30 ` reed
  2019-06-04 13:50 ` Tony Finch
  13 siblings, 0 replies; 32+ messages in thread
From: reed @ 2019-06-01  0:30 UTC (permalink / raw)
  To: The Eunuchs Hysterical Society

(Sharing some from my book in regards to Unix at Berkeley, slowly being 
written ... some questions below too.)

% prior to March 19, 1976 \cite{unix-news-19760319}
In addition to teaching and writing Unix Pascal, Thompson
coded or advised on various other system modifications.
He put in disk space quotas to prevent runaways.\cite{kenthompson1}
% (inode.flags & 060000) == 020000
A special file named ``.q'' would track (in its inode) the
maximum number of blocks that may be used by files in the
directory and its descendants and a count of the number of used blocks.
A new ``quot'' system call was added to make directories with
quotas.
In addition, a modified link(2) allowed quotas to be exceeded
temporarily so a move/rename operation could work on a near-full
quota.\cite{unix-news-19760319}

% NOTE: cptree source for system call use example

% I cannot find this quot program 

A new quot command was used to define the quotas.
% CITE:
Later, Kurt Shoens, a student in Thompson's
operating systems course\cite{arnold1}, wrote a tool called pq that
would search up from your current working directory to find the nearest
quota file and display what is used, the defined maximum quota, and its
percentage used.
% CITE: pq manpage and source
Also a custom ls command identified if an entry was a quota file,
file command could identify quota files,
ex could warn if the ``Quota exceeded'',
and cptree and lntree could copy quota files.
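
The search-upward behaviour could be sketched like this (a shell
illustration of the idea, not the original pq source):

	d=`pwd`
	while :; do
		[ -f "$d/.q" ] && { echo "nearest quota file: $d/.q"; break; }
		[ "$d" = "/" ] && break
		d=`dirname "$d"`
	done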

This quotas implementation was not integrated back into Unix.  So
back to ``threatening messages of the day and personal letters.''
Also a different quot tool was added a couple years later in the
Seventh Edition of Unix (and also shipped with later BSDs) to
display (but not restrict) the disk usage for each
user.\cite{ritchie-7th-edition-setting-up}
(A new quotas implementation was written and introduced to
Berkeley years later. This story is in \autoref{chapter:4BSD}.)

---------------

At first I thought that a side-effect of quotas was that users couldn't 
chown files to others, but that's wrong, since it's already documented 
that chown is for the super-user only in V6. Any thoughts on that?

What is the unused pw_quota in v7 getpwent? Is that related at all to 
disk quotas?




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
                   ` (12 preceding siblings ...)
  2019-06-01  0:30 ` reed
@ 2019-06-04 13:50 ` Tony Finch
  13 siblings, 0 replies; 32+ messages in thread
From: Tony Finch @ 2019-06-04 13:50 UTC (permalink / raw)
  To: David; +Cc: The Eunuchs Hysterical Society

A couple of Cambridge quota-related anecdotes:

In its early days the Debian bug tracker was run out of Ian Jackson's
personal account on our central Unix service. He had to delete bugs when
they were closed to keep within his disk quota.

Our mail software Exim has a two-level quota system: it handles quota
errors from the OS as a hard limit, with some annoying portability hacks
https://github.com/Exim/exim/blob/master/src/src/transports/appendfile.c#L1208
Exim also has its own per-mailbox quota implementation. This can help
control memory usage (mainly by Pine, back in the day...) as well as
providing soft limit warnings and other bells and whistles.
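
In an appendfile transport that looks something like this (values
illustrative):

	local_delivery:
	  driver = appendfile
	  file = /var/mail/$local_part
	  quota = 100M
	  quota_warn_threshold = 75%

Exim refuses the delivery once the mailbox passes quota, and sends a
warning message when it crosses the threshold.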

Tony.
-- 
f.anthony.n.finch  <dot@dotat.at>  http://dotat.at/
Southeast Viking, North Utsire, South Utsire, Northeast Forties: Southerly 4
or 5, becoming cyclonic 5 to 7, perhaps gale 8 later. Slight or moderate,
becoming moderate or rough later. Fog patches, thundery rain later. Moderate
or good, occasionally very poor.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31 16:15       ` Grant Taylor via TUHS
@ 2019-05-31 16:38         ` Michael Kjörling
  0 siblings, 0 replies; 32+ messages in thread
From: Michael Kjörling @ 2019-05-31 16:38 UTC (permalink / raw)
  To: tuhs

On 31 May 2019 10:15 -0600, from tuhs@minnie.tuhs.org (Grant Taylor via TUHS):
>>>       * snapshots are readonly, and thus, immune to ransomware
>>>           attacks;
>>
>> Let's hope said ransomware isn't smart enough to run "zfs list X -t
>> snapshot" and "zfs destroy X@Y".
> 
> (Barring any local privilege escalation....)  I think that ZFS would protect
> (snapshots) against ransomware running as an unprivileged user that can't
> run zfs / zpool commands.

Yes, and that's the point I was (trying to) make: snapshots are only
immune to ransomware as long as (a) said ransomware isn't running as
root, and (b) said ransomware can't escalate to having root access (or
whatever capabilities might be required to poke around ZFS snapshots),
and of course (c) said ransomware doesn't know about ZFS snapshots.

Snapshots definitely raise the bar, which is a good thing, not to
mention how useful they are for bona fide "oh carp" moments. I do
however feel that "immune" is a bit too strong a word.


>> And while "zfs list" is Mostly Harmless, let's hope the sysadmin is
>> smart enough to not let arbitrary users run "zfs destroy" anything
>> important.
> 
> I have found the zfs and zpool commands sufficiently easy to allow limited
> access via appropriate sudoers entries.

I'm pretty sure at least ZoL for Debian comes packages with a sudoers
file where all you need to do to allow read-only ZFS sudo access to
normal users is uncomment one or a few lines. It's been a while since
I set it up.

-- 
Michael Kjörling • https://michael.kjorling.se • michael@kjorling.se
  “The most dangerous thought that you can have as a creative person
              is to think you know what you’re doing.” (Bret Victor)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31 16:06     ` Michael Kjörling
@ 2019-05-31 16:15       ` Grant Taylor via TUHS
  2019-05-31 16:38         ` Michael Kjörling
  0 siblings, 1 reply; 32+ messages in thread
From: Grant Taylor via TUHS @ 2019-05-31 16:15 UTC (permalink / raw)
  To: tuhs

[-- Attachment #1: Type: text/plain, Size: 652 bytes --]

On 5/31/19 10:06 AM, Michael Kjörling wrote:
> Let's hope said ransomware isn't smart enough to run "zfs list X -t 
> snapshot" and "zfs destroy X@Y".

(Barring any local privilege escalation....)  I think that ZFS would 
protect (snapshots) against ransomware running as an unprivileged user 
that can't run zfs / zpool commands.

> And while "zfs list" is Mostly Harmless, let's hope the sysadmin is smart 
> enough to not let arbitrary users run "zfs destroy" anything important.

I have found the zfs and zpool commands sufficiently easy to allow 
limited access via appropriate sudoers entries.
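
For read-only access, entries along these lines do it (group name and 
paths hypothetical):

	%zfsview ALL=(root) NOPASSWD: /sbin/zfs list *, /sbin/zfs get *
	%zfsview ALL=(root) NOPASSWD: /sbin/zpool status *

while "zfs destroy" and friends stay with the administrators.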



-- 
Grant. . . .
unix || die


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4008 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31 15:05   ` Nelson H. F. Beebe
@ 2019-05-31 16:06     ` Michael Kjörling
  2019-05-31 16:15       ` Grant Taylor via TUHS
  0 siblings, 1 reply; 32+ messages in thread
From: Michael Kjörling @ 2019-05-31 16:06 UTC (permalink / raw)
  To: tuhs

On 31 May 2019 09:05 -0600, from beebe@math.utah.edu (Nelson H. F. Beebe):
> Some of the key ZFS features are:
> /snip/
> 	* snapshots are readonly, and thus, immune to ransomware
>           attacks;

Let's hope said ransomware isn't smart enough to run "zfs list X -t
snapshot" and "zfs destroy X@Y".

And while "zfs list" is Mostly Harmless, let's hope the sysadmin is
smart enough to not let arbitrary users run "zfs destroy" anything
important.

-- 
Michael Kjörling • https://michael.kjorling.se • michael@kjorling.se
  “The most dangerous thought that you can have as a creative person
              is to think you know what you’re doing.” (Bret Victor)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31  0:42 ` Arthur Krewat
  2019-05-31 15:05   ` Nelson H. F. Beebe
@ 2019-05-31 15:55   ` Rico Pajarola
  1 sibling, 0 replies; 32+ messages in thread
From: Rico Pajarola @ 2019-05-31 15:55 UTC (permalink / raw)
  To: Arthur Krewat; +Cc: TUHS main list

[-- Attachment #1: Type: text/plain, Size: 2449 bytes --]

On Fri, May 31, 2019 at 2:50 AM Arthur Krewat <krewat@kilonet.net> wrote:

> On 5/30/2019 8:21 PM, Nelson H. F. Beebe wrote:
> > Several list members report having used, or suffered under, filesystem
> > quotas.
> >
> > At the University of Utah, in the College of Science, and later, the
> > Department of Mathematics, we have always had an opposing view:
> >
> >       Disk quotas are magic meaningless numbers imposed by some bozo
> >       ignorant system administrator in order to prevent users from
> >       getting their work done.
>
> You've never had people like me on your systems ;) - But yeah...
>
> > For the last 15+ years, our central fileservers have run ZFS on
> > Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months,
> > on GNU/Linux CentOS 7.
> >
> I do the same with ZFS - limit the individual filesystems with "zfs set
> quota=xxx" so the entire pool can't be filled. I assign a zfs filesystem
> to an individual user in /export/home and when they need more, they let
> me know. Various monitoring scripts tell me when a filesystem is
> approaching 80%, and I either just expand it on my own because of the
> user's usage, or let them know they are approaching the limit.
>
> Same thing with Netbackup Basic Disk pools in a common ZFS pool. I can
> adjust them as needed, and Netbackup sees the change almost immediately.
>
> At home, I did this with my kids ;) - Samba and zfs quota on the
> filesystem let them know how much room they had.
>
> art k.
>
> PS: I'm starting to move to FreeBSD and ZFS for VMware datastores; the
> performance is outstanding over iSCSI on 10GbE (which Solaris 11's
> COMSTAR apparently is not very good at, especially with small block
> sizes). I have yet to play with Linux and ZFS but would appreciate
> hearing (privately, if it's not appropriate for the list) about your
> experiences with it.
>
At home I use ZFS (on Linux) exclusively for all data I care about (and
also for data I don't care about). I have a bunch of pools ranging from 5TB
to 45TB with RAIDZ2 (overall about 50 drives), in various hardware setups
(SATA, SAS, some even via iSCSI). Performance is not what I'm used to on
Solaris, but in this case, convenience wins over speed. I never lost any
data, even though with that amount of disks, there's always a broken disk
somewhere. The on-disk format is compatible with FreeBSD and Solaris (I
have successfully moved disks between OSes), so you're not "locked in".

[-- Attachment #2: Type: text/html, Size: 3015 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31  0:42 ` Arthur Krewat
@ 2019-05-31 15:05   ` Nelson H. F. Beebe
  2019-05-31 16:06     ` Michael Kjörling
  2019-05-31 15:55   ` Rico Pajarola
  1 sibling, 1 reply; 32+ messages in thread
From: Nelson H. F. Beebe @ 2019-05-31 15:05 UTC (permalink / raw)
  To: Arthur Krewat; +Cc: tuhs

[This is another essay-length post about O/Ses and ZFS...]

Arthur Krewat <krewat@kilonet.net> asks on Thu, 30 May 2019 20:42:33
-0400 about my TUHS list posting about running a large computing
facility without user disk quotas, and our experiences with ZFS:

>> I have yet to play with Linux and ZFS but would appreciate
>> hearing about your experiences with it.

First, ZFS on the Solaris family (including DilOS, Dyson, Hipster,
Illumian, Omnios, Omnitribblix, OpenIndiana, Tribblix, Unleashed, and
XStreamOS), the FreeBSD family (including ClonOS, FreeNAS, GhostBSD,
HardenedBSD, MidnightBSD, PCBSD, Trident, and TrueOS), and on
GNU/Linux (1000+ distributions, due to theological differences) offers
important data safety features, and ease of management.

There are lots of details about ZFS that you can find in the slides of
a talk that we have given several times:

	http://www.math.utah.edu/~beebe/talks/2017/zfs/zfs.pdf

The slides at the end of that file contain pointers to ZFS resources,
including recent books.

Some of the key ZFS features are:

	* all disks form a dynamic shared pool from which space can be
	  drawn for datasets, on top of which filesystems can be
	  created;

	* the pool can exploit data redundancy via various RAID Zn
	  choices to survive loss of individual disks, and optionally,
	  provide hot spares shared across the pool, and available to
	  all datasets;

	* hardware RAID controllers are unneeded, and discouraged ---
	  a JBOD (just a bunch of disks) array is quite satisfactory

	* all metadata, and all file data blocks, have checksums that
	  are replicated elsewhere in the pool, and checked on EVERY
	  read and write, allowing automatic silent recovery (via data
	  redundancy) from transient or permanent errors in disk
	  blocks --- ZFS is self healing;

	* ZFS filesystems can have unlimited numbers of snapshots;

	* snapshots are extremely fast, typically less than one
	  second, even in multi-terabyte filesystems;

	* snapshots are readonly, and thus, immune to ransomware
          attacks;

	* ZFS send and receive operations allow propagation of copies
	  of filesystems by transferring only data blocks that have
	  changed since the last send operation;

	* the ZFS copy-on-write policy means that in-use blocks are
	  never changed, and that block updates are guaranteed to be
	  atomic;

	* quotas can optionally be enabled on datasets, and grown as
	  needed (quota shrink is not yet possible, but is in ZFS
	  development plans); see the sketch after this list.

	* ZFS optionally supports encryption, data compression, block
	  deduplication, and n-way disk replication;

	* Unlike traditional fsck, which requires disks to be offline
	  during the checks, ZFS scrub operations can be run (usually
	  by cron jobs, and at lower priority) to go through datasets
	  to verify data integrity and filesystem sanity while normal
	  services continue.
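
As a sketch of the dataset-quota point above (pool and dataset names
illustrative):

	# zfs set quota=500G tank/export/home/2001
	# zfs get quota tank/export/home/2001
	NAME                    PROPERTY  VALUE  SOURCE
	tank/export/home/2001   quota     500G   local
	# zfs set quota=550G tank/export/home/2001   # grown later as needed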

ZFS likes to cache metadata, and active data blocks, in memory.  Most
of our VMs that have other filesystems, like EXT{2,3,4}, FFS, JFS,
MFS, ReiserFS, UFS, and XFS, run quite happily with 1GB of DRAM.  The
ZFS, DragonFly BSD Hammer, and BTRFS ones are happier with 2GB to 4GB
of DRAM.  Our central fileservers have 256GB to 768GB of DRAM.

The major drawback of copy-on-write and snapshots is that once a
snapshot has been taken, a filesystem-full condition cannot be
ameliorated by removing a few large files.  Instead, you have to
either increase the dataset quota (our normal practice), or you have
to free older snapshots.

Our view is that the benefits of snapshots for recovery of earlier
file versions far outweigh that one drawback: I myself did such a
recovery yesterday when I accidentally clobbered a critical file full
of digital signature keys.

On Solaris and FreeBSD families, snapshots are visible to users as
read-only filesystems, like this (for ftp://ftp.math.utah.edu/pub/texlive 
and http://www.math.utah.edu/pub/texlive):

	% df /u/ftp/pub/texlive
	Filesystem             1K-blocks      Used Available Use% Mounted on
	tank:/export/home/2001 518120448 410762240 107358208  80% /home/2001

	% ls /home/2001/.zfs/snapshot
	AMANDA           auto-2019-05-21  auto-2019-05-25  auto-2019-05-29
	auto-2019-05-18  auto-2019-05-22  auto-2019-05-26  auto-2019-05-30
	auto-2019-05-19  auto-2019-05-23  auto-2019-05-27  auto-2019-05-31
	auto-2019-05-20  auto-2019-05-24  auto-2019-05-28

	% ls /home/2001/.zfs/snapshot/auto-2019-05-21/ftp/pub/texlive
	Contents  Images  Source  historic  protext  tlcritical  tldump  tlnet  tlpretest

That is, you first use the df command to find the source of the
current mount point, then use ls to examine the contents of
.zfs/snapshot under that source, and finally follow your pathname
downward to locate a file that you want to recover, or compare with a
current copy, or another snapshot copy.

On Network Appliance systems with the WAFL filesystem design (see

	https://en.wikipedia.org/wiki/Write_Anywhere_File_Layout

), snapshots are instead mapped to hidden directories inside each
directory, which is more convenient for human users, and is a feature
that we would really like to see on ZFS.

A nuisance for us is that the current ZFS implementation on CentOS 7
(a subset of the pay-for-service Red Hat Enterprise Linux 7) does not
show any files under the .zfs/snapshot/auto-YYYY-MM-DD directories,
except on the fileserver itself.

When we used Solaris ZFS for 15+ years, our users could themselves
recover previous file versions following instructions at

	http://www.math.utah.edu/faq/files/files.html#FAQ-8

Since our move to a GNU/Linux fileserver, they no longer can; instead,
they have to contact systems management to access such files.

We sincerely hope that CentOS 8 will resolve that serious deficiency:
see

	http://www.math.utah.edu/pub/texlive-utah/README.html#rhel-8

for comments on the production of that O/S release from the recent
major new Red Hat EL8 release.

We have a large machine-room UPS, and outside diesel generator, so our
physical servers are immune to power outages and power surges, the
latter being a common problem in Utah during summer lightning storms.
Thus, unplanned fileserver outages should never happen.

A second issue for us is that on Solaris and FreeBSD, we have never
seen a fileserver crash due to ZFS issues, and on Solaris, our servers
have sometimes been up for one to three years before we took them down
for software updates.  However, with ZFS on CentOS 7, we have seen 13
unexplained reboots in the last year.  Each has happened late at
night, or in the early morning, while backups to our tape robot, and
ZFS send/receive operations to a remote datacenter, are in progress.
The crash times suggest to us that heavy ZFS activity is exposing a
kernel or Linux ZFS bug.  We hope that CentOS 8 will resolve that
issue.

We have ZFS on about 70 physical and virtual machines, and GNU/Linux
BTRFS on about 30 systems.  With ZFS, freeing a snapshot moves its
blocks to the free list within seconds.  With BTRFS, freeing snapshots
often takes tens of minutes, and sometimes, hours, before space
recovery is complete.  That can be aggravating when it stops your work
on that system.

By contrast, snapshots on both BTRFS and ZFS are fast.  However, they
appear to be far smaller on ZFS than on BTRFS.  We have VMs and
physical machines with ZFS that have 300 to 1000 daily snapshots with
little noticeable reduction in free space, whereas those with BTRFS
seem to lose about a gigabyte a day.  My home TrueOS system has
sufficient space for about 25 years of ZFS dailies.  Consequently, I
run nightly reports of free space on all of our systems, and manually
intervene on the BTRFS ones when space hits a critical level (I try to
keep 10GB free).

On both ZFS and BTRFS, packages are available to trim old snapshots,
and we run the ZFS trimmer via cron jobs on our main fileservers.

In the GNU/Linux world, however, only openSUSE comes by default with a
cron-enabled BTRFS snapshot trimmer, so intervention is unnecessary on
that O/S flavor.  I have never installed snapshot trimmer packages on
any of our other VMs, because it just means more management work to
deal with variants in trimmer packages, configuration files, and cron
jobs.

Teams of ZFS developers from FreeBSD and GNU/Linux are working on
merging divergent features back into a common OpenZFS code base that
all O/Ses that support ZFS can use; that merger is expected to happen
within the next few months.  ZFS has been ported by third parties to
Apple macOS and Microsoft Windows, so it has the potential of becoming
a universal filesystem available on all common desktop environments.
Then we could use ZFS send/receive instead of .iso, .dmg, and .img
files to copy entire filesystems between different O/Ses.

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: beebe@math.utah.edu  -
- 155 S 1400 E RM 233                       beebe@acm.org  beebe@computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31  0:21 Nelson H. F. Beebe
  2019-05-31  0:37 ` Larry McVoy
@ 2019-05-31  0:42 ` Arthur Krewat
  2019-05-31 15:05   ` Nelson H. F. Beebe
  2019-05-31 15:55   ` Rico Pajarola
  1 sibling, 2 replies; 32+ messages in thread
From: Arthur Krewat @ 2019-05-31  0:42 UTC (permalink / raw)
  To: tuhs

On 5/30/2019 8:21 PM, Nelson H. F. Beebe wrote:
> Several list members report having used, or suffered under, filesystem
> quotas.
>
> At the University of Utah, in the College of Science, and later, the
> Department of Mathematics, we have always had an opposing view:
>
> 	Disk quotas are magic meaningless numbers imposed by some bozo
> 	ignorant system administrator in order to prevent users from
> 	getting their work done.

You've never had people like me on your systems ;) - But yeah...

> For the last 15+ years, our central fileservers have run ZFS on
> Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months,
> on GNU/Linux CentOS 7.
>
I do the same with ZFS - limit the individual filesystems with "zfs set 
quota=xxx" so the entire pool can't be filled. I assign a zfs filesystem 
to an individual user in /export/home and when they need more, they let 
me know. Various monitoring scripts tell me when a filesystem is 
approaching 80%, and I either just expand it on my own because of the 
user's usage, or let them know they are approaching the limit.

Same thing with Netbackup Basic Disk pools in a common ZFS pool. I can 
adjust them as needed, and Netbackup sees the change almost immediately.

At home, I did this with my kids ;) - Samba and zfs quota on the 
filesystem let them know how much room they had.

art k.

PS: I'm starting to move to FreeBSD and ZFS for VMware datastores; the 
performance is outstanding over iSCSI on 10GbE (which Solaris 11's 
COMSTAR apparently is not very good at, especially with small block 
sizes). I have yet to play with Linux and ZFS but would appreciate 
hearing (privately, if it's not appropriate for the list) about your 
experiences with it.



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-31  0:21 Nelson H. F. Beebe
@ 2019-05-31  0:37 ` Larry McVoy
  2019-05-31  0:42 ` Arthur Krewat
  1 sibling, 0 replies; 32+ messages in thread
From: Larry McVoy @ 2019-05-31  0:37 UTC (permalink / raw)
  To: Nelson H. F. Beebe; +Cc: tuhs

On Thu, May 30, 2019 at 06:21:02PM -0600, Nelson H. F. Beebe wrote:
> Several list members report having used, or suffered under, filesystem
> quotas.
> 
> At the University of Utah, in the College of Science, and later, the
> Department of Mathematics, we have always had an opposing view:
> 
> 	Disk quotas are magic meaningless numbers imposed by some bozo
> 	ignorant system administrator in order to prevent users from
> 	getting their work done.
> 
> Thus, in my 41 years of systems management at Utah, we have not had a
> SINGLE SYSTEM with user disk quotas enabled.

I can see both sides of this, but man, do I love the attitude of just
get out of their way and let them do their work.  I get the reasons 
that other people did disk quotas, it's reasonable, but I like Nelson's
approach better.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
@ 2019-05-31  0:21 Nelson H. F. Beebe
  2019-05-31  0:37 ` Larry McVoy
  2019-05-31  0:42 ` Arthur Krewat
  0 siblings, 2 replies; 32+ messages in thread
From: Nelson H. F. Beebe @ 2019-05-31  0:21 UTC (permalink / raw)
  To: tuhs

Several list members report having used, or suffered under, filesystem
quotas.

At the University of Utah, in the College of Science, and later, the
Department of Mathematics, we have always had an opposing view:

	Disk quotas are magic meaningless numbers imposed by some bozo
	ignorant system administrator in order to prevent users from
	getting their work done.

Thus, in my 41 years of systems management at Utah, we have not had a
SINGLE SYSTEM with user disk quotas enabled.

We have run PDP-11s with RT-11, RSX, and RSTS, PDP-10s with TOPS-20,
VAXes with VMS and BSD Unix, an Ardent Titan, a Stardent, a Cray
EL/94, and hundreds of Unix workstations from Apple, DEC, Dell, HP,
IBM, NeXT, SGI, and Sun with numerous CPU families (Alpha, Arm, MC68K,
SPARC, MIPS, NS 88000, PowerPC, x86, x86_64, and maybe others that I
forget at the moment).

For the last 15+ years, our central fileservers have run ZFS on
Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months,
on GNU/Linux CentOS 7.

Each ZFS dataset gets its space from a large shared pool of disks, and
each dataset has a quota: thus, space CAN fill up in a given dataset,
so that some users might experience a disk-full situation.  In
practice, that rarely happens, because a cron job runs every 20
minutes, looking for datasets that are nearly full, and giving them a
few extra GB if needed.  Affected users typically stop seeing
disk-full problems within an average of 10 minutes or so.  If we
see serious imbalance
in the sizes of previously similar-sized datasets, we manually move
directory trees between datasets to achieve a reasonable balance, and
reset the dataset quotas.
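
A minimal sketch of what such a top-up job might look like (the pool
name, threshold, and increment are illustrative assumptions, not the
values we actually use):

    #!/bin/sh
    # Scan every dataset in the (hypothetical) pool "tank"; when one
    # is at 90% or more of its quota, raise the quota by 4 GB.
    zfs list -Hp -o name,used,quota -r tank |
    while read -r name used quota; do
        case "$quota" in
            0|none) continue ;;        # no quota on this dataset
        esac
        pct=$(( used * 100 / quota ))
        if [ "$pct" -ge 90 ]; then
            zfs set quota=$(( quota + 4 * 1024 * 1024 * 1024 )) "$name"
        fi
    done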

We make nightly ZFS snapshots (hourly for user home directories), and
send the nightlies to an off-campus server in a large datacenter, and
we write nightly filesystem backups to a tape robot. The tape technology
generations have evolved through 9-track, QIC, 4mm DAT, 8mm DAT, DLT,
LTO-4, LTO-6, and perhaps soon, LTO-8.
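
The nightly cycle itself boils down to a snapshot plus an incremental
send; a rough sketch (the dataset, snapshot, and host names here are
invented for illustration, not our real ones):

    # recursive nightly snapshot of the home-directory datasets
    zfs snapshot -r tank/home@nightly-2019-05-31

    # ship the increment since the previous nightly to the off-campus
    # server, left unmounted on the receiving side
    zfs send -R -i @nightly-2019-05-30 tank/home@nightly-2019-05-31 |
        ssh backup.example.edu zfs receive -du backuppool/home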

Our main fileserver talks through a live SAN FibreChannel mirror to
independent storage arrays in two different buildings.

Thus, we always have two live copies of all data, and a third,
far-away live copy that is no more than 24 hours old.

Yes, we do see runaway output files from time to time, and an
occasional student (among currently more than 17,000 accounts) who
uses an unreasonable amount of space.  In such cases, we deal with the
job, or user, involved, and get space freed up; other users largely
remain unaware of the temporary space crisis.

The result of our no-quotas policy is that few of our users have ever
seen a disk-full condition; they just get on with their work, as they,
and we, expect them to do.

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: beebe@math.utah.edu  -
- 155 S 1400 E RM 233                       beebe@acm.org  beebe@computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 16:48 ` KatolaZ
@ 2019-05-30 17:42   ` ron
  0 siblings, 0 replies; 32+ messages in thread
From: ron @ 2019-05-30 17:42 UTC (permalink / raw)
  To: 'KatolaZ', tuhs

Hopkins had an 11/45 system with several RK05's (2.4M).   One was / and /usr,
one was /sys1 (one set of users)  and the other /sys2 (the other users).
My initial quota for class was 8 blocks and that went up to 30 when I went
on staff there.

We also had two additional RK05 drives and I thought I was in fat city when
I invested $65 in my own cartridge for that.
Before that, I had spent a few bucks on DECtapes.

The system swapped on an RF11 disk (1024 blocks).

> -----Original Message-----
> From: TUHS <tuhs-bounces@minnie.tuhs.org> On Behalf Of KatolaZ
> Sent: Thursday, May 30, 2019 12:48 PM
> To: tuhs@minnie.tuhs.org
> Subject: Re: [TUHS] Quotas - did anyone ever use them?
> 
> On Thu, May 30, 2019 at 12:04:49PM -0400, Noel Chiappa wrote:
> >     > From: KatolaZ
> >
> >     > I remember a 5MB quota at uni when I was an undergrad, and I
> definitely
> >     > remember when it was increased to 10MB :)
> >
> > Light your cigar with disk blocks!
> >
> > When I was in high school, I had an account on the school's computer,
> > a
> > PDP-11/20 running RSTS, with a single RF11 disk (well, technically, an
> > RS11 drive on an RF11 controller). For those whose jaw didn't bounce
> > off the floor, reading that, the RS11 was a fixed-head disk with a
> > total capacity of 512KB
> > (1024 512-byte blocks).
> >
> > IIRC, my disk quota was 5 blocks. :-)
> 
> Yep, I am not that "experienced" ;P
> 
> HND
> 
> --
> [ ~.,_  Enzo Nicosia aka KatolaZ - Devuan -- Freaknet Medialab  ]
> [     "+.  katolaz [at] freaknet.org --- katolaz [at] yahoo.it  ]
> [       @)   http://kalos.mine.nu ---  Devuan GNU + Linux User  ]
> [     @@)  http://maths.qmul.ac.uk/~vnicosia --  GPG: 0B5F062F  ]
> [ (@@@)  Twitter: @KatolaZ - skype: katolaz -- github: KatolaZ  ]


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
  2019-05-30 16:04 Noel Chiappa
@ 2019-05-30 16:48 ` KatolaZ
  2019-05-30 17:42   ` ron
  0 siblings, 1 reply; 32+ messages in thread
From: KatolaZ @ 2019-05-30 16:48 UTC (permalink / raw)
  To: tuhs

[-- Attachment #1: Type: text/plain, Size: 1046 bytes --]

On Thu, May 30, 2019 at 12:04:49PM -0400, Noel Chiappa wrote:
>     > From: KatolaZ
> 
>     > I remember a 5MB quota at uni when I was an undergrad, and I definitely
>     > remember when it was increased to 10MB :)
> 
> Light your cigar with disk blocks!
> 
> When I was in high school, I had an account on the school's computer, a
> PDP-11/20 running RSTS, with a single RF11 disk (well, technically, an RS11
> drive on an RF11 controller). For those whose jaw didn't bounce off the floor,
> reading that, the RS11 was a fixed-head disk with a total capacity of 512KB
> (1024 512-byte blocks).
> 
> IIRC, my disk quota was 5 blocks. :-)

Yep, I am not that "experienced" ;P

HND

-- 
[ ~.,_  Enzo Nicosia aka KatolaZ - Devuan -- Freaknet Medialab  ]  
[     "+.  katolaz [at] freaknet.org --- katolaz [at] yahoo.it  ]
[       @)   http://kalos.mine.nu ---  Devuan GNU + Linux User  ]
[     @@)  http://maths.qmul.ac.uk/~vnicosia --  GPG: 0B5F062F  ] 
[ (@@@)  Twitter: @KatolaZ - skype: katolaz -- github: KatolaZ  ]

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [TUHS] Quotas - did anyone ever use them?
@ 2019-05-30 16:04 Noel Chiappa
  2019-05-30 16:48 ` KatolaZ
  0 siblings, 1 reply; 32+ messages in thread
From: Noel Chiappa @ 2019-05-30 16:04 UTC (permalink / raw)
  To: tuhs; +Cc: jnc

    > From: KatolaZ

    > I remember a 5MB quota at uni when I was an undergrad, and I definitely
    > remember when it was increased to 10MB :)

Light your cigar with disk blocks!

When I was in high school, I had an account on the school's computer, a
PDP-11/20 running RSTS, with a single RF11 disk (well, technically, an RS11
drive on an RF11 controller). For those whose jaw didn't bounce off the floor,
reading that, the RS11 was a fixed-head disk with a total capacity of 512KB
(1024 512-byte blocks).

IIRC, my disk quota was 5 blocks. :-)

	Noel

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2019-06-04 14:05 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-30 13:49 [TUHS] Quotas - did anyone ever use them? David
2019-05-30 14:23 ` Diomidis Spinellis
2019-05-30 14:26 ` Dan Cross
2019-05-30 14:27 ` Rico Pajarola
2019-05-31  7:16   ` George Ross
2019-05-30 14:29 ` Robert Brockway
2019-05-30 14:34 ` Theodore Ts'o
2019-05-30 14:48   ` John P. Linderman
2019-05-30 14:57     ` Jim Geist
2019-05-30 14:55 ` Andreas Kusalananda Kähäri
2019-05-30 15:00 ` KatolaZ
2019-05-30 17:28 ` Grant Taylor via TUHS
2019-05-30 19:19 ` Michael Kjörling
2019-05-30 20:42 ` Warner Losh
2019-05-30 22:23   ` George Michaelson
2019-05-31  1:36 ` alan
2019-05-31 19:07 ` Pete Wright
2019-05-31 20:43   ` Grant Taylor via TUHS
2019-05-31 20:59     ` Pete Wright
2019-06-01  0:30 ` reed
2019-06-04 13:50 ` Tony Finch
2019-05-30 16:04 Noel Chiappa
2019-05-30 16:48 ` KatolaZ
2019-05-30 17:42   ` ron
2019-05-31  0:21 Nelson H. F. Beebe
2019-05-31  0:37 ` Larry McVoy
2019-05-31  0:42 ` Arthur Krewat
2019-05-31 15:05   ` Nelson H. F. Beebe
2019-05-31 16:06     ` Michael Kjörling
2019-05-31 16:15       ` Grant Taylor via TUHS
2019-05-31 16:38         ` Michael Kjörling
2019-05-31 15:55   ` Rico Pajarola

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).