From: Rico Pajarola
Date: Fri, 31 May 2019 17:55:25 +0200
To: Arthur Krewat <krewat@kilonet.net>
Cc: TUHS main list <tuhs@minnie.tuhs.org>
Subject: Re: [TUHS] Quotas - did anyone ever use them?

On Fri, May 31, 2019 at 2:50 AM Arthur Krewat <krewat@kilonet.net> wrote:
> On 5/30/2019 8:21 PM, Nelson H. F. Beebe wrote:
> > Several list members report having used, or suffered under, filesystem
> > quotas.
> >
> > At the University of Utah, in the College of Science, and later, the
> > Department of Mathematics, we have always had an opposing view:
> >
> >      Disk quotas are magic meaningless numbers imposed by some bozo
> >      ignorant system administrator in order to prevent users from
> >      getting their work done.
>
> You've never had people like me on your systems ;) - But yeah...
>
> > For the last 15+ years, our central fileservers have run ZFS on
> > Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months,
> > on GNU/Linux CentOS 7.
> >
> I do the same with ZFS - limit the individual filesystems with "zfs set
> quota=xxx" so the entire pool can't be filled. I assign a ZFS filesystem
> to an individual user in /export/home, and when they need more, they let
> me know. Various monitoring scripts tell me when a filesystem is
> approaching 80%, and I either just expand it on my own, based on the
> user's usage, or let them know they are approaching the limit.
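
For anyone reading along, the per-user setup described above comes down
to a couple of commands. A minimal sketch, with the pool name "tank" and
the user "alice" made up for illustration:

    # one filesystem per user, capped with a quota
    zfs create tank/export/home/alice
    zfs set quota=20G tank/export/home/alice

    # raise the cap later when the user needs more room
    zfs set quota=40G tank/export/home/alice

    # the sort of check a monitoring script might run: flag anything over 80%
    zfs list -H -p -o name,used,quota -r tank/export/home |
        awk '$3 > 0 && $2 / $3 > 0.8 { print $1, "is over 80% of its quota" }'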

> Same thing with NetBackup Basic Disk pools in a common ZFS pool. I can
> adjust them as needed, and NetBackup sees the change almost immediately.
>
> At home, I did this with my kids ;) - Samba and a ZFS quota on the
> filesystem let them know how much room they had.

> art k.

> PS: I'm starting to move to FreeBSD and ZFS for VMware datastores; the
> performance is outstanding over iSCSI on 10GbE (which Solaris 11's
> COMSTAR apparently is not very good at, especially with small block
> sizes). I have yet to play with Linux and ZFS, but would appreciate
> hearing (privately, if it's not appropriate for the list) about your
> experiences with it.
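
For what it's worth, the FreeBSD side of an iSCSI datastore is pleasantly
small. A rough sketch from memory, with the pool name, volume size and
target name made up (ctl.conf(5) has the authoritative syntax):

    # a zvol to back the datastore
    zfs create -V 1T -o volblocksize=64K tank/vmware0

and then in /etc/ctl.conf for ctld, the native iSCSI target:

    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
    }
    target iqn.2019-05.org.example:vmware0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            path /dev/zvol/tank/vmware0
        }
    }

followed by "sysrc ctld_enable=YES" and "service ctld start".
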
At home I use ZFS (on Linux) exclusively for all data I care about (and
also for data I don't care about). I have a bunch of pools ranging from
5TB to 45TB with RAIDZ2 (overall about 50 drives), in various hardware
setups (SATA, SAS, some even via iSCSI). Performance is not what I'm used
to on Solaris, but in this case convenience wins over speed. I have never
lost any data, even though with that many disks there's always a broken
disk somewhere. The on-disk format is compatible with FreeBSD and Solaris
(I have successfully moved disks between OSes), so you're not "locked in".
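
Moving a pool between OSes really is just an export on one side and an
import on the other. A minimal sketch, pool name again made up:

    # on the old host (Solaris, FreeBSD or Linux)
    zpool export tank

    # move the disks to the new host, then
    zpool import          # scans attached disks and lists importable pools
    zpool import tank

The usual caveat is feature flags: if a feature is active on the pool
that the other implementation doesn't know about, the import will be
refused (or only possible read-only), so it pays to be conservative
about enabling them.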



