zsh-workers
* unlimited file descriptors causes problems for zsh-4.0.2
@ 2001-10-18 23:44 Matthew Braun
  2001-10-19 16:40 ` Bart Schaefer
  0 siblings, 1 reply; 5+ messages in thread
From: Matthew Braun @ 2001-10-18 23:44 UTC (permalink / raw)
  To: zsh-workers

Hi Folks-

I ran across a machine that has 'ulimit -n unlimited' in the ksh startup
files and when I run zsh-4.0.2 from that shell, it causes problems.
Even though no one should do this, it shouldn't stop zsh from
functioning properly.

On Solaris 2.6 with less than 2GB of swap, this causes a seg fault.  On
Solaris 8 with plenty of swap (4GB), the shell runs, but won't fork
piped commands properly (though it will run simple commands).  This is
all caused by zsh-4.0.2 doing a memory allocation for the max number of
file descriptors, which in this case is (2^31)-1 (~2GB).

Comparing the two copies of the zsh source I had easily available, I
found that the following code changed somewhere between these two
versions:

zsh-3.0.8/Src/init.c:
    fdtable_size = OPEN_MAX;
    fdtable = zcalloc(fdtable_size);

zsh-4.0.2/Src/init.c:
    fdtable_size = zopenmax();
    fdtable = zcalloc(fdtable_size);

where zsh-4.0.2/Src/compat.c:zopenmax() essentially does:
    long openmax = sysconf(_SC_OPEN_MAX);
    return openmax;
(sysconf returns the current "max open files per process" limit)

Normally, if zsh needs to track more file descriptors than allocated at
initialization, it just doubles the size of the fdtable by doing a
realloc in the zsh-4.0.2/Src/utils.c code.  I'm not sure all of what the
zsh-4.0.2/Src/exec.c code does with fdtable, but after a quick glance,
it doesn't look like it would try to use more file descriptors than
would exist already in the fdtable.

One thing I did notice was that the global int "max_zsh_fd" is never
explicitly initialized before its value is first used.  It appears we
are currently counting on it being automatically initialized to zero
(by the compiler or linker).  I assume this variable should be
explicitly initialized in the code.

I'm wondering what folks opinions are on making zsh work even when the
number of file descriptors is unlimited.  Options I see:

1. Change the code back to "fdtable_size = OPEN_MAX;"
   problem I see: fascist

2. Change the zopenmax() function to query and honor sysconf up to
   either OPEN_MAX or some other hard-coded max like 1024.
   problems I see: fascist and...
     - what high end limit should we use?
     - if we limit the high end, should we limit the low end?  zsh
       doesn't work if you do 'ulimit -n 1'; what should the low end
       be?  32, 64, or something else?

3. other ideas?

My personal thought is to ALWAYS make zsh work, even when people set the
file descriptors to something bogus.  Nothing worse than having your
shell not function, especially if it is your login shell.  Although, I
also hate fascist code, so I'm torn, but probably in this case I'd
rather see zsh work.

Let me know what you think and if you change the zsh code.

Thanks,

Matthew.

ps. please include me directly on replies, I'm not on the mailing list.

=====

Solaris 8, with 4GB swap:

compass:/# ulimit -n
256
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh --version
zsh 4.0.2 (sparc-sun-solaris2.8)
compass:/# ulimit -HSn unlimited
compass:/# ulimit -n
unlimited
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh
compass:/# /bin/ls -l /bin
lrwxrwxrwx   1 root     root           9 Dec  5  2000 /bin -> ./usr/bin
compass:/# echo hi | cat
zsh: fork failed: not enough space
compass:/# 
[1]    done       echo hi
compass:/# 

and a new zsh process won't even fork when you have a small number of
file descriptors (and the current zsh won't even recover to fork any
more zsh processes even after resetting ulimit -n higher, which is
really a separate problem):

compass:/# ulimit -n
256
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh
compass:/# ulimit -n 5
compass:/# echo $$
7486
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh
compass:/# echo $$
7486
compass:/# ulimit -n 256
compass:/# /usr/local/pkg/zsh-4.0.2/bin/zsh
compass:/# echo $$
7486


Solaris 2.6, less than 2GB swap:

root:/# ulimit -n
unlimited
root:/# zsh
Segmentation Fault(coredump)
root:/# ulimit -n 64
root:/# zsh --version
zsh 4.0.2 (sparc-sun-solaris2.6)

rebuilt zsh with debugging and created core dump on Solaris 2.6 machine,
here is backtrace and value of fdtable_size:

sa:zsh-4.0.2/Src> gdb /tmp/zsh-4.0.2-debug/bin/zsh /tmp/core
GNU gdb 4.18
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.8"...
Core was generated by `/usr/local/pkg/zsh-4.0.2-debug/bin/zsh'.
Program terminated with signal 11, Segmentation Fault.
Reading symbols from /usr/lib/libsocket.so.1...done.
Reading symbols from /usr/lib/libdl.so.1...done.
Reading symbols from /usr/lib/libnsl.so.1...done.
Reading symbols from /usr/lib/libm.so.1...done.
Reading symbols from /usr/lib/libc.so.1...done.
Reading symbols from /usr/lib/libmp.so.2...done.
Reading symbols from /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1...done.
#0  nicezputs (s=0x0, stream=0xed5b0) at utils.c:2878
2878        while ((c = *s++)) {
(gdb) bt
#0  nicezputs (s=0x0, stream=0xed5b0) at utils.c:2878
#1  0x7e43c in zwarn (fmt=0xc6e98 "fatal error: out of memory", str=0x0, num=0)
    at utils.c:78
#2  0x7e314 in zerr (fmt=0xc6e98 "fatal error: out of memory", str=0x0, num=0)
    at utils.c:49
#3  0x591a8 in zcalloc (size=2147483647) at mem.c:509
#4  0x498c4 in zsh_main (argc=1, argv=0xeffffa5c) at init.c:1189
#5  0x22df0 in main (argc=1, argv=0xeffffa5c) at ./main.c:37
(gdb) p fdtable_size
$1 = 2147483647
(gdb) quit



* Re: unlimited file descriptors causes problems for zsh-4.0.2
  2001-10-18 23:44 unlimited file descriptors causes problems for zsh-4.0.2 Matthew Braun
@ 2001-10-19 16:40 ` Bart Schaefer
  2001-10-19 17:56   ` Bart Schaefer
  0 siblings, 1 reply; 5+ messages in thread
From: Bart Schaefer @ 2001-10-19 16:40 UTC (permalink / raw)
  To: Matthew Braun, zsh-workers

On Oct 18,  7:44pm, Matthew Braun wrote:
}
} caused by zsh-4.0.2 doing a memory allocation for the max number of
} file descriptors, which in this case is (2^31)-1 (~2GB).
[...]
} 1. Change the code back to "fdtable_size = OPEN_MAX;"
}    problem I see: fascist

The only time anything is written into fdtable, the code first checks
whether it needs to make it larger, so this is hardly "fascist".  It was
probably a bad idea to use zopenmax() in that context in the first place.

What I'm more concerned about is closeallelse() in exec.c, which is going
to make up to several billion close() calls (plus a lot of unnecessary
looping) every time a process with a redirection is started; but which I
think could be made to leak descriptors if it doesn't scan all the way
to the actual maximum fd.

} One thing I did notice was the global int "max_zsh_fd" is never
} initialized before first using its value.  It appears we are currently
} counting on it being automatically initialized to zero (by the compiler
} or linker).  This variable should get initialized in the code I assume.

No, it's perfectly valid to assume that the compiler will initialize a
global variable to zero.  Every definition of the C language requires
objects with static storage duration to be zero-initialized.

-- 
Bart Schaefer                                 Brass Lantern Enterprises
http://www.well.com/user/barts              http://www.brasslantern.com

Zsh: http://www.zsh.org | PHPerl Project: http://phperl.sourceforge.net   



* Re: unlimited file descriptors causes problems for zsh-4.0.2
  2001-10-19 16:40 ` Bart Schaefer
@ 2001-10-19 17:56   ` Bart Schaefer
  2001-10-21 19:07     ` Bart Schaefer
  0 siblings, 1 reply; 5+ messages in thread
From: Bart Schaefer @ 2001-10-19 17:56 UTC (permalink / raw)
  To: zsh-workers

On Oct 19,  4:40pm, Bart Schaefer wrote:
}
} What I'm more concerned about is closeallelse() in exec.c, which is going
} to make up to several billion close() calls (plus a lot of unnecessary
} looping) every time a process with a redirection is started; but which I
} think could be caused to leak descriptors if it doesn't scan all the way
} to the actual maximum fd.

Here's a suggestion:  Once, at startup, we scan all the way to zopenmax()
looking for open descriptors, and set a global to the largest number we
find.  (We still use a constant on the order of 1024 for fdtable_size.)
Thereafter zsh itself is in control of the descriptor numbers, so we
should never have to scan beyond that number again; we then use that
global in closeallelse() instead of calling zopenmax() again.

-- 
Bart Schaefer                                 Brass Lantern Enterprises
http://www.well.com/user/barts              http://www.brasslantern.com

Zsh: http://www.zsh.org | PHPerl Project: http://phperl.sourceforge.net   



* Re: unlimited file descriptors causes problems for zsh-4.0.2
  2001-10-19 17:56   ` Bart Schaefer
@ 2001-10-21 19:07     ` Bart Schaefer
  2001-10-21 20:17       ` Bart Schaefer
  0 siblings, 1 reply; 5+ messages in thread
From: Bart Schaefer @ 2001-10-21 19:07 UTC (permalink / raw)
  To: zsh-workers

On Oct 19,  5:56pm, Bart Schaefer wrote:
}
} } What I'm more concerned about is closeallelse() in exec.c, which is going
} } to make up to several billion close() calls
} 
} Here's a suggestion:  Once, at startup, we scan all the way to zopenmax()
} looking for open descriptors, and set a global to the largest number we
} find.

The following patch approximately implements the above.  It does not yet
change init.c to use a constant for fdtable_size, which is still needed
to prevent zsh from allocating huge amounts of memory if a descriptor
with a very large number is already open when the shell starts up.

I'm not able to test this on a machine that really implements a huge
maximum number of file descriptors, so I won't commit it until someone
else has tried it.

Index: compat.c
===================================================================
RCS file: /extra/cvsroot/zsh/zsh-4.0/Src/compat.c,v
retrieving revision 1.2
diff -c -r1.2 compat.c
--- compat.c	2001/07/10 09:05:20	1.2
+++ compat.c	2001/10/21 19:00:17
@@ -198,12 +198,32 @@
 mod_export long
 zopenmax(void)
 {
-    long openmax = sysconf(_SC_OPEN_MAX);
+    static long openmax = 0;
 
-    if (openmax < 1)
-	return OPEN_MAX;
-    else
-	return openmax;
+    if (openmax < 1) {
+	if ((openmax = sysconf(_SC_OPEN_MAX)) < 1) {
+	    openmax = OPEN_MAX;
+	} else if (openmax > OPEN_MAX) {
+	    /* On some systems, "limit descriptors unlimited" or the  *
+	     * equivalent will set openmax to a huge number.  Unless  *
+	     * there actually is a file descriptor > OPEN_MAX already *
+	     * open, nothing in zsh requires the true maximum, and in *
+	     * fact it causes inefficiency elsewhere if we report it. *
+	     * So, report the maximum of OPEN_MAX or the largest open *
+	     * descriptor (is there a better way to find that?).      */
+	    long i, j = OPEN_MAX;
+	    for (i = j; i < openmax; i += (errno != EINTR)) {
+		errno = 0;
+		if (fcntl(i, F_GETFL, 0) < 0 &&
+		    (errno == EBADF || errno == EINTR))
+		    continue;
+		j = i;
+	    }
+	    openmax = j;
+	}
+    }
+
+    return (max_zsh_fd > openmax) ? max_zsh_fd : openmax;
 }
 #endif
 


-- 
Bart Schaefer                                 Brass Lantern Enterprises
http://www.well.com/user/barts              http://www.brasslantern.com

Zsh: http://www.zsh.org | PHPerl Project: http://phperl.sourceforge.net   



* Re: unlimited file descriptors causes problems for zsh-4.0.2
  2001-10-21 19:07     ` Bart Schaefer
@ 2001-10-21 20:17       ` Bart Schaefer
  0 siblings, 0 replies; 5+ messages in thread
From: Bart Schaefer @ 2001-10-21 20:17 UTC (permalink / raw)
  To: zsh-workers

On Oct 21,  7:07pm, Bart Schaefer wrote:
}
} } Here's a suggestion:  Once, at startup, we scan all the way to zopenmax()
} } looking for open descriptors, and set a global to the largest number we
} } find.

I should point out that it's possible for sysconf(_SC_OPEN_MAX) to return
zero even when the actual descriptor limit is greater than OPEN_MAX.  So
there is still potential for zsh to leave high-numbered descriptors open
in closeallelse() et al., should the OS have a particularly brain-damaged
POSIX implementation and someone really wish to force it.

-- 
Bart Schaefer                                 Brass Lantern Enterprises
http://www.well.com/user/barts              http://www.brasslantern.com

Zsh: http://www.zsh.org | PHPerl Project: http://phperl.sourceforge.net   


