zsh-workers
* segmentation fault with {1..1234567}
@ 2014-07-04 17:25 Vincent Lefevre
  2014-07-05  1:40 ` Bart Schaefer
  0 siblings, 1 reply; 17+ messages in thread
From: Vincent Lefevre @ 2014-07-04 17:25 UTC (permalink / raw)
  To: zsh-workers

Hi,

With zsh 5.0.5 (Debian/unstable):

$ zsh -c 'echo {1..1234567}'
zsh: segmentation fault (core dumped)  zsh -c 'echo {1..1234567}'

If a failure is expected for any reason, it shouldn't be a crash.

-- 
Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: segmentation fault with {1..1234567}
  2014-07-04 17:25 segmentation fault with {1..1234567} Vincent Lefevre
@ 2014-07-05  1:40 ` Bart Schaefer
  2014-07-05 11:12   ` Vincent Lefevre
  0 siblings, 1 reply; 17+ messages in thread
From: Bart Schaefer @ 2014-07-05  1:40 UTC (permalink / raw)
  To: zsh-workers

On Jul 4,  7:25pm, Vincent Lefevre wrote:
}
} With zsh 5.0.5 (Debian/unstable):
} 
} $ zsh -c 'echo {1..1234567}'
} zsh: segmentation fault (core dumped)  zsh -c 'echo {1..1234567}'
} 
} If a failure is expected for any reason, it shouldn't be a crash.

Whether that expression fails depends entirely on the memory allocation
limitations of the local host.  It works fine for me.

What's the "expected" behavior of the shell when the user requests a
vast amount of memory?  How should the shell recover from running out of
memory at any arbitrary point during execution of a command or script?
Is it really helpful to trap the signal and exit *without* dumping core?



* Re: segmentation fault with {1..1234567}
  2014-07-05  1:40 ` Bart Schaefer
@ 2014-07-05 11:12   ` Vincent Lefevre
  2014-07-05 16:57     ` Bart Schaefer
  0 siblings, 1 reply; 17+ messages in thread
From: Vincent Lefevre @ 2014-07-05 11:12 UTC (permalink / raw)
  To: zsh-workers

On 2014-07-04 18:40:36 -0700, Bart Schaefer wrote:
> On Jul 4,  7:25pm, Vincent Lefevre wrote:
> }
> } With zsh 5.0.5 (Debian/unstable):
> } 
> } $ zsh -c 'echo {1..1234567}'
> } zsh: segmentation fault (core dumped)  zsh -c 'echo {1..1234567}'
> } 
> } If a failure is expected for any reason, it shouldn't be a crash.
> 
> Whether that expression fails depends entirely on the memory allocation
> limitations of the local host.  It works fine for me.

That's not a memory allocation problem: it fails on a machine
with 24GB!

> What's the "expected" behavior of the shell when the user requests a
> vast amount of memory?  How should the shell recover from running out of
> memory at any arbitrary point during execution of a command or script?
> Is it really helpful to trap the signal and exit *without* dumping core?

No, the signal means that it is already too late. You should really
check the return value of malloc() and friends. If the allocation
failure comes from a command, as it does here, the best thing is to
make that command fail, e.g. in an interactive shell:

$ some_command
zsh: out of memory
$

with $? being non-zero, and possibly with some details in the message.

-- 
Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)



* Re: segmentation fault with {1..1234567}
  2014-07-05 11:12   ` Vincent Lefevre
@ 2014-07-05 16:57     ` Bart Schaefer
  2014-07-05 23:39       ` Vincent Lefevre
  0 siblings, 1 reply; 17+ messages in thread
From: Bart Schaefer @ 2014-07-05 16:57 UTC (permalink / raw)
  To: zsh-workers

On Jul 5,  1:12pm, Vincent Lefevre wrote:
}
} > Whether that expression fails depends entirely on the memory allocation
} > limitations of the local host.  It works fine for me.
} 
} That's not a memory allocation problem: it fails on a machine
} with 24GB!

Then it's probably a per-process resource limit problem.
 
} > What's the "expected" behavior of the shell when the user requests a
} > vast amount of memory?  How should the shell recover from running out of
} > memory at any arbitrary point during execution of a command or script?
} > Is it really helpful to trap the signal and exit *without* dumping core?
} 
} No, the signal means that it is already too late.

Running out of memory at all means it's already too late, is my point.
There's no way to know what allocated memory can be unwound/reclaimed,
so even if you discover that e.g. malloc() has returned zero there is
(in the general case) no safe way to report the error and return to a
prompt without ending up in a state where every action at that prompt
uselessly reports the same error again.

Anyway, that's not [directly] the problem here.  In this particular case
the shell is running out of memory on the stack, attempting to declare a
variable length array via the VARARR() macro.  There is no well-defined
way to recover from that, e.g., "man 3 alloca":

RETURN VALUE
       The alloca function returns a pointer to the beginning of the
       allocated space.  If the allocation causes stack overflow,
       program behaviour is undefined.



* Re: segmentation fault with {1..1234567}
  2014-07-05 16:57     ` Bart Schaefer
@ 2014-07-05 23:39       ` Vincent Lefevre
  2014-07-06  0:09         ` Vincent Lefevre
  2014-07-06 16:16         ` Bart Schaefer
  0 siblings, 2 replies; 17+ messages in thread
From: Vincent Lefevre @ 2014-07-05 23:39 UTC (permalink / raw)
  To: zsh-workers

On 2014-07-05 09:57:03 -0700, Bart Schaefer wrote:
> On Jul 5,  1:12pm, Vincent Lefevre wrote:
> }
> } > Whether that expression fails depends entirely on the memory allocation
> } > limitations of the local host.  It works fine for me.
> } 
> } That's not a memory allocation problem: it fails on a machine
> } with 24GB!
> 
> Then it's probably a per-process resource limit problem.

No, I don't have any per-process limitation on the memory.

> } > What's the "expected" behavior of the shell when the user requests a
> } > vast amount of memory?  How should the shell recover from running out of
> } > memory at any arbitrary point during execution of a command or script?
> } > Is it really helpful to trap the signal and exit *without* dumping core?
> } 
> } No, the signal means that it is already too late.
> 
> Running out of memory at all means it's already too late, is my point.

No, it isn't, as there may remain plenty of memory. What sometimes
happens is that some command attempts a malloc of several GB. Such a
failure is easy to detect, and one can then handle it in the normal
way.

> Anyway, that's not [directly] the problem here.  In this particular case
> the shell is running out of memory on the stack, attempting to declare a
> variable length array via the VARARR() macro.  There is no well-defined
> way to recover from that, e.g., "man 3 alloca":
> 
> RETURN VALUE
>        The alloca function returns a pointer to the beginning of the
>        allocated space.  If the allocation causes stack overflow,
>        program behaviour is undefined.

NEVER use alloca to allocate a lot of memory. That's a well-known bug!

-- 
Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)



* Re: segmentation fault with {1..1234567}
  2014-07-05 23:39       ` Vincent Lefevre
@ 2014-07-06  0:09         ` Vincent Lefevre
  2014-07-06 19:46           ` Bart Schaefer
  2014-07-06 16:16         ` Bart Schaefer
  1 sibling, 1 reply; 17+ messages in thread
From: Vincent Lefevre @ 2014-07-06  0:09 UTC (permalink / raw)
  To: zsh-workers

On 2014-07-06 01:39:16 +0200, Vincent Lefevre wrote:
> On 2014-07-05 09:57:03 -0700, Bart Schaefer wrote:
> > On Jul 5,  1:12pm, Vincent Lefevre wrote:
> > }
> > } > Whether that expression fails depends entirely on the memory allocation
> > } > limitations of the local host.  It works fine for me.
> > } 
> > } That's not a memory allocation problem: it fails on a machine
> > } with 24GB!
> > 
> > Then it's probably a per-process resource limit problem.
> 
> No, I don't have any per-process limitation on the memory.

Note also that the bug is specific to zsh. Other shells work fine:

$ dash -c 'echo `seq 1 1234567` | wc'
      1 1234567 8765432
$ ksh93 -c 'echo `seq 1 1234567` | wc'
      1 1234567 8765432
$ mksh -c 'echo `seq 1 1234567` | wc'
      1 1234567 8765432
$ bash -c 'echo `seq 1 1234567` | wc'
      1 1234567 8765432
$ posh -c 'echo `seq 1 1234567` | wc'
      1 1234567 8765432

while with zsh:

$ zsh -c 'echo `seq 1 1234567` | wc'
      0       0       0

Removing the "| wc" shows that this is due to a segmentation fault.

I've reported the bug on the Debian BTS:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=753906

-- 
Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)



* Re: segmentation fault with {1..1234567}
  2014-07-05 23:39       ` Vincent Lefevre
  2014-07-06  0:09         ` Vincent Lefevre
@ 2014-07-06 16:16         ` Bart Schaefer
  2014-07-06 18:30           ` Peter Stephenson
  1 sibling, 1 reply; 17+ messages in thread
From: Bart Schaefer @ 2014-07-06 16:16 UTC (permalink / raw)
  To: zsh-workers

On Jul 6,  1:39am, Vincent Lefevre wrote:
} Subject: Re: segmentation fault with {1..1234567}
}
} On 2014-07-05 09:57:03 -0700, Bart Schaefer wrote:
} > 
} > Then it's probably a per-process resource limit problem.
} 
} No, I don't have any per-process limitation on the memory.

Apparently, though, you may have one on stack space.

} > Running out of memory at all means it's already too late, is my point.
} 
} No, it isn't, as there may remain plenty of memory. What sometimes
} happens is

"Sometimes."

In any case I'm not in a position to continue debating this.  Zsh treats
out of memory as a fatal error at nearly the lowest levels of its memory
management and that's not changing without some major rewriting.

} NEVER use alloca to allocate a lot of memory. That's a well-known bug!

IIRC the use of VARARR() was introduced in order to more evenly split the
memory use between the heap and the stack on systems that had limited RAM.
Obviously more recent hardware is likely to have divided process address
space differently, and this former optimization has become a liability.

All tests still pass when VARARR() is redefined to use zhalloc(), so it
should be straightforward to add a configure switch for this, if we can
decide what to call it and what its default setting should be.  However,
I didn't check with valgrind to see if memory is left unfreed in this
circumstance; we may need additional heap management calls as well.



* Re: segmentation fault with {1..1234567}
  2014-07-06 16:16         ` Bart Schaefer
@ 2014-07-06 18:30           ` Peter Stephenson
  2014-07-06 19:46             ` Bart Schaefer
  0 siblings, 1 reply; 17+ messages in thread
From: Peter Stephenson @ 2014-07-06 18:30 UTC (permalink / raw)
  To: zsh-workers

On Sun, 06 Jul 2014 09:16:09 -0700
Bart Schaefer <schaefer@brasslantern.com> wrote:
> In any case I'm not in a position to continue debating this.  Zsh treats
> out of memory as a fatal error at nearly the lowest levels of its memory
> management and that's not changing without some major rewriting.

This is generally true, but it's not quite the problem in this case.  Here
we needn't run out of memory at all if we allocate it appropriately.

> } NEVER use alloca to allocate a lot of memory. That's a well-known bug!
> 
> IIRC the use of VARARR() was introduced in order to more evenly split the
> memory use between the heap and the stack on systems that had limited RAM.
> Obviously more recent hardware is likely to have divided process address
> space differently, and this former optimization has become a liability.

Well, all I can say is that without the following change it crashes on
my system (where I have not tweaked any limits) and with the following
change it doesn't.

Maybe I'm just selfish but I prefer the latter.

diff --git a/Src/builtin.c b/Src/builtin.c
index 42354b9..3af5fb9 100644
--- a/Src/builtin.c
+++ b/Src/builtin.c
@@ -280,8 +280,8 @@ execbuiltin(LinkList args, Builtin bn)
 	 * after option processing, but it makes XTRACE output
 	 * much simpler.
 	 */
-	VARARR(char *, argarr, argc + 1);
 	char **argv;
+	char **argarr = zhalloc((argc + 1)*sizeof(char *));
 
 	/*
 	 * Get the actual arguments, into argv.  Remember argarr


-- 
Peter Stephenson <p.w.stephenson@ntlworld.com>
Web page now at http://homepage.ntlworld.com/p.w.stephenson/



* Re: segmentation fault with {1..1234567}
  2014-07-06  0:09         ` Vincent Lefevre
@ 2014-07-06 19:46           ` Bart Schaefer
  2014-07-07  1:12             ` Vincent Lefevre
  0 siblings, 1 reply; 17+ messages in thread
From: Bart Schaefer @ 2014-07-06 19:46 UTC (permalink / raw)
  To: zsh-workers

On Jul 6,  2:09am, Vincent Lefevre wrote:
}
} I've reported the bug on the Debian BTS:
} 
}   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=753906

Does that actually help?  Will they fix it without an upstream patch?



* Re: segmentation fault with {1..1234567}
  2014-07-06 18:30           ` Peter Stephenson
@ 2014-07-06 19:46             ` Bart Schaefer
  2014-07-06 22:23               ` Mikael Magnusson
  2014-07-07 19:33               ` Peter Stephenson
  0 siblings, 2 replies; 17+ messages in thread
From: Bart Schaefer @ 2014-07-06 19:46 UTC (permalink / raw)
  To: zsh-workers

On Jul 6,  7:30pm, Peter Stephenson wrote:
} Subject: Re: segmentation fault with {1..1234567}
}
} On Sun, 06 Jul 2014 09:16:09 -0700
} Bart Schaefer <schaefer@brasslantern.com> wrote:
} > IIRC the use of VARARR() was introduced in order to more evenly split the
} > memory use between the heap and the stack on systems that had limited RAM.
} > Obviously more recent hardware is likely to have divided process address
} > space differently, and this former optimization has become a liability.
} 
} Well, all I can say is that without the following change it crashes on
} my system (where I have not tweaked any limits) and with the following
} change it doesn't.
} 
} Maybe I'm just selfish but I prefer the latter.

I do not disagree.

The question is, do we fix this one instance (your patch), do we redefine
VARARR() [possibly conditionally] to change all the usages at once, or do
we individually evaluate the 50-odd uses of VARARR()?

What constitutes "a lot" to avoid using alloca() upon?



* Re: segmentation fault with {1..1234567}
  2014-07-06 19:46             ` Bart Schaefer
@ 2014-07-06 22:23               ` Mikael Magnusson
  2014-07-07 19:33               ` Peter Stephenson
  1 sibling, 0 replies; 17+ messages in thread
From: Mikael Magnusson @ 2014-07-06 22:23 UTC (permalink / raw)
  To: Bart Schaefer; +Cc: zsh workers

On 6 July 2014 21:46, Bart Schaefer <schaefer@brasslantern.com> wrote:
> On Jul 6,  7:30pm, Peter Stephenson wrote:
> } Subject: Re: segmentation fault with {1..1234567}
> }
> } On Sun, 06 Jul 2014 09:16:09 -0700
> } Bart Schaefer <schaefer@brasslantern.com> wrote:
> } > IIRC the use of VARARR() was introduced in order to more evenly split the
> } > memory use between the heap and the stack on systems that had limited RAM.
> } > Obviously more recent hardware is likely to have divided process address
> } > space differently, and this former optimization has become a liability.
> }
> } Well, all I can say is that without the following change it crashes on
> } my system (where I have not tweaked any limits) and with the following
> } change it doesn't.
> }
> } Maybe I'm just selfish but I prefer the latter.
>
> I do not disagree.
>
> The question is, do we fix this one instance (your patch), do we redefine
> VARARR() [possibly conditionally] to change all the usages at once, or do
> we individually evaluate the 50-odd uses of VARARR()?
>
> What constitutes "a lot" to avoid using alloca() upon?

I believe the default stack size on Linux is 8MB per thread; at least
it is on my system.  Setting ulimit -s 65535 makes the original command
work for me, while by default it crashes.

-- 
Mikael Magnusson



* Re: segmentation fault with {1..1234567}
  2014-07-06 19:46           ` Bart Schaefer
@ 2014-07-07  1:12             ` Vincent Lefevre
  0 siblings, 0 replies; 17+ messages in thread
From: Vincent Lefevre @ 2014-07-07  1:12 UTC (permalink / raw)
  To: zsh-workers

On 2014-07-06 12:46:00 -0700, Bart Schaefer wrote:
> On Jul 6,  2:09am, Vincent Lefevre wrote:
> }
> } I've reported the bug on the Debian BTS:
> } 
> }   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=753906
> 
> Does that actually help?  Will they fix it without an upstream patch?

Since you couldn't reproduce the problem, it could have depended on
configure options (another problem is that zsh doesn't have a BTS,
which makes bugs difficult to track).

On 2014-07-06 09:16:09 -0700, Bart Schaefer wrote:
> On Jul 6,  1:39am, Vincent Lefevre wrote:
> } Subject: Re: segmentation fault with {1..1234567}
> }
> } On 2014-07-05 09:57:03 -0700, Bart Schaefer wrote:
> } > 
> } > Then it's probably a per-process resource limit problem.
> } 
> } No, I don't have any per-process limitation on the memory.
> 
> Apparently, though, you may have one on stack space.

There is always a limit. Since allocation failures on the stack cannot
be detected, avoiding random crashes means that stack memory must be
reserved in advance, and processes must be written so that they never
use more than that reservation. The reservation should not be too
small, but not too large either, otherwise memory is exhausted too
easily.

So the 8MB default limit for GNU/Linux is very reasonable, in
particular taking into account that a small part of it is
pre-allocated.

-- 
Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)



* Re: segmentation fault with {1..1234567}
  2014-07-06 19:46             ` Bart Schaefer
  2014-07-06 22:23               ` Mikael Magnusson
@ 2014-07-07 19:33               ` Peter Stephenson
  2014-07-08  1:08                 ` Bart Schaefer
  1 sibling, 1 reply; 17+ messages in thread
From: Peter Stephenson @ 2014-07-07 19:33 UTC (permalink / raw)
  To: zsh-workers

On Sun, 06 Jul 2014 12:46:29 -0700
Bart Schaefer <schaefer@brasslantern.com> wrote:
> The question is, do we fix this one instance (your patch), do we redefine
> VARARR() [possibly conditionally] to change all the usages at once, or do
> we individually evaluate the 50-odd uses of VARARR()?

Probably the third option is the right one, but it's the most work.  The
other possibilities are the pragmatic option of only fixing things when
they turn out to be buggy, or changing everything to heap allocation (or
something else known not to be problematic).  The best argument for the
last one is that it removes quite a lot of variant behaviour between
systems --- even if using zhalloc() isn't particularly efficient we can
all see whatever effects there are and are in a better position to fix
them (e.g. with appropriate heap manipulation commands).  For that
reason, I'm vaguely wondering whether trying that out might be a
reasonable start.

pws



* Re: segmentation fault with {1..1234567}
  2014-07-07 19:33               ` Peter Stephenson
@ 2014-07-08  1:08                 ` Bart Schaefer
  2014-07-08 10:38                   ` Peter Stephenson
  0 siblings, 1 reply; 17+ messages in thread
From: Bart Schaefer @ 2014-07-08  1:08 UTC (permalink / raw)
  To: zsh-workers

On Jul 7,  8:33pm, Peter Stephenson wrote:
} Subject: Re: segmentation fault with {1..1234567}
}
} On Sun, 06 Jul 2014 12:46:29 -0700
} Bart Schaefer <schaefer@brasslantern.com> wrote:
} > The question is, do we fix this one instance (your patch), do we redefine
} > VARARR() [possibly conditionally] to change all the usages at once, or do
} > we individually evaluate the 50-odd uses of VARARR()?
} 
} Probably the third option is the right one, but it's the most work.  The
} other possibilities are the pragmatic option of only fixing things when
} they turn out to be buggy, or changing everything to heap allocation (or
} something else known not to be problematic).  The best argument for the
} last one is that it removes quite a lot of variant behaviour between
} systems --- even if using zhalloc() isn't particularly efficient we can
} all see whatever effects there are and are in a better position to fix
} them (e.g. with appropriate heap manipulation commands).  For that
} reason, I'm vaguely wondering if trying out that might not be a
} reasonable start.

Well, that one is easy, as I mentioned earlier; see patch below.  One added
hunk in mem.c for a place where we didn't check the return of malloc, but I
did not attempt to change the usual fatal-error behavior.


diff --git a/Src/mem.c b/Src/mem.c
index a8f0c37..7e0667a 100644
--- a/Src/mem.c
+++ b/Src/mem.c
@@ -950,7 +950,10 @@ zrealloc(void *ptr, size_t size)
 	ptr = NULL;
     } else {
 	/* If ptr is NULL, then behave like malloc */
-	ptr = malloc(size);
+        if (!(ptr = (void *) malloc(size))) {
+            zerr("fatal error: out of memory");
+            exit(1);
+        }
     }
     unqueue_signals();
 
diff --git a/Src/zsh_system.h b/Src/zsh_system.h
index 601de69..811340d 100644
--- a/Src/zsh_system.h
+++ b/Src/zsh_system.h
@@ -286,11 +286,15 @@ struct timezone {
 # include <limits.h>
 #endif
 
+#ifdef USE_STACK_ALLOCATION
 #ifdef HAVE_VARIABLE_LENGTH_ARRAYS
 # define VARARR(X,Y,Z)	X (Y)[Z]
 #else
 # define VARARR(X,Y,Z)	X *(Y) = (X *) alloca(sizeof(X) * (Z))
 #endif
+#else
+# define VARARR(X,Y,Z)	X *(Y) = (X *) zhalloc(sizeof(X) * (Z))
+#endif
 
 /* we should handle unlimited sizes from pathconf(_PC_PATH_MAX) */
 /* but this is too much trouble                                 */
diff --git a/configure.ac b/configure.ac
index 0f87a6c..37f3585 100644
--- a/configure.ac
+++ b/configure.ac
@@ -140,6 +140,16 @@ AC_HELP_STRING([--enable-zsh-hash-debug], [turn on debugging of internal hash ta
   AC_DEFINE(ZSH_HASH_DEBUG)
 fi])
 
+dnl Do you want to dynamically allocate memory on the stack where possible?
+ifdef([stack-allocation],[undefine([stack-allocation])])dnl
+AH_TEMPLATE([USE_STACK_ALLOCATION],
+[Define to 1 if you want to allocate stack memory e.g. with `alloca'.])
+AC_ARG_ENABLE(stack-allocation,
+AC_HELP_STRING([--enable-stack-allocation], [allocate stack memory e.g. with `alloca']),
+[if test x$enableval = xyes; then
+  AC_DEFINE(USE_STACK_ALLOCATION)
+fi])
+
 dnl Pathnames for global zsh scripts
 ifdef([etcdir],[undefine([etcdir])])dnl
 AC_ARG_ENABLE(etcdir,



* Re: segmentation fault with {1..1234567}
  2014-07-08  1:08                 ` Bart Schaefer
@ 2014-07-08 10:38                   ` Peter Stephenson
  2014-07-24  9:44                     ` Peter Stephenson
  0 siblings, 1 reply; 17+ messages in thread
From: Peter Stephenson @ 2014-07-08 10:38 UTC (permalink / raw)
  To: zsh-workers

On Mon, 07 Jul 2014 18:08:06 -0700
Bart Schaefer <schaefer@brasslantern.com> wrote:
> On Jul 7,  8:33pm, Peter Stephenson wrote:
> } Subject: Re: segmentation fault with {1..1234567}
> }
> } On Sun, 06 Jul 2014 12:46:29 -0700
> } Bart Schaefer <schaefer@brasslantern.com> wrote:
> } > The question is ... do we redefine
> } > VARARR() [possibly conditionally] to change all the usages at once ...?
> 
> Well, that one is easy, as I mentioned earlier; see patch below.  One added
> hunk in mem.c for a place where we didn't check the return of malloc, but I
> did not attempt to change the usual fatal-error behavior.

I'm trying this out, but I'm not sure my usage will prod the most
likely problem areas.  I guess that would be things that can
allocate large chunks repeatedly before there's a popheap() or a
freeheap().  We've previously fixed this up for syntactic loops,
i.e. while, for, repeat, but there may be relevant internal loops.

Most if not all of the allocations are individually amenable to turning
into malloc and free if it turns out to be the easiest option following
experimentation.

pws



* Re: segmentation fault with {1..1234567}
  2014-07-08 10:38                   ` Peter Stephenson
@ 2014-07-24  9:44                     ` Peter Stephenson
  2014-07-24 15:35                       ` Bart Schaefer
  0 siblings, 1 reply; 17+ messages in thread
From: Peter Stephenson @ 2014-07-24  9:44 UTC (permalink / raw)
  To: zsh-workers

On Tue, 08 Jul 2014 11:38:42 +0100
Peter Stephenson <p.stephenson@samsung.com> wrote:
> On Mon, 07 Jul 2014 18:08:06 -0700
> Bart Schaefer <schaefer@brasslantern.com> wrote:
> > On Jul 7,  8:33pm, Peter Stephenson wrote:
> > } Subject: Re: segmentation fault with {1..1234567}
> > }
> > } On Sun, 06 Jul 2014 12:46:29 -0700
> > } Bart Schaefer <schaefer@brasslantern.com> wrote:
> > } > The question is ... do we redefine
> > } > VARARR() [possibly conditionally] to change all the usages at once ...?
> > 
> > Well, that one is easy, as I mentioned earlier; see patch below.  One added
> > hunk in mem.c for a place where we didn't check the return of malloc, but I
> > did not attempt to change the usual fatal-error behavior.
> 
> I'm trying this out, but I'm not sure I'm likely to prod the most likely
> problem areas with my usage.

Not seen any problems.  Should we commit it and see what happens?  I
ought to be making a new release before too long.

pws



* Re: segmentation fault with {1..1234567}
  2014-07-24  9:44                     ` Peter Stephenson
@ 2014-07-24 15:35                       ` Bart Schaefer
  0 siblings, 0 replies; 17+ messages in thread
From: Bart Schaefer @ 2014-07-24 15:35 UTC (permalink / raw)
  To: zsh-workers

On Jul 24, 10:44am, Peter Stephenson wrote:
} Subject: Re: segmentation fault with {1..1234567}
}
} Not seen any problems.  Should we commit it and see what happens?  I
} ought to be making a new release before too long.

Sure, I'll commit it.


