On Mon, May 25, 2015 at 06:46:29PM -0400, Rich Felker wrote:
> On Mon, May 25, 2015 at 05:45:12PM -0400, Rich Felker wrote:
> > @@ -74,6 +77,16 @@ void _dlstart_c(size_t *sp, size_t *dynv)
> >  		*rel_addr = (size_t)base + rel[2];
> >  	}
> >
> > +	/* Prepare storage for stage 2 to save clobbered REL
> > +	 * addends so they can be reused in stage 3. There should
> > +	 * be very few. If something goes wrong and there are a
> > +	 * huge number, pass a null pointer to trigger stage 2
> > +	 * to abort instead of risking stack overflow. */
> > +	int too_many_addends = symbolic_rel_cnt > 4096;
> > +	size_t naddends = too_many_addends ? 1 : symbolic_rel_cnt;
> > +	size_t addends[naddends];
> > +	size_t *paddends = too_many_addends ? 0 : addends;
> > +
> >  	const char *strings = (void *)(base + dyn[DT_STRTAB]);
> >  	const Sym *syms = (void *)(base + dyn[DT_SYMTAB]);
>
> This logic could lead to a zero-sized VLA (thus UB); instead, trying:
>
> 	int too_many_addends = symbolic_rel_cnt > 4096;
> 	size_t naddends = too_many_addends ? 0 : symbolic_rel_cnt;
> 	size_t addends[naddends+1];
> 	size_t *paddends = too_many_addends ? 0 : addends;
>
> Avoiding the wasteful +1 would involve more conditionals, so I think
> it's best just to accept it. Alternatively this might be
> simpler/smaller:
>
> 	size_t addends[symbolic_rel_cnt & LIMIT-1 | 1];
> 	size_t *paddends = symbolic_rel_cnt >= LIMIT ? 0 : addends;

Attached is an updated version of the patch with much simpler logic and
the addend buffer moved into stage 2, which is now possible thanks to
commit 768b82c6de24e480267c4c251c440edfc71800e3.

Rich