mailing list of musl libc
* Request for volunteers
@ 2013-06-30  5:52 Rich Felker
  2013-06-30 10:48 ` Szabolcs Nagy
                   ` (5 more replies)
  0 siblings, 6 replies; 30+ messages in thread
From: Rich Felker @ 2013-06-30  5:52 UTC (permalink / raw)
  To: musl

Hi all,

With us nearing musl 1.0, there are a lot of things I could use some
help with. Here are a few specific tasks/roles I'm looking for:

1. Put together a list of relevant conferences one or more of us could
   attend in the next 4-6 months. I'm in the US and travelling outside
   the country would probably be prohibitive unless we also find
   funding, but anything in the US is pretty easy for me to get to,
   and other people involved in the project could perhaps attend
   other conferences outside the US.

2. Organize patches from sabotage, musl-cross, musl pkgsrc, etc. into
   suitable form for upstream, and draft appropriate emails or bug
   reports to send.

3. Check status of musl support with build systems and distros
   possibly adopting it as an option or their main libc. This would
   include OpenWRT, Aboriginal, crosstool-ng, buildroot, Alpine Linux,
   Gentoo, etc. If their people doing the support seem to have gotten
   stuck or need help, offer assistance. Make sure the wiki is kept
   updated with info on other projects using musl so we can send folks
   their way too.

4. Wikimastering. The wiki could use a lot of organizational
   improvement, additional information, and monitoring for outdated
   information that needs to be updated or removed.

5. Rigorous testing. My ideal vision of this role is having somebody
   who takes a look at each bug fix committed and writes test cases
   for the bug and extrapolates tests for possible related bugs that
   haven't yet been found. And who reads the glibc bug tracker so we
   can take advantage of their bug reports too.

Anyone up for volunteering? :-)

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30  5:52 Request for volunteers Rich Felker
@ 2013-06-30 10:48 ` Szabolcs Nagy
  2013-07-01  3:51   ` Rich Felker
  2013-06-30 11:02 ` Daniel Cegiełka
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 30+ messages in thread
From: Szabolcs Nagy @ 2013-06-30 10:48 UTC (permalink / raw)
  To: musl

* Rich Felker <dalias@aerifal.cx> [2013-06-30 01:52:02 -0400]:
> 5. Rigorous testing. My ideal vision of this role is having somebody
>    who takes a look at each bug fix committed and writes test cases
>    for the bug and extrapolates tests for possible related bugs that
>    haven't yet been found. And who reads the glibc bug tracker so we
>    can take advantage of their bug reports too.

i can look into this one


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30  5:52 Request for volunteers Rich Felker
  2013-06-30 10:48 ` Szabolcs Nagy
@ 2013-06-30 11:02 ` Daniel Cegiełka
  2013-06-30 12:13   ` Rich Felker
  2013-06-30 15:25 ` Nathan McSween
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 30+ messages in thread
From: Daniel Cegiełka @ 2013-06-30 11:02 UTC (permalink / raw)
  To: musl

Rich, you missed something:

6. Man pages for musl. We need to describe the functions and
namespaces in header files.

Daniel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30 11:02 ` Daniel Cegiełka
@ 2013-06-30 12:13   ` Rich Felker
  2013-06-30 22:29     ` idunham
  2013-07-01  3:13     ` Rob Landley
  0 siblings, 2 replies; 30+ messages in thread
From: Rich Felker @ 2013-06-30 12:13 UTC (permalink / raw)
  To: musl

On Sun, Jun 30, 2013 at 01:02:03PM +0200, Daniel Cegiełka wrote:
> Rich, you missed something:
> 
> 6. Man pages for musl. We need to describe the functions and
> namespaces in header files.

This is a good topic for discussion. My documentation goal for 1.0 has
been aligned with the earlier docs outline proposal I sent to the list
a while back. Full man pages would be a much bigger task, and it's not
even something a volunteer could do without some major collaboration
with people who have a detailed understanding of every function in
musl. (Sadly, wrong man pages are probably worse than no man pages.)

What might be better for the near future is to get the POSIX man pages
project updated to match POSIX-2008+TC1 so that users of musl who want
man pages for libc functions can install them and have them match the
current version. Separate man pages could then be made for nonstandard
functions or functions that require significant implementation
specific documentation, possibly based on the Linux man pages project,
but with glibc-specific information just removed (for functions that
are predominantly kernel-level) or changed (where documenting musl
semantics matters).

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30  5:52 Request for volunteers Rich Felker
  2013-06-30 10:48 ` Szabolcs Nagy
  2013-06-30 11:02 ` Daniel Cegiełka
@ 2013-06-30 15:25 ` Nathan McSween
  2013-06-30 22:59   ` Luca Barbato
  2013-07-01 11:42 ` LM
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 30+ messages in thread
From: Nathan McSween @ 2013-06-30 15:25 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 574 bytes --]

> 5. Rigorous testing. My ideal vision of this role is having somebody
>    who takes a look at each bug fix committed and writes test cases
>    for the bug and extrapolates tests for possible related bugs that
>    haven't yet been found. And who reads the glibc bug tracker so we
>    can take advantage of their bug reports too.

Such bugs could be found using something like LLVM's KLEE or any other
SMT-based bug finder. IMO conformance tests should be written alongside the
project, otherwise what is currently happening with musl (conformance tests
don't exist or have bugs) happens.

[-- Attachment #2: Type: text/html, Size: 668 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30 12:13   ` Rich Felker
@ 2013-06-30 22:29     ` idunham
  2013-07-01  3:31       ` Rich Felker
  2013-07-01  3:13     ` Rob Landley
  1 sibling, 1 reply; 30+ messages in thread
From: idunham @ 2013-06-30 22:29 UTC (permalink / raw)
  To: musl

> On Sun, Jun 30, 2013 at 01:02:03PM +0200, Daniel Cegiełka wrote:
>> Rich, you missed something:
>>
>> 6. Man pages for musl. We need to describe the functions and
>> namespaces in header files.
>
> This is a good topic for discussion. My documentation goal for 1.0 has
> been aligned with the earlier docs outline proposal I sent to the list
> a while back. Full man pages would be a much bigger task, and it's not
> even something a volunteer could do without some major collaboration
> with people who have a detailed understanding of every function in
> musl. (Sadly, wrong man pages are probably worse than no man pages.)
>
> What might be better for the near future is to get the POSIX man pages
> project updated to match POSIX-2008+TC1 so that users of musl who want

I seem to recall running across the request for this; IIRC, the
maintainer said he'd wait until TOG uploaded an nroff tarball for
POSIX2008.

> man pages for libc functions can install them and have them match the
> current version. Separate man pages could then be made for nonstandard
> functions or functions that require significant implementation
> specific documentation, possibly based on the Linux man pages project,
> but with glibc-specific information just removed (for functions that
> are predominantly kernel-level) or changed (where documenting musl
> semantics matters).
>
> Rich

The linux man pages project may focus on glibc, but they document
differences in other Linux libc versions; it might make sense to try
sending them patches that document musl differences.
Besides avoiding a new project, this would be more convenient for those
targeting multiple systems (eventually, a developer on Debian or RHEL
would see it while reading the man page...).
Of course, this doesn't cover the man pages for BSD-specific functions.






^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30 15:25 ` Nathan McSween
@ 2013-06-30 22:59   ` Luca Barbato
  0 siblings, 0 replies; 30+ messages in thread
From: Luca Barbato @ 2013-06-30 22:59 UTC (permalink / raw)
  To: musl

On 06/30/2013 05:25 PM, Nathan McSween wrote:
>> 5. Rigorous testing. My ideal vision of this role is having somebody
>>    who takes a look at each bug fix committed and writes test cases
>>    for the bug and extrapolates tests for possible related bugs that
>>    haven't yet been found. And who reads the glibc bug tracker so we
>>    can take advantage of their bug reports too.
> 
> Such bugs could be found using something like LLVM's KLEE or any other
> SMT-based bug finder. IMO conformance tests should be written alongside the
> project, otherwise what is currently happening with musl (conformance tests
> don't exist or have bugs) happens.
> 
I suggest sparse[1], it is less known but decent.

[1]https://sparse.wiki.kernel.org/index.php/Main_Page


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30 12:13   ` Rich Felker
  2013-06-30 22:29     ` idunham
@ 2013-07-01  3:13     ` Rob Landley
  2013-07-01  3:43       ` Rich Felker
  1 sibling, 1 reply; 30+ messages in thread
From: Rob Landley @ 2013-07-01  3:13 UTC (permalink / raw)
  To: musl; +Cc: musl

On 06/30/2013 07:13:45 AM, Rich Felker wrote:
> On Sun, Jun 30, 2013 at 01:02:03PM +0200, Daniel Cegiełka wrote:
> > Rich, you missed something:
> >
> > 6. Man pages for musl. We need to describe the functions and
> > namespaces in header files.
> 
> This is a good topic for discussion. My documentation goal for 1.0 has
> been aligned with the earlier docs outline proposal I sent to the list
> a while back. Full man pages would be a much bigger task, and it's not
> even something a volunteer could do without some major collaboration
> with people who have a detailed understanding of every function in
> musl. (Sadly, wrong man pages are probably worse than no man pages.)

Michael Kerrisk does man pages. The best thing to do is feed him  
information about musl-specific stuff. He can probably do some kind of  
inline notation in his (docbook?) masters to make musl versions and  
glibc versions.

Reinventing this wheel would suck.

> What might be better for the near future is to get the POSIX man pages
> project updated to match POSIX-2008+TC1 so that users of musl who want
> man pages for libc functions can install them and have them match the
> current version.

I note that the guy who did the posix man pages ten years ago was:  
Michael Kerrisk.

(Honestly, posix seems to be slipping into some kind of dotage. One of
its driving forces these days is Jorg Schilling. Let that sink in for a  
bit.)

> Separate man pages could then be made for nonstandard
> functions or functions that require significant implementation
> specific documentation, possibly based on the Linux man pages project,
> but with glibc-specific information just removed (for functions that
> are predominantly kernel-level) or changed (where documenting musl
> semantics matters).

Interface with the linux man pages project. They don't have strong  
glibc loyalty, they're just trying to document what people actually use.

Rob


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30 22:29     ` idunham
@ 2013-07-01  3:31       ` Rich Felker
  2013-07-01 17:42         ` Isaac
  0 siblings, 1 reply; 30+ messages in thread
From: Rich Felker @ 2013-07-01  3:31 UTC (permalink / raw)
  To: musl

On Sun, Jun 30, 2013 at 03:29:14PM -0700, idunham@lavabit.com wrote:
> > What might be better for the near future is to get the POSIX man pages
> > project updated to match POSIX-2008+TC1 so that users of musl who want
> 
> I seem to recall running across the request for this; IIRC, the
> maintainer said he'd wait until TOG uploaded an nroff tarball for
> POSIX2008.

This sounds unlikely. My understanding was that the people who
released the "3p" man pages generated them from the published POSIX
spec (Issue 6) with the blessing of the Open Group to license them
acceptably for distribution. I don't think the Open Group gave them
nroff files though...

> The linux man pages project may focus on glibc, but they document
> differences in other Linux libc versions; it might make sense to try
> sending them patches that document musl differences.
> Besides avoiding a new project, this would be more convenient for those
> targeting multiple systems (eventually, a developer on Debian or RHEL
> would see it while reading the man page...).

I agree this would be ideal, but I think it would require convincing
them that musl is noteworthy for inclusion in the man pages (where,
for example, uClibc, dietlibc, klibc, etc. do not seem to be
considered noteworthy). I'm not sure if we would face political
obstacles here, but we could certainly try.

> Of course, this doesn't cover the man pages for BSD-specific functions.

I think there are only a very few BSD functions in musl which are not
also in glibc. Writing man pages for these, or copying and editing the
BSD ones, would not be a huge task.

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-01  3:13     ` Rob Landley
@ 2013-07-01  3:43       ` Rich Felker
  0 siblings, 0 replies; 30+ messages in thread
From: Rich Felker @ 2013-07-01  3:43 UTC (permalink / raw)
  To: musl

On Sun, Jun 30, 2013 at 10:13:23PM -0500, Rob Landley wrote:
> On 06/30/2013 07:13:45 AM, Rich Felker wrote:
> >On Sun, Jun 30, 2013 at 01:02:03PM +0200, Daniel Cegiełka wrote:
> >> Rich, you missed something:
> >>
> >> 6. Man pages for musl. We need to describe the functions and
> >> namespaces in header files.
> >
> >This is a good topic for discussion. My documentation goal for 1.0 has
> >been aligned with the earlier docs outline proposal I sent to the list
> >a while back. Full man pages would be a much bigger task, and it's not
> >even something a volunteer could do without some major collaboration
> >with people who have a detailed understanding of every function in
> >musl. (Sadly, wrong man pages are probably worse than no man pages.)
> 
> Michael Kerrisk does man pages. The best thing to do is feed him
> information about musl-specific stuff. He can probably do some kind
> of inline notation in his (docbook?) masters to make musl versions
> and glibc versions.
> 
> Reinventing this wheel would suck.

Well for the standard functions, I really like the 3p versions better
than the "Linux" versions. The Linux man pages tend to have a lot of
historical cruft -- things like recommending the wrong headers to get
the function, the wrong error codes for certain conditions, etc. --
and I think auditing them for agreement with POSIX and with musl would
be a fairly major task in itself.

> >What might be better for the near future is to get the POSIX man pages
> >project updated to match POSIX-2008+TC1 so that users of musl who want
> >man pages for libc functions can install them and have them match the
> >current version.
> 
> I note that the guy who did the posix man pages ten years ago was:
> Michael Kerrisk.

Maybe someone should contact him about all the stuff we're discussing.

> (Honestly, posix seems to be slipping into some kind of dotage. One

Overall POSIX is going way up in quality, adopting extensions that are
widespread and very useful, and fixing issues that make it impossible
to write correct programs or libraries using certain features. I don't
agree with every single decision made, but that's the way the world
works.

> of its driving forces these days is Jorg Schilling. Let that sink in
> for a bit.)

I haven't seen any abuse by Schilly of his role in the standards
process. The behavior I would call abuse of power (mainly, the way the
C locale issue was treated with knee-jerk reactionary attitudes and
shut-down of rational discussion) has come from others but not him.
I'm not a fan of his fandom of Solaris, but he's not even been pushing
a Solaris agenda as far as I can tell.

Anyway, saying to beware of POSIX because Schilly likes POSIX is like
saying not to eat donuts because Schilly likes donuts...

> >Separate man pages could then be made for nonstandard
> >functions or functions that require significant implementation
> >specific documentation, possibly based on the Linux man pages project,
> >but with glibc-specific information just removed (for functions that
> >are predominantly kernel-level) or changed (where documenting musl
> >semantics matters).
> 
> Interface with the linux man pages project. They don't have strong
> glibc loyalty, they're just trying to document what people actually
> use.

Yes, I think the hardest part will be convincing them that people use
musl, at least on the scale that makes it worth noting and including
in man pages that are installed on every single Linux system. But
hopefully my concern ends up being unfounded. :-)

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30 10:48 ` Szabolcs Nagy
@ 2013-07-01  3:51   ` Rich Felker
  2013-07-01 20:58     ` Szabolcs Nagy
  0 siblings, 1 reply; 30+ messages in thread
From: Rich Felker @ 2013-07-01  3:51 UTC (permalink / raw)
  To: musl

On Sun, Jun 30, 2013 at 12:48:35PM +0200, Szabolcs Nagy wrote:
> * Rich Felker <dalias@aerifal.cx> [2013-06-30 01:52:02 -0400]:
> > 5. Rigorous testing. My ideal vision of this role is having somebody
> >    who takes a look at each bug fix committed and writes test cases
> >    for the bug and extrapolates tests for possible related bugs that
> >    haven't yet been found. And who reads the glibc bug tracker so we
> >    can take advantage of their bug reports too.
> 
> i can look into this one

Excellent. If anybody on the team right now is the perfect person for
this task, it's you; I just wasn't clear on whether you'd have any
hope of having time for it.

If it's borderline whether you would or not, perhaps we could find
someone else to be a "research assistant" for the testing project.
This could involve tasks like:

- Reading the git log and making a list of noteworthy bugs to add
  regression tests for.

- Determining (perhaps via automated coverage tools) major code we
  lack any testing for.

- Cross-checking glibc bug tracker and/or glibc tests for issues that
  should be checked in musl too.

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30  5:52 Request for volunteers Rich Felker
                   ` (2 preceding siblings ...)
  2013-06-30 15:25 ` Nathan McSween
@ 2013-07-01 11:42 ` LM
  2013-07-04 18:05 ` Rob Landley
  2013-07-08 14:12 ` Anthony G. Basile
  5 siblings, 0 replies; 30+ messages in thread
From: LM @ 2013-07-01 11:42 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 1299 bytes --]

On Sun, Jun 30, 2013 at 1:52 AM, Rich Felker <dalias@aerifal.cx> wrote:

>
> 1. Put together a list of relevant conferences one or more of us could
>    attend in the next 4-6 months. I'm in the US and travelling outside
>    the country would probably be prohibitive unless we also find
>    funding, but anything in the US is pretty easy for me to get to,
>    and other people involved in the project could perhaps attend
>    other conferences outside the US.
>

Software Freedom Day is coming up.  If list members could mention it at
their local Software Freedom Day events that might help get the word out.
When I mentioned musl on my local Linux Users Group's mailing list, the
reaction I got was "wasn't that something that was used with embedded
systems".  Might help to let people know about some of the Linux
distributions out there that can be run with musl.  Ubuntu gave out free
DVDs of their distribution for various Software Freedom Day events.  Maybe
one of the musl based distributions could come up with an ISO for Software
Freedom Day and let the people involved with the event know about it.

Also think it would be useful to make sure musl is listed in the various
Open Source software listing web sites like http://alternativeto.net/ and
http://ostatic.com.

Sincerely,
Laura

[-- Attachment #2: Type: text/html, Size: 1678 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-01  3:31       ` Rich Felker
@ 2013-07-01 17:42         ` Isaac
  2013-07-01 17:46           ` Alex Caudill
  2013-07-01 21:12           ` Isaac
  0 siblings, 2 replies; 30+ messages in thread
From: Isaac @ 2013-07-01 17:42 UTC (permalink / raw)
  To: musl

On Sun, Jun 30, 2013 at 11:31:13PM -0400, Rich Felker wrote:
> On Sun, Jun 30, 2013 at 03:29:14PM -0700, idunham@lavabit.com wrote:
> > > What might be better for the near future is to get the POSIX man pages
> > > project updated to match POSIX-2008+TC1 so that users of musl who want
> > 
> > I seem to recall running across the request for this; IIRC, the
> > maintainer said he'd wait until TOG uploaded an nroff tarball for
> > POSIX2008.
> 
> This sounds unlikely. My understanding was that the people who
> released the "3p" man pages generated them from the published POSIX
> spec (Issue 6) with the blessing of the Open Group to license them
> acceptably for distribution. I don't think the Open Group gave them
> nroff files though...

OK, it was the Debian maintainer, and I'm not sure exactly 
what he meant:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=622918

I think it should be simple to convert the text to a man page once it's
in plain text format, and plan to write a shell script to do that for 
my own use shortly; I could provide the script to others, though I'm 
not sure about distributing the output myself.

Isaac Dunham



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-01 17:42         ` Isaac
@ 2013-07-01 17:46           ` Alex Caudill
  2013-07-01 21:12           ` Isaac
  1 sibling, 0 replies; 30+ messages in thread
From: Alex Caudill @ 2013-07-01 17:46 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 1500 bytes --]

I don't have a ton of free time but I'd be happy to help manage wiki
content and maybe help out a bit with build system testing.

If anyone wants to delegate some tasks in these areas, feel free to mail me
directly.

Thanks!


On Mon, Jul 1, 2013 at 12:42 PM, Isaac <idunham@lavabit.com> wrote:

> On Sun, Jun 30, 2013 at 11:31:13PM -0400, Rich Felker wrote:
> > On Sun, Jun 30, 2013 at 03:29:14PM -0700, idunham@lavabit.com wrote:
> > > > What might be better for the near future is to get the POSIX man
> pages
> > > > project updated to match POSIX-2008+TC1 so that users of musl who
> want
> > >
> > > I seem to recall running across the request for this; IIRC, the
> > > maintainer said he'd wait until TOG uploaded an nroff tarball for
> > > POSIX2008.
> >
> > This sounds unlikely. My understanding was that the people who
> > released the "3p" man pages generated them from the published POSIX
> > spec (Issue 6) with the blessing of the Open Group to license them
> > acceptably for distribution. I don't think the Open Group gave them
> > nroff files though...
>
> OK, it was the Debian maintainer, and I'm not sure exactly
> what he meant:
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=622918
>
> I think it should be simple to convert the text to a man page once it's
> in plain text format, and plan to write a shell script to do that for
> my own use shortly; I could provide the script to others, though I'm
> not sure about distributing the output myself.
>
> Isaac Dunham
>
>

[-- Attachment #2: Type: text/html, Size: 2197 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-01  3:51   ` Rich Felker
@ 2013-07-01 20:58     ` Szabolcs Nagy
  2013-07-01 23:59       ` Rich Felker
  0 siblings, 1 reply; 30+ messages in thread
From: Szabolcs Nagy @ 2013-07-01 20:58 UTC (permalink / raw)
  To: musl

* Rich Felker <dalias@aerifal.cx> [2013-06-30 23:51:17 -0400]:
> This could involve tasks like:
> 
> - Reading the git log and making a list of noteworthy bugs to add
>   regression tests for.
> 
> - Determining (perhaps via automated coverage tools) major code we
>   lack any testing for.
> 
> - Cross-checking glibc bug tracker and/or glibc tests for issues that
>   should be checked in musl too.


i was thinking about this and a few category of tests:

functional:
	black box testing of libc interfaces
	(eg input-output test vectors)
	tries to achieve good coverage
regression:
	tests for bugs we found
	sometimes the bugs are musl specific or arch specific so
	i think these are worth keeping separately from the general
	functional tests (eg same repo but different dir)
static:
	testing without executing code
	eg check the symbols and types in headers
	(i have tests that cover all posix headers only using cc)
	maybe the binaries can be checked in some way as well
static-src:
	code analysis can be done on the source of musl
	(eg sparse, cppcheck, clang-analyzer were tried earlier)
	(this is different from the other tests and probably should
	be set up separately)
metrics:
	benchmarks and quality of implementation metrics
	(eg performance, memory usage the usual checks but even
	ulp error measurements may be in this category)

other tools:
	a coverage tool would be useful, i'm not sure if anyone
	set up one with musl yet (it's not just good for testing,
	but also for determining what interfaces are actually in
	use in the libc)

	clever fuzzer tools would be nice as well, but i don't know
	anything that can be used directly on a library (maybe with
	small effort they can be used to get better coverage)
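
to make the 'static' header checks above concrete, such a compile-only
test can be as small as this (made-up sample, not my actual test code;
compiling it with cc -c -Wall -Werror is the whole test, it never runs):

#include <string.h>

/* check that the symbols are declared with the expected prototypes
   by assigning them to pointers of those types */
static size_t (*p_strlen)(const char *) = strlen;
static void *(*p_memcpy)(void *, const void *, size_t) = memcpy;
static char *(*p_strstr)(const char *, const char *) = strstr;

/* referenced here so unused-variable warnings stay quiet */
void *static_test_refs[] = { &p_strlen, &p_memcpy, &p_strstr };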

as a first step i guess we need to start with the functional
and regression tests

design goals of the test system:
- tests should be easy to run even a single test in isolation
(so test should be self contained if possible)
- output is a report, failure cause should be clear
- the system should not have external dependencies
(other than libc, posix sh, gnu make: so tests are in .c files with
simple buildsystem or .sh wrapper)
- the failure of one test should not interfere with other tests
(so tests should be in separate .c files each with main() and
narrow scope, otherwise a build failure can affect a lot of tests)
- the test system should run on all archs
(so arch specific and implementation defined things should be treated
carefully)
- the test results should be robust
(failures are always reported, deterministically if possible)
- tests should leave the system in clean state
(or easily cleanable state)

some difficulties:
- some "test framework" functionality would be nice, but can be
problematic: eg using nice error reporting function on top of stdio
may cause loss of info because of buffering in case of a crash
- special compiler or linker flags (can be maintained in makefile
or in the .c files as comments)
- tests may require special environment, filesystem access, etc
i'm not sure what's the best way to manage that
(and some tests may need two different uid or other capabilities)
- i looked at the bug history and many bugs are in hard to
trigger cornercases (eg various races) or internally invoke ub
in a way that may be hard to verify in a robust way
- some tests may need significant support code to achieve good
coverage (printf, math, string handling close to 2G,..)
(in such cases we can go with simple self-contained tests without
much coverage, but easy maintenance, or with something
sophisticated)

does that sound right?

i think i can reorganize my libc tests to be more "scalable"
in these directions..


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-01 17:42         ` Isaac
  2013-07-01 17:46           ` Alex Caudill
@ 2013-07-01 21:12           ` Isaac
  1 sibling, 0 replies; 30+ messages in thread
From: Isaac @ 2013-07-01 21:12 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 921 bytes --]

On Mon, Jul 01, 2013 at 10:42:45AM -0700, Isaac wrote:
> 
> I think it should be simple to convert the text to a man page once it's
> in plain text format, and plan to write a shell script to do that for 
> my own use shortly; I could provide the script to others, though I'm 
> not sure about distributing the output myself.

I have a shell script using mksh that does part of the work.
Currently, the usage would be like this:

for m in *.html
  do
   lynx -dump $m | posix2nroff $m > `basename $m .html`.3posix
  done
(I intend to change it to call lynx and add the extension, plus 
symlink where the HTML pages do so.)

IF YOU CHANGE THE SHELL, YOU WILL NEED TO DEBUG IT!
Quotes and escaping can vary subtly in ways that break the output (I know 
because I tested...); additionally, bash resulted in several places where
the output of printf was something like "N^HNA^HAM^HME^HE" (not sure why).


HTH,
Isaac Dunham

[-- Attachment #2: posix2nroff --]
[-- Type: text/plain, Size: 1255 bytes --]

#!/bin/mksh

#use read -r throughout; '\' cannot break this.
#discard <n>: advance position of read by n lines
discard(){
for i in `seq $1`
  do read -r header
  done
}

#use in pipes to delete leading whitespace
chompws(){
sed -e 's/^[[:space:]]*//'
}


mainparser(){
while read -r line; do
  case "$line" in
   ( NAME | SYNOPSIS | DESCRIPTION | RETURN*VALUE | ERRORS | EXAMPLES | APPLICATION*USAGE | RATIONALE | FUTURE*DIRECTIONS | CHANGE*HISTORY )
     printf ".SH $line\n.LP\n"
     ;;
   (SEE*ALSO)
     printf ".SH $line\n.LP\n"
     ;;
    ('['*']')
     echo '.TP 7'
     echo -n '.B '
     echo "$line" |tr -d [:punct:]
     ;;
    (*#include*)
     printf "$line\n" |sed -e 's/.*#include/#include/'
     ;;
    (----*)
     ;;
    (The*following*sections*informative*)
     echo -n '\\fI'
     echo -n "$line"
     echo '\\fP'
     ;;
    (End*informative*)
     break
     ;;
    (*)
     echo "$line"
     ;;
  esac
done
}
nameof(){
echo `basename $1 .html`
}

discard 4
for line in 1 2 3
  do read -r line
  echo -n '.\"'
  printf " $line\n"
  done

echo -n '.TH '`nameof $1 |tr [a-z] [A-Z]`' P 2013 "IEEE/The Open Group" "POSIX Programmers Manual"'

discard 2

mainparser
discard 11
echo '.SH COPYRIGHT'
echo '.LP'
read -r line
echo "$line"

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-01 20:58     ` Szabolcs Nagy
@ 2013-07-01 23:59       ` Rich Felker
  2013-07-02  2:19         ` Szabolcs Nagy
  0 siblings, 1 reply; 30+ messages in thread
From: Rich Felker @ 2013-07-01 23:59 UTC (permalink / raw)
  To: musl

On Mon, Jul 01, 2013 at 10:58:57PM +0200, Szabolcs Nagy wrote:
> i was thinking about this and a few category of tests:
> 
> functional:
> 	black box testing of libc interfaces
> 	(eg input-output test vectors)
> 	tries to achieve good coverage
> regression:
> 	tests for bugs we found
> 	sometimes the bugs are musl specific or arch specific so
> 	i think these are worth keeping separately from the general
> 	functional tests (eg same repo but different dir)
> static:
> 	testing without executing code
> 	eg check the symbols and types in headers
> 	(i have tests that cover all posix headers only using cc)
> 	maybe the binaries can be checked in some way as well
> static-src:
> 	code analysis can be done on the source of musl
> 	(eg sparse, cppcheck, clang-analyzer were tried earlier)
> 	(this is different from the other tests and probably should
> 	be set up separately)
> metrics:
> 	benchmarks and quality of implementation metrics
> 	(eg performance, memory usage the usual checks but even
> 	ulp error measurements may be in this category)

One thing we could definitely measure here is "given a program that
just uses interface X, how much crap gets pulled in when static
linking?" I think this could easily be automated to cover tons of
interfaces, but in the interest of signal-to-noise ratio, I think we
would want to manually select interesting interfaces to have it
performed on.

> other tools:
> 	a coverage tool would be useful, i'm not sure if anyone
> 	set up one with musl yet (it's not just good for testing,
> 	but also for determining what interfaces are actually in
> 	use in the libc)

Yes. Coverage from real-world apps can also tell us which interfaces
need tests.

> 	clever fuzzer tools would be nice as well, but i don't know
> 	anything that can be used directly on a library (maybe with
> 	small effort they can be used to get better coverage)

Yes, automatically generating meaningful inputs that meet the
interface contracts is non-trivial.

> as a first step i guess we need to start with the functional
> and regression tests

Agreed, these are the highest priority.

> design goals of the test system:
> - tests should be easy to run even a single test in isolation
> (so test should be self contained if possible)

Agreed. This is particularly important when trying to fix something
that broke a test, or when testing a new port (since it may be hard to
get the port working enough to test anything if failure of one test
prevents seeing the results of others).

> - output is a report, failure cause should be clear

This would be really nice.

> - the system should not have external dependencies
> (other than libc, posix sh, gnu make: so tests are in .c files with
> simple buildsystem or .sh wrapper)

Agreed.

> - the failure of one test should not interfere with other tests
> (so tests should be in separate .c files each with main() and
> narrow scope, otherwise a build failure can affect a lot of tests)

How do you delineate what constitutes a single test? For example, we
have hundreds of test cases for scanf, and it seems silly for each
input/format combination to be a separate .c file. On the other hand,
my current scanf tests automatically test both byte and wide versions
of both string and stdio versions of scanf; it may be desirable in
principle to separate these into 4 separate files.

My naive feeling would be that deciding "how much can go in one test"
is not a simple rule we can follow, but requires considering what's
being tested, how "low-level" it is, and whether the expected failures
might interfere with other tests. For instance a test that's looking
for out-of-bounds accesses would not be a candidate for doing a lot in
a single test file, but a test that's merely looking for correct
parsing could possibly get away with testing lots of assertions in a
single file.

> - the test system should run on all archs
> (so arch specific and implementation defined things should be treated
> carefully)

It should also run on all libcs, I think, with tests for unsupported
functionality possibly failing at build time.

> - the test results should be robust
> (failures are always reported, deterministically if possible)

I would merely add that this is part of the requirement of minimal
dependency. For example, if you have a fancy test framework that uses
stdio and malloc all over the place in the same process as the test,
it's pretty hard to test stdio and malloc robustly...

> - tests should leave the system in clean state
> (or easily cleanable state)

Yes, I think this mainly pertains to temp files, named POSIX IPC or
XSI IPC objects, etc.

> 
> some difficulties:
> - some "test framework" functionality would be nice, but can be
> problematic: eg using nice error reporting function on top of stdio
> may cause loss of info because of buffering in case of a crash

I think any fancy "framework" stuff could be purely in the controlling
and reporting layer, outside the address space of the actual tests. We
may however need a good way for the test to communicate its results to
the framework...

> - special compiler or linker flags (can be maintained in makefile
> or in the .c files as comments)

One thing that comes to mind where tests may need a lot of "build
system" help is testing the dynamic linker.

> - tests may require special environment, filesystem access, etc
> i'm not sure what's the best way to manage that
> (and some tests may need two different uid or other capabilities)

If possible, I think we should make such tests use Linux containers,
so that they can be tested without elevated privileges. I'm not
experienced with containers, but my feeling is that this is the only
reasonable way to get a controlled environment for tests that need
that sort of thing, without having the user/admin do a lot of sketchy
stuff to prepare for a test.

Fortunately these are probably low-priority and could be deferred
until later.

> - i looked at the bug history and many bugs are in hard to
> trigger cornercases (eg various races) or internally invoke ub
> in a way that may be hard to verify in a robust way

Test cases for race conditions make one of the most interesting types
of test writing. :-) The main key is that you need to have around a
copy of the buggy version to test against. Such tests would not have
FAILED or PASSED as possible results, but rather FAILED, or FAILED TO
FAIL. :-)

> - some tests may need significant support code to achieve good
> coverage (printf, math, string handling close to 2G,..)
> (in such cases we can go with simple self-contained tests without
> much coverage, but easy maintenance, or with something
> sophisticated)

I don't follow.

> does that sound right?
> 
> i think i can reorganize my libc tests to be more "scalable"
> in these directions..

:)

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-01 23:59       ` Rich Felker
@ 2013-07-02  2:19         ` Szabolcs Nagy
  2013-07-02  7:49           ` Rich Felker
  0 siblings, 1 reply; 30+ messages in thread
From: Szabolcs Nagy @ 2013-07-02  2:19 UTC (permalink / raw)
  To: musl

* Rich Felker <dalias@aerifal.cx> [2013-07-01 19:59:55 -0400]:
> On Mon, Jul 01, 2013 at 10:58:57PM +0200, Szabolcs Nagy wrote:
> > - the failure of one test should not interfere with other tests
> > (so tests should be in separate .c files each with main() and
> > narrow scope, otherwise a build failure can affect a lot of tests)
> 
> How do you delineate what constitutes a single test? For example, we
> have hundreds of test cases for scanf, and it seems silly for each
> input/format combination to be a separate .c file. On the other hand,
> my current scanf tests automatically test both byte and wide versions
> of both string and stdio versions of scanf; it may be desirable in
> principle to separate these into 4 separate files.
> 
> My naive feeling would be that deciding "how much can go in one test"
> is not a simple rule we can follow, but requires considering what's
> being tested, how "low-level" it is, and whether the expected failures
> might interfere with other tests. For instance a test that's looking
> for out-of-bounds accesses would not be a candidate for doing a lot in
> a single test file, but a test that's merely looking for correct
> parsing could possibly get away with testing lots of assertions in a
> single file.

yes the boundary is not clear, but eg the current pthread
test does too many kinds of things in one file

if the 'hundreds of test cases' can be represented as
a simple array of test vectors then that should go into
one file

if many functions want to use the same test vectors then
at some point it's worth moving the vectors out to a
header file and write separate tests for the different
functions
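
for scanf that could look roughly like this (the names and vectors
here are only illustrative):

/* scanf_vec.h: vectors shared by the sscanf/swscanf/fscanf/... tests */
struct scanf_vec { const char *fmt, *input; int want_ret, want_val; };
static const struct scanf_vec scanf_vecs[] = {
	{ "%d", "42",    1, 42 },
	{ "%d", "  -7x", 1, -7 },
	{ "%i", "0x10",  1, 16 },
	{ "%d", "x",     0, 0 },
};

/* sscanf_vec.c: a self-contained test exercising only sscanf */
#include <stdio.h>
#include "scanf_vec.h"

int main(void)
{
	int err = 0;
	for (unsigned i = 0; i < sizeof scanf_vecs/sizeof *scanf_vecs; i++) {
		const struct scanf_vec *v = scanf_vecs + i;
		int x = 0, r = sscanf(v->input, v->fmt, &x);
		if (r != v->want_ret || (r == 1 && x != v->want_val)) {
			printf("sscanf(\"%s\", \"%s\"): ret %d val %d, want %d %d\n",
				v->input, v->fmt, r, x, v->want_ret, v->want_val);
			err++;
		}
	}
	return !!err;
}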

> > some difficulties:
> > - some "test framework" functionality would be nice, but can be
> > problematic: eg using nice error reporting function on top of stdio
> > may cause loss of info because of buffering in case of a crash
> 
> I think any fancy "framework" stuff could be purely in the controlling
> and reporting layer, outside the address space of the actual tests. We
> may however need a good way for the test to communicate its results to
> the framework...
> 

the simple approach is to make each test a standalone process that
exits with 0 on success

in the failure case it can use dprintf to print error messages to
stdout and the test system collects the exit status and the messages
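
eg a whole test can be as small as this (the t_error helper here is
made up for illustration, not an existing framework):

#define _POSIX_C_SOURCE 200809L /* for dprintf */
#include <stdio.h>
#include <string.h>

static int t_status;

#define t_error(...) (t_status = 1, dprintf(1, __VA_ARGS__))

int main(void)
{
	char buf[8] = "abc";
	if (strlen(buf) != 3)
		t_error("strlen(\"abc\") = %zu, want 3\n", strlen(buf));
	memset(buf, 'x', 4);
	if (memcmp(buf, "xxxx", 4))
		t_error("memset failed to fill 4 bytes\n");
	return t_status; /* 0 on success, collected by the test runner */
}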

> > - special compiler or linker flags (can be maintained in makefile
> > or in the .c files as comments)
> 
> One thing that comes to mind where tests may need a lot of "build
> system" help is testing the dynamic linker.

yes

and we need to compile with -lpthread -lm -lrt -l...
if the tests should work on other libcs

my current solution is using wildcard rules for building
*_dso.c into .so and *.c into executables and then
add extra rules and target specific make variables:

foo: LDFLAGS+=-ldl -rdynamic
foo: foo_dso.so

the other solution i've seen is to put all the build commands
into the .c file as comments:

//RUN cc -c -o $name.o $name.c
//RUN cc -o $name $name.o
...

and use simple shell scripts as the build system
(dependencies are harder to track this way, but the tests
are more self-contained)

> > - tests may require special environment, filesystem access, etc
> > i'm not sure what's the best way to manage that
> > (and some tests may need two different uid or other capabilities)
> 
> If possible, I think we should make such tests use Linux containers,
> so that they can be tested without elevated privileges. I'm not
> experienced with containers, but my feeling is that this is the only
> reasonable way to get a controlled environment for tests that need
> that sort of thing, without having the user/admin do a lot of sketchy
> stuff to prepare for a test.
> 
> Fortunately these are probably low-priority and could be deferred
> until later.

ok, skip these for now

> > - i looked at the bug history and many bugs are in hard to
> > trigger cornercases (eg various races) or internally invoke ub
> > in a way that may be hard to verify in a robust way
> 
> Test cases for race conditions make one of the most interesting types
> of test writing. :-) The main key is that you need to have around a
> copy of the buggy version to test against. Such tests would not have
> FAILED or PASSED as possible results, but rather FAILED, or FAILED TO
> FAIL. :-)

hm we can introduce a third result for tests that try to trigger
some bug but are not guaranteed to do so
(eg failed,passed,inconclusive)
but probably that's more confusing than useful

> > - some tests may need significant support code to achieve good
> > coverage (printf, math, string handling close to 2G,..)
> > (in such cases we can go with simple self-contained tests without
> > much coverage, but easy maintainance, or with something
> > sophisticated)
> 
> I don't follow.

i mean for many small functions there is not much difference between
a simple sanity check and full coverage (eg basename can be thoroughly
tested by about 10 input-output pairs)
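
such a test is basically just a small table, roughly (the exact pairs
are only illustrative):

#include <libgen.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
	static const struct { const char *in, *want; } t[] = {
		{ "/usr/lib", "lib" }, { "/usr/", "usr" }, { "usr", "usr" },
		{ "/", "/" }, { "///", "/" }, { ".", "." }, { "..", ".." },
	};
	int err = 0;
	for (unsigned i = 0; i < sizeof t/sizeof *t; i++) {
		char tmp[32];
		strcpy(tmp, t[i].in); /* basename may modify its argument */
		char *got = basename(tmp);
		if (strcmp(got, t[i].want)) {
			printf("basename(\"%s\") = \"%s\", want \"%s\"\n",
				t[i].in, got, t[i].want);
			err++;
		}
	}
	return !!err;
}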

but there can be a huge difference: eg detailed testing of getaddrinfo
requires non-trivial setup with dns server etc, it's much easier to do
some sanity checks like gnulib would do, or a different example is
rand: a real test would be like the diehard test suite while the sanity
check is trivial

so i'm not sure how much engineering should go into the tests:
go for a small maintainable set that touch as many areas in libc
as possible, or go for extensive coverage and develop various tools
and libs that help setting up the environment or generate large set
of test cases (eg my current math tests are closer to this latter one)

if the goal is to execute the test suite as a post-commit hook
then there should be a reasonable limit on resource usage, build and
execution time etc and this limit affects how the code may be
organized, how errors are reported..
(most test systems i've seen are for simple unit tests: they allow
checking a few constraints and then report errors in a nice way,
however in case of libc i'd assume that you want to enumerate the
weird corner-cases to find bugs more effectively)


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-02  2:19         ` Szabolcs Nagy
@ 2013-07-02  7:49           ` Rich Felker
  2013-07-16 16:20             ` Szabolcs Nagy
  0 siblings, 1 reply; 30+ messages in thread
From: Rich Felker @ 2013-07-02  7:49 UTC (permalink / raw)
  To: musl

On Tue, Jul 02, 2013 at 04:19:37AM +0200, Szabolcs Nagy wrote:
> > My naive feeling would be that deciding "how much can go in one test"
> > is not a simple rule we can follow, but requires considering what's
> > being tested, how "low-level" it is, and whether the expected failures
> > might interfere with other tests. For instance a test that's looking
> > for out-of-bounds accesses would not be a candidate for doing a lot in
> > a single test file, but a test that's merely looking for correct
> > parsing could possibly get away with testing lots of assertions in a
> > single file.
> 
> yes the boundary is not clear, but eg the current pthread
> test does too many kinds of things in one file

I agree completely. By the way, your mentioning pthread tests reminds
me that we need a reliable way to fail tests that have deadlocked or
otherwise hung. The standard "let it run for N seconds then kill it"
approach is rather uncivilized. I wonder if we could come up with a
nice way with a mix of realtime and cputime timers to observe complete
lack of forward progress.
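
Something along these lines, just to sketch the idea (the 5 second
window is arbitrary, and a test legitimately sleeping or blocked in a
syscall would also trip it):

#define _POSIX_C_SOURCE 200809L
#include <sys/wait.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define WINDOW 5 /* seconds of real time with no cpu-time progress */

int main(int argc, char *argv[])
{
	if (argc < 2) return 1;
	pid_t pid = fork();
	if (pid == 0) {
		execv(argv[1], argv+1);
		_exit(127);
	}
	clockid_t cid;
	int status, idle = 0;
	if (pid < 0 || clock_getcpuclockid(pid, &cid)) {
		waitpid(pid, &status, 0);
		return 1;
	}
	struct timespec prev = {0}, cur;
	for (;;) {
		if (waitpid(pid, &status, WNOHANG) == pid)
			return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
		sleep(1);
		if (clock_gettime(cid, &cur)) break; /* child gone, reap below */
		if (cur.tv_sec == prev.tv_sec && cur.tv_nsec == prev.tv_nsec) {
			if (++idle >= WINDOW) {
				printf("HUNG: no cpu progress in %d s\n", WINDOW);
				kill(pid, SIGKILL);
				waitpid(pid, &status, 0);
				return 1;
			}
		} else idle = 0;
		prev = cur;
	}
	waitpid(pid, &status, 0);
	return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}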

> if the 'hundreds of test cases' can be represented as
> a simple array of test vectors then that should go into
> one file
> 
> if many functions want to use the same test vectors then
> at some point it's worth moving the vectors out to a
> header file and write separate tests for the different
> functions

Indeed, that is probably the way I should have factored my scanf
tests, but there is something to be said for getting the 4 errors for
the 4 functions with the same vector collated together in the output.

> > I think any fancy "framework" stuff could be purely in the controlling
> > and reporting layer, outside the address space of the actual tests. We
> > may however need a good way for the test to communicate its results to
> > the framework...
> 
> the simple approach is to make each test a standalone process that
> exits with 0 on success
> 
> in the failure case it can use dprintf to print error messages to
> stdout and the test system collects the exit status and the messages

Agreed. And if any test is trying to avoid stdio entirely, it can use
write() directly to generate the output.

> > One thing that comes to mind where tests may need a lot of "build
> > system" help is testing the dynamic linker.
> 
> yes
> 
> and we need to compile with -lpthread -lm -lrt -l...
> if the tests should work on other libcs

Indeed. If we were testing other libcs, we might even want to run some
non-multithreaded tests with and without -lpthread in case the
override symbols in libpthread break something. Of course that can't
happen for musl; the equivalent test for musl would be static linking
and including or excluding references to certain otherwise-irrelevant
functions that might affect which version of another function gets
linked.

BTW, to use or not to use -static is also a big place we need build
system help.

> my current solution is using wildcard rules for building
> *_dso.c into .so and *.c into executables and then
> add extra rules and target specific make variables:
> 
> foo: LDFLAGS+=-ldl -rdynamic
> foo: foo_dso.so

I wasn't aware of this makefile trick to customize flags for
different files. This could be very useful for customized optimization
levels in musl:

  ifdef OPTIMIZE
  $(OPTIMIZE_OBJS) $(OPTIMIZE_OBJS:%.o=%.lo): CFLAGS+=-O3
  endif

> the other solution i've seen is to put all the build commands
> into the .c file as comments:
> 
> //RUN cc -c -o $name.o $name.c
> //RUN cc -o $name $name.o
> ....
> 
> and use simple shell scripts as the build system
> (dependencies are harder to track this way, but the tests
> are more self-contained)

What about a mix? Have the makefile include another makefile fragment
with a rule to generate that fragment, where the fragment is generated
from comments in the source files. Then you have full dependency
tracking via make, and self-contained tests.

> > > - i looked at the bug history and many bugs are in hard to
> > > trigger cornercases (eg various races) or internally invoke ub
> > > in a way that may be hard to verify in a robust way
> > 
> > Test cases for race conditions make one of the most interesting types
> > of test writing. :-) The main key is that you need to have around a
> > copy of the buggy version to test against. Such tests would not have
> > FAILED or PASSED as possible results, but rather FAILED, or FAILED TO
> > FAIL. :-)
> 
> hm we can introduce a third result for tests that try to trigger
> some bug but are not guaranteed to do so
> (eg failed,passed,inconclusive)
> but probably that's more confusing than useful

Are you aware of any such cases?

> > > - some tests may need significant support code to achieve good
> > > coverage (printf, math, string handling close to 2G,..)
> > > (in such cases we can go with simple self-contained tests without
> > much coverage, but easy maintenance, or with something
> > > sophisticated)
> > 
> > I don't follow.
> 
> i mean for many small functions there is not much difference between
> a simple sanity check and full coverage (eg basename can be thoroughly
> tested by about 10 input-output pairs)
> 
> but there can be a huge difference: eg detailed testing of getaddrinfo
> requires non-trivial setup with dns server etc, it's much easier to do
> some sanity checks like gnulib would do, or a different example is
> rand: a real test would be like the diehard test suite while the sanity
> check is trivial

By the way, getaddrinfo (the dns resolver core of it) had a nasty bug
at one point in the past that randomly smashed the stack based on the
timing of dns responses. This would be a particularly hard thing to
test, but if we do eventually want to have regression tests for
timing-based bugs, it might make sense to use debuglib
(https://github.com/rofl0r/debuglib) and set breakpoints at key
functions to control the timing.

> so i'm not sure how much engineering should go into the tests:
> go for a small maintainable set that touch as many areas in libc
> as possible, or go for extensive coverage and develop various tools
> and libs that help setting up the environment or generate large set
> of test cases (eg my current math tests are closer to this latter one)

I think something in between is what we should aim for, tuned for
where we expect to find bugs that matter. For functions like printf,
scanf, strtol, etc. that have a lot of complex logic and exact
behavior they must deliver or risk introducing serious application
bugs, high coverage is critical to delivering a libc we can be
confident in. But for other functions, a simple sanity check migh
suffice. Sanity checks are very useful for new ports, since failure to
pass can quickly show that we have syscall conventions or struct
definitions wrong, alignment bugs, bad asm, etc. They would also
probably have caught the recent embarrassing mbsrtowcs bug I fixed.

Here are some exhaustive tests we could easily perform:

- rand_r: period and bias
- all multibyte to wide operations: each valid UTF-8 character and
  each invalid prefix. for functions that behave differently based on
  whether output pointer is null, testing both ways.
- all wide to multibyte functions: each valid and invalid wchar_t.
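
For the multibyte/wide round trip, the shape would be roughly this
(assuming a UTF-8 locale is available; the locale names and the exit
codes here are arbitrary):

#include <wchar.h>
#include <locale.h>
#include <stdio.h>

int main(void)
{
	if (!setlocale(LC_CTYPE, "C.UTF-8") && !setlocale(LC_CTYPE, "en_US.UTF-8"))
		return 77; /* "skipped": no UTF-8 locale to test against */
	int err = 0;
	for (unsigned long c = 1; c <= 0x10FFFF; c++) {
		char buf[8];
		wchar_t w = 0;
		mbstate_t st = {0}, st2 = {0};
		int bad = c >= 0xD800 && c <= 0xDFFF; /* surrogates are not characters */
		size_t n = wcrtomb(buf, (wchar_t)c, &st);
		if ((n == (size_t)-1) != bad) {
			printf("wcrtomb wrongly %s %#lx\n", bad ? "accepted" : "rejected", c);
			err++;
		} else if (!bad && (mbrtowc(&w, buf, n, &st2) != n || (unsigned long)w != c)) {
			printf("round trip failed for %#lx\n", c);
			err++;
		}
	}
	return !!err;
}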

And some functions that would probably be fine with just sanity
checks:

- dirent interfaces
- network address conversions
- basename/dirname
- signal operations
- search interfaces

And things that can't be tested exhaustively but which I would think
need serious tests:

- stdio (various combinations of buffering, use of unget buffer, scanf
  pushback, seeking, file position, flushing, switching
  reading/writing, eof and error flags, ...)
- AIO (it will probably fail now tho)
- threads (synchronization primitives, cancellation, TSD dtors, ...)
- regex (sanity-check all features, longest-match rule, ...)
- fnmatch, glob, and wordexp
- string functions

> if the goal is to execute the test suite as a post-commit hook

I think that's a little too resource-heavy for a full test, but
perhaps reasonable for a subset of tests.

> then there should be a reasonable limit on resource usage, build and
> execution time etc and this limit affects how the code may be
> organized, how errors are reported..
> (most test systems i've seen are for simple unit tests: they allow
> checking a few constraints and then report errors in a nice way,
> however in case of libc i'd assume that you want to enumerate the
> weird corner-cases to find bugs more effectively)

Yes, I think so.

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30  5:52 Request for volunteers Rich Felker
                   ` (3 preceding siblings ...)
  2013-07-01 11:42 ` LM
@ 2013-07-04 18:05 ` Rob Landley
  2013-07-08  7:40   ` Szabolcs Nagy
  2013-07-08 14:45   ` Rich Felker
  2013-07-08 14:12 ` Anthony G. Basile
  5 siblings, 2 replies; 30+ messages in thread
From: Rob Landley @ 2013-07-04 18:05 UTC (permalink / raw)
  To: musl; +Cc: musl

On 06/30/2013 12:52:02 AM, Rich Felker wrote:
> Hi all,
> 
> With us nearing musl 1.0, there are a lot of things I could use some
> help with. Here are a few specific tasks/roles I'm looking for:
> 
> 1. Put together a list of relevant conferences one or more of us could
>    attend in the next 4-6 months. I'm in the US and travelling outside
>    the country would probably be prohibitive unless we also find
>    funding, but anything in the US is pretty easy for me to get to,
>    and other people involved in the project could perhaps attend
>    other conferences outside the US.

CELF turned into the "Linux Foundation Embedded Linux Conference" and  
they squished it together with some Android thing. I've never been to  
the Plumbers conference that half my twitter list seems to go to.

Ohio LinuxFest's call for papers is open until Monday:

---

Sender: news.olf@gmail.com
Received: by 10.223.36.69 with HTTP; Thu, 27 Jun 2013 07:03:14 -0700  
(PDT)
Date: Thu, 27 Jun 2013 10:03:14 -0400
Message-ID:  
<CAJtFCaq7J9v3QfjCZ+tRD2-zna_Y5J5FrmJ-gEZ_mGRFSUgUJQ@mail.gmail.com>
Subject: Oho LinuxFest Call for Talks closing soon
From: Kevin O'Brien <news@ohiolinux.org>

Hello, we are asking for a little help from you in getting the word out
that the Ohio LinuxFest Call for Talks will be closing 7/8/13. Obviously
the more proposals we get the better the program we can put together,  
so we
are asking if you could help us out by posting something to your  
followers
on places like Facebook, Google+, LinkedIn, Twitter, and Identi.ca (or  
any
other social media you like). The page with the submission is
https://ohiolinux.org/cfp.

Thanks for any help you can give us.

--
Kevin O'Brien
Publicity Director, Ohio LinuxFest
news@ohiolinux.org
https://ohiolinux.org

> 5. Rigorous testing. My ideal vision of this role is having somebody
>    who takes a look at each bug fix committed and writes test cases
>    for the bug and extrapolates tests for possible related bugs that
>    haven't yet been found. And who reads the glibc bug tracker so we
>    can take advantage of their bug reports too.

Is the Linux Test Project relevant?

Rob

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-04 18:05 ` Rob Landley
@ 2013-07-08  7:40   ` Szabolcs Nagy
  2013-07-09  2:14     ` Rob Landley
  2013-07-08 14:45   ` Rich Felker
  1 sibling, 1 reply; 30+ messages in thread
From: Szabolcs Nagy @ 2013-07-08  7:40 UTC (permalink / raw)
  To: musl

* Rob Landley <rob@landley.net> [2013-07-04 13:05:06 -0500]:
> >5. Rigorous testing. My ideal vision of this role is having somebody
> >   who takes a look at each bug fix committed and writes test cases
> >   for the bug and extrapolates tests for possible related bugs that
> >   haven't yet been found. And who reads the glibc bug tracker so we
> >   can take advantage of their bug reports too.
> 
> Is the Linux Test Project relevant?
> 

most of their tests are for kernel related features
some of them are pretty outdated (stress testing floppy io..)
and not very high quality (mostly written by ibm folks)
there is a large set of 'openhpi' tests and the entire
posix testsuite (already audited), there is a fair amount
of network tests, mostly sctp and nfs

it seems a bit messy and not quite what we want


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-06-30  5:52 Request for volunteers Rich Felker
                   ` (4 preceding siblings ...)
  2013-07-04 18:05 ` Rob Landley
@ 2013-07-08 14:12 ` Anthony G. Basile
  5 siblings, 0 replies; 30+ messages in thread
From: Anthony G. Basile @ 2013-07-08 14:12 UTC (permalink / raw)
  To: musl

On 06/30/2013 01:52 AM, Rich Felker wrote:
> Hi all,
>
> With us nearing musl 1.0, there are a lot of things I could use some
> help with. Here are a few specific tasks/roles I'm looking for:
>
> 1. Put together a list of relevant conferences one or more of us could
>     attend in the next 4-6 months. I'm in the US and travelling outside
>     the country would probably be prohibitive unless we also find
>     funding, but anything in the US is pretty easy for me to get to,
>     and other people involved in the project could perhaps attend
>     other conferences outside the US.
>
> 2. Organize patches from sabotage, musl-cross, musl pkgsrc, etc. into
>     suitable form for upstream, and drafting appropriate emails or bug
>     reports to send.
>
> 3. Check status of musl support with build systems and distros
>     possibly adopting it as an option or their main libc. This would
>     include OpenWRT, Aboriginal, crosstool-ng, buildroot, Alpine Linux,
>     Gentoo, etc. If their people doing the support seem to have gotten
>     stuck or need help, offer assistance. Make sure the wiki is kept
>     updated with info on other projects using musl we we can send folks
>     their way too.
>
> 4. Wikimastering. The wiki could use a lot of organizational
>     improvement, additional information, and monitoring for outdated
>     information that needs to be updated or removed.
>
> 5. Rigorous testing. My ideal vision of this role is having somebody
>     who takes a look at each bug fix committed and writes test cases
>     for the bug and extrapolates tests for possible related bugs that
>     haven't yet been found. And who reads the glibc bug tracker so we
>     can take advantage of their bug reports too.
>
> Anyone up for volunteering? :-)
>
> Rich
>

You can watch (and critically comment on) my progress with gentoo + musl
at the following repo:

http://git.overlays.gentoo.org/gitweb/?p=proj/hardened-dev.git;a=shortlog;h=refs/heads/musl

My goal is to get patches that can eventually be incorporated upstream,
where upstream means either 1) gentoo, if it's a gentoo-specific issue,
or 2) the developer/maintainer of the code.

My approach to #2 will be to patch the build system to detect what is
available and what isn't, breaking down assumptions like linux = glibc.
Currently, though, I'm busy just "dirty" hacking, i.e. just making sure
it compiles in the face of missing headers, etc.
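
For illustration only -- this is a hypothetical sketch, not something
from the overlay -- a configure-style probe can at least identify the
libc at compile time instead of assuming linux = glibc, though testing
for the specific feature you actually need is usually the better fix:

/* hypothetical configure-time probe: glibc defines __GLIBC__ (uclibc
 * defines __UCLIBC__ and also __GLIBC__ for compatibility); musl
 * intentionally defines no identifying macro, so it lands in the
 * fallback branch */
#include <stdio.h>

int main(void)
{
#if defined(__UCLIBC__)
	printf("uclibc\n");
#elif defined(__GLIBC__)
	printf("glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
#else
	printf("unidentified libc (e.g. musl)\n");
#endif
	return 0;
}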

#1 is not trivial either, as it requires things like a wrapper for
ldconfig, which our package management system assumes is available.
When emerging gcc, libgcc_s.so.1 must be made available, else breakage!

It appears our goals overlap, so I guess this makes me a volunteer.


-- 
Anthony G. Basile, Ph. D.
Chair of Information Technology
D'Youville College
Buffalo, NY 14201
(716) 829-8197


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-04 18:05 ` Rob Landley
  2013-07-08  7:40   ` Szabolcs Nagy
@ 2013-07-08 14:45   ` Rich Felker
  2013-07-09  2:54     ` Rich Felker
  1 sibling, 1 reply; 30+ messages in thread
From: Rich Felker @ 2013-07-08 14:45 UTC (permalink / raw)
  To: musl

On Thu, Jul 04, 2013 at 01:05:06PM -0500, Rob Landley wrote:
> On 06/30/2013 12:52:02 AM, Rich Felker wrote:
> >Hi all,
> >
> >With us nearing musl 1.0, there are a lot of things I could use some
> >help with. Here are a few specific tasks/roles I'm looking for:
> >
> >1. Put together a list of relevant conferences one or more of us could
> >   attend in the next 4-6 months. I'm in the US and travelling outside
> >   the country would probably be prohibitive unless we also find
> >   funding, but anything in the US is pretty easy for me to get to,
> >   and other people involved in the project could perhaps attend
> >   other conferences outside the US.
> 
> CELF turned into the "Linux Foundation Embedded Linux Conference"
> and they squished it together with some Android thing. I've never
> been to the Plumbers conference that half my Twitter list seems to go to.
> 
> Ohio LinuxFest's call for papers is open until Monday:

I wish we'd noticed a few days sooner. I've been thinking about
submitting a proposal for a talk, especially since the timing is so
good (September 13-15), but I'm having trouble coming up with a good
topic.

My leaning so far is to propose something related to the state of
library quality on Linux, dealing with both libc (the issues in glibc
and uclibc that led to musl, and how glibc has been improving since
then) and non-system libraries that play a core role on modern Linux
systems (glib, etc.) and how they impact robustness (abort!) and
portability.

The idea is that it could be mildly promotional for musl, but also a
chance to promote proper library practices, possibly my definition of
"library-safe" (if we work out a good one), etc.

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-08  7:40   ` Szabolcs Nagy
@ 2013-07-09  2:14     ` Rob Landley
  2013-07-09  2:57       ` Matthew Fernandez
  0 siblings, 1 reply; 30+ messages in thread
From: Rob Landley @ 2013-07-09  2:14 UTC (permalink / raw)
  To: musl; +Cc: musl

On 07/08/2013 02:40:38 AM, Szabolcs Nagy wrote:
> * Rob Landley <rob@landley.net> [2013-07-04 13:05:06 -0500]:
> > >5. Rigorous testing. My ideal vision of this role is having somebody
> > >   who takes a look at each bug fix committed and writes test cases
> > >   for the bug and extrapolates tests for possible related bugs that
> > >   haven't yet been found. And who reads the glibc bug tracker so we
> > >   can take advantage of their bug reports too.
> >
> > Is the Linux Test Project relevant?
> >
> 
> most of their tests are for kernel related features
> some of them are pretty outdated (stress testing floppy io..)
> and not very high quality (mostly written by ibm folks)
> there is a large set of 'openhpi' tests and the entire
> posix_testsuite (already audited), there is a fair amount
> of network tests, mostly sctp and nfs
> 
> it seems a bit messy and not quite what we want

The Linux Foundation was formed by the merger of OSDL with the Linux  
Standards Group to form "a voltron of bureaucracy", so I'm not  
surprised their actual testing and standardization functions  
essentially stopped.

The purpose of OSDL was to provide Linus Torvalds with a salary
independent of any specific company. Unfortunately, the amount of money
companies contributed to it was well above Linus's needs, so they went
on to justify _having_ so much money by getting offices and hiring
people, meaning that instead of a trust fund they now needed more money
on a regular basis.

So they set themselves up as "the face of Linux" for corporations the  
same way AOL set itself up as the face of the internet for dialup users  
in the 1990's, and promised that they could translate between suit and  
geek. And they got very very good at talking to suits (where the money  
comes from), and are baffled by the _existence_ of hobbyists. (People  
do open source without getting paid? Inconceivable! They can't possibly
be relevant to the process, they're just hangers-on mooching off our  
extensively funded work. Free riders we tolerate for historical  
reasons...)

So yeah, not surprising if LSB became a corporate rubber stamp. Sad,  
but not surprising.

Rob

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-08 14:45   ` Rich Felker
@ 2013-07-09  2:54     ` Rich Felker
  0 siblings, 0 replies; 30+ messages in thread
From: Rich Felker @ 2013-07-09  2:54 UTC (permalink / raw)
  To: musl

On Mon, Jul 08, 2013 at 10:45:59AM -0400, Rich Felker wrote:
> > Ohio LinuxFest's call for papers is open until Monday:
> 
> I wish we'd noticed a few days sooner. I've been thinking about
> submitting a proposal for a talk, especially since the timing is so
> good (September 13-15), but I'm having trouble coming up with a good
> topic.
> 
> My leaning so far is to propose something related to the state of
> library quality on Linux, dealing with both libc (the issues in glibc
> and uclibc that led to musl, and how glibc has been improving since
> then) and non-system libraries that play a core role on modern Linux
> systems (glib, etc.) and how they impact robustness (abort!) and
> portability.
> 
> The idea is that it could be mildly promotional for musl, but also a
> chance to promote proper library practices, possibly my definition of
> "library-safe" (if we work out a good one), etc.

I proposed a talk along these lines today before the deadline. Hoping
it gets accepted.

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-09  2:14     ` Rob Landley
@ 2013-07-09  2:57       ` Matthew Fernandez
  0 siblings, 0 replies; 30+ messages in thread
From: Matthew Fernandez @ 2013-07-09  2:57 UTC (permalink / raw)
  To: musl, Rob Landley



Rob Landley <rob@landley.net> wrote:
>
>So they set themselves up as "the face of Linux" for corporations the
>same way AOL set itself up as the face of the internet for dialup users
>in the 1990's, and promised that they could translate between suit and
>geek. And they got very very good at talking to suits (where the money
>comes from), and are baffled by the _existence_ of hobbyists. (People
>do open source without getting paid? Inconceivable! They can't possibly
>be relevant to the process, they're just hangers-on mooching off our
>extensively funded work. Free riders we tolerate for historical
>reasons...)

To briefly defend the Linux Foundation, some of their employees spend a
great deal of their time engaging with the hobbyist community. Jennifer
Cloer and Carla Schroder spring to mind. I'm not affiliated with the
Foundation; just my two cents :)



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-02  7:49           ` Rich Felker
@ 2013-07-16 16:20             ` Szabolcs Nagy
  2013-07-17 15:41               ` Rich Felker
  0 siblings, 1 reply; 30+ messages in thread
From: Szabolcs Nagy @ 2013-07-16 16:20 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 2378 bytes --]

* Rich Felker <dalias@aerifal.cx> [2013-07-02 03:49:20 -0400]:
> What about a mix? Have the makefile include another makefile fragment
> with a rule to generate that fragment, where the fragment is generated
> from comments in the source files. Then you have full dependency
> tracking via make, and self-contained tests.

i wrote some tests but the build system became a bit nasty
i attached the current layout with most of the test cases
removed so someone can take a look and/or propose a better
buildsystem before i do too much work in the wrong direction

each directory has a separate makefile because they work
differently

functional/ and regression/ tests have the same makefile,
they set up a lot of make variables for each .c file in
the directory, the variables and rules can be overridden
by a similarly named .mk file
(this seems to be more reliable than target specific vars)

(now it builds both statically and dynamically linked binaries;
this can be changed)

i use the srcdir variable so it is possible to build
the binaries into a different directory (so a single
source tree can be used for glibc and musl test binaries)
i'm not sure how useful that is (one could use several
git repos as well)

another approach would be one central makefile that
collects all the sources and then you have to build
tests from the central place
(but i thought that sometimes you just want to run
a subset of the tests and that's easier with the
makefile per dir approach, another issue is dlopen
and ldso tests need the .so binary at the right path
at runtime so you cannot run the tests from arbitrary
directory)

yet another approach would be to use a simple makefile
with explicit rules without fancy gnu make tricks
but then the makefile needs to be edited whenever a
new test is added

i'm not sure what's the best way to handle common/
code in case of decentralized makefiles, now i
collected them into a separate directory that is
built into a 'libtest.a' that is linked to all
tests so you have to build common/ first

i haven't yet done proper collection of the reports
and i'll need some tool to run the test cases:
i don't know how to report the signal name or number
(portably) from sh when a test is killed by a signal
(the shell prints segfault etc to its stderr which may
be used) and i don't know how to kill the test reliably
after a timeout

i hope this makes sense

[-- Attachment #2: test.tar.gz --]
[-- Type: application/octet-stream, Size: 19332 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-16 16:20             ` Szabolcs Nagy
@ 2013-07-17 15:41               ` Rich Felker
  0 siblings, 0 replies; 30+ messages in thread
From: Rich Felker @ 2013-07-17 15:41 UTC (permalink / raw)
  To: musl

On Tue, Jul 16, 2013 at 06:20:31PM +0200, Szabolcs Nagy wrote:
> * Rich Felker <dalias@aerifal.cx> [2013-07-02 03:49:20 -0400]:
> > What about a mix? Have the makefile include another makefile fragment
> > with a rule to generate that fragment, where the fragment is generated
> > from comments in the source files. Then you have full dependency
> > tracking via make, and self-contained tests.
> 
> i wrote some tests but the build system became a bit nasty
> i attached the current layout with most of the test cases
> removed so someone can take a look and/or propose a better
> buildsystem before i do too much work in the wrong direction

If you really want to do multiple makefiles, what about at least
setting up the top-level makefile so it invokes them via dependencies
rather than a shell for-loop?

> each directory has a separate makefile because they work
> differently
> 
> functional/ and regression/ tests have the same makefile,
> they set up a lot of make variables for each .c file in
> the directory, the variables and rules can be overridden
> by a similarly named .mk file
> (this seems to be more reliable than target specific vars)
> 
> (now it builds both statically and dynamically linked binaries;
> this can be changed)

It's probably useful. We had plenty of bugs that only showed up one
way or the other, but it may be useful just to test the cases where we
know or expect it matters (purely in the interest of build time and
run time).

> i use the srcdir variable so it is possible to build
> the binaries into a different directory (so a single
> source tree can be used for glibc and musl test binaries)
> i'm not sure how useful that is (one could use several
> git repos as well)

If it's easy to do, I like it. It makes it easy to try local changes
on both without committing them to a repo.

> another approach would be one central makefile that
> collects all the sources and then you have to build
> tests from the central place
> (but i thought that sometimes you just want to run
> a subset of the tests and that's easier with the
> makefile per dir approach,

I would like this better, but I'm happy to have whatever works. IMO
it's not too bad to support building subsets with a single makefile.
You just have variables containing the names of all tests in a
particular subset and rules that depend on just those tests. One thing
I also just thought of is that you could have separate REPORT files
for each test which are concatenated to the final REPORT file. This
makes it possible to run the tests in parallel. In general, I think
the more declarative/functional and less procedural you make a
makefile, the simpler it is and the better it works.

> another issue is dlopen
> and ldso tests need the .so binary at the right path
> at runtime so you cannot run the tests from arbitrary
> directory)

Perhaps the makefile could pass the directory containing the test as
an argument to it for these tests so they could chdir to their own
location as part of the test?
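
On the test side that could look something like this (untested sketch;
the argv[1] convention and the "dso.so" name are just made up to
illustrate the idea):

#include <stdio.h>
#include <unistd.h>
#include <dlfcn.h>

int main(int argc, char *argv[])
{
	/* hypothetical convention: the makefile passes the directory
	 * containing the test and its .so as the first argument */
	if (argc > 1 && chdir(argv[1])) {
		perror("chdir");
		return 1;
	}

	/* now a relative dlopen works no matter where the suite was
	 * run from */
	void *h = dlopen("./dso.so", RTLD_NOW);
	if (!h) {
		fprintf(stderr, "dlopen: %s\n", dlerror());
		return 1;
	}
	dlclose(h);
	return 0;
}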

> yet another approach would be to use a simple makefile
> with explicit rules without fancy gnu make tricks
> but then the makefile needs to be edited whenever a
> new test is added

I like the current approach where you don't have to edit the makefile.
:-)

> i'm not sure what's the best way to handle common/
> code in case of decentralized makefiles, now i
> collected them into a separate directory that is
> built into a 'libtest.a' that is linked to all
> tests so you have to build common/ first

That's why I like unified makefiles.

> i haven't yet done proper collection of the reports
> and i'll need some tool to run the test cases:
> i don't know how to report the signal name or number
> (portably) from sh when a test is killed by a signal

Instead of using the shell, run it from your own program that gets the
exit status with waitpid and passes that to strsignal.
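
Roughly along these lines, say (untested sketch; the report format and
the 30-second alarm-based timeout are arbitrary, but it would cover the
timeout problem below as well):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

static void on_alarm(int sig)
{
	/* empty handler: just interrupt waitpid with EINTR */
	(void)sig;
}

int main(int argc, char *argv[])
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s testprog [args...]\n", argv[0]);
		return 2;
	}

	struct sigaction sa;
	memset(&sa, 0, sizeof sa);
	sa.sa_handler = on_alarm;
	sigaction(SIGALRM, &sa, 0);	/* no SA_RESTART, on purpose */

	pid_t pid = fork();
	if (pid < 0) {
		perror("fork");
		return 2;
	}
	if (pid == 0) {
		/* own process group so a timeout can kill helpers too */
		setpgid(0, 0);
		execv(argv[1], argv + 1);
		perror("execv");
		_exit(127);
	}
	setpgid(pid, pid);

	alarm(30);	/* arbitrary per-test timeout */

	int status;
	if (waitpid(pid, &status, 0) < 0) {
		/* assume EINTR from the alarm: kill the group and reap */
		kill(-pid, SIGKILL);
		waitpid(pid, &status, 0);
		printf("TIMEOUT %s\n", argv[1]);
		return 1;
	}
	if (WIFSIGNALED(status)) {
		printf("KILLED %s: %s\n", argv[1],
		       strsignal(WTERMSIG(status)));
		return 1;
	}
	printf("EXIT %s: %d\n", argv[1], WEXITSTATUS(status));
	return WEXITSTATUS(status) ? 1 : 0;
}

strsignal is in POSIX 2008 anyway, and the runner only needs to run on
the host, so the portability of sh's signal reporting stops mattering.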

> (the shell prints segfault etc to its stderr which may
> be used) and i don't know how to kill the test reliably
> after a timeout
> 
> i hope this makes sense

Yes. Hope this review is helpful. Again, this is your project and I'm
very grateful that you're doing it, so I don't want to impose my
opinions on how to do stuff, especially if it hinders your ability to
get things done.

Thanks and best wishes,

Rich


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
  2013-07-01 17:44 Felix Janda
@ 2013-07-02  1:08 ` Michael Kerrisk (man-pages)
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Kerrisk (man-pages) @ 2013-07-02  1:08 UTC (permalink / raw)
  To: 20130701034317.GW29800, musl, Rich Felker, Michael Kerrisk

Gidday,

On Mon, Jul 1, 2013 at 7:44 PM, Felix Janda <felix.janda@posteo.de> wrote:
> Hello,
>
> Rich Felker wrote:
>> > >What might be better for the near future is to get the POSIX man pages
>> > >project updated to match POSIX-2008+TC1 so that users of musl who want
>> > >man pages for libc functions can install them and have them match the
>> > >current version.
>> >
>> > I note that the guy who did the posix man pages ten years ago was:
>> > Michael Kerrisk.

(To be more precise, it was Andries Brouwer, the previous maintainer,
who carried out the task shortly before I took over.)

>> Maybe someone should contact him about all the stuff we're discussing.
>
> I asked him some time ago about the posix man pages and offered help with the
> conversion.
>
> He is interested in creating new posix man pages for SUSv4 TC1. The status
> as of the beginning of June was that he had contacted The Open Group asking
> for permission. Possibly, we might get troff sources.

As of a few days ago, the necessary permissions are granted. I do not
yet have the source files (and so am unsure of the source format), but
I expect that they will become available to us in the next couple of
weeks.

> (He also told me that the conversion for the previous posix man pages was done
> before he became their maintainer.)
>
> I CC'ed him so that he knows about this discussion.

Thank you, Felix.

Cheers,

Michael


--
Michael Kerrisk
Linux man-pages maintainer; http://www.kernel.org/doc/man-pages/
Author of "The Linux Programming Interface"; http://man7.org/tlpi/


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Request for volunteers
@ 2013-07-01 17:44 Felix Janda
  2013-07-02  1:08 ` Michael Kerrisk (man-pages)
  0 siblings, 1 reply; 30+ messages in thread
From: Felix Janda @ 2013-07-01 17:44 UTC (permalink / raw)
  To: musl; +Cc: Rich Felker, Michael Kerrisk

Hello,

Rich Felker wrote:
> > >What might be better for the near future is to get the POSIX man pages
> > >project updated to match POSIX-2008+TC1 so that users of musl who want
> > >man pages for libc functions can install them and have them match the
> > >current version.
> > 
> > I note that the guy who did the posix man pages ten years ago was:
> > Michael Kerrisk.
>
> Maybe someone should contact him about all the stuff we're discussing.

I asked him some time ago about the posix man pages and offered help with the
conversion.

He is interested in creating new posix man pages for SUSv4 TC1. The status
as of the beginning of June was that he had contacted The Open Group asking
for permission. Possibly, we might get troff sources.

(He also told me that the conversion for the previous posix man pages was done
before he became their maintainer.)

I CC'ed him so that he knows about this discussion.


Felix


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2013-07-17 15:41 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-06-30  5:52 Request for volunteers Rich Felker
2013-06-30 10:48 ` Szabolcs Nagy
2013-07-01  3:51   ` Rich Felker
2013-07-01 20:58     ` Szabolcs Nagy
2013-07-01 23:59       ` Rich Felker
2013-07-02  2:19         ` Szabolcs Nagy
2013-07-02  7:49           ` Rich Felker
2013-07-16 16:20             ` Szabolcs Nagy
2013-07-17 15:41               ` Rich Felker
2013-06-30 11:02 ` Daniel Cegiełka
2013-06-30 12:13   ` Rich Felker
2013-06-30 22:29     ` idunham
2013-07-01  3:31       ` Rich Felker
2013-07-01 17:42         ` Isaac
2013-07-01 17:46           ` Alex Caudill
2013-07-01 21:12           ` Isaac
2013-07-01  3:13     ` Rob Landley
2013-07-01  3:43       ` Rich Felker
2013-06-30 15:25 ` Nathan McSween
2013-06-30 22:59   ` Luca Barbato
2013-07-01 11:42 ` LM
2013-07-04 18:05 ` Rob Landley
2013-07-08  7:40   ` Szabolcs Nagy
2013-07-09  2:14     ` Rob Landley
2013-07-09  2:57       ` Matthew Fernandez
2013-07-08 14:45   ` Rich Felker
2013-07-09  2:54     ` Rich Felker
2013-07-08 14:12 ` Anthony G. Basile
2013-07-01 17:44 Felix Janda
2013-07-02  1:08 ` Michael Kerrisk (man-pages)

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).