From: Alan Schmitt <alan.schmitt@polytechnique.org>
To: "lwn" <lwn@lwn.net>, "cwn" <cwn@lists.idyll.org>, caml-list@inria.fr
Subject: [Caml-list] Attn: Development Editor, Latest OCaml Weekly News
Date: Tue, 07 Jan 2020 14:43:10 +0100
Message-ID: <87sgkrbcwh.fsf@polytechnique.org>
Hello
Here is the latest OCaml Weekly News, for the week of December 31, 2019
to January 07, 2020.
Table of Contents
─────────────────
ocaml-lsp preview
Mkocaml Release - Project generator
Garbage Collection, Side-effects and Purity
A Lightweight OCaml Webapp Tutorial (Using Opium, Caqti, and Tyxml)
Release of owl-symbolic 0.1.0
Static lifetime
Old CWN
ocaml-lsp preview
═════════════════
Archive: <https://discuss.ocaml.org/t/ann-ocaml-lsp-preview/4876/15>
Continuing this thread, Edwin Török said
────────────────────────────────────────
Here is an example with ALE and Neovim (tested with v0.3.8):
• Install the [Ale] plugin. If your Vim has support for packages (Vim
8+ or Neovim) you can simply clone it in the correct subdir, no need
for a plugin manager: `git clone https://github.com/w0rp/ale.git
.vim/pack/my-plugins/start/ale'
• Add this to your .vimrc:
┌────
│ " only invoke merlin to check for errors when
│ " exiting insert mode, not on each keystroke.
│ let g:ale_lint_on_text_changed="never"
│ let g:ale_lint_on_insert_leave=1
│
│ " enable ALE's internal completion if deoplete is not used
│ let g:ale_completion_enabled=1
│
│ " only pop up completion when stopped typing for ~0.5s,
│ " to avoid distracting when completion is not needed
│ let g:ale_completion_delay=500
│
│ " see ale-completion-completeopt-bug
│ set completeopt=menu,menuone,preview,noselect,noinsert
│
│ if has('packages')
│ packloadall
│
│ " This should be part of ALE itself, like ols.vim
│ call ale#linter#Define('ocaml',{
│ \ 'name':'ocaml-lsp',
│ \ 'lsp': 'stdio',
│ \ 'executable': 'ocamllsp',
│ \ 'command': '%e',
│ \ 'project_root': function('ale#handlers#ols#GetProjectRoot')
│ \})
│
│ " remap 'gd' like Merlin would
│ nmap <silent><buffer> gd <Plug>(ale_go_to_definition_in_split)<CR>
│
│ " go back
│ nnoremap <silent> <LocalLeader>gb <C-O>
│
│ " show list of file:line:col of references for symbol under cursor
│ nmap <silent><buffer> <LocalLeader>go :ALEFindReferences -relative<CR>
│
│ " Show documentation if available, and type
│ nmap <silent><buffer> <LocalLeader>hh <Plug>(ale_hover)<CR>
│
│ " So I can type ,hh. More convenient than \hh.
│ nmap , <LocalLeader>
│ vmap , <LocalLeader>
│ endif
└────
[Ale] <https://github.com/dense-analysis/ale>
Mkocaml Release - Project generator
═══════════════════════════════════
Archive:
<https://discuss.ocaml.org/t/mkocaml-release-project-generator/4949/1>
Chris Nevers announced
──────────────────────
I recently created a tool to generate OCaml projects. I constantly
have difficulties with dune commands and setting up opam files,
etc. Mkocaml generates a dune project with inline tests, opam file,
git repository, git ignore, and a Makefile with easy commands. This
tool can be of great help to newcomers, allowing them to get up and
running faster!
<https://github.com/chrisnevers/mkocaml>
Garbage Collection, Side-effects and Purity
═══════════════════════════════════════════
Archive:
<https://discuss.ocaml.org/t/garbage-collection-side-effects-and-purity/4737/1>
Gerard asked
────────────
GC = Garbage Collection
GC, in a pure program, is a point that has always confused me. I
always understood that freeing memory from a program was impure and
would create side-effects, but it seems it doesn't matter as long as
the program is removed from all consequences of those impure acts and
side-effects. Basically, if a memory block has no remaining references
in the program, then freeing that block will have no consequences on
the running program, so it's allowed to happen behind the scenes.
Is this correct reasoning?
Guillaume Munch-Maccagnoni replied
──────────────────────────────────
To answer your question “does de-allocation create a side-effect?”:
To state the obvious: if you care about the memory consumption of your
program, then you care about the side-effect of de-allocation, and
this indeed voids purity.
A language like OCaml lets you reason about de-allocation. Memory is
collected when values are no longer reachable. Like in other
languages, 1) a value that does not escape and goes out of scope will
get collected, and 2) you can reason about when a value escapes and
goes out of scope thanks to OCaml respecting the strict evaluation
order of value types. OCaml (like other compiled languages) is in fact
more precise: it ties the dynamic notion of reachability to the
lexical notion of variable occurrence. For instance, in the following:
┌────
│ let x = get_huge_data () in
│ let z = long_running_function x in
│ f z
└────
OCaml will be able to collect the value in `x' before `x' goes out of
scope, and thus if possible before `long_running_function'
returns. Indeed, OCaml performs liveness analysis during compilation,
and records the information about variable occurrences in frame
descriptors, for consumption by the GC when it scans for roots. In
fact, you can rely on call-by-value operational semantics to (loosely)
reason that a value no longer appears in a program, and therefore that
the corresponding memory will be collected by the GC¹ ([Morrisett,
Felleisen and Harper, "Abstract Models of Memory Management"]). Of
course, using lazy or higher-order interfaces (when closures escape;
with many idioms they do not) will make it harder to reason about the
lifetime of values.
(¹: For OCaml, this is a conjecture I make, for subsets which could be
given such operational semantics, and only for native
compilation. Morrisett, Felleisen and Harper's semantics obviously
assumes that the results of liveness analysis are made available to
the GC, but this is not written, nor is there any mention of the link
between liveness analysis and accuracy of garbage collection in
Appel's "Modern Compiler Implementation in C". I assume that it was
part of folklore at the time, though recently I mentioned it to some
functional PL researcher and they seemed surprised. I only found it
explicitly mentioned in later papers from the OOP community. I checked
that everything seems in place for OCaml to allow such reasoning, but
only the authors of the original code, @xavierleroy and
@damiendoligez, can tell us if this is intended to be part of the
language semantics.)
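This reachability reasoning can be observed in practice with a weak
pointer (a sketch not taken from the thread; the helper name `alloc'
is mine, and the exact moment of collection is compiler- and
GC-dependent):

```ocaml
(* A weak pointer lets us observe collection without keeping the value
   alive. The string is unreachable once [alloc] returns, so a full
   collection should clear the weak slot. *)
let w = Weak.create 1

let alloc () =
  let x = String.make 1024 'a' in
  Weak.set w 0 (Some x);
  assert (Weak.check w 0)  (* [x] is still reachable here *)

let () =
  alloc ();
  Gc.full_major ();  (* minor collection plus a full major cycle *)
  Printf.printf "reachable after full_major: %b\n" (Weak.check w 0)
```

With the stock compiler this typically prints `false': nothing
references the string after `alloc' returns, so the collector is free
to reclaim it.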
Furthermore, memory is not collected immediately when a value becomes
unreachable. Instead:
• Short-lived values are allocated contiguously and deallocated in a
batch, so that allocating and deallocating short-lived values is
very cheap, with additional benefits in terms of cache
locality. This replaces stack allocation from languages with
explicit memory management.
• Longer-lived values are moved to a heap that is scanned
incrementally, to ensure a bounded latency. In contrast, naive
reference-counting and unique pointers from C++/Rust make you pay
the cost of deallocation up-front.
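The first point can be made concrete with `Gc.quick_stat' (an
illustrative sketch, assuming the pairs are heap-allocated;
`Sys.opaque_identity' is used so the compiler cannot optimise the
allocation away):

```ocaml
(* Allocate many short-lived pairs: they are born and die on the minor
   heap, so the minor-collection counter advances even though the
   program only ever holds a handful of live words. *)
let minor_before = (Gc.quick_stat ()).Gc.minor_collections

let () =
  let sum = ref 0 in
  for i = 1 to 1_000_000 do
    (* [Sys.opaque_identity] prevents the tuple allocation from being
       eliminated by the optimiser. *)
    let pair = Sys.opaque_identity (i, i + 1) in
    sum := !sum + fst pair
  done;
  ignore (Sys.opaque_identity !sum)

let minor_after = (Gc.quick_stat ()).Gc.minor_collections
```

A million three-word pairs far exceed the default minor heap, so many
cheap batch collections occur, and `minor_after' exceeds
`minor_before'.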
While this is essential for understanding the performance of OCaml
programs, from the point of view of deallocation-as-an-effect, the
delaying of the collection of unreachable memory can be seen as a
runtime optimisation, that does not change the effectful status of
deallocation (the memory still gets freed). [The intuition is that an
effect can support some degree of reordering without requiring purity,
as illustrated by strong monads which can be commutative without being
idempotent, one possible definition of purity for semanticists.]
But is de-allocation an effect _in practice_? Faced with the
scepticism and misunderstandings in this thread, I offer two
hypotheses:
1) Memory consumption is not an issue in functional programming, for
application areas that interest functional programmers.
2) Memory management in OCaml is efficient in such a way that
programmers do not need to think about it in their day-to-day
programming activities in those terms.
Hypothesis 2) could be explained for instance if OCaml programmers are
already dealing with effects and thinking about the order in which
their code executes (my experience), and are used to dealing with
deallocation only as an afterthought, e.g. when chasing leaks with a
profiler.
Let us turn towards two programming language experiments from the
1990's that allow me to reject hypothesis 1). Both show what happens
when one denies the status of deallocation as an effect controlled by
the programmer.
• Region-based memory management allocated values in a stack of memory
_regions_, each deallocated at once, with the regions determined by a
whole-program static analysis. Now regarded as a failed idea but a
successful experiment (i.e. good science!), it taught us a lot about
the structure of functional programs in relation to memory management
([see this retrospective]). There were some good performance results,
but also pathological cases _“where lifetimes were not nested or
where higher-order functions were used extensively”_, sometimes
requiring programs to be altered to be _“region friendly”_, which was
_“time-consuming”_ and required knowledge of the inference
algorithm. In addition, the regions changed unpredictably as the
programs evolved, and memory leaks appeared when the compiler inferred
overly wide regions.
• Haskell was (at the time) an experiment with lazy functional
programming. Pervasive laziness prevents reasoning about the
lifetime of values, and purity is a central assumption used by the
compiler for program transformations, which is antithetical to
reasoning about deallocation as an effect. It is well-known that
naive Haskell code has issues with memory leaks, and that realistic
Haskell programs have to follow "best practices" to avoid leaks, by
making extensive use of strictness annotations (e.g. bang
patterns). Unfortunately, I found it hard to find reliable academic
sources about lessons drawn from the experiment like the RBMM
retrospective. The best I could find on the topic of memory leaks is
the following blog post:
<https://queue.acm.org/detail.cfm?id=2538488>, from a Haskell
programmer who wrote in another post (linked from that one) _“My
suspicion is that many (most?) large Haskell programs have space
leaks, but they often go unnoticed”_. This is consistent with
comments I received from people with Haskell experience (first-hand,
one academic and one industrial) and about an industrial Haskell
consultant (second-hand) who reportedly commented that their main
job was to fix memory leaks (but maybe in jest). Of course, take
this with a grain of salt. At least, I believe that the Haskell
academic community has accumulated empirical evidence of the extent
and manner in which deallocation voids purity assumptions. Having an
authoritative source about it would be pretty important to me, given
the initial promises of functional programs being more tractable
mathematically specifically via “referential transparency” and
independence of execution order, whose theoretical justification
already looks shaky to me from a semantic point of view. Some parts
of the literature continue to promise far-reaching consequences of
equational reasoning, without clear statements of the limitations of
the application domain. I have the impression that the Haskell
practiced in the real world is very different from what you can read
in some academic papers.
The hypothesis that deallocation matters as an effect, and that ML
makes it easy to program and reason about effects, seems to me a
strong argument explaining OCaml's predictable and competitive
performance.
So, thank you for your healthy scepticism.
[Morrisett, Felleisen and Harper, "Abstract Models of Memory
Management"] <https://dash.harvard.edu/handle/1/3293156>
[see this retrospective]
<https://link.springer.com/article/10.1023/B:LISP.0000029446.78563.a4>
Xavier Leroy replied
────────────────────
Concerning the "don't scan local variables that are dead" trick:
• Technically it is not "intended to be part of the language
semantics" because the bytecode compiler (ocamlc) doesn't implement
it, only the native-code compiler (ocamlopt).
• As far as I remember, I reinvented this trick circa 1993, but it
seems it was used earlier in the Lazy ML compiler by Augustsson and
Johnsson. See Appel and Shao's paper "An Empirical and Analytic
Study of Stack vs. Heap Cost for Languages with Closures", JFP,
1996, end of section 5.
Guillaume Munch-Maccagnoni then asked
─────────────────────────────────────
TL;DR: the paper mentioned by @xavierleroy provides additional
references regarding the importance of liveness analysis for GC,
including a demonstration by Appel that this actually matters for
space complexity (thanks!). I find that a link is still missing with
an abstract semantics à la Morrisett, Felleisen & Harper. This seems
important to me because more theoretical works about time & space
complexity in the lambda-calculus seem to take for granted that
garbage collection implements something like the latter (i.e., how
does one specify and certify that a compiler is sound for space
complexity?).
Xavier Leroy replied
────────────────────
See for example [Closure Conversion is Safe for Space], by Zoe
Paraskevopoulou and Andrew W. Appel, ICFP 2019.
[Closure Conversion is Safe for Space]
<https://www.cs.princeton.edu/~appel/papers/safe-closure.pdf>
A Lightweight OCaml Webapp Tutorial (Using Opium, Caqti, and Tyxml)
═══════════════════════════════════════════════════════════════════
Archive:
<https://discuss.ocaml.org/t/a-lightweight-ocaml-webapp-tutorial-using-opium-caqti-and-tyxml/4967/1>
Shon announced
──────────────
The tutorial is [hosted on gitlab pages], out of [this repository].
I put this together in response to some requests for introductory
material on the topic (here and on [/r/ocaml]). I don't have much
expertise to offer in this area, but I had hacked together some simple
servers based on Opium in the past few months, so it seemed like I
should be able to memorialize some of what I learned for the benefit
of others. I received some critical guidance from the Opium maintainers,
rgrinberg and anuragsoni, and from other resources online (mentioned
at the end of the tutorial).
Any feedback or improvements are welcome: this is my first time
writing such lengthy instructional material, and I'm sure there's lots
of room to make it better.
[hosted on gitlab pages] <https://shonfeder.gitlab.io/ocaml_webapp/>
[this repository] <https://gitlab.com/anuragsoni/ocaml_webapp>
[/r/ocaml] <https://www.reddit.com/r/ocaml/>
Release of owl-symbolic 0.1.0
═════════════════════════════
Archive:
<https://discuss.ocaml.org/t/announce-release-of-owl-symbolic-0-1-0/4930/2>
jrzhao42 announced
──────────────────
The URL of the Owl tutorial book has changed to:
<https://ocaml.xyz/book/symbolic.html>.
Static lifetime
═══════════════
Archive: <https://discuss.ocaml.org/t/static-lifetime/4908/19>
André asked and Guillaume Munch-Maccagnoni replied
──────────────────────────────────────────────────
> Is it possible to “statically” allocate a value? By this I mean mark
a value such that it gets ignored by the GC and lives until the
program exits?
This is indeed the purpose of Ancient, which comes with limitations
and does not allow you to reclaim the memory until you exit the
program. (I am curious to know how well it works with recent OCaml
versions.)
> it would be really interesting to learn whether Ocaml forbids blocks
outside the heap.
The OCaml runtime has two modes (chosen at compilation) for dealing
with so-called "out-of-heap" pointers. In the legacy one that Chet
remembers, the GC uses a page table when scanning to be able to tell
which pointers it possesses. In the "no-naked-pointers" mode devised
more recently for efficiency reasons, the page table is replaced by
looking at the colour in the header of the dereferenced
value. Out-of-heap values must be preceded by a header with colour
black. The no-naked-pointer mode is more restricted, because once a
static value is referenced, it can no longer be deallocated, as you
never know whether it is still reachable by the GC. This should be
enough to support Ancient.
> One should verify such intuitions experimentally, before trying to
fix them, but I’m not familiar with what OCaml profilers can do…
Excluding large long-lived data from the GC is an old idea. Among
recent developments, Nguyen et al. [1] distinguish a "control path"
(where the generational hypothesis is assumed to hold) from a "data
path" (where values are assumed to follow an "epochal" behaviour
(long-lived, similar lifetimes, benefit from locality), and are
excluded from GC). They give so-called "big data" as motivation, and
cite figures of pathological GC usage of up to 50% of total runtime. I
remember reading similar figures from blog posts about large data sets
in OCaml. In reality this indeed depends on knobs you can turn on your
GC that can result in increased peak memory usage among
others. (Assuming infinite available memory, it is even possible to
let the GC share drop to 0%.)
@ppedrot reported to me that in a recent experiment with Coq, using an
Ancient-like trick to exclude some large, long-lived and
rarely-accessed values from being scanned (namely serialising them
into bigarrays), they saw an 8% performance improvement across the
board in benchmarks.
Multicore, if I understood correctly, aims to support only the
no-naked-pointer mode, and I am not sure what the page table will
become. Coq currently does some out-of-heap allocation in the VM, and
has been adapted to be compatible with the no-naked-pointer mode by
wrapping out-of-heap pointers into custom blocks. For scanning its
custom stack (which mixes in-heap and out-of-heap values), Coq sets up
a custom root-scanning function (`caml_scan_roots_hook`), which still
relies on the page table.
Note that having to wrap out-of-heap pointers in custom blocks is
(much!) less expressive: for instance with Ancient you can call
`List.filter` on a statically-allocated list (and correctly get a
GC-allocated list of statically-allocated values). With custom blocks
you cannot mix in-heap and out-of-heap values in this way.
For a type system to deal with "statically" allocated values, have a
look at Rust, which: 1) prevents cycles of reference-counting schemes
thanks to uniqueness, 2) can treat GC roots as resources to deal with
backpointers at the leaves of the value (cf. the interoperability with
SpiderMonkey's GC in Servo). A point of view that I like is that
tracing GCs and static allocation differ fundamentally by how they
traverse values for collection: traversing live values for the first
one, and traversing values at the moment of their death for the
other. This gives them distinct advantages and drawbacks so one can
see them as complementary. (See notably [2,3].) Static allocation is
interesting for performance in some aspects (no tracing, no read-write
barrier, reusability of memory cells, avoids calling the GC at
inappropriate times), but I find it even more interesting for
interoperability (e.g. exchanging values freely with C or Rust, or
[applications from that other thread]). It is natural to want to mix
them in a language.
As far as I understand, developing the runtime capabilities for OCaml
to deal with out-of-heap pointers without resorting to an expensive
page table is an engineering problem, not a fundamental one. If anyone
is interested in this, please contact me.
[1] Nguyen et al., [Yak : A High-Performance Big-Data-Friendly Garbage
Collector], 2016
[2] Bacon, Cheng and Rajan, [A Unified Theory of Garbage Collection],
2004
[3] Shahriyar, Blackburn and Frampton, [Down for the Count? Getting
Reference Counting Back in the Ring], 2012
[applications from that other thread]
<https://discuss.ocaml.org/t/using-a-bigarray-as-a-shared-memory-for-parallel-programming/4841/19>
[Yak : A High-Performance Big-Data-Friendly Garbage Collector]
<https://www.usenix.org/system/files/conference/osdi16/osdi16-nguyen.pdf>
[A Unified Theory of Garbage Collection]
<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.439.1202&rep=rep1&type=pdf>
[Down for the Count? Getting Reference Counting Back in the Ring]
<https://dl.acm.org/citation.cfm?doid=2258996.2259008>
UnixJunkie also replied
───────────────────────
If you can store your long-lived data in a bigarray, I think you
would reach the effect that you were looking for (no more GC scanning
of this data).
This was once advised to me by Oleg, for some performance-critical
section of some code.
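This suggestion can be sketched with the standard `Bigarray' library
(illustrative; the size here is arbitrary): a bigarray's payload is
allocated outside the OCaml heap, so the GC tracks only the small
handle and never scans the elements.

```ocaml
open Bigarray

(* One million floats stored outside the GC-scanned heap. *)
let n = 1_000_000
let a = Array1.create float64 c_layout n

let () =
  for i = 0 to n - 1 do
    a.{i} <- float_of_int i
  done;
  (* A full major collection traverses the handle, not the elements. *)
  Gc.full_major ();
  Printf.printf "a.{42} = %g\n" a.{42}
```

This only works for unboxed numeric data (floats, integers, etc.); for
structured values one would need something like Ancient or
serialisation into the bigarray, as mentioned above.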
Old CWN
═══════
If you happen to miss a CWN, you can [send me a message] and I'll mail
it to you, or go take a look at [the archive] or the [RSS feed of the
archives].
If you also wish to receive it every week by mail, you may subscribe
[online].
[Alan Schmitt]
[send me a message] <mailto:alan.schmitt@polytechnique.org>
[the archive] <http://alan.petitepomme.net/cwn/>
[RSS feed of the archives] <http://alan.petitepomme.net/cwn/cwn.rss>
[online] <http://lists.idyll.org/listinfo/caml-news-weekly/>
[Alan Schmitt] <http://alan.petitepomme.net/>