MultiMarkdown also supports quite an extensive set of features, and because of that you might find it slower than Discount.

And regarding optimization: in the real world, a lot of people actually don't care about it. An example is the workflow for generating a large document from many smaller ones outlined in [Repeated Footnotes Anchors and Headers Across Multiple Files — Pandoc Tricks · jgm/pandoc Wiki · GitHub](https://github.com/jgm/pandoc/wiki/Pandoc-Tricks#repeated-footnotes-anchors-and-headers-across-multiple-files). If you go through that workflow, you'll find that a lot of redundant computation is done in order to guarantee there are no repeated header and footnote anchors. (I think there's an alternative that asks pandoc to prefix the header ids as well, but I don't want the prefix.) And because my project involves LaTeX, the bottleneck is the PDF generation from LaTeX, so I can afford the time lost in pandoc's markdown parsing.
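To make the anchor problem concrete, here is a rough sketch (not pandoc's actual code) of how unique header identifiers can be generated when many files are concatenated, in the spirit of pandoc's auto-identifier behaviour of appending `-1`, `-2`, … to duplicates; `slugify` is only a crude approximation of the real identifier rules:

```python
# Sketch of deduplicating header anchors across concatenated documents.
# NOT pandoc's implementation -- just an illustration of the idea.
import re
from collections import defaultdict

def slugify(heading: str) -> str:
    """Rough approximation of pandoc-style auto identifiers."""
    s = heading.strip().lower()
    s = re.sub(r"[^\w\s-]", "", s)   # drop punctuation
    return re.sub(r"\s+", "-", s)    # spaces -> hyphens

def unique_ids(headings):
    """Append -1, -2, ... to repeated slugs so every anchor is unique."""
    seen = defaultdict(int)
    out = []
    for h in headings:
        slug = slugify(h)
        out.append(f"{slug}-{seen[slug]}" if seen[slug] else slug)
        seen[slug] += 1
    return out

print(unique_ids(["Introduction", "Methods", "Introduction"]))
# -> ['introduction', 'methods', 'introduction-1']
```

The redundancy in the wiki workflow comes from having to re-run this kind of bookkeeping over the whole concatenated document every time any one chapter changes.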

Going back to your use case, it seems you require every last drop of performance. In markdown rendering, one thing you should shop for besides rendering speed is the feature set (markdown extensions) and the syntax. In your case, you should narrow down the minimal set of features you need and, given that feature set, choose the implementation that's fastest.
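Once you have a shortlist, the fastest one is an empirical question: time each candidate on a representative sample of your own documents. A minimal harness might look like this; `render_a` and `render_b` are placeholders for whichever renderers you are comparing, not real library APIs:

```python
# Minimal benchmark harness for comparing candidate markdown renderers.
# render_a / render_b are stand-ins: replace them with calls into the
# actual libraries you shortlisted.
import timeit

SAMPLE = "# Title\n\nSome *emphasis* and a [link](https://example.com).\n" * 200

def render_a(text):
    return text  # placeholder for candidate renderer A

def render_b(text):
    return text  # placeholder for candidate renderer B

for name, fn in [("A", render_a), ("B", render_b)]:
    elapsed = timeit.timeit(lambda: fn(SAMPLE), number=50)
    print(f"renderer {name}: {elapsed:.4f}s for 50 runs")
```

The important part is benchmarking on your own content with only the extensions you actually need enabled, since extra extensions are often exactly what makes a renderer slower.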

I can't speak for the others, but I'd guess that most pandoc users never care about performance and instead look at the feature set. By "future compatibility", I mean: in the future, when I write new content, what if I need one more feature that my current software stack doesn't support? How large will the migration or work-around cost be? In other words, we trade computer time for the programmer's and writer's time. But this might not be realistic for other people, because if their project is large enough, long computer time is going to cost them (server time, client wait time, etc.). And it seems you are exactly in this latter case (especially since you're generating the content dynamically).

And regarding compiling from source, you actually don't need to do that if you are on one of the platforms for which pandoc releases official binaries: Windows, Mac, and 64-bit Linux (`.deb`). From my experience, compiling from source on Mac is easy with homebrew, as is using stack, but less so with cabal. If you are on an alternative CPU architecture (e.g. ARM), it is very hard (or impossible).

P.S. Haskell is actually a very interesting language and, as far as I can tell, the most mathematical one. It seems such a language would be well suited for scientific computing, because everything we do in science involves math. But unfortunately, virtually nobody is doing science in Haskell, at least in HPC applications, precisely because of the lack of performance. By the way, AFAIK in Haskell you almost get parallelization for free, whereas in other languages a lot of time has to be spent thinking about how to parallelize a particular algorithm (and implementing it!). IMO, the "guarantee of correctness" and the "free parallelism" should be major selling points of Haskell to scientists, but for various other reasons scientists don't really use it.

--
You received this message because you are subscribed to the Google Groups "pandoc-discuss" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/pandoc-discuss/b7a15adb-8e28-4d0a-9aa5-7c77b4f4f4eb%40googlegroups.com.