I'm a novice with pandoc, but I'm interested in batch production of personalised documents (web content and slide presentations).
This requires both pre-processing and post-processing stages in the pipeline of document production.
My inclination is to leverage PHP (although I see that other scripting languages such as Python are also used).
After playing around with the different options I read about in this thread, I can now meet my requirements
with the following approach, which fits my PHP development workflow:
(a) PHP5 is a requirement (installed on Ubuntu).
(b) add a .php extension to my input markdown file (e.g. test.md becomes test.md.php)
(c) add PHP pre-processing function calls to the test.md.php input file
(d) run `php test.md.php > test.md` in a terminal to pre-process the markdown mixed with PHP
(e) a pre-processing function call included in test.md.php might look like this (as an example):
<?php embedObject("path/to/test.csv", $csv_range, $object_type, $object_style) ?>
(f) the HTML code returned from the function above is embedded inline between <section></section> tags
(g) another PHP function recursively includes files (sections) from nested folders
(h) I'm also researching how Harp might fit into the workflow... https://harpjs.com/
(i) finally, the personalisation variables for each run might be driven by JSON content from MongoDB
(j) I use the Atom editor with the markdown-preview and PHP packages installed
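To make step (e) concrete, here is a minimal sketch of what an embedObject() helper could look like: it reads a CSV file and returns its rows as an HTML table. This is not a pandoc or PHP built-in; the function name comes from my example above, but I've dropped the $csv_range and $object_type parameters for brevity, and the first-row-as-header convention is an assumption.

```php
<?php
// Hypothetical sketch of the embedObject() helper from step (e):
// reads a CSV file and returns an HTML <table> string.
// The name and behaviour are assumptions for illustration, kept PHP5-compatible.
function embedObject($csvPath, $objectStyle = '')
{
    $lines = file($csvPath, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $html = '<table style="' . htmlspecialchars($objectStyle) . '">' . "\n";
    foreach ($lines as $i => $line) {
        $tag = ($i === 0) ? 'th' : 'td';   // assume the first CSV row is the header
        $cells = '';
        foreach (str_getcsv($line) as $col) {
            $cells .= '<' . $tag . '>' . htmlspecialchars($col) . '</' . $tag . '>';
        }
        $html .= '<tr>' . $cells . '</tr>' . "\n";
    }
    return $html . '</table>';
}
```

Because the call sits inline in test.md.php, echoing its return value during the `php test.md.php > test.md` run leaves the finished table embedded in the generated markdown.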
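And for step (i), a minimal sketch of the personalisation pass, assuming the per-run variables arrive as a JSON document (e.g. exported from MongoDB with mongoexport; in production the same array could come straight from a MongoDB driver query) and that placeholders use a {{name}} syntax, which is my own convention here:

```php
<?php
// Hypothetical sketch of step (i): substitute per-run personalisation
// variables, loaded from a JSON file, into a markdown/HTML template.
// The {{key}} placeholder syntax is an assumption for illustration.
function personalise($template, $jsonPath)
{
    $vars = json_decode(file_get_contents($jsonPath), true);
    foreach ($vars as $key => $value) {
        $template = str_replace('{{' . $key . '}}', $value, $template);
    }
    return $template;
}
```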
This hybrid md.php approach might run against the grain, but I throw it in here as another suggested workflow.