On Monday, November 14, 2016 at 9:38:48 AM UTC-5, BP wrote:
One possible problem with including metadata as YAML is that it becomes harder for filters earlier in the chain to query the metadata or inject a CSV block through the normal attribute interface, if any, of the filter engine, not to mention to parse the CSV, query or alter it, and write it back. For that reason I think it would be better if the content of the code block were pure CSV data. FWIW, I tried both strategies with my unpublished filters, so I'm not just speculating.
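The round-tripping point above can be illustrated with a short sketch: when the block body is pure CSV, an upstream filter can parse it, alter it, and write it back with nothing but the standard `csv` module (the added column name `extra` is purely illustrative):

```python
import csv
import io

def add_column(body, name="extra"):
    """Parse a pure-CSV code-block body, append a column, write it back.
    This only works this cleanly because the body contains nothing
    but CSV -- no YAML header to skip over and reattach afterwards."""
    rows = list(csv.reader(io.StringIO(body)))
    rows[0].append(name)      # extend the header row
    for row in rows[1:]:
        row.append("")        # pad each data row with an empty cell
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

print(add_column("a,b\n1,2\n"))  # -> "a,b,extra\n1,2,\n"
```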
Also, since there are many filters doing the same thing, the identifying class should not be just `csv` but should also identify the filter expected to handle the data.
Sorry, I don't understand what the problem is. CSV blocks are not a standard feature of Pandoc, and each filter has its own conventions, so I don't think it is reasonable to expect a new filter to expose its data for querying by other, unknown filters.
About having additional information besides the raw CSV: I think it's actually the most important feature, because it lets you give the table a title, load CSV from external sources, add footnotes, and specify output options, none of which would be possible if the content were restricted to plain comma-delimited data.
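As a sketch of what that extra information buys, here is one way a filter could split a block body into metadata and a CSV payload. The `---` separator and the key names `title`/`source` are an invented convention for illustration, not any particular filter's actual syntax:

```python
import csv
import io

def split_block(body):
    """Split a code-block body into (metadata dict, CSV rows).
    Hypothetical convention: lines before a '---' separator are
    'key: value' metadata; everything after it is the CSV payload."""
    meta, rows_text = {}, body
    if "\n---\n" in body:
        head, rows_text = body.split("\n---\n", 1)
        for line in head.splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    rows = list(csv.reader(io.StringIO(rows_text)))
    return meta, rows

meta, rows = split_block("title: Results\nsource: data.csv\n---\na,b\n1,2\n")
# meta -> {'title': 'Results', 'source': 'data.csv'}
# rows -> [['a', 'b'], ['1', '2']]
```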
Finally, I do agree that naming a filter "csv" or "pandoc-csv" might collide with existing filters, but I don't see a problem with a `csv` class. The chance that a user ends up needing two different filters that both consume CSV code blocks is low enough for this to be a non-issue.