I came very close to getting Pandoc to actually do what I mean today. Unfortunately, when I ran my Pandoc wrapper script on a different input file, things fell apart. (The script divides my custom-formatted whole-story Markdown files into individual chapters, prepends a metadata block to each, and then calls Pandoc on each chapter.) It worked the first couple of times, then started complaining that a specific, perfectly well-formed UTF-8 character wasn’t well-formed: specifically, the CJKV ideograph for girl/woman/female, 女. Pandoc is the only software I can find that makes this claim about my file, so I am inclined to believe the file is not at fault, especially since it worked fine yesterday. I have reinstalled both Haskell and Pandoc, without effect.
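For what it’s worth, this is the kind of check I’ve been using to convince myself the file really is valid UTF-8 (a quick sketch in Python rather than anything Pandoc-specific; the path is a placeholder for one of my generated chapter files):

```python
def check_utf8(path):
    # Strictly decode the whole file; any malformed byte sequence
    # raises UnicodeDecodeError with the offending byte offset.
    with open(path, "rb") as f:
        data = f.read()
    try:
        data.decode("utf-8", errors="strict")
        return True
    except UnicodeDecodeError as e:
        print(f"Byte offset {e.start}: {e.reason}")
        return False
```

Every chapter file passes a strict decode like this, which is part of why I don’t believe Pandoc’s complaint.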
This is not the first time Pandoc has annoyed me with its UTF-8 handling; I have found that any attempt to print UTF-8 text to standard output or standard error from within my custom writer is doomed to failure. Some layer within Pandoc interprets the individual bytes of each UTF-8-encoded character as Latin-1 (or some similar single-byte encoding) and then re-encodes them as UTF-8, so every single character I try to output comes out as a string of two or three mojibake characters.
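The symptom looks exactly like classic double-encoding. Here is a minimal sketch of what I mean, in Python rather than my Lua writer, just to illustrate the byte-level confusion using that same ideograph:

```python
# Simulate the mojibake: take a UTF-8 encoded character, misread its
# bytes as Latin-1, and look at what comes out the other end.
original = "女"                            # U+5973, three bytes in UTF-8
utf8_bytes = original.encode("utf-8")      # b'\xe5\xa5\xb3'
misread = utf8_bytes.decode("latin-1")     # each byte becomes its own character
print(misread)                             # prints 'å¥³': one character became three
```

If that garbled string is then re-encoded as UTF-8 on the way out, you get exactly the two-or-three-characters-per-character output I’m seeing.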
Every software setting I have control of is set to UTF-8. Even setting the locale within Lua with “os.setlocale('en_CA.UTF-8')” doesn’t have any effect.
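As far as I can tell, GHC-built programs like Pandoc pick their IO encoding from the locale environment variables rather than from anything set after startup, so I’ve been double-checking those as well. A trivial sketch of the check (the variable list is my assumption about what matters):

```python
import os

def locale_env():
    # Collect the locale-related environment variables that, as far as
    # I know, determine the IO encoding a GHC program starts with.
    return {var: os.environ.get(var) for var in ("LC_ALL", "LC_CTYPE", "LANG")}
```

On my machine these all report a UTF-8 locale, so that doesn’t seem to be the problem either.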
I’m completely stumped here. Help!