public inbox archive for pandoc-discuss@googlegroups.com
* Odd MD2HTML issue: MD2DOCX, MD2ODT both produce expected output for nested lists, but not MD2HTML
@ 2020-02-21 17:22 Guy Stalnaker
       [not found] ` <71ff68f9-a7ac-4a40-98b7-b24711d6200f-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: Guy Stalnaker @ 2020-02-21 17:22 UTC (permalink / raw)
  To: pandoc-discuss


[-- Attachment #1.1: Type: text/plain, Size: 3480 bytes --]

I've no idea how to fix this. MD doc with extensive lists, nested maybe 
4-5 levels. Pandoc outputs to DOCX and ODT *perfectly*, but does not 
produce the same output for HTML5, HTML5 with TOC, or HTML strict. Some 
nested lists are ignored and "rolled up" into a <li> rather than set in 
their own <ol> or <ul> section.

I typically use Pandoc with a build system in Sublime Text 3, but testing 
by running Pandoc from the CLI gives the same malformed list HTML output.

What can I do? This document (it's simply a help page from a vendor's web 
site, converted so we have it locally for easy reference) is attached to 
this post. The original uses nested <dl><dt><dd> and is truly awful. My MD 
is cleaned up and converted to <h2>/<h3> and ordered lists. The file is 
202 lines long.

Here is the pandoc command that's being run:

C:\Users\<uid>\AppData\Local\Pandoc\pandoc.exe -f markdown+
blank_before_blockquote+fenced_code_blocks+backtick_code_blocks+line_blocks+
fancy_lists+startnum+definition_lists+example_lists+table_captions+
simple_tables+multiline_tables+pipe_tables+raw_html+yaml_metadata_block --to
=html5 --no-highlight --tab-stop=2 --standalone --toc

The ST3 plugin loads the output into a new buffer and opens it.

Here is an example of the malformed output that command produces (this 
starts at line 52 in the attached markdown document):

<ol type="1">
<li>For a file transfer job specifically, you can set a success or 
unsuccess condition for the job by analyzing the job properties. For 
example, you enter the following expression: 
<code>${this.File.1.Size}&gt;0</code><br />
if you want to qualify a file transfer job as successful when the size of 
the transferred file is greater than zero. 1. For a file transfer job 
specifically, you can set a success or unsuccess condition for the job by 
analyzing the job properties or the job output of another job in the same 
job stream. For example, you enter the following expression:<br />
<code>${this.NumberOfTransferredFiles}=${job.DOWNLOAD.NumberOfTransferredFiles}</code> 
if you want to qualify a file transfer job as successful when the number of 
uploaded files in the job is the same as the number of downloaded files in 
another job, named DOWNLOAD, in the same job stream.</li>
<li>All Xpath (XML Path Language) functions and expressions are supported, 
for the above conditions, in the <strong>Condition Value</strong> field: * 
String comparisons (contains, starts-with, matches, and so on) * String 
manipulations (concat, substring, uppercase, and so on) * Functions on 
numeric values (abs, floor, round, and so on) * Operators on numeric values 
(add, sum, div, and so on) * Boolean operators</li>
</ol>

Those * in the second <li> should have been converted into an unordered list.

Am I doing something wrong?

Pandoc version is latest:

> C:\Users\jstalnak\AppData\Local\Pandoc\pandoc.exe -v
pandoc.exe 2.9.2
Compiled with pandoc-types 1.20, texmath 0.12.0.1, skylighting 0.8.3.2
Default user data directory: C:\Users\jstalnak\AppData\Roaming\pandoc
Copyright (C) 2006-2019 John MacFarlane

-- 
You received this message because you are subscribed to the Google Groups "pandoc-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pandoc-discuss+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To view this discussion on the web visit https://groups.google.com/d/msgid/pandoc-discuss/71ff68f9-a7ac-4a40-98b7-b24711d6200f%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 8425 bytes --]

[-- Attachment #2: IWS - File transfer job definition properties.md --]
[-- Type: text/markdown, Size: 24973 bytes --]

---
title: "IBM Workload Scheduler File Transfer Job definition"
pagetitle: "IBM Workload Scheduler File Transfer Job definition"
lang: "en-US"
---

## IBM Workload Scheduler File Transfer Job definition

The properties of an IBM Workload Scheduler File Transfer Job definition.

Select or specify properties as required.

You can specify the following information:

### General

Use this page to specify general information about the job definition.

1.  **Name**: The name of the job definition.
1.  **Workstation**: The name of the workstation or workstation class on which the job runs.
1.  **Description**: Optionally, include a description of the job.
1.  **Output conditions**: Output conditions are used when you need a successor job to start only after certain conditions are satisfied by the predecessor job. They can also be used to specify alternative flows in a job stream starting from a predecessor job. The successor job is determined by which conditions the predecessor job satisfies. You can specify any number of output conditions. Output conditions can include conditions based on the successful outcome of the predecessor job, or other conditions that when met determine which flow in the job stream is undertaken.
	1.  **Successful output conditions**: A condition that when satisfied signifies that the predecessor job completed successfully. The job status is set to SUCC. Successful output conditions can be expressed as return codes, job status, output variables, or job log content.
		1.  **Condition Name**: Specify a name that identifies the successful condition that must be met by the predecessor job before a successor job can run.
		1.  **Condition Value**: Specify the value of the condition that signifies a successful outcome for the predecessor job.
		1.  For example, a successful output condition might be: **Condition Name** `STATUS_OK` and **Condition Value** `RC=0`
	1.  **Other conditions**: A condition that when satisfied by the predecessor determines which successor job runs. Conditions can be expressed as return codes, job status, output variables, or job log content.
		1.  **Condition Name**: Specify a name that identifies the condition that must be met by the predecessor job before a successor job can run.
		1.  **Condition Value**: Specify the value of the condition that must be met by the predecessor job before a successor job can run.
	1.  For example, you might want to create a condition that signifies that the predecessor job has completed with errors. You can define your output condition as follows: **Condition Name** `STATUS_ERR1` and **Condition Value** `RC=2`
	1.  The format of **Condition Value** for both successful output conditions and other conditions is as follows: `(RC <operator> <operand>)` where:
		1.  **RC**: The instruction keyword
			1.  **Operator**: The comparison operator. Allowed operators are comparison operators (=, != or \<\>, \>, \>=, \<, \<=) that can be combined with logical operators (AND, OR, NOT).
			1.  **Operand**: Any integer between -2147483647 and 2147483647.
			1.  **Successful output conditions**:
				*  `(RC<=3)` to qualify a job as successful when the job ends with a return code less than or equal to 3.
				*  `NOT ((RC=0) AND (RC=1))` to qualify a job as successful when the job ends with a return code different from 0 and 1.
				*  `(RC=2) OR (RC=4)` to qualify a job as successful when the job ends with a return code equal to 2 or equal to 4.
				*  `(RC<7) AND (RC!=5)` to qualify a job as successful when the job ends with a return code less than 7 and not equal to 5.
			1.  **Other conditions**:
				*  `(RC=1)` for a condition named `STATUS_ERR`.
				*  `(RC=4 OR RC=9)` for a condition named `FIRST_PATH`.
				*  `(RC<>5) OR (RC>2)` for a condition named `SECOND_FLOW`.
			1.  In the **Condition Value** field for both successful conditions and other output conditions, you can also express the output condition using variables other than the return code. For example, you can specify three different output conditions as follows:
				*  **Condition Name**: `STATUS_ERR` **Condition Value**: `RC=0`
				*  **Condition Name**: `STATUS_ERR1` **Condition Value**: `RC=${varname}`
				*  **Condition Name**: `STATUS_ERR2` **Condition Value**: `RC=${LOG.CONTENT}`
				*  You can set a success or other output condition for the job by analyzing the job output. To analyze the job output, you must check the `this.stdlist` variable.
				*  For example, you enter the following expression:  
						`contains("error",${this.stdlist})`  
				if you want to qualify a job as unsuccessful when the word \"error\" is contained in the job output.
				1.  For a file transfer job specifically, you can set a success or unsuccess condition for the job by analyzing the job properties. For example, you enter the following expression:
						`${this.File.1.Size}>0`  
				if you want to qualify a file transfer job as successful when the size of the transferred file is greater than zero.
				1.  For a file transfer job specifically, you can set a success or unsuccess condition for the job by analyzing the job properties or the job output of another job in the same job stream. For example, you enter the following expression:  
						`${this.NumberOfTransferredFiles}=${job.DOWNLOAD.NumberOfTransferredFiles}`
				if you want to qualify a file transfer job as successful when the number of uploaded files in the job is the same as the number of downloaded files in another job, named DOWNLOAD, in the same job stream.
				1.  All XPath (XML Path Language) functions and expressions are supported for the above conditions in the **Condition Value** field:
					*  String comparisons (contains, starts-with, matches, and so on)
					*  String manipulations (concat, substring, uppercase, and so on)
					*  Functions on numeric values (abs, floor, round, and so on)
					*  Operators on numeric values (add, sum, div, and so on)
					*  Boolean operators
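Read as a grammar, the `(RC <operator> <operand>)` format above maps onto ordinary boolean expressions. The following is a rough, purely illustrative sketch (the function name `eval_rc_condition` is invented here, not part of the product), treating `=` as equality and `<>` as inequality, as documented:

```python
import re

def eval_rc_condition(expr: str, rc: int) -> bool:
    """Evaluate a condition in the documented (RC <operator> <operand>)
    format against a return code. Illustrative sketch only; the real
    evaluation is performed by IBM Workload Scheduler itself."""
    s = expr.replace("<>", "!=")                  # <> means "not equal"
    s = re.sub(r"(?<![<>!=])=(?!=)", "==", s)     # bare = means equality
    s = re.sub(r"\bRC\b", str(rc), s)             # substitute the return code
    s = re.sub(r"\bAND\b", "and", s)              # logical keywords
    s = re.sub(r"\bOR\b", "or", s)
    s = re.sub(r"\bNOT\b", "not", s)
    return bool(eval(s, {"__builtins__": {}}))    # eval is acceptable for a sketch
```

For instance, `eval_rc_condition("(RC<7) AND (RC!=5)", 5)` is false, matching the documented example of a job that is successful only with a return code less than 7 and not equal to 5.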

### Affinity

Use this page to define the affinity relationship between two or more jobs. Affinity relationships cause jobs to run on the same workstation as the affine job.

1.  **IBM Workload Scheduler**: Use this section to specify that an IBM Workload Scheduler job is affine to another IBM Workload Scheduler job belonging to the same job stream.
1.  **Job name**: The name of the instance of the IBM Workload Scheduler job with which you want to establish an affinity relationship.
1.  **Recovery options**: Use this page to specify the recovery options to be followed if the job abends. You can choose to stop or continue the scheduling activity, rerun the job, display a prompt, or run a recovery job. Select an option from the **Action** menu to specify the action to be taken when the job abends. You can also choose to issue a recovery prompt or run a recovery job, after the action selected in this menu has been performed.
1.  **Stop**: If the job ends in error and there is a follows dependency, processing does not continue with the next job. This is the default option. In the **Prompt text** field, specify a recovery prompt to be displayed if the job ends in error. The recovery prompt is an ad hoc prompt that is always displayed and its status is **Not Asked**. If the job ends in error, the prompt status changes from **Not Asked** to **Asked**. If the job ends successfully, the status remains **Not Asked**. You can add a variable to this field by clicking **Add variable\...** and selecting an item from the list.
1.  **Continue**: If the job ends in error and there is a follows dependency, processing continues with the next job. The job is not listed as abended in the properties of the job stream. If no other problems occur, the job stream completes successfully.
1.  **Continue after prompt**: Continue with the next job after the operator has replied to the prompt. In the **Prompt text** field, specify a recovery prompt to be displayed if the job ends in error. The recovery prompt is an ad hoc prompt that is always displayed and its status is **Not Asked**. If the job ends in error, the prompt status changes from **Not Asked** to **Asked**. If the job ends successfully, the status remains **Not Asked**. You can add a variable to this field by clicking **Add variable\...** and selecting an item from the list.
1.  **Rerun**: If the job ends in error, rerun the job.
	1.  **Retry after (hh:mm)**: How often IBM Workload Scheduler attempts to rerun the failed job. The default value is 0. The maximum supported value is 99 hours and 59 minutes.
	1.  **Number of attempts**: Maximum number of rerun attempts to be performed. The default value is 1. The maximum supported value is 10,000 attempts.
	1.  **Run on the same workstation**: Specify whether the job must be rerun on the same workstation as the parent job. This option is applicable only to pool and dynamic pool workstations.
1.  **Rerun after prompt**: Rerun the job after the operator has replied to the prompt. In the **Prompt text** field, specify a recovery prompt to be displayed if the job ends in error. The recovery prompt is an ad hoc prompt that is always displayed and its status is **Not Asked**. If the job ends in error, the prompt status changes from **Not Asked** to **Asked**. If the job ends successfully, the status remains **Not Asked**. You can add a variable to this field by clicking **Add variable\...** and selecting an item from the list.
	1.  **Run on the same workstation**: Specify whether the job must be rerun on the same workstation as the parent job. This option is applicable only to pool and dynamic pool workstations.
1.  You can also choose whether you want to run a recovery job in case the parent job ends in error. Specify the following options: 
1.  **Job**: The name of a recovery job to run if the parent job ends in error. Recovery jobs are run only once for each instance of the parent job ended in error. You can type a job name or click the **Search** button and select it from the list.
1.  **Workstation**: The name of the workstation where the recovery job runs. The name is entered automatically when you select a recovery job.

### File Transfer

Use this section to define the options for the file transfer.

1.  **Transfer Type**: The type of transfer operation to be performed. 
	1.  Supported values are as follows:
		*  **Download**
			*  **Permissions (Octal Notation).** Specify file permissions for the user on the local system. File permissions are expressed in octal notation.
		*  **Upload**
			*  **Delete source files after transfer.** Specify if source files must be deleted after transfer.
1.  **Server**: The host name of the server where the file transfer is to be performed. If you want to specify a port number different from the default one, use the following syntax: 
		`server_name:port_number`
1.  **Remote file**:
	*  The name of the remote file that you want to transfer. When uploading, this is the target file, when downloading, this is the source file.
	*  You can use asterisks (\*) or question marks (?) as wildcard characters when downloading the file.
	*  If you want to maintain the same file name, specify the path in the **Remote file** field with two backslashes (\\\\) or a forward slash (/) at the end of the path.
1.  **Local file**:
	*  The name of the local file that you want to transfer. When uploading, this is the source file, when downloading, this is the target file.
	*  You can use asterisks (\*) or question marks (?) as wildcard characters when uploading the file.
1.  **Protocol**: The protocol to be used for the file transfer. Supported values are as follows:
	*  **FTP**: A standard network protocol used to exchange files over a TCP/IP-based network, such as the Internet. When transferring files to or from a z/OS server, the SBDataconn command is used.
	*  **FTPS**: An extension to the File Transfer Protocol (FTP) that adds support for the Transport Layer Security (TLS) cryptographic protocol. Specifically, the file transfer is performed using the TLS security protocol implicitly for the FTP sessions, providing a private security level for the data connection. TLS protocol version 1 is supported. The SSL session reuse configuration is not supported. If you specify this protocol, only user and password authentication is supported.
	*  **FTPES**: An extension to the File Transfer Protocol (FTP) that adds support for the Transport Layer Security (TLS) cryptographic protocol. Specifically, the file transfer is performed using the TLS security protocol explicitly for the FTP sessions, providing a private security level for the data connection. TLS protocol version 1 is supported. The SSL session reuse configuration is not supported. If you specify this protocol, only user and password authentication is supported.
	*  **WINDOWS**: The Microsoft file sharing protocol. Use the samba syntax to specify the path. Share the folder containing the files you want to transfer. When transferring ASCII files, the local and remote code pages are identified automatically.
	*  **SSH**: A network protocol that provides file access, file transfer, and file management functions over any data stream. When transferring ASCII files, the local and remote code pages are identified automatically.
	*  **AUTO**: The protocol is selected automatically between the Windows and SSH protocols. The product tries using the Windows protocol first. If this protocol fails, the SSH protocol is used. When using SSH, the path has to be in the SSH format. In this case the Cygwin ssh server is mounted on /home/Administrator.
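The **AUTO** behaviour described above is a simple try-first-then-fall-back pattern. A minimal sketch, assuming hypothetical `transfer_windows` and `transfer_ssh` callables (neither name comes from the product):

```python
def transfer_auto(path, transfer_windows, transfer_ssh):
    """Sketch of the documented AUTO selection: try the Windows protocol
    first; if it fails, fall back to the SSH protocol. The callables here
    are stand-ins for the real transfer implementations."""
    try:
        return transfer_windows(path)
    except OSError:
        return transfer_ssh(path)
```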

### Remote Credentials

Use this section to define the credentials for accessing the remote workstation.

1.  **User name**: The user name for accessing the remote workstation.
1.  **Password**: The password for accessing the remote workstation. You can click on the ellipsis to display the Password type options. Select one of the following buttons:
	1.  **Password**: Takes the password value entered in the `Password` field.
	1.  **User**:
		1.  **On dynamic agents**: It is resolved at run time with the password value defined for `User Name` in the IBM Workload Scheduler database using either the `User` definition panel or the `composer user` command.
		1.  You can also specify the user (and the related password) of another workstation if it is defined in the database. See the description of the **Variable** button.
		1.  **Attention:** User definitions lack referential integrity. This implies that, if a user definition referenced in the credentials section is changed or deleted, no warning or error message is returned until the job is run.
		1.  **On IBM Workload Scheduler for z/OS agents**: It is resolved at run time with the password value defined for `User Name` in the IBM Workload Scheduler for z/OS database using the USRREC initialization statement, where the value of `User Name` is defined by the `USRNAM` parameter and the password by `USRPSW`.
	1.  **Agent User**: It is resolved at run time with the password value defined for `User Name` locally on the dynamic agent or IBM Workload Scheduler for z/OS agent that will run the job (or on any agent of a pool or dynamic pool that may run the job) with the `param` command.
	1.  **Variable**: It is resolved at run time with the value defined for the variable you enter in the field (using the `${variable_name}` notation).
		1.  **On dynamic agents**: The variable must have been defined either locally on the agent, using the `param` command, or in the IBM Workload Scheduler database, utilizing the `User` panel or the `composer user` command.
		1.  For example:
			*  A variable defined locally on the agent, enter here as:  
					`${agent:file_With_Sections.password.dbPwd}`
			*  A variable defined in the database, enter here as:  
					`${password:workstation#user}`
		1.  You can use this button to specify the password of the remote user of a different workstation (as long as it was defined in the database) by entering the following string in the adjacent field:
					`${password:workstation_name#value_of_user_name_field}`
		1.  **On IBM Workload Scheduler for z/OS agents**:
			*  Use this field if you want to use the password defined for a user different from the one specified in the `User Name` field.
			*   For example if you are defining a File Transfer job and the local and remote user names are identical (`user1`), you can differentiate the password by defining two USRREC initialization statement entries (for example one for `user1` and one for `user1remote`). After doing this, in the remote user password field you specify:  
					`${password:user1remote}`
			*  The traditional variable substitution mechanism which uses variables defined in the IBM Workload Scheduler for z/OS database variable tables is not supported in this field.
		1.  Variables are resolved both when you generate a plan and when you submit a job or a job stream. While defining jobs, the variables are not resolved and cannot be used in lists or for test connections. 
		1.  The password is not required if a keystore file path and password are specified when using the SSH protocol.
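The `${...}` references documented above follow fixed shapes, so they can be recognized mechanically. The following is a hypothetical illustration only (the `classify` helper and its pattern names are invented here; the real resolution happens inside IBM Workload Scheduler at run time):

```python
import re

# Shapes taken from the examples above: ${agent:file.section.variable}
# resolved locally on the agent, and ${password:workstation#user} resolved
# from the database. Pattern names are ours, not the product's.
PATTERNS = {
    "agent_local": re.compile(
        r"^\$\{agent:(?P<file>[^.]+)\.(?P<section>[^.]+)\.(?P<var>[^}]+)\}$"),
    "db_password": re.compile(
        r"^\$\{password:(?P<workstation>[^#}]+)#(?P<user>[^}]+)\}$"),
}

def classify(ref: str):
    """Return (kind, parts) for a recognized ${...} reference, else (None, {})."""
    for kind, pattern in PATTERNS.items():
        m = pattern.match(ref)
        if m:
            return kind, m.groupdict()
    return None, {}
```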

### Local Credentials

Use this section to define the credentials for accessing the local workstation.

1.  **User name**: The user name for accessing the local workstation.
1.  **Password**: The password for accessing the local workstation. You can click on the ellipsis to display the Password type options. Select one of the following buttons:
	1.  **Password**: Takes the password value entered in the `Password` field.
	1.  **User**:
		1.  **On dynamic agents**: It is resolved at run time with the password value defined for `User Name` in the IBM Workload Scheduler database using either the `User` definition panel or the `composer user` command.
		1.  You can also specify the user (and the related password) of another workstation if it is defined in the database. See the description of the **Variable** button. 
		1.  **Attention**: User definitions lack referential integrity. This implies that, if a user definition referenced in the credentials section is changed or deleted, no warning or error message is returned until the job is run.
		1.  **On IBM Workload Scheduler for z/OS agents**: It is resolved at run time with the password value defined for `User Name` in the IBM Workload Scheduler for z/OS database using the USRREC initialization statement, where the value of `User Name` is defined by the `USRNAM` parameter and the password by `USRPSW`.
	1.  **Agent User**: It is resolved at run time with the password value defined for `User Name` locally on the dynamic agent or IBM Workload Scheduler for z/OS agent that will run the job (or on any agent of a pool or dynamic pool that may run the job) with the `param` command.
	1.  **Variable**: It is resolved at run time with the value defined for the variable you enter in the field (using the `${variable_name}` notation).
		1.  **On dynamic agents**: The variable must have been defined either locally on the agent, using the `param` command, or in the IBM Workload Scheduler database, utilizing the `User` panel or the `composer user` command. For example:
			*  A variable defined locally on the agent, enter here as:  
					`${agent:file_With_Sections.password.dbPwd}`
			*  A variable defined in the database, enter here as:  
					`${password:workstation#user}`
			*  You can use this button to specify the password of the remote user of a different workstation (as long as it was defined in the database) by entering the following string in the adjacent field:  
					`${password:workstation_name#value_of_user_name_field}`
		1.  **On IBM Workload Scheduler for z/OS agents**:
			*  Use this field if you want to use the password defined for a user different from the one specified in the `User Name` field.
			*  For example if you are defining a File Transfer job and the local and remote user names are identical (`user1`), you can differentiate the password by defining two USRREC initialization statement entries (for example one for `user1` and one for `user1remote`). After doing this, in the remote user password field you specify:  
					`${password:user1remote}`
			*  The traditional variable substitution mechanism which uses variables defined in the IBM Workload Scheduler for z/OS database variable tables is not supported in this field.
			*  Variables are resolved both when you generate a plan and when you submit a job or a job stream. While defining jobs, the variables are not resolved and cannot be used in lists or for test connections.
	1.  **Certificates**: Use this section to specify a keystore file containing the private key and the keystore password used to make the connection.
		1.  **KeyStore file path**: The fully qualified path of the keystore file containing the private key used to make the connection. A keystore is a database of keys. Private keys in a keystore have a certificate chain associated with them, which authenticates the corresponding public key on the remote server. A keystore also contains certificates from trusted entities. Applicable to SSH protocol only.
		1.  **Password**: The password that protects the private key and is required to make the connection. This attribute is required only if you specify a keystore file path. If the keystore file path and keystore password combination fail to make a connection, then an attempt is made using the user name and password that correspond to the user authorized to start a connection on the remote computer.
	1.  **Test Connection**: Verifies the connection to the specified workstation.

### Transfer Options

Use this section to define the options for the file transfer.

1.  **Transfer mode**: The type of encoding for the file transfer. The following values are supported:
	-  Binary
	-  Text
1.  **Convert code page**: Select to enable code page conversion.
1.  **Code page Conversion**: Use this section to specify the code page used on the remote and local workstations. Ensure that the file is written in the correct code page for the destination system before transferring it.
	1.  **Remote code page**: The code page used on the remote workstation.
	1.  **Local code page**: The code page used on the local workstation.
	1.  **Timeout**: Specifies the maximum number of seconds that can be used for the file transfer operation. The default value is **60** seconds.
1.  **Connection Mode**: The type of connection for the file transfer. Specifies whether the server is passive or active when establishing connections for data transfers. The following values are supported:
	1.  **Active Mode**: The server establishes the data connection with the client. This is the default value.
	1.  **Passive Mode**: The client establishes the data connection with the server.
	1.  **Port range**: The port range to use on the client side of TCP/IP data connections. The port range limits the port numbers sent by the FTP PORT command. Use this option if you have highly restrictive firewall rules. If you do not specify port range, the operating system determines the port numbers to use. The following values are supported:
		1.  **Min port**: The minimum port value to use on the client side of TCP/IP data connections. Specify a value in the range from 0 to 65535. For example, if you set this value to 1035, the product restricts the port numbers to be equal to or greater than port 1035.
		1.  **Max port**: The maximum port value to use on the client side of TCP/IP data connections. Specify a value in the range from 0 to 65535. For example, if you set this value to 1038, the product restricts the port number to be less than or equal to port 1038.
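The Min/Max port constraints above amount to a small validation rule. A minimal sketch, with the function name `validate_port_range` invented for illustration:

```python
def validate_port_range(min_port: int, max_port: int) -> None:
    """Check the documented constraints: each value in 0-65535, and the
    minimum must not exceed the maximum. Raises ValueError otherwise."""
    for name, value in (("Min port", min_port), ("Max port", max_port)):
        if not 0 <= value <= 65535:
            raise ValueError(f"{name} must be in the range 0-65535, got {value}")
    if min_port > max_port:
        raise ValueError("Min port must not exceed Max port")
```

For example, the documented pair 1035/1038 passes, while a reversed range is rejected.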

### Versions

Use this section to view the history of object changes and work with the different versions. You can run the following actions:

1.  **Compare**: Select two different versions and compare them.
1.  **Restore\...**: Select a previous version and start the restore.

#### Note

Credential management is different between version 8.6 and version 8.5.1, Fix Pack 1, because version 8.5.1, Fix Pack 1 supports only one set of credentials. A compatibility issue might arise if you use a file transfer job defined for IBM® Workload Scheduler, version 8.5.1, Fix Pack 1, or if you use a file transfer job, version 8.6, but specify a single set of credentials. If the file transfer job contains a single set of credentials, the system automatically uses the path management implemented for file transfer jobs in IBM Workload Scheduler, version 8.5.1, Fix Pack 1, where the following rule applies:
-   The specified path applies to the home directory of the user on the FTP, SSH, or Windows server, even if the path starts with a forward slash (/).


* Re: Odd MD2HTML issue: MD2DOCX, MD2ODT both produce expected output for nested lists, but not MD2HTML
       [not found] ` <71ff68f9-a7ac-4a40-98b7-b24711d6200f-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org>
@ 2020-02-21 19:24   ` John MacFarlane
       [not found]     ` <yh480k5zfzrbf6.fsf-pgq/RBwaQ+zq8tPRBa0AtqxOck334EZe@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: John MacFarlane @ 2020-02-21 19:24 UTC (permalink / raw)
  To: Guy Stalnaker, pandoc-discuss


The problem is --tab-stop=2.

If you remove that, it should work fine. Why is that there? It
causes your tab-indented lists not to be indented far enough
to belong in the lists above them.
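Roughly, the arithmetic behind this can be sketched: pandoc's markdown reader requires a nested item to be indented to the content column of the enclosing marker (4 columns for a marker like `1.  `), and a single tab expanded at a stop of 2 only reaches column 2, so the nested marker is read as lazy continuation text instead:

```python
MARKER = "1.  "  # ordered-list marker: nested content must start at column 4

for tab_stop in (4, 2):
    indent = "\t".expandtabs(tab_stop)        # how the tab is expanded
    deep_enough = len(indent) >= len(MARKER)  # reaches the content column?
    print(f"--tab-stop={tab_stop}: tab -> {len(indent)} columns, "
          f"nested item recognized: {deep_enough}")
```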

Guy Stalnaker <jimmyg521-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> writes:

> I've no idea how to fix this. MD doc with extensive lists, nested maybe 
> 4-5 levels. Pandoc outputs to DOCX and ODT *perfectly*, but does not 
> produce the same output for HTML5, HTML5 with TOC, or HTML strict. Some 
> nested lists are ignored and "rolled up" into a <li> rather than set in 
> their own <ol> or <ul> section.
>
> I typically use Pandoc with a build system in SublimeText 3, but testing 
> with running Pandoc from the CLI and get the same mal-formed list HTML 
> output there.
>
> What can I do? This document (it's simply a help page from a vendor's web 
> site converted so we have it locally for easy reference) is added to this 
> post. Original uses <dl><dt><dd> nested and it truly awful. My MD is 
> cleaned up and converted to <h2><h3> and ordered lists. File is 202 lines 
> long.
>
> Here is the pandoc command that's being run:
>
> C:\Users\<uid>\AppData\Local\Pandoc\pandoc.exe -f markdown+
> blank_before_blockquote+fenced_code_blocks+backtick_code_blocks+line_blocks+
> fancy_lists+startnum+definition_lists+example_lists+table_captions+
> simple_tables+multiline_tables+pipe_tables+raw_html+yaml_metadata_block --to
> =html5 --no-highlight --tab-stop=2 --standalone --toc
>
> The ST3 plugin loads the output into a new buffer and opens it.
>
> Here is an example of the malformed output that command produces (this 
> starts at line 52 in the attached markdown document):
>
> <ol type="1">
> <li>For a file transfer job specifically, you can set a success or 
> unsuccess condition for the job by analyzing the job properties. For 
> example, you enter the following expression: 
> <code>${this.File.1.Size}&gt;0</code><br />
> if you want to qualify a file transfer job as successful when the size of 
> the transferred file is greater than zero. 1. For a file transfer job 
> specifically, you can set a success or unsuccess condition for the job by 
> analyzing the job properties or the job output of another job in the same 
> job stream. For example, you enter the following expression:<br />
> <code>${this.NumberOfTransferredFiles}=${job.DOWNLOAD.NumberOfTransferredFiles}</code> 
> if you want to qualify a file transfer job as successful when the number of 
> uploaded files in the job is the same as the number of downloaded files in 
> another job, named DOWNLOAD, in the same job stream.</li>
> <li>All Xpath (XML Path Language) functions and expressions are supported, 
> for the above conditions, in the <strong>Condition Value</strong> field: * 
> String comparisons (contains, starts-with, matches, and so on) * String 
> manipulations (concat, substring, uppercase, and so on) * Functions on 
> numeric values (abs, floor, round, and so on) * Operators on numeric values 
> (add, sum, div, and so on) * Boolean operators</li>
> </ol>
>
> Those * in the second <li> should have been converted to an unordered list.
>
> Am I doing something wrong?
>
> Pandoc version is latest:
>
>> C:\Users\jstalnak\AppData\Local\Pandoc\pandoc.exe -v
> pandoc.exe 2.9.2
> Compiled with pandoc-types 1.20, texmath 0.12.0.1, skylighting 0.8.3.2
> Default user data directory: C:\Users\jstalnak\AppData\Roaming\pandoc
> Copyright (C) 2006-2019 John MacFarlane
>
> -- 
> You received this message because you are subscribed to the Google Groups "pandoc-discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to pandoc-discuss+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
> To view this discussion on the web visit https://groups.google.com/d/msgid/pandoc-discuss/71ff68f9-a7ac-4a40-98b7-b24711d6200f%40googlegroups.com.
> ---
> title: "IBM Workload Scheduler File Transfer Job definition"
> pagetitle: "IBM Workload Scheduler File Transfer Job definition"
> lang: "en-US"
> ---
>
> ## IBM Workload Scheduler File Transfer Job definition
>
> The properties of an IBM Workload Scheduler File Transfer Job definition.
>
> Select or specify properties as required.
>
> You can specify the following information:
>
> ### General
>
> Use this page to specify general information about the job definition.
>
> 1.  **Name**: The name of the job definition.
> 1.  **Workstation**: The name of the workstation or workstation class on which the job runs.
> 1.  **Description**: Optionally, include a description of the job.
> 1.  **Output conditions**: Output conditions are used when you need a successor job to start only after certain conditions are satisfied by the predecessor job. They can also be used to specify alternative flows in a job stream starting from a predecessor job. The successor job is determined by which conditions the predecessor job satisfies. You can specify any number of output conditions. Output conditions can include conditions based on the successful outcome of the predecessor job, or other conditions that when met determine which flow in the job stream is undertaken.
> 	1.  **Successful output conditions**: A condition that when satisfied signifies that the predecessor job completed successfully. The job status is set to SUCC. Successful output conditions can be expressed as return codes, job status, output variables or based on job log content.
> 		1.  **Condition Name**: Specify a name that identifies the successful condition that must be met by the predecessor job before a successor job can run.
> 		1.  **Condition Value**: Specify the value of the condition that signifies a successful outcome for the predecessor job.
> 		1.  For example, a successful output condition might be: **Condition Name** `STATUS_OK` and **Condition Value** `RC=0`
> 	1.  **Other conditions**: A condition that when satisfied by the predecessor determines which successor job runs. Conditions can be expressed as return codes, job status, output variables or based on job log content.
> 		1.  **Condition Name**: Specify a name that identifies the condition that must be met by the predecessor job before a successor job can run.
> 		1.  **Condition Value**: Specify the value of the condition that must be met by the predecessor job before a successor job can run.
> 	1.  For example, you might want to create a condition that signifies that the predecessor job has completed with errors. You can define your output condition as follows: **Condition Name** `STATUS_ERR1` and **Condition Value** `RC=2`
> 	1.  The format of **Condition Value** for both successful output conditions and other conditions is as follows: `(RC <operator> <operand>)` where:
> 		1.  **RC**: The instruction keyword
> 			1.  **Operator**: The comparison operator. Allowed operators are comparison operators (=, != or \<\>, \>, \>=, \<, \<=) that can be combined with logical operators (AND, OR, NOT).
> 			1.  **Operand**: Any integer between -2147483647 and 2147483647.
> 			1.  **Successful output conditions**:
> 				*  `(RC<=3)` to qualify a job as successful when the job ends with a return code less than or equal to 3.
> 				*  `NOT ((RC=0) AND (RC=1))` to qualify a job successful when the job ends with a return code different from 0 and 1.
> 				*  `(RC=2) OR (RC=4)` to qualify a job successful when the job ends with a return code equal to 2 or equal to 4.
> 				*  `(RC<7) AND (RC!= 5)` to qualify a job successful when the job ends with a return code less than 7 and not equal to 5.
> 			1.  **Other conditions**:
> 				*  `(RC=1)` for a condition named `STATUS_ERR`.
> 				*  `(RC=4 OR RC=9)` for a condition named `FIRST_PATH`
> 				*  `(RC <>5) OR (RC > 2)` for a condition named `SECOND_FLOW`
> 			1.  In the **Condition Value** field for both successful conditions and other output conditions, you can also express the output condition using variables other than the return code. For example, you can specify three different output conditions as follows:
> 				*  **Condition Name**: `STATUS_ERR` **Condition Value**: `RC=0`
> 				*  **Condition Name**: `STATUS_ERR1` **Condition Value**: `RC=${varname}`
> 				*  **Condition Name**: `STATUS_ERR2` **Condition Value**: `RC=${LOG.CONTENT}`
> 				*  You can set a success or other output condition for the job by analyzing the job output. To analyze the job output, you must check the `this.stdlist` variable.
> 				*  For example, you enter the following expression:  
> 						`contains("error",${this.stdlist})`  
> 				if you want to qualify a job as unsuccessful when the word \"error\" is contained in the job output.
> 				1.  For a file transfer job specifically, you can set a success or unsuccess condition for the job by analyzing the job properties. For example, you enter the following expression:
> 						`${this.File.1.Size}>0`  
> 				if you want to qualify a file transfer job as successful when the size of the transferred file is greater than zero. 1.  For a file transfer job specifically, you can set a success or unsuccess condition for the job by analyzing the job properties or the job output of another job in the same job stream. For example, you enter the following expression:  
> 						`${this.NumberOfTransferredFiles}=${job.DOWNLOAD.NumberOfTransferredFiles}`
> 				if you want to qualify a file transfer job as successful when the number of uploaded files in the job is the same as the number of downloaded files in another job, named DOWNLOAD, in the same job stream.
> 				1.  All Xpath (XML Path Language) functions and expressions are supported, for the above conditions, in the **Condition Value** field:
> 					*  String comparisons (contains, starts-with, matches, and so on)
> 					*  String manipulations (concat, substring, uppercase, and so on)
> 					*  Functions on numeric values (abs, floor, round, and so on)
> 					*  Operators on numeric values (add, sum, div, and so on)
> 					*  Boolean operators
>
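
An aside on the `(RC <operator> <operand>)` grammar quoted above: it maps
almost one-to-one onto a Python expression, which makes it easy to
sanity-check a condition string before putting it in a job definition. A
rough sketch (the helper below is mine, not part of IBM Workload Scheduler,
and the keyword/operator translation is an assumption based only on the
examples in the doc):

```python
import re

def rc_condition_met(expr, rc):
    """Evaluate a condition such as '(RC<=3)' or 'NOT ((RC=0) AND (RC=1))'
    against a return code. Hypothetical checker mirroring the documented
    (RC <operator> <operand>) format, not the product's real parser."""
    py = expr.replace("<>", "!=")
    # a bare '=' means equality; leave '<=', '>=', '!=' untouched
    py = re.sub(r"(?<![<>!=])=(?!=)", "==", py)
    # map the logical keywords onto Python's operators
    py = re.sub(r"\bAND\b", " and ", py)
    py = re.sub(r"\bOR\b", " or ", py)
    py = re.sub(r"\bNOT\b", " not ", py)
    # eval with empty builtins is for illustration only
    return bool(eval(py, {"__builtins__": {}}, {"RC": rc}))
```

For instance, `rc_condition_met("(RC<7) AND (RC!= 5)", 5)` comes out false,
matching the doc's reading of that example.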
> ### Affinity
>
> Use this page to define the affinity relationship between two or more jobs. Affinity relationships cause jobs to run on the same workstation as the affine job.
>
> 1.  **IBM Workload Scheduler**: Use this section to specify that a IBM Workload Scheduler job is affine to another IBM Workload Scheduler job belonging to the same job stream.
> 1.  **Job name**: The name of the instance of the IBM Workload Scheduler job with which you want to establish an affinity relationship.
> 1.  **Recovery options**: Use this page to specify the recovery options to be followed if the job abends. You can choose to stop or continue the scheduling activity, rerun the job, display a prompt, or run a recovery job. Select an option from the **Action** menu to specify the action to be taken when the job abends. You can also choose to issue a recovery prompt or run a recovery job, after the action selected in this menu has been performed.
> 1.  **Stop**: If the job ends in error and there is a follows dependency, processing does not continue with the next job. This is the default option. In the **Prompt text** field, specify a recovery prompt to be displayed if the job ends in error. The recovery prompt is an ad hoc prompt that is always displayed and its status is **Not Asked**. If the job ends in error, the prompt status changes from **Not Asked** to **Asked**. If the job ends successfully, the status remains **Not Asked**. You can add a variable to this field by clicking **Add variable\...** and selecting an item from the list.
> 1.  **Continue**: If the job ends in error and there is a follows dependency, processing continues with the next job. The job is not listed as abended in the properties of the job stream. If no other problems occur, the job stream completes successfully.
> 1.  **Continue after prompt**: Continue with the next job after the operator has replied to the prompt. In the **Prompt text** field, specify a recovery prompt to be displayed if the job ends in error. The recovery prompt is an ad hoc prompt that is always displayed and its status is **Not Asked**. If the job ends in error, the prompt status changes from **Not Asked** to **Asked**. If the job ends successfully, the status remains **Not Asked**. You can add a variable to this field by clicking **Add variable\...** and selecting an item from the list.
> 1.  **Rerun**: If the job ends in error, rerun the job.
> 	1.  **Retry after (hh:mm)**: How often IBM Workload Scheduler attempts to rerun the failed job. The default value is 0. The maximum supported value is 99 hours and 59 minutes.
> 	1.  **Number of attempts**: Maximum number of rerun attempts to be performed. The default value is 1. The maximum supported value is 10,000 attempts.
> 	1.  **Run on the same workstation**: Specify whether the job must be rerun on the same workstation as the parent job. This option is applicable only to pool and dynamic pool workstations.
> 1.  **Rerun after prompt**: Rerun the job after the operator has replied to the prompt. In the **Prompt text** field, specify a recovery prompt to be displayed if the job ends in error. The recovery prompt is an ad hoc prompt that is always displayed and its status is **Not Asked**. If the job ends in error, the prompt status changes from **Not Asked** to **Asked**. If the job ends successfully, the status remains **Not Asked**. You can add a variable to this field by clicking **Add variable\...** and selecting an item from the list.
> 	1.  **Run on the same workstation**: Specify whether the job must be rerun on the same workstation as the parent job. This option is applicable only to pool and dynamic pool workstations.
> 1.  You can also choose whether you want to run a recovery job in case the parent job ends in error. Specify the following options: 
> 1.  **Job**: The name of a recovery job to run if the parent job ends in error. Recovery jobs are run only once for each instance of the parent job ended in error. You can type a job name or click the **Search** button and select it from the list.
> 1.  **Workstation**: The name of the workstation where the recovery job runs. The name is entered automatically when you select a recovery job.
>
> ### File Transfer
>
> Use this section to define the options for the file transfer.
>
> 1.  **Transfer Type**: The type of transfer operation to be performed. 
> 	1.  Supported values are as follows:
> 		*  **Download**
> 			*  **Permissions (Octal Notation).** Specify file permissions for the user on the local system. File permissions are expressed as octal notation.
> 		*  **Upload**
> 			*  **Delete source files after transfer.** Specify if source files must be deleted after transfer.
> 1.  **Server**: The host name of the server where the file transfer is to be performed. If you want to specify a port number different from the default one, use the following syntax: 
> 		`server_name:port_number`
> 1.  **Remote file**:
> 	*  The name of the remote file that you want to transfer. When uploading, this is the target file, when downloading, this is the source file.
> 	*  You can use asterisks (\*) or question marks (?) as wildcard characters when downloading the file.
> 	*  If you want to maintain the same file name, specify the path in the **Remote file** field with two backslashes (\\\\) or a forward slash (/) at the end of the path.
> 1.  **Local file**:
> 	*  The name of the local file that you want to transfer. When uploading, this is the source file, when downloading, this is the target file.
> 	*  You can use asterisks (\*) or question marks (?) as wildcard characters when uploading the file.
> 1.  **Protocol**: The protocol to be used for the file transfer. Supported values are as follows:
> 	*  **FTP**: A standard network protocol used to exchange files over a TCP/IP-based network, such as the Internet. When transferring files to or from a z/OS server, the SBDataconn command is used.
> 	*  **FTPS**: An extension to the File Transfer Protocol (FTP) that adds support for the Transfer Layer Security (TLS) cryptographic protocol. Specifically, the file transfer is performed using implicitly the TLS security protocol for the FTP sessions, providing a private security level for the data connection. TLS protocol version 1 is supported. The SSL session reuse configuration is not supported. If you specify this protocol only the user and password authentication is supported.
> 	*  **FTPES**: An extension to the File Transfer Protocol (FTP) that adds support for the Transfer Layer Security (TLS) cryptographic protocol. Specifically, the file transfer is performed using explicitly the TLS security protocol for the FTP sessions, providing a private security level for the data connection. TLS protocol version 1 is supported. The SSL session reuse configuration is not supported. If you specify this protocol only the user and password authentication is supported.
> 	*  **WINDOWS**: The Microsoft file sharing protocol. Use the samba syntax to specify the path. Share the folder containing the files you want to transfer. When transferring ASCII files, the local and remote code pages are identified automatically.
> 	*  **SSH**: A network protocol that provides file access, file transfer, and file management functions over any data stream. When transferring ASCII files, the local and remote code pages are identified automatically.
> 	*  **AUTO**: The protocol is selected automatically between the Windows and SSH protocols. The product tries using the Windows protocol first. If this protocol fails, the SSH protocol is used. When using SSH, the path has to be in the SSH format. In this case the Cygwin ssh server is mounted on /home/Administrator.
>
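
On the wildcard syntax for **Remote file** and **Local file** above: the `*`
and `?` characters follow the usual glob semantics, so a pattern can be tried
out locally with Python's `fnmatch` (illustration only; this is not what the
product uses internally):

```python
from fnmatch import fnmatch

# '*' matches any run of characters, '?' matches exactly one character
assert fnmatch("report_2020.csv", "report_*.csv")
assert fnmatch("fileA.txt", "file?.txt")
assert not fnmatch("file10.txt", "file?.txt")  # two characters, '?' wants one
```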
> ### Remote Credentials
>
> Use this section to define the credentials for accessing the remote workstation.
>
> 1.  **User name**: The user name for accessing the remote workstation.
> 1.  **Password**: The password for accessing the remote workstation. You can click on the ellipsis to display the Password type options. Select one of the following buttons:
> 	1.  **Password**: Takes the password value entered in the `Password` field.
> 	1.  **User**:
> 		1.  **On dynamic agents**: It is resolved at run time with the password value defined for `User Name` in the IBM Workload Scheduler database using either the `User` definition panel or the `composer user` command.
> 		1.  You can also specify the user (and the related password) of another workstation if it is defined in the database. See the description of the **Variable** button.
> 		1.  **Attention:** User definitions lack referential integrity. This implies that, if a user definition referenced in the credentials section is changed or deleted, no warning or error message is returned until the job is run.
> 		1.  **On IBM Workload Scheduler for z/OS agents**: It is resolved at run time with the password value defined for `User Name` in the IBM Workload Scheduler for z/OS database using the USRREC initialization statement, where the value of `User Name` is defined by the `USRNAM` parameter and the password by `USRPSW`.
> 	1.  **Agent User**: It is resolved at run time with the password value defined for `User Name` locally on the dynamic agent or IBM Workload Scheduler for z/OS agent that will run the job (or on any agent of a pool or dynamic pool that may run the job) with the `param` command.
> 	1.  **Variable**: It is resolved at run time with the value defined for the variable you enter in the field (using the `${variable_name}` notation).
> 		1.  **On dynamic agents**: The variable must have been defined either locally on the agent, using the `param` command, or in the IBM Workload Scheduler database, utilizing the `User` panel or the `composer username` command. 
> 		1.  For example:
> 			*  A variable defined locally on the agent, enter here as:  
> 					`${agent:file_With_Sections.password.dbPwd}`
> 			*  A variable defined in the database, enter here as:  
> 					`${password:workstation#user}`
> 		1.  You can use this button to specify the password of the remote user of a different workstation (as long as it was defined in the database) by entering the following string in the adjacent field:
> 					`${password:workstation_name:value_of_user_name_field}`
> 		1.  **On IBM Workload Scheduler for z/OS agents**:
> 			*  Use this field if you want to use the password defined for a user different from the one specified in the `User Name` field.
> 			*  For example, if you are defining a File Transfer job and the local and remote user names are identical (`user1`), you can differentiate the password by defining two USRREC initialization statement entries (for example, one for `user1` and one for `user1remote`). After doing this, in the remote user password field you specify:  
> 					`${password:user1remote}`
> 			*  The traditional variable substitution mechanism which uses variables defined in the IBM Workload Scheduler for z/OS database variable tables is not supported in this field.
> 		1.  Variables are resolved both when you generate a plan and when you submit a job or a job stream. While defining jobs, the variables are not resolved and cannot be used in lists or for test connections. 
> 		1.  The password is not required if a keystore file path and password are specified when using the SSH protocol.
>
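
One more note on the `${variable_name}` notation used throughout this
section: it is plain placeholder substitution. A minimal sketch of the
resolution step (the `resolve` helper, the lookup table, and the
workstation/user names are all hypothetical; real resolution happens inside
the agent at plan-generation or submission time):

```python
import re

def resolve(text, variables):
    """Replace each ${name} with its value; unknown names are left as-is."""
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  text)

creds = {"password:CPU_A#jsmith": "s3cret"}        # hypothetical entry
print(resolve("${password:CPU_A#jsmith}", creds))  # -> s3cret
print(resolve("${password:unknown}", creds))       # unresolved, left alone
```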
> ### Local Credentials
>
> Use this section to define the credentials for accessing the local workstation.
>
> 1.  **User name**: The user name for accessing the local workstation.
> 1.  **Password**: The password for accessing the local workstation. You can click on the ellipsis to display the Password type options. Select one of the following buttons:
> 	1.  **Password**: Takes the password value entered in the `Password` field.
> 	1.  **User**:
> 		1.  **On dynamic agents**: It is resolved at run time with the password value defined for `User Name` in the IBM Workload Scheduler database using either the `User` definition panel or the `composer user` command.
> 		1.  You can also specify the user (and the related password) of another workstation if it is defined in the database. See the description of the **Variable** button. 
> 		1.  **Attention**: User definitions lack referential integrity. This implies that, if a user definition referenced in the credentials section is changed or deleted, no warning or error message is returned until the job is run.
> 		1.  **On IBM Workload Scheduler for z/OS agents**: It is resolved at run time with the password value defined for `User Name` in the IBM Workload Scheduler for z/OS database using the USRREC initialization statement, where the value of `User Name` is defined by the `USRNAM` parameter and the password by `USRPSW`.
> 	1.  **Agent User**: It is resolved at run time with the password value defined for `User Name` locally on the dynamic agent or IBM Workload Scheduler for z/OS agent that will run the job (or on any agent of a pool or dynamic pool that may run the job) with the `param` command.
> 	1.  **Variable**: It is resolved at run time with the value defined for the variable you enter in the field (using the `${variable_name}` notation).
> 		1.  **On dynamic agents**: The variable must have been defined either locally on the agent, using the `param` command, or in the IBM Workload Scheduler database, utilizing the `User` panel or the `composer username` command. For example:
> 			*  A variable defined locally on the agent, enter here as:  
> 					`${agent:file_With_Sections.password.dbPwd}`
> 			*  A variable defined in the database, enter here as:  
> 					`${password:workstation#user}`
> 			*  You can use this button to specify the password of the remote user of a different workstation (as long as it was defined in the database) by entering the following string in the adjacent field:  
> 					`${password: workstation_name #  value_of_user_name_field }`
> 		1.  **On IBM Workload Scheduler for z/OS agents**:
> 			*  Use this field if you want to use the password defined for a user different from the one specified in the `User Name` field.
> 			*  For example, if you are defining a File Transfer job and the local and remote user names are identical (`user1`), you can differentiate the password by defining two USRREC initialization statement entries (for example, one for `user1` and one for `user1remote`). After doing this, in the remote user password field you specify:  
> 					`${password:user1remote}`
> 			*  The traditional variable substitution mechanism which uses variables defined in the IBM Workload Scheduler for z/OS database variable tables is not supported in this field.
> 			*  Variables are resolved both when you generate a plan and when you submit a job or a job stream. While defining jobs, the variables are not resolved and cannot be used in lists or for test connections.
> 	1.  **Certificates**: Use this section to specify a keystore file containing the private key and the keystore password used to make the connection.
> 		1.  **KeyStore file path**: The fully qualified path of the keystore file containing the private key used to make the connection. A keystore is a database of keys. Private keys in a keystore have a certificate chain associated with them, which authenticates the corresponding public key on the remote server. A keystore also contains certificates from trusted entities. Applicable to SSH protocol only.
> 		1.  **Password**: The password that protects the private key and is required to make the connection. This attribute is required only if you specify a keystore file path. If the keystore file path and keystore password combination fail to make a connection, then an attempt is made using the user name and password that correspond to the user authorized to start a connection on the remote computer.
> 	1.  **Test Connection**: Verifies the connection to the specified
>
> ### Transfer Options
>
> Use this section to define the options for the file transfer.
>
> 1.  **Transfer mode**: The type of encoding for the file transfer. The following values are supported:
> 	-  Binary
> 	-  Text
> 1.  **Convert code page**: Select to enable code page conversion.
> 1.  **Code page Conversion**: Use this section to specify the code page used on the remote and local workstations. Ensure that the file is written in the correct code page for the destination system before transferring it.
> 	1.  **Remote code page**: The code page used on the remote workstation.
> 	1.  **Local code page**: The code page used on the local workstation.
> 	1.  **Timeout**: Specifies the maximum number of seconds that can be used for the file transfer operation. The default value is **60** seconds.
> 1.  **Connection Mode**: The type of connection for the file transfer. Specifies whether the server is passive or active when establishing connections for data transfers. The following values are supported:
> 	1.  **Active Mode**: The server establishes the data connection with the client. This is the default value.
> 	1.  **Passive Mode**: The client establishes the data connection with the server.
> 	1.  **Port range**: The port range to use on the client side of TCP/IP data connections. The port range limits the port numbers sent by the FTP PORT command. Use this option if you have highly restrictive firewall rules. If you do not specify port range, the operating system determines the port numbers to use. The following values are supported:
> 		1.  **Min port**: The minimum port value to use on the client side of TCP/IP data connections. Specify a value in the range from 0 to 65535. For example, if you set this value to 1035, the product restricts the port numbers to be equal to or greater than port 1035.
> 		1.  **Max port**: The maximum port value to use on the client side of TCP/IP data connections. Specify a value in the range from 0 to 65535. For example, if you set this value to 1038, the product restricts the port number to be less than or equal to port 1038.
>
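
The **Min port**/**Max port** rules above boil down to two bounds checks. As
a sketch (hypothetical helper, mirroring only what the quoted doc states):

```python
def valid_port_range(min_port, max_port):
    """Both ends must lie in 0-65535 and min must not exceed max."""
    return 0 <= min_port <= max_port <= 65535
```

The doc's example pair, 1035 and 1038, passes; a reversed pair would not.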
> ### Versions
>
> Use this section to view the history of object changes and work with the different versions. You can run the following actions:
>
> 1.  **Compare**: Select two different versions and compare them.
> 1.  **Restore\...**: Select a previous version and start the restore
>
> #### Note
>
> Credential management is different between version 8.6 and version 8.5.1, Fix Pack 1, because version 8.5.1, Fix Pack 1 supports only one set of credentials. A compatibility issue might arise if you use a file transfer job defined for IBM® Workload Scheduler, version 8.5.1, Fix Pack 1, or if you use a file transfer job, version 8.6, but specify a single set of credentials. If the file transfer job contains a single set of credentials, the system automatically uses the path management implemented for file transfer jobs in IBM Workload Scheduler, version 8.5.1, Fix Pack 1, where the following rule applies:
> -   The specified path applies to the home directory of the user on the FTP, SSH, or Windows server, even when the path starts with a forward slash (/).

-- 
To view this discussion on the web visit https://groups.google.com/d/msgid/pandoc-discuss/yh480k5zfzrbf6.fsf%40johnmacfarlane.net.


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: Odd MD2HTML issue: MD2DOCX, MD2ODT both product expected output for nested lists, but not MD2HTML
       [not found]     ` <yh480k5zfzrbf6.fsf-pgq/RBwaQ+zq8tPRBa0AtqxOck334EZe@public.gmane.org>
@ 2020-02-21 19:54       ` Guy Stalnaker
  0 siblings, 0 replies; 3+ messages in thread
From: Guy Stalnaker @ 2020-02-21 19:54 UTC (permalink / raw)
  To: pandoc-discuss


[-- Attachment #1.1: Type: text/plain, Size: 1259 bytes --]

John,

Thank you. That's easily remedied, and I confirm that removing it results in 
the expected output. The DOCX and ODT output worked because the configs for 
them did not use that option.

It's there because I misunderstood its purpose. I thought it was used to 
specify the tab spacing for the output file, not the input file. Now I know 
better.
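
For anyone who lands here later: `--tab-stop` tells pandoc's *reader* how
many spaces a tab in the input stands for, much like Python's
`str.expandtabs`. With tab-indented nested lists, a tab stop of 2 expands
each tab to only two spaces, which is too shallow to line up under a `1.  `
marker, so the sublists get rolled into the parent item. A quick
illustration of the expansion (plain Python, nothing pandoc-specific):

```python
line = "\t1.  **Condition Name**: ..."
print(repr(line.expandtabs(2)))  # '  1.  ...'  two spaces: too shallow to nest
print(repr(line.expandtabs(4)))  # '    1.  ...' four spaces: nests under "1.  "
```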

And since I have your attention - THANK YOU for a truly great application 
in Pandoc. I've been using it for a very long time for my documentation 
needs at work, where I can create markdown docs and then use them to output 
DOCX, JIRA, and HTML for use by colleagues, management, and customers 
depending on the application that hosts the documents (KnowledgeBase, Jira, 
Wiki, etc.). Of late I've been using it to create epub files that I can 
read on my Kobo Aura for personal use. 

Best regards!

-- 
To view this discussion on the web visit https://groups.google.com/d/msgid/pandoc-discuss/b5a13943-9514-4edb-bd89-08f7a099fe4b%40googlegroups.com.

[-- Attachment #1.2: Type: text/html, Size: 1698 bytes --]

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2020-02-21 19:54 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-21 17:22 Odd MD2HTML issue: MD2DOCX, MD2ODT both product expected output for nested lists, but not MD2HTML Guy Stalnaker
     [not found] ` <71ff68f9-a7ac-4a40-98b7-b24711d6200f-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org>
2020-02-21 19:24   ` John MacFarlane
     [not found]     ` <yh480k5zfzrbf6.fsf-pgq/RBwaQ+zq8tPRBa0AtqxOck334EZe@public.gmane.org>
2020-02-21 19:54       ` Guy Stalnaker

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).