ntg-context - mailing list for ConTeXt users
From: Hans Hagen <pragma@wxs.nl>
Cc: ntg-context@ntg.nl
Subject: Re: General inquiries
Date: Sun, 21 Jan 2001 20:51:13 +0100	[thread overview]
Message-ID: <3.0.6.32.20010121205113.015daa10@server-1> (raw)
In-Reply-To: <3.0.5.32.20010119214121.00b6e750@mail.northcoast.com>

At 09:41 PM 1/19/01 -0800, David Arnold wrote:
>Hans et al,
>
>1. Can you explain the difference between the switches --nomp and
>--nomprun? What's the difference?

--nomp can be used in the [currently rare] case that tex calls mp calls tex
calls mp calls tex calls mp calls tex ... which freezes windows

>2. Let's say you are working on a doc. The doc has embedded MP graphics.
>You get a good compile, the images are generated, and you go back to typing
>the source. You work a bit, entering narrative, but no new MP graphics. How
>should the next compile be run?
>
> texexec --pdf --nomp filename
>
>or
>
> texexec --pdf --nomprun filename

If you consistently use --nomprun, every next run will have the right
graphics. This is a good method when you are sure that the graphics don't
interfere [e.g. first use numeric abc and later redefine it as a pair, or
vice versa] and that graphic dimensions are not based on, or do not
determine, layout properties, in which case they can [temporarily] get out
of sync.

>Or perhaps some other option?
>
>Also, once you get a good compile, how does Context know about the graphics
>when you do a texexec --nomp or texexec --nomprun? Is the answer you will
>provide to this question valid whether or not \recycleMPslotstrue is set?
>Is it true that once the embedded graphics are crafted at runtime, future
>compiles need not regenerate these graphics because they are available for
>future compiles?

When a graphic does not change [reusable or unique graphics], it will be
reused anyway.

The \recycleMPslotstrue will become the default when I can conclude that
everything works as expected.

By the way, I am thinking of a checksum calculation to determine unchanged
graphics; this is already used in mpo generation [outline fonts], but it is
not that safe. In many cases --nomprun is your friend, since the mprun
itself is always quite fast.

>3. The way I usually work, I will type several paragraphs of code, then
>compile to see how I am doing. I am reticent to type pages and pages
>without checking, for fear that I will create such a morass of errors that
>I will never figure my way out of the mess. So, my editing cycle is usually
>type a bit a code, compile and check, fix errors, type a little code,
>compile and check, fix errors, .... What's the best way to compile in this
>situation?

(1) select and run just that part [everything before \starttext, i.e. all
definitions, is also copied to the temp file]
(2) use a small temp file [which is what I often do] and copy the text to
the main file when it is ok
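A minimal sketch of such a temp file (the setup shown is a placeholder for your own preamble):

```tex
% test.tex -- all definitions and setups go before \starttext,
% exactly as in the main file
\setupbodyfont[10pt]

\starttext
% only the fragment you are currently working on
\stoptext
```

Once the fragment compiles cleanly, paste it into the main file.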

>4. What can be frustrating is to watch Context compiling a document for a
>significant amount of time, only to finally inform me after a minute or so
>that I have mispelled some command, forgotten a $ sign, or some such error
>that is typical of the normal tex editing cycle. Is there some way to
>bypass all the preliminary stuff and just do a quick check for this sort of
>error? Any advice here?

Isn't your editor capable of checking for missing $'s? Any odd number of
them is wrong.
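If the editor cannot do it, a shell one-liner can serve as a rough pre-flight check (a sketch only: it also counts escaped \$, so treat an odd count as a hint, not proof):

```shell
# demo file with an unbalanced math toggle (3 dollar signs)
printf 'text $a+b$ and then $c\n' > draft.tex

# count the dollar signs; an odd total usually means a missing $
count=$(tr -cd '$' < draft.tex | wc -c)
if [ $((count % 2)) -ne 0 ]; then
  echo "unbalanced math toggles in draft.tex"
fi
```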

>5. I will usually compile metapost graphics with mpost, then view in
>Gsview. It is so much faster this way. Once I have my Metapost graphic(s)
>just right, then I will embed it (them) into my Context document. Is there
>a similar method that will compete with this development speed that you
>might suggest?

No, your approach sounds ok.

>6. Which of the following will give faster processing?
>
> \startbuffer[name]
>   MPcode
> \stopbuffer
>
> \placefigure
> [here][fig1]
> {caption}
> {\processMPbuffer[name]}
>
>or
> 
> \startreusableMPgraphic{name}
>   MPcode
> \stopreusableMPgraphic
>
> \placefigure
> [here][fig1]
> {caption}
> {\reuseMPgraphic{name}}

The same, unless you use {name} a second time, in which case the second
method is faster.
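A minimal sketch of the second case paying off (the graphic name and its code are made up):

```tex
\startreusableMPgraphic{demo}
  draw fullcircle scaled 2cm withpen pencircle scaled 1pt ;
\stopreusableMPgraphic

\starttext
\reuseMPgraphic{demo} % processed on first use
\reuseMPgraphic{demo} % second use: fetched from the pool, no reprocessing
\stoptext
```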

>Or do you suggest a better way for faster, more efficient processing? Which
>method leaves the fewest auxiliary files hanging around after the compile?

You can just make stand alone graphics, or even: 

(1) put your graphics in david.mp
(2) "mpost david" or "texexec --mprun david"
(3) "texexec --pdf --fig=c david.*" 
(4) "move texexec.pdf david.pdf" 
(5) use \externalfigure[david.pdf][page=23] to fetch the figure
"beginfig(23) ... endfig"
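For step (1), such a david.mp could look like this (the figure content is made up; the beginfig number is the one you later fetch with page= in \externalfigure):

```metapost
% david.mp -- each beginfig(n) becomes one figure in the generated pdf
beginfig(23) ;
  draw unitsquare xscaled 4cm yscaled 2cm ;
endfig ;

end .
```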

So, lots of ways to go. 

Hans
-------------------------------------------------------------------------
                                  Hans Hagen | PRAGMA ADE | pragma@wxs.nl
                      Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
 tel: +31 (0)38 477 53 69 | fax: +31 (0)38 477 53 74 | www.pragma-ade.com
-------------------------------------------------------------------------



Thread overview: 2+ messages
2001-01-20  5:41 David Arnold
2001-01-21 19:51 ` Hans Hagen [this message]
