ntg-context - mailing list for ConTeXt users
From: Lars Huttar <lars_huttar@sil.org>
To: mailing list for ConTeXt users <ntg-context@ntg.nl>
Cc: jelle_huisman@sil.org
Subject: Re: distributed / parallel TeX?
Date: Tue, 16 Dec 2008 13:28:07 -0600	[thread overview]
Message-ID: <49480147.5030109@sil.org> (raw)
In-Reply-To: <4947E764.9030205@wxs.nl>

On 12/16/2008 11:37 AM, Hans Hagen wrote:
> Lars Huttar wrote:
> 
>> We have close to 7000 mpgraphics, and they add about 15 minutes to the
>> run time.
> 
> most of them are the same so reusing them made sense
> 
>> But the run time was already quite long before we started using those.
>>
>>> - define fonts beforehand
>>
>> OK, we will look into this. I'm sure Jelle knows about this but I'm a
>> noob. I'm pretty sure we are not *loading* fonts every time, but maybe
>> we're scaling fonts an unnecessary number of times.
>> For example, we have the following macro, which we use thousands of
>> times:
>>     \def\LN#1{{\switchtobodyfont[SansB,\LNfontsize]{#1}}}
> 
> indeed this will define the scaled ones again and again (whole sets of
> them since you use a complete switch); internally tex reuses them but it
> only knows this when they're defined
> 
>> Would it help much to instead use
>>     \definefont[SansBLN][... at \LNfontsize]
>> and then
>>     \def\LN#1{{\SansBLN{#1}}}
>> ?
> 
> indeed:
> 
> \definefont[SansBLN][... at \LNfontsize]
> 
> but no extra { } needed:
> 
> \def\LN#1{{\SansBLN#1}}

Thanks, we will try this.
(Jelle, since you have worked with this a lot longer than I have, please
stop me if you have concerns about my making this sort of change.)
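For the archives, here is a minimal before/after sketch of the change being discussed. The actual font name in the \definefont line was elided in the thread, so SansBold below is only a placeholder; \LNfontsize is our own dimension macro:

```tex
% before: each call switches the whole bodyfont set,
% re-defining the scaled fonts again and again
\def\LN#1{{\switchtobodyfont[SansB,\LNfontsize]{#1}}}

% after: scale the font once via \definefont, then reuse it
\definefont[SansBLN][SansBold at \LNfontsize] % font name is a placeholder
\def\LN#1{{\SansBLN#1}}
```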

>>> - use unique mpgraphic when possible
>>
>> I would be interested to know if this is possible in our situation. Most
>> of our mpgraphics are due to wanting thick-and-thin or single-and-double
>> borders on tables, which are not natively supported by the ConTeXt table
>> model.
> 
> i sent jelle the patched files

OK, I'll wait to hear from him. Are these patches to support these kinds
of borders on tables, so that we no longer need MPgraphics at all?

>> The advice I received said to define each mpgraphic using
>> \startuseMPgraphic (we have about 18 of these), associate them with
>> overlays using \defineoverlay (again, we have 18), and then use them in
>> table cells using statements like
>>     \setupTABLE[c][first][background={LRtb}]
>> Empirically, this seems to end up using one mpgraphic per table cell,
>> hence our thousands of mpgraphics. I don't know why a new mpgraphic
>> would be created for each cell. Can someone suggest a way to avoid this?
> 
> metafun manual: unique mp graphics

Great...
I converted our useMPgraphics to uniqueMPgraphics. This reduced our
number of mpgraphics from 7000 to 800!

Unfortunately the result doesn't look quite right... but since we may
not need to use mpgraphics anyway thanks to your patches, I'll hold off
on debugging the result.
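For anyone following along, a minimal sketch of the conversion I made. The MetaPost body below is illustrative only, not our actual thick-and-thin border code; OverlayBox is the standard metafun path covering the overlay area:

```tex
% before: a fresh graphic is generated for every table cell
\startuseMPgraphic{LRtb}
  draw OverlayBox withpen pencircle scaled 1pt ;
\stopuseMPgraphic
\defineoverlay[LRtb][\useMPgraphic{LRtb}]

% after: cells with identical code and dimensions share one graphic
% (see the metafun manual, "unique graphics")
\startuniqueMPgraphic{LRtb}
  draw OverlayBox withpen pencircle scaled 1pt ;
\stopuniqueMPgraphic
\defineoverlay[LRtb][\uniqueMPgraphic{LRtb}]

\setupTABLE[c][first][background={LRtb}]
```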

>>> i changed the definitions a bit and now get 5 pages per second on my
>>> laptop in luatex; xetex processes the pages a bit faster but spends way
>>> more time on the mp part
>>
>> My last run gave about 0.25 pages per second on our fastest server, when
>> taking into account multiple passes; that comes out to about 2 pps for
>> --once.
> 
> the patched files do 5-10 pps on my laptop (was > 1 sec pp) so an
> improvement factor of at least 5 is possible
> 
> there are probably other optimizations possible but i cannot spend too
> much time on it

Thanks for all your help thus far.

Lars

___________________________________________________________________________________
If your question is of interest to others as well, please add an entry to the Wiki!

maillist : ntg-context@ntg.nl / http://www.ntg.nl/mailman/listinfo/ntg-context
webpage  : http://www.pragma-ade.nl / http://tex.aanhet.net
archive  : https://foundry.supelec.fr/projects/contextrev/
wiki     : http://contextgarden.net
___________________________________________________________________________________


Thread overview: 23+ messages
2008-12-15 23:06 Lars Huttar
2008-12-16  8:08 ` Taco Hoekwater
2008-12-16 18:13   ` Lars Huttar
2008-12-16 21:31     ` Martin Schröder
2008-12-16 22:10       ` Lars Huttar
2008-12-16 22:17         ` Martin Schröder
2008-12-17  8:47           ` Taco Hoekwater
2008-12-16 21:15   ` luigi scarso
2008-12-16 23:02     ` Lars Huttar
2008-12-17  8:22       ` Hans Hagen
2008-12-17  8:53         ` luigi scarso
2008-12-17 13:50           ` Lars Huttar
2008-12-16  9:07 ` Hans Hagen
2008-12-16 15:06   ` Aditya Mahajan
2008-12-16 15:53     ` Hans Hagen
2008-12-16 17:25       ` Lars Huttar
2008-12-16 17:37         ` Hans Hagen
2008-12-16 19:28           ` Lars Huttar [this message]
2008-12-17  2:57             ` Yue Wang
2008-12-23  3:48             ` error when using uniqueMPgraphics Lars Huttar
2008-12-23  5:33               ` Lars Huttar
2008-12-23  7:30               ` Wolfgang Schuster
2008-12-16 18:40         ` distributed / parallel TeX? Mojca Miklavec
