ntg-context - mailing list for ConTeXt users
From: Hans Hagen via ntg-context <ntg-context@ntg.nl>
To: Rik Kabel via ntg-context <ntg-context@ntg.nl>
Cc: Hans Hagen <j.hagen@freedom.nl>
Subject: Re: bottlenecks
Date: Fri, 16 Dec 2022 23:07:33 +0100	[thread overview]
Message-ID: <2ea0d40e-b545-de4c-4b4c-4cfdc242acab@freedom.nl> (raw)
In-Reply-To: <4dd11121-7761-861a-1f4e-7ed04fb8117b@rik.users.panix.com>

On 12/16/2022 10:08 PM, Rik Kabel via ntg-context wrote:
> Hans,
> 
> Here are the stats for a 346 page book. Fonts are all cached. 
> Compilation is via a make file which processes this as:
> 
>     context --noconsole --overloadmode=error --batchmode --nonstopmode
>     --nosynctex misquotation_bodyonly.mkvi > nul
> 
> and is run under W11 x64 on an i7-8550U. The only tables are contents 

ok, not the fastest i7 out there, more the tablet one, right?

> and acronyms, and such, nothing complex. No graphics. Compact fonts are 
> enabled.

can you check compact mode .. when compact fonts are not enabled, do you 
get the same timings?
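assuming you enable them the usual lmtx way, the comparison is just a 
matter of commenting out that line and rerunning, e.g.

    % compact font mode in lmtx (my assumption about how it is enabled);
    % comment this line out for the non-compact timing run
    \enableexperiments[fonts.compact]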

>     mkiv lua stats  > font engine: otf 3.131, afm 1.513, tfm 1.000, 84
>     instances, 67 shared in backend, 3 common vectors, 64 common hashes,

i wonder why so many instances

>     mkiv lua stats  > node memory usage: 6869 attribute, 4608 dir, 4612
>     glue, 84 gluespec, 2304 glyph, 3072 hlist, 3 kern, 647 mathspec, 5
>     penalty, 2 temp

this is suspicious ... i fixed a dir leak recently but having 3K boxes 
dangling ...

> In neither case does the sum of the times listed in the stats come close 
> to the total runtime (in the second example, 14.774 seconds of 23.057 
> are accounted for), so there are other unidentified processes involved.

these stats are only an indication because below a certain threshold 
(timer accuracy) nothing gets measured

> In any case, the processing time has been improving greatly over the 
> last couple of years, and LMTX is significantly faster than MkIV in all 
> of my work.

sure, that is to be expected, although it depends a bit on the use case: 
for instance the backend is slower (but does much more), so initially 
lmtx was actually slower, but at some point we started gaining (and i can 
probably gain a little more)

i wonder why directions bump the time, because much of what tex does is 
sort of agnostic to directions (the backend needs more time but i don't 
see that in your stats)

when you run with --profile you get a much slower run, but you might get 
some info from the extra log.
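for example, something along these lines (just appending --profile to the 
invocation from your make file) should produce the extra log:

    context --noconsole --overloadmode=error --batchmode --nonstopmode --nosynctex --profile misquotation_bodyonly.mkvi > nul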

Hans




-----------------------------------------------------------------
                                           Hans Hagen | PRAGMA ADE
               Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
        tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------

Thread overview: 19+ messages
2022-12-16 17:10 bottlenecks Hans Hagen via ntg-context
2022-12-16 19:36 ` bottlenecks Henning Hraban Ramm via ntg-context
2022-12-16 20:02   ` bottlenecks Hans Hagen via ntg-context
2022-12-16 21:08     ` bottlenecks Rik Kabel via ntg-context
2022-12-16 22:07       ` Hans Hagen via ntg-context [this message]
2022-12-17  0:05         ` bottlenecks Rik Kabel via ntg-context
2022-12-17  9:48           ` bottlenecks Hans Hagen via ntg-context
2022-12-17 15:05             ` bottlenecks Rik Kabel via ntg-context
2022-12-17 17:21               ` bottlenecks Hans Hagen via ntg-context
2022-12-17 21:11                 ` bottlenecks Alan Braslau via ntg-context
2022-12-17 21:35                   ` bottlenecks Hans Hagen via ntg-context
2022-12-18 21:35                     ` bottlenecks Alan Braslau via ntg-context
2022-12-17 16:02     ` bottlenecks Henning Hraban Ramm via ntg-context
2022-12-17 17:43       ` bottlenecks Hans Hagen via ntg-context
2022-12-18 13:14 ` bottlenecks mf via ntg-context
2022-12-18 13:19   ` bottlenecks mf via ntg-context
2022-12-18 13:49   ` bottlenecks Hans Hagen via ntg-context
2022-12-18 15:21     ` bottlenecks mf via ntg-context
2022-12-18 15:27       ` bottlenecks Hans Hagen via ntg-context
