Hans,
As I said, my desktop is elderly... it has a 2.8 GHz processor, 16 GB of
DDR3 memory, a couple of old SATA 1 hard disks, and only 3 MB of CPU
cache...
... all well past its use-by date for single-threaded ConTeXt. ;-(
So one way to get better performance for ConTeXt is to invest in a new,
ultra-fast processor, which will cost a lot and use a lot of power,
which has to be cooled, which uses even more power....
Alternatively, for the same cost (or less), I can buy cheaper, slower
processors but have lots of threads (a cluster of Raspberry Pi 4 8 GB
boards)...
Alas, this requires finding some way to parallelize ConTeXt....
Fools rush in where Angels fear to tread ;-(
Regards,
Stephen Gaito
On Wed, 2 Dec 2020 14:04:18 +0100, Hans Hagen wrote:
On 12/2/2020 10:40 AM, Stephen Gaito wrote:
Many thanks for your swift and helpful comments.
After some *very crude* tests using the `luametatex` and `luametafun` documents, I find that while I *can* stop effective processing at various points in the LuaMetaTeX pipeline, the time difference overall is not really significant enough to bother with this approach.
The principal problem is, as you suggested below, that "stopping" the pipeline at the PDF stage (using, for example, the `pre_output_filter`) corrupts the `*.tuc` data, which is, for my purposes, critical.
Your comment was:
but keep in mind that multipass data is flushed as part of the shipout (because it is often location and order bound)
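For reference, the kind of hook I was experimenting with looks roughly like the sketch below. It is only a sketch against the plain LuaTeX callback API (ConTeXt manages its callbacks itself, so the real registration differs), and the return convention is my reading of the LuaTeX manual rather than something verified here:

    -- Rough sketch using the plain LuaTeX callback API (ConTeXt manages
    -- its callbacks itself, so the real registration looks different):
    -- void the page box just before it reaches the output routine.
    callback.register("pre_output_filter", function(head, groupcode)
        -- As I read the LuaTeX manual, returning false voids the node
        -- list, so nothing is boxed and shipped out; the catch is that
        -- the multipass data normally flushed at shipout (the *.tuc
        -- contents) is then never written.
        return false
    end)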
For the record, using the `append_to_vlist_filter` callback, I did manage to drastically reduce the "pages" (which were all blank, not surprisingly).
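The blank-page experiment was along the same lines; again just a sketch against the plain LuaTeX API, with the location name and return convention assumed from the manual:

    -- Rough sketch, plain LuaTeX API: drop material as it is appended
    -- to a vertical list, so the resulting pages are (mostly) blank.
    callback.register("append_to_vlist_filter",
        function(box, locationcode, prevdepth, mirrored)
            if locationcode == "box" then   -- location name assumed from the manual
                node.flush_list(box)        -- free the discarded material
                return nil, prevdepth       -- append nothing to the vlist
            end
            return box, prevdepth           -- leave everything else alone
        end)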
However, on my elderly desktop from 2008, both callbacks essentially cut only 6-8 seconds out of the 18 seconds for the `luametatex` document, and out of the 190 seconds for the `luametafun` document.
hm, on my 2013 laptop the luametatex manual needs 10 sec (i have all the fonts, so that includes a bunch) and a metafun manual should do about 20
a test on an M1 mini needs half those times as reported yesterday
i bet that on a modern desktop the luatex manual will do < 5 sec
In the case of the `luametafun` document, it is the MetaFun/MetaPost processing which, of course, takes a long time (as it should; the graphics represent important but complex computations).
One run or many due to xref? Maybe your machine has no significant cpu cache? Do you run from disk or ssd? How much memory?
My ultimate goal is to parallelize the production of large, heavily cross-referenced ConTeXt documents... more on this in a future email...

Hans
-----------------------------------------------------------------
                                          Hans Hagen | PRAGMA ADE
              Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
       tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------