[luatex] Removing the zero-initialization of an array can speed up LuaTeX on a plain TeX document by 25%
Hans Hagen
j.hagen at xs4all.nl
Tue Sep 3 12:49:15 CEST 2024
On 9/3/2024 3:00 AM, Reinhard Kotucha via luatex wrote:
> On 2024-09-01 at 10:07:38 +0200, Hans Hagen wrote:
>
> > Alas luametatex needs .46 sec on windows 10 (as does luatex), .39
> > on the linux subsystem, .36 on bare metal linux (all the same 7th
> > gen intel 2018 laptops)
>
> Just out of curiosity, how do you determine runtime on Windows?
Using precise timers in the runner and engine.
> > pdftex 0.65
> > xetex 0.80
> > luatex 0.89
> > luametatex 0.61
> > luametatex cf 0.50
> >
> > That's the average of 1000 runs and a run includes the management
> > i.e. a startup script, multipass data processing and such. In the
> > end it doesn't tell us anything here.
>
> Wouldn't it be more appropriate to determine the minimum time instead
> of the average? A predictable program requires a certain number of
> clock cycles, independent of when the program is invoked. If the time
> is greater than the minimum, the measurement is certainly distorted by
> other processes. There are a lot of kernel processes running in the
> background, and Firefox permanently creates files even if you don't
> touch it. Don't know why.
When I time real documents (say a 350-page manual) I do several runs and
take the fastest, so when I then end up with 9.56 sec, I normally
truncate to the second decimal .. in the end performance is mostly about
what feels right .. I normally look at the 'pages per second' for a
specific document.
Concerning processes: consoles make a difference, so one can't compare
an HD setup with a 4K one.
> > This somewhat slow kpse file lookup is why context had a minimal
> > distribution for mkii right from the start.
>
> Sure, but no option for LaTeX. TeX Live is modular but LaTeX users
> can't know in advance what they need.
As a starter, just remove the tons of unused font files.
> > Also, the overhead of loading the lsr files comes with every
> > kpsewhich but that we could get around using different startup
> > scripts.
>
> At least if you use other scripting languages than texlua. kpsewhich
> could be more efficient if it doesn't read the ls-R files when only
> variable expansion is requested.
Really? Lua is much faster than, for instance, Ruby (which is what I
used before), and in both cases I want an instantaneous result from a
file lookup.
> > But that's all kind of old because SSD combined with operating
> > systems using abundant memory for directory and file caching helps
> > a lot. Just measure a decent tex run (or making a format) after a
> > system restart and then a second time.
>
> There is no need to restart a Linux system. Run this as root:
>
> sync && echo 3 > /proc/sys/vm/drop_caches
>
> I mention it here because it's the fastest and most convenient way to
> drop the caches though many people aren't aware of this solution.
I suppose that suspending does the same ...
> I can read cached files with a transfer rate of 12 GB/s on my main
> machine but my Web server is a Raspberry Pi 3B with limited resources.
> On this server PDF files are created on-the-fly and the cache might be
> filled by other processes between the TeX runs. In this case file
> sizes matter.
Imagine running on an NFS share on a gigabit network ... that's what I
keep in mind when cooking up solutions.
> > > With the setup described above I can compile a "hello world" with
> > > Knuth's TeX within 13 ms and "The TeXbook" (285 pages) in less than
> > > 190 ms.
> >
> > What system? Luatex or pdftex?
>
> With Knuth's TeX (DVI output). I can compile it with pdfTeX in less
> than 600 ms, and with LuaTeX within one second.
I suppose 8-bit fonts and base mode with LuaTeX.
> A "hello world\bye" takes 13 ms with Knuth's TeX (DVI output), less
> than 120 ms with pdfTeX and less than 95 ms with LuaTeX.
>
> pdfTeX and LuaTeX have to embed the fonts which takes some time.
> Obviously LuaTeX is faster than pdfTeX when processing small files
> while the opposite is true for large files.
Sure, a few pages versus hundreds makes a difference here (so one can
see the average runtime per page drop from, say, 0.03 sec to 0.02 sec
after hundreds of pages).
FWIW: in luametatex all PDF generation is in Lua, as is all font
handling, and especially the backend (which also has more features) is a
bottleneck, but that gets compensated by the engine being faster, so in
the end we win runtime.
> I suppose that the reason is that LuaTeX has to make more nodes
> accessible if files are large and for small files it benefits from the
> fact that it loads hyphenation patterns for only one language.
Loading patterns takes zero time (fifth decimal), and node allocation is
pool based, so it has little and rather constant overhead; of course
nodes are larger, so nulling them can take a bit more time.
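The subject line's point about zero-initialization can be illustrated with a toy pool allocator. This is a sketch under loud assumptions: it is not LuaTeX's actual node pool (real nodes live in a memory-word array and vary in size), and the names `node`, `pool`, `pool_get`, and `pool_put` are hypothetical. It only shows that nulling each node on allocation adds a per-node cost proportional to the node size, which is the kind of work one saves by skipping the zeroing:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative fixed-size node; a stand-in, not a real LuaTeX node. */
typedef struct node {
    int type, subtype;
    struct node *next;
    double glue[4];
} node;

/* Toy pool with a free list.  The `zero` flag decides whether a node
   is nulled on allocation; with larger nodes the memset costs more,
   and dropping it is the optimization the subject line refers to. */
typedef struct pool {
    node *free_list;
    int zero;
} pool;

static node *pool_get(pool *p) {
    node *n;
    if (p->free_list) {                 /* recycle a freed node */
        n = p->free_list;
        p->free_list = n->next;
    } else {
        n = malloc(sizeof(node));       /* grow the pool */
    }
    if (p->zero && n) memset(n, 0, sizeof(node));
    return n;
}

static void pool_put(pool *p, node *n) {
    n->next = p->free_list;             /* push onto the free list */
    p->free_list = n;
}
```

With `zero` off, a recycled node still carries its old field values, so the caller must set every field it uses; that is the trade-off such an optimization makes.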
But the performance hit is often not where one thinks it is (for
instance, adding 5 more passes to the par builder has little impact for
various reasons).
Hans
-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------