[lltx] fontspec tests suite

Diederick C. Niehorster dcnieho at gmail.com
Fri Jun 4 09:02:08 CEST 2010

(Sorry if this is a top-post or arrives without the original text; I'm
not quite sure what my phone does to it.)

Ah, if PNGs are generated like that for both the reference and the test
image, even better. The way you've set it up now sounds very efficient.
If the script can read ImageMagick's verdict on whether the images
match, you could copy every pair that fails the comparison to a temp
directory for verification by a human. Any simple image viewer that
lets you step to the next and previous image with the arrow keys is all
you need then. When flipping back and forth quickly, any differences
are extremely easy to spot as long as there is not too much content in
the text (I study vision; humans are the best pattern detectors). If
the ImageMagick part complicates everything a lot, you could even do
away with it altogether and let the human do all of it by flipping back
and forth quickly (I suppose I could write a quick OpenGL/GLUT program
that shows the pairs fullscreen, flips quickly, and writes the human
response to a file, but I can't promise any timeline right now).
I think the resolution of the PNG you attached is good!
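A minimal shell sketch of the copy-the-failures idea above; the
failed/ directory and the *.safe.png/*.test.png naming pattern are
assumptions for illustration, not part of any actual suite:

```shell
#!/bin/sh
# Copy every PNG pair that ImageMagick flags as different into failed/
# so a human can flip through them in an image viewer.
# Assumes pairs named foo.safe.png / foo.test.png in the current
# directory; the failed/ directory name is made up for this example.
mkdir -p failed
for safe in *.safe.png; do
    test_png="${safe%.safe.png}.test.png"
    [ -f "$test_png" ] || continue
    # ImageMagick's `compare` exits with a non-zero status when the
    # two images differ (the AE metric counts differing pixels);
    # `null:` discards the difference image it would otherwise write.
    if ! compare -metric AE "$safe" "$test_png" null: 2>/dev/null; then
        cp "$safe" "$test_png" failed/
    fi
done
```

Stepping through failed/ with any arrow-key image viewer then gives the
quick flip-based human check described above.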

Sounds like a very nice setup, better than what I just came up with. If
you could roll this out for fontspec, it would be very useful for
getting quick and detailed feedback from a lot of users about a whole
range of features, both basic and advanced.


On 6/4/10, Will Robertson <wspr81 at gmail.com> wrote:
> On 04/06/2010, at 1:07 PM, Diederick C. Niehorster wrote:
>> I'd say tests are good if you want to quickly know whether the
>> functionality of your package works correctly across a range of
>> different systems and (importantly, in the case of fontspec) different
>> interpreters. You'd want to provide the PNGs to set them side by side
>> (which you seem to be suggesting; I'm not familiar with your unicode
>> test suite) so all testers can quickly see if the output is as intended.
> That's true.
> And even if the output is (say) one pixel wrong, it's still easy to verify
> that the output is as intended but the test is outputting a false negative.
>> And indeed, if you provide the script for creating the PNGs, then we
>> can test our stuff easily as well; that'd be nice! PNGs of text should
>> be very small, btw, and long processing times aren't really an issue
> For unicode-math, the PNGs are about 20kB--40kB each, but in hindsight I
> think their dimensions are larger than necessary (they're about 1000x700
> pixels). I've attached a typical test image to the end of this message.
>> So, yeah, I'm all for it. I could envision a kind of package with a
>> command like \addtest{arbitrary code} which, when a file with those
>> commands is processed, would generate the PNGs (is this possible
>> automatically?) and a document with the test code and the PNG output
>> set next to each other; then it's easy to maintain and extend.
> The process with unicode math isn't quite that automatic, but it's close.
> You write a new file called umtestXXXX.ltx and then write
>     make initest
> which generates the PNG alongside the test, called umtestXXXX.safe.png.
> Then when you run
>     make test
> it recompiles the PNG in a build/ directory, called umtestXXXX.test.png, and
> then uses ImageMagick's `compare` tool to check if the two PNGs are the same
> or not.
> (The `make test` script also updates another document that contains a
> listing of all the test files with their output, but that's purely for
> documentation purposes.)
> -- Will
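
For the record, the workflow Will describes might look roughly like
this as a Makefile fragment; the lualatex/convert/compare invocations,
variable names, and density are my guesses, not his actual build
script:

```makefile
# Sketch of the initest/test targets described above; all command
# invocations here are assumptions, not the real unicode-math Makefile.

TESTS = $(wildcard umtest*.ltx)

# Generate the reference image for each test file.
initest:
	for t in $(TESTS); do \
	    lualatex $$t; \
	    convert -density 100 $${t%.ltx}.pdf $${t%.ltx}.safe.png; \
	done

# Rebuild the PNGs in build/ and compare against the references;
# `compare` exits non-zero when a pair differs.
test:
	mkdir -p build
	for t in $(TESTS); do \
	    lualatex --output-directory=build $$t; \
	    convert -density 100 build/$${t%.ltx}.pdf build/$${t%.ltx}.test.png; \
	    compare -metric AE $${t%.ltx}.safe.png build/$${t%.ltx}.test.png null: \
	        || echo "$$t differs"; \
	done
```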

More information about the lualatex-dev mailing list