Re: Round-off errors in metrics ...
At 04:36 PM 97/12/17 -0800, Melissa O'Neill wrote:
>It'd also be useful to know just how much difference the current `round
>to an integer' process makes -- Berthold Horn did seem pretty outraged
>at the very idea, but are other people similarly upset by these
>inaccuracies? Is there a noticeable difference?
Isn't worrying about the round-off (or truncation?) rather like
straining at a gnat and swallowing a camel? The camel is the complete
neglect of 'overshoot' by every AFM conversion program I've seen:
the height of a, e, o in the .tfm comes out slightly greater than the
x-height, with the visible result that accents placed over these
letters by \accent (in OT1 encoding) sit higher than the same accents
placed either over \i or u by \accent, or by PostScript if T1
encoding is used. [\accent raises an accent by (character height - x-height).]
In some fonts this may be partially corrected by the rounding done by
vptovf, but that is a very strange mechanism to rely on for decent
output. (I suspect that if rounding merged the height of x with that
of o, the result would be that all lower-case letters without
ascenders acquired a height a bit more than the x-height recorded as a
fontdimen.) Furthermore, it never applies to inferior diacritics,
because zero is always excluded from the rounding calculations.
Incidentally, in reply to a later question: the rounding takes place
because the format of a .tfm file allows only 16 distinct values for
character heights and 16 for depths.
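As a rough illustration of why that limit forces rounding, here is a simplified sketch of a merging pass. This is not vptovf's actual shortening algorithm, just an assumed greedy scheme: since one of the 16 table slots is reserved for exactly zero, at most 15 distinct nonzero heights can survive, and zero itself is never merged.

```python
# Sketch: a .tfm stores each glyph's height as an index into a table of
# at most 16 values, with index 0 reserved for exactly zero. A converter
# must therefore merge nearby heights until at most 15 distinct nonzero
# values remain. Simplified greedy merging, not vptovf's real algorithm.

def merge_heights(heights, max_nonzero=15):
    """Repeatedly merge the closest pair of adjacent distinct nonzero
    heights into their midpoint until at most max_nonzero remain.
    Zero is excluded from the calculation entirely."""
    values = sorted(set(h for h in heights if h != 0))
    while len(values) > max_nonzero:
        # Find the adjacent pair with the smallest gap and collapse it.
        _, i = min((values[j + 1] - values[j], j)
                   for j in range(len(values) - 1))
        values[i:i + 2] = [(values[i] + values[i + 1]) / 2]
    return values
```

Note how every merge nudges some heights away from their true values, which is exactly the round-off the original question asked about.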
So if anyone is planning to improve the algorithms for AFM conversion,
I suggest that the overshoot area is well worth investigating. It is
even possible that fixing heights and depths before they are seen by
vptovf would eliminate the need for rounding at that stage.
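A minimal sketch of what such a pre-pass might look like, under the assumption that overshoot can be detected as "within a small tolerance of the x-height"; the tolerance value is purely illustrative.

```python
# Sketch of the suggested fix: before metrics reach vptovf, snap any
# height within a small tolerance of the x-height down to the x-height
# itself, so overshot letters (a, e, o, ...) no longer push accents up.
# The 3% tolerance is an illustrative assumption, not a measured value.

def snap_to_xheight(height, x_height, tolerance=0.03):
    """Treat heights within `tolerance` (as a fraction of the x-height)
    of the x-height as equal to it; leave all other heights alone."""
    if abs(height - x_height) <= tolerance * x_height:
        return x_height
    return height
```

With the hypothetical metrics from before, `snap_to_xheight(460, 450)` returns 450, so the accent raise for 'o' becomes zero, while an ascender height like 700 passes through untouched.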