[metapost] arclength() tolerances

Hartmut Henkel hartmut_henkel at gmx.de
Sat May 24 16:54:18 CEST 2014


Hi Taco,

On Thu, 3 Apr 2014, Taco Hoekwater wrote:
> On 02 Apr 2014, at 21:10, Hartmut Henkel <hartmut_henkel at gmx.de> wrote:
> > in metapost there seem to be absolute tolerance values for iterative
> > algorithms like arclength() even in the double numbersystem:
>
> Yes, the arc_test() routine (the core of arclength()/arctime()) uses a
> predefined tolerance of unity/4096. While it is clear to me why it
> needs a tolerance setting, it is not clear why that precise value, nor
> does lowering it actually help, it just makes mp run slower:

> Maybe there is some intrinsic error in the approximation code?

for the "double" number system the cause is in mp.w:

  number_add_scaled (tmp, 2);

This adds a scaled 2, which is still a fixed value (2/2^16 = 2^-15, about
3.05e-5), perhaps for better rounding (?) in the original fixed-point
mpost number system. It limits the attainable precision even when
math->arc_tol_k.data.dval is reduced considerably.

When this line is simply removed for "double", the arclength calculation
seems to scale nicely with smaller tolerance settings. It is then no
problem to get an arclength accurate to, e. g., 1e-12. This gives deeper
mp_arc_test() recursions, but not excessively so: for 1e-12 some test
arc took approx. 1800 instead of the original 13 calls to mp_arc_test(),
so it gets slower, but not by much. For the checked example arc the
method by Gravesen needed about 18000 steps, but with another arc shape
the ratio can just as well be inverted; it seems that for rather
straight curves the Gravesen method is faster than the method based on
the Simpson rule.

A similar improvement is possible for arctime(), since there is another
such number_add_scaled() that can be removed. It looks as if for
"double" all (?) of these number_add_scaled() calls cause more error
than they help avoid. Should such a tolerance be presettable, and if
so, how?

Regards, Hartmut
