Re: Unicode and math symbols
On Tue, 25 Feb 1997, Berthold K.P. Horn wrote:
> Let me stick my neck out here: I know this was not the intent
> of UNICODE, and UNICODE has many features that make it non-ideal
> for this, but UNICODE *is* a de facto glyph standard.
> (1) Which is why we have the `alphabetic presentation forms'
> ff, ffi, ffl, fi, fl, slongt, st etc. in UNICODE.
> They are in the compatibility section.
Well, they were put in *somewhere* because they are needed, since (i) we
do not have a usable and widely accepted glyph standard, and (ii) most
software wants *some* way of telling which glyph in a font is to be
used. We can all dream about a better world (GX?), but do you really
want to deal with each font as a separate entity? Wasn't it bad enough
to have ten or twelve different character encoding schemes used by the
Computer Modern fonts?
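As an aside, those presentation forms and their mappings back to plain
characters are recorded right in the Unicode data, and can be inspected
with Python's standard unicodedata module (a sketch for illustration;
the module postdates this discussion):

```python
import unicodedata

# The Latin ligature presentation forms sit in the Alphabetic
# Presentation Forms block, U+FB00..U+FB06 (ff, fi, fl, ffi, ffl,
# long-s-t, st).  Each carries a <compat> decomposition back to the
# underlying character sequence, which NFKC normalization applies.
for cp in range(0xFB00, 0xFB07):
    lig = chr(cp)
    plain = unicodedata.normalize("NFKC", lig)
    print(f"U+{cp:04X} {unicodedata.name(lig)} -> {plain!r}")
# e.g. U+FB01 LATIN SMALL LIGATURE FI -> 'fi'
```

So the "glyph" code points are there, but Unicode itself tags them as
mere compatibility equivalents of character sequences.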
> If you consider the Indic scripts and Arabic (except for the
> compatibility section), you would not say that Unicode is a glyph
> standard.
Oh yes, I agree. I know what the religiously pure answer is.
And I know that this is much more involved for some scripts
than others. But do I really need - in English - to make a distinction
between the characters A-Z and the glyphs A-Z? Or, beyond that, for
most of the glyphs in the ISO Latin X tables (if we ignore the
mistakes and bad choices made)?
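The contrast is visible in the Unicode tables themselves: an Arabic
letter is one abstract character, yet four contextual glyph shapes for
it are separately encoded (for compatibility) in the Arabic
Presentation Forms-B block. A small sketch, again using Python's
unicodedata module for illustration:

```python
import unicodedata

# ARABIC LETTER BEH is the single abstract character U+0628; its four
# positional presentation forms are separate compatibility code points.
beh_forms = {
    "isolated": "\ufe8f",
    "final":    "\ufe90",
    "initial":  "\ufe91",
    "medial":   "\ufe92",
}
for shape, ch in beh_forms.items():
    # NFKC folds every positional form back to the one character.
    assert unicodedata.normalize("NFKC", ch) == "\u0628"
    print(f"{shape:8} U+{ord(ch):04X} {unicodedata.name(ch)}")
```

For A-Z no such shaping exists, which is why the character/glyph
distinction feels academic for English and unavoidable for Arabic.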
But anyway, meantime we need to make life easier! And despite all the
explanations and arguments I don't see a whole lot wrong with using
UNICODE as essentially a glyph standard for Latin X, Cyrillic, Greek,
and yes, most math symbols, relations, operators, delimiters etc.
Except that unfortunately they don't cover enough of math to be