Re: Unicode and math symbols
> Well, what was in TeX, at a certain stage, got accepted as
> "well-used". If you can come up with textbooks, journal articles,
> recommendations from some standard bodies,... (e.g. Z), then
> you should have a strong case.
This seems to me to imply that some mathematical body has to police
what people are using and make cases as the notation changes.
But why? I have seen no arguments that I could put to the IMU or
national organisations to convince them that this is a sensible
activity. And I would certainly not tell authors and editors that
they should not use a new symbol because it is not in Unicode.
This is, as I see it, completely different from TeX or SGML where it
is easy to declare a new bit of notation; all you need to do is say
what its name is (and give some method of typesetting an approximation
to it if that is important). This may well result, after some time, in
the notation becoming common enough that typesetters/publishers see a
need to add it to the glyph registry; but that
need not involve mathematicians at all.
> I agree very much with you. However, I read Barbara's comments
> as saying that she wants to be closer to semantics than we would.
But it seems that some, eg Berthold and Richard (Kinch), do not: and
they are at the coal face (running the mine, even?), so we need to know why.
I too was puzzled by what Barbara meant by "meaning" in the context of
> > One reason for this is that the natural structure of even quite simple
> > typeset maths is visually much more complex than the Unicode model
> > (for Latin-based systems) of "base+diacritics" and it is not closely
> > related to the more complex visual structure of other writing systems.
> If you mean *text* writing systems, then they all use lines/columns.
> Many of them have some more complicated structure on a micro-level
> (i.e. Tibetan stacks, ligatures, diacritics,...) which Unicode
> deals case-by-case (you need separate rendering logic for each of the
> more complicated scripts, or a very general mechanism).
Yes, that is what I mean: some of these local mechanisms seem to me to
be similar (some in nature, others in complexity) to those needed in
math typesetting. So here I was just pointing out to bb that it would
be technically possible to include "simple math typesetting" within
this paradigm; I hope I made it clear that I was not suggesting that
this should be done.
> If you mean other symbolic writing systems, such as mathematical
> notation and musical notation, then you are right. Unicode is
> not designed for them, nor does it plan to address them.
The "local mechanisms" for music are more complex than "simple math
typesetting", as are some parts of maths (eg commutative diagrams).
But music is perhaps a good analogy: Unicode does not even try to
contain all "well-used musical characters" as it easily could do. So
why should it try to do so for maths?
> The practical value may be that you can use such symbols in
> text. You won't be able to write nice formulae, but you will
> be able to write formulae with lots of parentheses.
That is a reason, but the number of symbols one might want to use in
this way (ie outside what is, or should be, within
"language=math-notation" elements) is probably far smaller than what
is already there.
My example of the minus sign is probably a good paradigm here; ie
something that needs to be clearly distinguished from similar
characters and allows simple things like "-5" to appear as text and be
rendered usefully and sensibly without using a math-formatting engine
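A minimal sketch (mine, not from the original post) of the distinction at issue, using Python's unicodedata to show that the dedicated minus sign and the ordinary keyboard hyphen are separate characters:

```python
import unicodedata

# U+002D is what a keyboard "-" usually produces; U+2212 is the
# dedicated mathematical minus that needs to be clearly distinguished.
hyphen_minus = "\u002D"
minus_sign = "\u2212"

for ch in (hyphen_minus, minus_sign):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+002D  HYPHEN-MINUS
# U+2212  MINUS SIGN

# "-5" written with the real minus sign renders sensibly as plain
# text, with no math-formatting engine involved.
print(minus_sign + "5")
```

Because the two codepoints are distinct, a renderer can give the minus sign proper spacing and width in running text without treating "−5" as a formula.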
[Also, as you say, it can already be encased in any number of amazing
"parenthetical symbols" (many of which I have never heard of before
[but which do include one that I claim is very well-used but is not in
the AMS repertoire: the "open-face bracket" (probably not the Unicode
name but it is there, [in TeX-speak it might be called a
double-bracket or even a Blackboard-bold bracket \Bbbrack])]).]
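For what it's worth, the pair in question does appear to be the characters at U+27E6/U+27E7 (that identification is my assumption, not the poster's); a quick check of the official names in Python:

```python
import unicodedata

# The "open-face"/double bracket pair; Unicode's own names differ
# from the TeX-speak guesses above.
for cp in (0x27E6, 0x27E7):
    ch = chr(cp)
    print(f"U+{cp:04X} {ch}  {unicodedata.name(ch)}")
# U+27E6 ⟦  MATHEMATICAL LEFT WHITE SQUARE BRACKET
# U+27E7 ⟧  MATHEMATICAL RIGHT WHITE SQUARE BRACKET
```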