<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 06/11/2021 15:44, Don Hosek wrote:</div>
<div class="moz-cite-prefix"><br>
</div>
<blockquote type="cite"
cite="mid:F3CBF591-6631-459C-BD47-EAB0B4D310DE@gmail.com">As David
says, this is a ground-up language. Thanks to the separation of
concerns, there's a good opportunity to handle some interesting use
cases. The architecture is not unlike a contemporary compiler's, in
that parsing is done to an intermediate representation which is then
converted to the final output; this means that, for example, someone
could plug an XML parser into the front end and use all of the
back-end capabilities for typesetting. There will be multiple back
ends, allowing the same file to reliably target PDF, HTML/ePub,
XML+MathML, or even InDesign or Word. I'm thinking that a
direct-to-screen back end will make sense for the beamer equivalent
and give greater flexibility than is currently possible using PDF
presentation mode. But that's all many years in the future. Right
now all I can do is take a text file with TeX-style coding of
-- --- `` '' etc.¹ and output the corresponding Unicode
characters.</blockquote>
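<p>(For concreteness: the split Don describes, front ends that parse
source into an intermediate representation and back ends that render
that representation to each output format, with the TeX ligature pass
as the one piece working today, might look roughly like the sketch
below. The names, and the choice of Rust, are purely my own
illustration; nothing here is the project's actual code.)</p>
<pre><code>// Hypothetical sketch of a pluggable front-end / back-end design.
// All names (Frontend, Backend, Ir, ...) are invented for illustration.

/// Stand-in IR: a real system would use a structured tree, not a string.
type Ir = String;

/// A front end turns source input into the IR. An XML front end could
/// implement this same trait and reuse every back end unchanged.
trait Frontend {
    fn parse(self, input: String) -> Ir;
}

/// A back end turns the IR into final output (PDF, HTML/ePub, ...).
trait Backend {
    fn render(self, ir: Ir) -> String;
}

/// The one working piece today: TeX-style input ligatures to Unicode.
struct TexLigatureFrontend;

impl Frontend for TexLigatureFrontend {
    fn parse(self, input: String) -> Ir {
        // Replace the longest ligatures first, so that "---" is not
        // consumed as "--" followed by a stray "-".
        input
            .replace("---", "\u{2014}") // em dash
            .replace("--", "\u{2013}") // en dash
            .replace("``", "\u{201C}") // left double quote
            .replace("''", "\u{201D}") // right double quote
            .replace('`', "\u{2018}") // left single quote
            .replace('\'', "\u{2019}") // right single quote / apostrophe
    }
}

/// A trivial back end that emits the Unicode text unchanged.
struct PlainTextBackend;

impl Backend for PlainTextBackend {
    fn render(self, ir: Ir) -> String {
        ir
    }
}

fn main() {
    let ir = TexLigatureFrontend.parse("``quoted'' text -- with --- dashes".to_string());
    println!("{}", PlainTextBackend.render(ir)); // "quoted" text – with — dashes
}
</code></pre>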
<p>OK, thank you, understood, Don. But why, then, do you want to
"take a text file with TeX-style coding of -- --- `` '' etc.¹ and
output the corresponding Unicode characters", when in your
manifesto you write "Unicode needs to be a first-class citizen.
There's no reason in 2020 for a document writer to have to type <code>\'a</code>
instead of <code>á</code> in a document. UTF-8 is the new 7-bit
ASCII."? Who, these days, writes -- --- `` '' when they can so
easily write –, —, ", "?<br>
</p>
<p>-- <br>
<i>** Phil.</i><br>
</p>
</body>
</html>