Hey again,

Ok, I think I found a quick fix that "really works" on Linux. On Ubuntu you need the package uni2ascii.

    uni2ascii -a E unicode.tex > unicode-a.tex
    dvilualatex unicode-a.tex
    tex4ht -f/unicode-a.tex
    t4ht -f/unicode-a.tex
    ascii2uni -a E unicode-a.html | ascii2uni -a H > unicode.html
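As a quick illustration of what the escaping looks like, a round trip on a single accented character should go roughly like this (a sketch, assuming -a E uses the plain UXXXX form described further down, and a UTF-8 locale):

    $ echo 'café' | uni2ascii -a E
    cafU00E9
    $ echo 'cafU00E9' | ascii2uni -a E
    café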
The sequence above will produce a UTF-8 encoded output file. The second decoding pass is for characters that tex4ht has added as HTML entities. If we are using UTF-8 characters anyway, we may as well go all the way. If we want an ASCII file instead (for older browsers?), we can do this instead:
    ascii2uni -a E unicode-a.html | uni2ascii -a H > unicode.html

If there are several input files, the very first line of the script has to be run on each of them. This assumes that Unicode characters appear only in the body text and not in the LaTeX markup. It also assumes that nowhere in the text is there a substring consisting of a capital U followed by 4 hexadecimal digits.
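If it helps, the whole thing could be wrapped up along these lines (a sketch only; the main file name unicode.tex and the UTF-8 output variant are assumptions on my part):

    #!/bin/sh
    # Escape non-ASCII characters in every .tex input file first,
    # so the TeX toolchain only ever sees ASCII.
    for f in *.tex; do
        case "$f" in *-a.tex) continue ;; esac   # skip already-escaped files on reruns
        uni2ascii -a E "$f" > "${f%.tex}-a.tex"
    done
    # Compile and convert the main file (assumed to be unicode.tex).
    dvilualatex unicode-a.tex
    tex4ht -f/unicode-a.tex
    t4ht -f/unicode-a.tex
    # Decode both the UXXXX escapes and tex4ht's HTML entities back to UTF-8.
    ascii2uni -a E unicode-a.html | ascii2uni -a H > unicode.html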
With those exceptions, I believe this should take care of all Unicode characters.

-- 
Johannes Wilm
http://www.johanneswilm.org
tel: +1 (520) 399 8880