[XeTeX] Japanese Characters in PDF do not match those in source file.
Ulrike Fischer
news3 at nililand.de
Fri Aug 20 11:29:22 CEST 2010
On Fri, 20 Aug 2010 13:06:27 +0900, Andrew A. Adams wrote:
> I recently upgraded my Fedora Core 10 to Fedora Core 13. I'm getting a very
> strange behaviour from processing latex files including Japanese text and
> processed using xelatex. I've created a minimal input source file which
> demonstrates the problem, which is that the unicode characters in the input
> file are not the ones that appear in the output. It's possible that somehow
> I'm getting Chinese characters instead of the Japanese ones in my original
> file. I create my files in xemacs, and set the buffer encoding to UTF-8. I
> use a script to process the file using xelatex with my default options:
>
> xelatex -interaction=nonstopmode -output-driver="xdvipdfmx -p a4 -V5 " $1.tex
> && acroread -tempFile $1.pdf
>
> Attached are the sample tex file, the resulting output file, the log file
> from manual xelatex processing and the output from manual xdvipdfmx
> processing.
Don't use inputenc with xelatex. Never! inputenc is meant for
8-bit engines. It breaks with xelatex. If your file is UTF-8 or
UTF-16, there is no need to declare the encoding: XeTeX reads
Unicode input natively.
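A minimal working preamble for Japanese text under XeLaTeX might look like the sketch below. The font name is only an example; substitute any installed font that actually contains Japanese glyphs, since a fallback font can otherwise supply Chinese variants of Han-unified codepoints, which would match the symptom described above.

```latex
% Sketch: no \usepackage[utf8]{inputenc} -- XeLaTeX reads UTF-8 natively.
\documentclass{article}
\usepackage{fontspec}
% Choose a font with Japanese glyph coverage (IPAexMincho is an example;
% any installed Japanese font will do).
\setmainfont{IPAexMincho}
\begin{document}
日本語のテキスト
\end{document}
```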
--
Ulrike Fischer