[XeTeX] Japanese characters in PDF do not match those in source file.
Andrew A. Adams
aaa at meiji.ac.jp
Fri Aug 20 13:39:12 CEST 2010
> On Fri, 20 Aug 2010 13:06:27 +0900, Andrew A. Adams wrote:
>
> > I recently upgraded from Fedora 10 to Fedora 13. I'm getting very strange
> > behaviour when processing LaTeX files that include Japanese text with
> > xelatex. I've created a minimal input source file which demonstrates the
> > problem: the Unicode characters in the input file are not the ones that
> > appear in the output. It's possible that somehow I'm getting Chinese
> > characters instead of the Japanese ones in my original file. I create my
> > files in XEmacs and set the buffer encoding to UTF-8. I use a script to
> > process the file with xelatex and my default options:
> >
> > xelatex -interaction=nonstopmode -output-driver="xdvipdfmx -p a4 -V5 " $1.tex
> > && acroread -tempFile $1.pdf
> >
> > Attached are the sample tex file, the resulting output file, the log file
> > from manual xelatex processing and the output from manual xdvipdfmx
> > processing.
>
> Don't use inputenc with xelatex. Never! inputenc is meant for
> 8-bit engines; it breaks with xelatex. If your file is UTF-8 or
> UTF-16 there is no need to declare the encoding.
inputenc is not the problem. The wrong characters persist even when I take
out that line, and when I have the file encoded in UTF-8 (sorry for the
UTF-16 version I posted earlier).
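
For reference, a minimal sketch of the kind of UTF-8 test file under
discussion, using fontspec to pin an explicit Japanese font. The font name
is an assumption (any installed Japanese font should do; on Fedora the
ipa-mincho-fonts package provides IPAMincho). Selecting a Japanese font
explicitly is the usual way to keep the output from falling back to a font
with Chinese glyph variants:

  \documentclass{article}
  \usepackage{fontspec}
  % Assumed font name -- substitute any Japanese font installed on your
  % system (on Fedora, e.g. IPAMincho from ipa-mincho-fonts).
  \setmainfont{IPAMincho}
  \begin{document}
  日本語のテスト % "Japanese test" -- sample CJK text
  \end{document}

If the glyphs still come out wrong with a font pinned this way, the problem
would lie in the font selection or mapping rather than the input encoding.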
--
Professor Andrew A Adams aaa at meiji.ac.jp
Professor at Graduate School of Business Administration, and
Deputy Director of the Centre for Business Information Ethics
Meiji University, Tokyo, Japan http://www.a-cubed.info/