[texhax] question

Ariane argual at bluewin.ch
Wed Jan 30 16:48:59 CET 2013


Hello!

I have a vexing problem with LaTeX, as described in the attached file.
I am not technically competent, but an applied mathematician struggling
to arrive at a decent manuscript. Can you help?

Thank you very much and best regards.

Antonio Gualtierotti (TUG member)
antonio.gualtierotti at idheap.unil.ch
-------------- next part --------------
Hello!

I have the following code:

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

\documentclass{book}
\usepackage{amsmath}

\begin{document}
%\input{III1}
%\input{III2}
%\input{III3}
%\input{III4}
%\input{III5}
\input{III6}
%\input{III7}
%\input{III8}
%\input{references}

\end{document} 

YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY

The file III6.tex is:

\chapter{White noise model's scope II}

\noindent
When one does not know that the ``white noise model'' has a representation in the form of a stochastic differential equation,
it becomes important to know that such a representation exists as it is that representation that allows an explicit form for the likelihood.
In the previous chapter it was seen that the existence of the likelihood has the consequence that the observations are represented
in the form of a stochastic differential equation. That is, however, only
an existence result that gives no hint as to the form of the resulting signal. When one is willing or able to assume some integrability
conditions on the signal, one then obtains a stochastic differential equation form in which the signal is a conditional expectation with respect to the observations.
That latter stochastic differential equation representation is known under the appellation of ``innovation representation'' and makes up the next topic.
In the framework of the Cram\'er-Hida representation, assumptions on the integrability of the signal in the derived ``white noise model'' may be difficult, nay, impossible
to justify. It is nevertheless useful to know that the stochastic differential equation representation is related to conditional expectations with respect to the observations process.

 \vspace*{10pt}
 \noindent
The following ``generic model'' is assumed from here on; assumptions about it shall be spelled out as needed,
since an innovation representation may be obtained under diverse assumptions on the signal and the noise:
$$\underline{X}=\underline{S}[\ \!\underline{a}\ \!]+\underline{N}.$$

%\input{III61}
\input{III62}
%\input{III63} 

ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ

III62.tex is:

\section{The  case of vector processes: $\underline{X}=S\left[\underline{a}\right]+\underline{B}$}

\noindent
The {\tt assumptions} are those of the ``white noise model''  except that
 $\underline{a}$
shall belong to
$ {\cal{I}}_2\left[\ \!\underline{b}\ \!\right]$
rather than to
 ${\cal{I}}_0\left[\ \!\underline{b}\ \!\right].$
That restriction has its origin in 6.2.1.D above.

\vspace*{10pt}
\noindent
{\bf 6.2.8 Proposition:}
{\em
Suppose that
$ \underline{a}\in {\cal{I}}_2\left[\ \!\underline{b}\ \!\right].$
There exists a process
 $\underline{I},$
adapted to
 $\underline{\sigma}\left(\underline{X}\right),$
whose law is
 $P_B^K$
and a predictable process
$ \underline{a}^K,$
with base
$ \left(K,\underline{{\cal{K}}},P_X^K\right),$
such that
 $\underline{a}^K \circ \Phi_X\in {\cal{I}}_2\left[\ \!\underline{b}\ \!\right]$
and, for
$ t\in \left[0,1\right],$
fixed, but arbitrary, almost surely, with respect to
$ P, $
$$ \underline{X}(\omega,t)=\underline{S}\left[\underline{a}^K\circ \Phi_X\right](\omega,t)+\underline{I}(\omega,t). $$
\noindent
That representation is unique.}

\vspace*{10pt}
\noindent
{\em Proof:} Since
$X_n(\omega,t)=\int_0^t a_n(\omega,\theta)\ \!M_n(d\theta)+B_n(\omega,t),$
one may use 6.2.5 to obtain that
$$X_n(\omega,t)=\int_0^t \alpha_n(\omega,\theta)\ \!M_n(d\theta)+I_n(\omega,t),$$
\noindent
where
$\alpha_n$
is a version of the conditional expectation of
$a_n$
 with respect to
$\underline{\sigma}\left(\underline{X}\right)$
and
$I_n$
has the same law as
$B_n.$
Furthermore
$\underline{I}$
shall have independent components (6.2.6).

\vspace*{10pt}
\noindent
One must now decompose
$\alpha_n$
 to obtain
 ${a}_n^K.$
 One proceeds as in 6.2.5, using the partition of the interval
 $[0,1]$
 with points
 $t_{p,k}.$
One has that
\begin{eqnarray*}
\alpha_n&=&\lim_p \alpha_p^{(n)},\\
  \alpha_p^{(n)}&=&\sum_i \alpha_{p,i}^{(n)},\\
 \alpha_{p,i}^{(n)}&\mbox{is}&\mbox{ adapted to }
  \sigma_{t_{p,i}}(\underline{X})\otimes {\cal{B}}
 \left(\left]t_{p,i},t_{p,i+1}\right]\right)
\end{eqnarray*}
\noindent
and that
$$\Phi_X^{-1}
\left\{
      {\cal{K}}_{t_{p,i}}
      \otimes
     {\cal{B}}
      \left(
            \left]t_{p,i},t_{p,i+1}
            \right]
      \right)
\right\}=\sigma_{t_{p,i}}
      (\underline{X})
      \otimes {\cal{B}}
      \left(
      \left]
            t_{p,i},t_{p,i+1}
      \right]
      \right).$$
\noindent
One has thus the following situation: on
$\left(
       \Omega\times [0,1],
       \sigma_{t_{p,i}}
       (\underline{X})\otimes {\cal{B}}
      \left(
      \left]
            t_{p,i},t_{p,i+1}
      \right]
      \right)
\right),$ there are two maps,
\begin{itemize}
\item
 the first, $\Phi_X,$ adapted to the range
$\left(
       K\times [0,1],
       {\cal{K}}_{t_{p,i}}\otimes
      {\cal{B}}
      \left(
      \left]
            t_{p,i},t_{p,i+1}
      \right]
      \right)
\right),$
\item
and the second, $\alpha_{p,i}^{(n)},$ adapted to the range
$\left(I\!\!R\times [0,1],{\cal{B}}(I\!\!R)\otimes {\cal{B}}([0,1])\right).$
\end{itemize}



\noindent
      The factorization theorem \cite[p. 443]{jhj} then says that
      $\alpha_{p,i}^{(n)}=a_{n,p,i}^K\circ \Phi_X$ for some adapted
$$a_{n,p,i}^{K}:\left(
                        K\times [0,1],
                        {\cal{K}}_{t_{p,i}}
                        \otimes
                        {\cal{B}}
                        \left(\ \!
                              \left]
                                    t_{p,i},t_{p,i+1}
                             \right]\ \!
                        \right)
                  \right)
                  \longrightarrow
                  \left(
                        I\!\!R\times [0,1],
                        {\cal{B}}(I\!\!R)
                        \otimes
                        {\cal{B}}([0,1])
                 \right).$$
\noindent
One then sets
$$a_{n,p}^K=\sum_i a_{n,p,i}^K$$
 with the property that
$$a_{n,p}^K\circ \Phi_X = \alpha_p^{(n)}.$$
$a_{n,p}^K$ has base
$ (K,\underline{{\cal{K}}},P_X^K)$ and is adapted to
$\bigvee_i  {\cal{K}}_{t_{p,i}}\otimes {\cal{B}} \left(\left]t_{p,i},t_{p,i+1}\right]\right).$
It is thus predictable.

\vspace*{10pt}
\noindent
Let
\begin{eqnarray*}
L_n&=&\left\{(\omega,t)\in \Omega\times [0,1]:\lim_p a_{n,p}^K\circ \Phi_X(\omega,t)\mbox{ does not exist}\right\}
\\&=&
\left\{(\omega,t)\in \Omega\times [0,1]: \lim_p \alpha_p^{(n)}(\omega,t)\mbox{ does not exist}\right\}.
\end{eqnarray*}

\noindent
By construction (6.2.1), $P\otimes M_n(L_n)=0.$ Let
$$L_n^K=\left\{(\underline{k},t)\in K\times [0,1]:\lim_p a_{n,p}^K(\underline{k},t)\mbox{ does not exist}\right\}.$$
\noindent
One has that
$P_X^K\otimes M_n\left(L_n^K\right)=P\otimes M_n(L_n)=0.$ One may thus define $a_n^K$ setting
$$a_n^K(\underline{k},t)=\lim_p a_{n,p}^K(\underline{k},t).$$
\noindent
$a_n^K$ is thus predictable and, for $t\in [0,1],$ fixed, but arbitrary, almost surely, with respect to $P,$
$$a_n^K\circ \Phi_X(\omega,t)=\alpha_n(\omega,t)=E_P\left[a_n(\cdot,t)\mid \sigma_t(\underline{X})\right](\omega)$$
\noindent
and thus
$$E_P\left[{\left|\!\left|\underline{a}^K\circ \Phi_X\right|\!\right|}_{L_2\left[\ \!\underline{b}\ \!\right]}^2\right]<\infty.$$



\noindent
That the representation is unique is seen as follows.
Let
$$\underline{X}(\omega,t)=\underline{S}[\ \!\underline{\tilde{a}}^K\circ \Phi_X\ \!](\omega,t)+\underline{\tilde{I}}(\omega,t)$$
\noindent
be another representation. Then, for $n\in I\!\!N$ and $t\in [0,1],$ fixed, but arbitrary, almost surely with respect to $P,$
$$I_n(\omega,t)-\tilde{I}_n(\omega,t)=\int_0^t \left\{ \tilde{a}_n^K(\omega,\theta)- a_n^K(\omega,\theta) \right\} M_n(d\theta).$$
\noindent
On the left hand side of that latter equality one has a continuous martingale and on the right hand side a process of bounded variation.
Both sides must then be zero.
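
\vspace*{10pt}
\noindent
That last inference can be spelled out with a short worked equation; it is the standard martingale argument, sketched here under the chapter's assumptions:

```latex
% Sketch of the standard argument: a continuous martingale, null at the
% origin, whose paths are of bounded variation has zero quadratic
% variation, hence is indistinguishable from zero.
$$\left\langle I_n-\tilde{I}_n \right\rangle_t = 0
\quad\Longrightarrow\quad
I_n(\omega,t)=\tilde{I}_n(\omega,t)
\quad\mbox{and}\quad
\int_0^t \left\{\tilde{a}_n^K(\omega,\theta)-a_n^K(\omega,\theta)\right\}
M_n(d\theta)=0,$$
so that
$\tilde{a}_n^K=a_n^K,$
almost everywhere with respect to
$P_X^K\otimes M_n.$
```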


\vspace*{10pt}
\noindent
{\bf 6.2.9 Remark:}
Representation 6.2.8 allows one to obtain, {\em mutatis mutandis,} the likelihoods of Chapter 4.

***********************************************************************************************

When I compile the code of III62.tex within the program:

\documentclass{article}
\usepackage{amsmath}
\begin{document}

\end{document}

***********************************************************************************************

it compiles perfectly. When I compile it in the book-input structure above, it does not compile; the error message is ``Missing $ inserted''.
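
[Editorial note, offered as an assumption rather than a diagnosis: one pattern consistent with this symptom is that the book class, unlike article, sends section titles to the running heads, where they pass through \MakeUppercase; math material in that moving argument can then trigger ``Missing $ inserted''. A sketch of the usual workaround, giving \section a text-only optional argument so that no math reaches the header:]

```latex
% Hedged sketch: supply a plain-text version of the title for the
% table of contents and running head, keeping the math in the body only.
\section[The case of vector processes]%
        {The  case of vector processes:
         $\underline{X}=S\left[\underline{a}\right]+\underline{B}$}
```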

Can you help? Thank you very much. Antonio Gualtierotti (TUG member)


