Random matrix theory focuses on the asymptotic properties of random matrices as their dimension tends to infinity. The spectral characteristics of a random matrix are conveniently studied via its empirical spectral distribution. In the 1950s, motivated by numerical experiments, Wigner (see [1]) proved that the empirical spectral distribution of an $n\times n$ Hermitian matrix whose diagonal and upper-diagonal entries are independent Gaussian random variables converges to the semi-circle law $\mu_{sc}$ with density
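For reference, in the standard normalization (the scaling assumed here) the semi-circle law has density

```latex
\mu_{sc}(dx)=\frac{1}{2\pi}\sqrt{4-x^2}\,\mathbf{1}_{[-2,2]}(x)\,dx .
```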
It was later shown that the distribution of the matrix entries does not play a significant role: convergence to the semi-circle law holds under far more general conditions and for many other ensembles (see [2, 3]), and much work has been devoted to weakening the assumptions on the ensembles (see [4]). The weakest known condition was given by Pastur (see [5]) and was proved to be necessary by Girko (see [6]).
Dyson (see [7]) considered a matrix-valued random process whose entries are independent Brownian motions, i.e., symmetric (resp. Hermitian) Brownian motion. It can be described on the space $\mathcal{H}_{n,\gamma}$ of symmetric ($\gamma=1$) (resp. Hermitian, $\gamma=2$) matrices of dimension $n$ as the random process $H_{n,\gamma}$, $\gamma=1,2$, with entries $\{H_{n,\gamma}^{k,l}(t),t>0\}$:
where $(B_{k,l},B'_{k,l})$, $1\leq k\leq l\leq n$, is a collection of independent real-valued standard Brownian motions. Since a standard Gaussian variable with mean $0$ and variance $1$ can be seen as a standard Brownian motion at time $1$, this viewpoint makes more effective tools available for the study of Gaussian random matrices, such as stochastic analysis and martingale theory. The eigenvalue processes of $H_{n,\gamma}$ correspond to a diffusion model of an interacting particle system with electrostatic inter-particle repulsion, which can be described by $n$ coupled Itô stochastic equations:
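In one common normalization (the constants depend on the scaling chosen for $H_{n,\gamma}$ and are an assumption here, not taken from the source), this system of coupled Itô equations reads

```latex
d\lambda_i(t)=\sqrt{\frac{2}{\gamma n}}\,d\beta_i(t)
  +\frac{1}{n}\sum_{j\neq i}\frac{dt}{\lambda_i(t)-\lambda_j(t)},
  \qquad i=1,\ldots,n,
```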
where $\beta_i$, $i=1,\ldots,n$, are independent Brownian motions. The associated empirical process
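Consistent with the definition of $L_n(0)$ in Assumption H1 below, the empirical process is

```latex
L_n(t)=\frac{1}{n}\sum_{i=1}^{n}\delta_{\lambda_i(t)},\qquad t\in[0,T],
```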
is an element of $\mathcal{M}_1(\mathbb{R})$, the space of probability measures on $\mathbb{R}$, where $\delta_x$ denotes the unit mass at $x$. Then, under quite general assumptions, the sequence $L_n(\cdot)$ was shown to converge in law to a deterministic $\mathcal{M}_1(\mathbb{R})$-valued process provided that $L_n(0)\rightarrow\mu$ in distribution, and thus a dynamic proof of Wigner's theorem was established (see [8]).
Chan (see [9]) studied the role of the Wigner semi-circle law from the point of view of a symmetric matrix whose entries are independent Ornstein–Uhlenbeck processes. It can be described on the space of $n$-dimensional symmetric matrices as the random process $X_t$:
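One common normalization of such symmetric matrix Ornstein–Uhlenbeck dynamics (the constants here are an assumption, chosen only to match the description that follows) is

```latex
dX_t=\frac{1}{\sqrt{2n}}\left(dB_t+dB_t^{T}\right)-\frac{1}{2}X_t\,dt,
```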
where $B$ is a standard matrix-valued Brownian motion and $B^T$ denotes the transpose of $B$. He also derived a system of Itô stochastic equations for the eigenvalues of $X_t$, which corresponds to a diffusion model of an interacting particle system with linear drift towards the origin and electrostatic inter-particle repulsion, i.e.,
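In a normalization consistent with a linear restoring drift and Coulomb repulsion (the precise constants are an assumption, not taken from the source), the eigenvalue system has the shape

```latex
d\lambda_i(t)=\frac{1}{\sqrt{n}}\,d\beta_i(t)
  +\left(-\frac{1}{2}\lambda_i(t)
  +\frac{1}{2n}\sum_{j\neq i}\frac{1}{\lambda_i(t)-\lambda_j(t)}\right)dt,
  \qquad i=1,\ldots,n,
```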
where $\beta_i$, $i=1,\ldots,n$, are independent Brownian motions. Its associated empirical process converges weakly to a measure-valued process which is characterized by a weak solution of a deterministic ordinary differential equation, and the Wigner semi-circle law is one of the equilibrium points of this limiting equation.
The purpose of this paper is to study the large deviation properties of the law of the empirical process associated with $X$ defined by (1.1). This problem has been studied by several authors: Dawson et al. (see [10]) studied it from the McKean–Vlasov limit, Duvillard et al. (see [11]) studied the same problem for symmetric (Hermitian) Brownian motion, and their work was completed by Guionnet et al. (see [12]).
We first state the assumptions and fix some notation for our result.
H1 The initial values $\mu_n(0):=L_n(0)=\displaystyle\frac{1}{n}\sum\limits_{i=1}^n\delta_{\lambda_i(0)}$ converge weakly to some limiting measure $\mu$, where $\lambda_1(0),\ldots,\lambda_n(0)$ are the initial conditions of (1.2).
H2 $\displaystyle\sup_{n}\langle \mu_n(0),\log(1+x^2)\rangle<+\infty$.
Let $C([0,T],\mathbb{R})$ be the space of continuous functions on $[0,T]$ equipped with the uniform convergence topology, and let $C([0,T],\mathcal{M}_1(\mathbb{R}))$ be the space of continuous measure-valued processes furnished with the topology generated by the weak topology on $\mathcal{M}_1(\mathbb{R})$ and the uniform convergence topology on $[0,T]$. Denote
and
for any measure $v$.
Next, we introduce a candidate for a rate function on $C([0,T],\mathcal{M}_1(\mathbb{R}))$. By Itô's formula and (1.2), for any $f\in C_b^{2,1}(\mathbb{R}\times [0,T])$,
Thus
where $M_f^n(t)$, $t\leq T$ is a martingale given by
For any $f$, $g\in \mathcal{D}$, $s\leq t$ and $v\in C([0,T],\mathcal{M}_1(\mathbb{R}))$, set
Our main result is as follows:
Theorem 2.1 Under Assumptions H1 and H2, $\big\{L_n(t),t\in[0,T]\big\}_{n\geq 1}$ obeys a large deviation upper bound on $C([0,T],\mathcal{M}_1(\mathbb{R}))$ with speed $n^2$ and good rate function
Thanks to the exponential tightness result established in [9], to prove Theorem 2.1 by the usual scheme (see [13]) we only need to show that the rate function is good and that a weak large deviation upper bound holds. We first recall the exponential tightness result (see [9], Theorem 3.3):
Lemma 2.2 Under Assumption H1, for any $L>0$ there exists a compact set $\mathcal{K}_L\subset \mathcal{M}_1(\mathbb{R})$ such that
Next, we show that $S$ is good and a weak large deviation upper bound holds for $\big\{L_n(t),t\in[0,T]\big\}_{n\geq 1}$.
Lemma 2.3 $S$ is a good rate function, i.e., $S$ is a non-negative function and for any $l\geq 0$, the level set $\big\{v\in C([0,T],\mathcal{M}_1(\mathbb{R})): S(v)\leq l\big\}$ is compact.
Proof First, we have
Hence, $S$ is non-negative for $v_0=\mu$, and, as a supremum of continuous functions on $C([0,T],\mathcal{M}_1(\mathbb{R}))$, $S$ is lower semi-continuous. Thus, we only need to show that the level sets of $S$ are compact. By Lemma 5.4 in [10], it suffices to show that each level set is contained in a compact set of the form:
where $\mathfrak{C}$ and $\mathfrak{C}_n$ are compact subsets of $\mathcal{M}_1(\mathbb{R})$ and $C([0,T],\mathbb{R})$, respectively, and $(g_n)_{n\geq 0}$ is a family of bounded continuous functions dense in $C_c(\mathbb{R})$.
According to Prohorov's theorem, $\mathfrak{C}$ in (2.3) can be taken as
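A standard choice of such a tight family (this display is a reconstruction consistent with the estimate (2.4) below, not copied from the source) is

```latex
\mathfrak{C}=\bigcap_{m\in\mathbb{N}}
  \Big\{\mu\in\mathcal{M}_1(\mathbb{R}):\mu\big([-m,m]^c\big)\leq \ell_m\Big\},
```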
with a positive real-valued sequence $\ell=(\ell_m)_{m\in\mathbb{N}}$. Moreover, by the Arzelà–Ascoli theorem and the fact that $t\rightarrow v_t(g_n)$ ($\forall n\in \mathbb{N}$) is uniformly bounded on $C([0,T],\mathcal{M}_1(\mathbb{R}))$ since $g_n$ is bounded, we can take $\mathfrak{C}_n$ in (2.3) of the form
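A standard equicontinuity set of this kind (again a reconstruction consistent with the modulus-of-continuity estimate (2.5) below) is

```latex
\mathfrak{C}_n=\Big\{h\in C([0,T],\mathbb{R}):
  \sup_{|t-s|\leq \delta_m}|h(t)-h(s)|\leq \tfrac{1}{m}\ \ \forall m\in\mathbb{N}\Big\},
```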
with a positive sequence $\delta=(\delta_m)_{m\in\mathbb{N}}$. Thus, to finish the proof, we need to show that for any $v\in \big\{S\leq l\big\}$ ($\forall l>0$) and any integer $m$,
$\bullet $ there is a positive real number $\ell_m^l$ such that
$\bullet $ there exists a positive real number $\delta_m^l$ such that
For $0<\varepsilon\leq 1$, set $f_\varepsilon(x)=\log(1+x^2(1+\varepsilon x^2)^{-1})\in \mathcal{D}$, then
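Note that for each fixed $x$, the quantity $x^2(1+\varepsilon x^2)^{-1}$ increases as $\varepsilon$ decreases, so

```latex
f_\varepsilon(x)=\log\!\Big(1+\frac{x^2}{1+\varepsilon x^2}\Big)
  \;\nearrow\;\log(1+x^2)\qquad\text{as }\varepsilon\downarrow 0,
```

which is what justifies the use of the monotone convergence theorem below.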
Taking $f=f_\varepsilon$ in the supremum of (2.2), we have that for any $t\in[0,T]$ and any $v\in\big\{S(\cdot)\leq l\big\}$,
Thus by (2.1), we have
Letting $\varepsilon\downarrow 0$, by the monotone convergence theorem we have
Then by Chebyshev's inequality and Assumption H2, for any $v\in \big\{S(\cdot)\leq l\big\}$ and any $K\in \mathbb{R}^+$,
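Concretely, since $\{|x|\geq K\}=\{\log(1+x^2)\geq\log(1+K^2)\}$, Chebyshev's inequality yields a bound of the shape (this display is a reconstruction consistent with the surrounding argument)

```latex
\sup_{t\in[0,T]}v_t\big(|x|\geq K\big)
  \leq\frac{\sup_{t\in[0,T]}\langle v_t,\log(1+x^2)\rangle}{\log(1+K^2)},
```

and the numerator is bounded uniformly over $\big\{S(\cdot)\leq l\big\}$ by Assumption H2.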
So (2.4) is proved. Again by (2.6), we have for any $f\in\mathcal{D}$,
Thus by (2.1), for any $v\in\big\{S(\cdot)\leq l\big\}$, any $f\in C_b^2(\mathbb{R})\cap\mathcal{D}$, and $0\leq s\leq t\leq T$,
This completes the proof of (2.5).
To finish the proof of the main result, it only remains to show that $\big\{L_n(t),t\in[0,T]\big\}_{n\geq 1}$ obeys the weak large deviation upper bound.
Lemma 2.4 For $v\in C([0,T],\mathcal{M}_1(\mathbb{R}))$,
where $B(v,r)$ denotes the open ball with center $v$ and radius $r$.
Proof First, since $L_n(0)$ is deterministic and converges to $\mu$, if $v_0\ne \mu$,
Second, take $v\in C([0,T],\mathcal{M}_1(\mathbb{R}))$ and $f\in \mathcal{D}$. By (2.1), $S^{s,t}(L_n,f)-\frac{1}{n}\varepsilon(f)_{L_n}^{s,t}$, $s\leq t$, is a martingale with respect to the filtration of the Brownian motion $\beta$, and equals
its bracket is $\langle f,f\rangle_{L_n}^{s,t}$, where
Since $f'$ is uniformly bounded, we have for $v\in C([0,T],\mathcal{M}_1(\mathbb{R}))$,
is a martingale. Moreover, $C([0,T],\mathcal{M}_1(\mathbb{R}))\ni v\rightarrow \bar{S}^{s,t}(v,f) =S^{s,t}(v,f)-\frac{1}{2}\langle f,f\rangle^{s,t}_v$ is continuous, as $f$ and its first two derivatives are bounded and continuous, whereas the function
is uniformly bounded by $T\|\partial_x^2f\|_\infty$. Therefore, for any $v\in C([0,T],\mathcal{M}_1(\mathbb{R}))$ and $r>0$, we have for $s\leq t\leq T$,
where we have used the fact that $E\big(M(L_n,f)(t)\big)=1$ since the process $\big\{M(L_n,f)(t),s\leq t\big\}$ is a martingale. Hence, for any $f\in \mathcal{D}$,
optimizing over $f$ gives
Thus the proof is completed.