Synchronization is a widespread natural phenomenon; it was first observed by Huygens in 1665 [1]. For instance, pacemaker cells of the heart fire simultaneously; thousands of fireflies may flash at the same time; theater audiences can applaud with a rhythmic beat; and field crickets chirp in unison [2-4]. Theoretical studies of synchronization phenomena from a mathematical perspective were initiated by Wiener in the 1950s [5].
Mathematically, exact synchronization for a controlled system asks for a control such that the difference of any two components of the corresponding solution (with a given initial state) vanishes at a fixed time and remains zero thereafter. Exact synchronization in the PDE setting was first studied for a coupled system of wave equations, in the higher-dimensional case within the framework of weak solutions [6-8], and in the one-dimensional case within the framework of classical solutions [2, 9]. A minimal-time control problem for the exact synchronization of some parabolic systems was studied in [10].
In this paper, we consider an optimal control problem related to the exact synchronization of a class of linear ordinary differential systems.
Let $ A\in \mathbb R^{n\times n} $ and $ B\in \mathbb R^{n\times m} $ be two constant matrices, where $ n\geq 2 $ and $ m\geq 1 $. Let $ y_0\in \mathbb R^n $. Consider the following controlled linear ordinary differential system
where $ u\in L^2(0, +\infty; \mathbb R^m) $ is a control. Write
for the solution of (1.1). Here and throughout this paper, we denote the transposition of a matrix $ J $ by $ J^\top $. It is well known that for each $ T>0 $, $ y(\cdot;y_0, u)\in C([0, T];\mathbb R^n) $. Given $ T_0>0 $, $ y_0\in \mathbb R^n $ and $ y_d\in L^2(0, T_0;\mathbb R^n) $, we define an optimal control problem as follows
where
Two concepts related to this problem are null controllability and exact synchronization; let us recall them. First, the system (1.1) is said to be null controllable at time $ T $ if, for any $ y_0\in \mathbb R^n $, there exists a control $ u\in L^2(0, +\infty;\mathbb R^m) $ with $ u(t) = \boldsymbol{0} $ over $ (T, +\infty) $, so that $ y(t;y_0, u) = \boldsymbol{0} $ for all $ t\ge T $. Second, the system (1.1) is said to be exactly synchronizable at time $ T $ if, for any $ y_0\in \mathbb R^n $, there exists a control $ u\in L^2(0, +\infty;\mathbb R^m) $ with $ u(t) = \boldsymbol{0} $ over $ (T, +\infty) $, so that
Mathematically, exact synchronization is weaker than null controllability.
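For the finite-dimensional system (1.1), null controllability at any time $ T>0 $ is equivalent to the classical Kalman rank condition $ \mathrm{rank}(B, AB, \cdots, A^{n-1}B) = n $. This standard criterion is not part of the paper's argument, but a minimal pure-Python sketch (with an illustrative pair $ (A, B) $, not taken from the text) may clarify the kind of algebraic check involved:

```python
def mat_mul(X, Y):
    # matrix product of X (p x q) and Y (q x r)
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def rank(M, tol=1e-9):
    # numerical rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def kalman_rank(A, B):
    # rank of the Kalman matrix (B, AB, ..., A^{n-1}B)
    n, blocks, P = len(A), [], B
    for _ in range(n):
        blocks.append(P)
        P = mat_mul(A, P)
    K = [[blk[i][j] for blk in blocks for j in range(len(B[0]))]
         for i in range(n)]
    return rank(K)

# illustrative pair (A, B), not taken from the paper
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
print(kalman_rank(A, B))  # 2 = n, so this pair is null controllable
```

Exact synchronization being weaker, a pair can fail this full-rank test and still be exactly synchronizable; the weaker rank condition in $ ({\rm{H}}_1) $ below captures exactly that situation.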
Here and throughout this paper, we denote $ A = (a_{ij})_{1\leq i, j\leq n} $, $ B = (b_{ij})_{1\leq i\leq n, 1\leq j\leq m} $ and
We shall use $ \langle\cdot, \cdot\rangle $ to denote the inner product of $ \mathbb R^n $ or $ \mathbb R^m $ if there is no risk of causing any confusion.
In this paper, we assume that $ A $ and $ B $ satisfy the following hypothesis $ ({\rm{H}}_1) $ or $ ({\rm{H}}_2) $.
$ ({\rm{H}}_1) $ The pair $ (A, B) $ satisfies that
and that $ \mathrm{rank}(DB, DAB, \cdots, DA^{n-2}B) = n-1 $. Recall that $ D $ is given by (1.2).
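The rank condition in $ ({\rm{H}}_1) $ can be verified numerically. Since (1.2) is not reproduced here, the sketch below assumes, for illustration only, that $ D $ is the $ (n-1)\times n $ pairwise-difference matrix with rows $ e_i^\top - e_{i+1}^\top $, a common choice in the synchronization literature; the matrices $ A $ and $ B $ are likewise hypothetical:

```python
from fractions import Fraction

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def exact_rank(M):
    # exact rank via fraction-based Gaussian elimination (no rounding issues)
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def h1_rank(A, B, D):
    # rank(DB, DAB, ..., DA^{n-2}B); (H1) requires this to equal n - 1
    n, blocks, P = len(A), [], B
    for _ in range(n - 1):
        blocks.append(mat_mul(D, P))
        P = mat_mul(A, P)
    K = [[blk[i][j] for blk in blocks for j in range(len(B[0]))]
         for i in range(n - 1)]
    return exact_rank(K)

# hypothetical data: D assumed to be the pairwise-difference matrix
# (rows e_i - e_{i+1}); A chosen with equal row sums, cf. (1.3)
D = [[1, -1, 0], [0, 1, -1]]
A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
B = [[1], [0], [0]]
print(h1_rank(A, B, D))  # 2 = n - 1, the rank part of (H1) holds here
```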
$ ({\rm{H}}_2) $ The pair $ (A, B) $ satisfies that
and that
The main result of this paper is as follows.
Theorem 1.1 Suppose that $ A $ and $ B $ satisfy either $ ({\rm{H}}_1) $ or $ ({\rm{H}}_2) $. Then problem (P) has a unique optimal control. Moreover,
(i) If $ A $ and $ B $ satisfy $ ({\rm{H}}_1) $, then $ u^* $ is the optimal control to problem (P) if and only if $ u^* \in \mathcal U $ and there exists a function $ q\in C([0, T_0];\mathbb R^n) $ so that
and
where $ q_0 = (q_{01}, q_{02}, \cdots, q_{0n})^\top $ satisfies $ \sum\limits_{i = 1}^n q_{0i} = 0 $, and $ y^* $ is the solution to (1.1) corresponding to the optimal control $ u^* $, i.e., $ y^*(\cdot) = y(\cdot;y_0, u^*) $.
(ii) If $ A $ and $ B $ satisfy $ ({\rm{H}}_2) $, then $ u^* $ is the optimal control to problem (P) if and only if $ u^* \in \mathcal U $ and there exists a function $ q\in C([0, T_0];\mathbb R^n) $ so that
where $ y^* $ is the solution to (1.1) corresponding to the optimal control $ u^* $, i.e., $ y^*(\cdot) = y(\cdot;y_0, u^*) $.
Pontryagin's maximum principle for optimal control problems governed by differential equations has been studied for decades; see [11-15] and the references therein. Recently, Pontryagin's maximum principle for an optimal control problem concerning the exact synchronization of parabolic differential equations was considered in [16]; however, a sufficient condition for that problem was not derived there. This paper is organized as follows. In Section 2, we prove Theorem 1.1. In Section 3, we give a numerical approximation of the optimal control and present some examples to test the effectiveness of the algorithm.
Under hypothesis $ ({\rm{H}}_1) $ or $ ({\rm{H}}_2) $, by the same arguments as those in [16], one can show the existence and uniqueness of the optimal control of problem (P); we omit the proofs here. We now turn to the proof of Theorem 1.1.
(i) We start with the proof of the "necessity" part. For any $ v\in\mathcal U $ and $ \lambda\in(0, 1) $, we set $ u_\lambda \triangleq u^* + \lambda(v - u^*) $. Then $ u_\lambda\in \mathcal{U} $. Denote
We can show that
Since $ u^* $ is the optimal control to problem $ (\textbf{P}) $, we get
Dividing (1.7) by $ \lambda $ and letting $ \lambda\to 0^+ $, we have
Let $ p $ be the solution to the following system
Multiplying the first equation of (1.9) by $ z $ and integrating it over $ (0, {T_0}) $, by (1.6) and (1.9), we get
This, together with (1.8), implies that
Let $ \boldsymbol{\varphi} \left( { \cdot ;{T_0},{\boldsymbol{\varphi} _0}} \right) $ be the unique solution to the following system
where $ {\boldsymbol{\varphi} _0} = {({\varphi _{01}},{\varphi _{02}}, \cdots ,{\varphi _{0n}})^\top} $. Define
It is obvious that $ Y_{T_0} $ is closed. We now claim that
Otherwise, we would have $ u^* - B^\top p \in L^2(0, T_0;\mathbb{R}^m)\backslash Y_{T_0} $. This implies that
where $ g\in L^2(0, T_0;\mathbb R^m) $. In particular, choosing $ f = B^\top\varphi(\cdot;T_0, \varphi_0) $ with $ \sum\limits_{i = 1}^n \varphi_{0i} = 0 $ in (1.13), we have that
for all $ \varphi_0\in \mathbb R^n $ with $ \sum\limits_{i = 1}^n \varphi_{0i} = 0 $.
On one hand, let $ w(\cdot) $ be the solution to the following system
where $ \widetilde {g} $ is the zero extension of $ g $ to $ (0, +\infty) $. Multiplying the first equation of (1.15) by $ \boldsymbol{\varphi} ( \cdot ;{T_0},{\boldsymbol{\varphi} _0}) $ and integrating over $ (0, {T_0}) $, by (1.11) and (1.15), we obtain that
This, together with (1.14), implies that
where $ w({T_0}) = (w_1(T_0), w_2(T_0), \cdots, w_n(T_0))^\top $.
On the other hand, by (1.3), all row sums of $ A $ coincide; denote their common value by $ \mu $, i.e., $ \sum\limits_{l = 1}^n a_{il} \triangleq \mu $ for every $ i $, and set $ \alpha \triangleq (1, 1, \cdots, 1)^\top $. Then we can check directly that
Since $ w(t) = e^{A(t-T_0)}w(T_0) $ for all $ t\geq T_0 $, it follows from (1.16) and (1.17) that
This implies that $ \widetilde{g}+u^*\in \mathcal U $. By (1.10), we get that
which leads to a contradiction with (1.13). Hence, (1.12) follows, i.e., there exists a $ q_0 = (q_{01}, q_{02}, \cdots, q_{0n})^\top\in \mathbb R^n $ with $ \sum\limits_{i = 1}^n q_{0i} = 0 $, so that
Set $ q(\cdot)\triangleq {p}(\cdot)+\varphi(\cdot;T_0, q_0) $. Then by (1.9) and (1.18), we have that
and $ u^*(t) = B^\top q(t) $ for a.e. $ t\in(0, {T_0}) $.
This completes the proof of necessity.
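A key ingredient of the argument above is that, by (1.3), $ \alpha = (1, 1, \cdots, 1)^\top $ is an eigenvector of $ A $ with eigenvalue $ \mu $, so $ e^{tA}\alpha = e^{\mu t}\alpha $; this is what keeps $ w(t) = e^{A(t-T_0)}w(T_0) $ synchronized for $ t\geq T_0 $. A small numerical check of this eigenvector relation, using a truncated-series matrix exponential (the matrix $ A $ below is illustrative only, chosen with equal row sums as in (1.3)):

```python
import math

def expm(A, t, terms=40):
    # truncated power series for e^{tA}; adequate for small matrices, moderate t
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in E]
    for k in range(1, terms):
        # P <- P * (t A) / k, so that after this step P = (t A)^k / k!
        P = [[sum(P[i][p] * A[p][j] for p in range(n)) * t / k
              for j in range(n)] for i in range(n)]
        E = [[E[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return E

# illustrative A with equal row sums mu = 2
A = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]]
mu, t = 2.0, 0.7
E = expm(A, t)
w = [sum(E[i][j] for j in range(3)) for i in range(3)]  # e^{tA} applied to alpha
print(w, math.exp(mu * t))  # each component of w should match e^{mu t}
```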
We next turn to the proof of the "sufficiency" part. For any $ u\in\mathcal U $, we denote
where $ z(\cdot)\triangleq(z_1(\cdot), z_2(\cdot), \cdots, z_n(\cdot))^\top $. We can easily check that
Multiplying the first equation of (1.5) by $ z $ and integrating it over $ (0, {T_0}) $, by (1.19), (1.4) and (1.5), we obtain that
This, along with (1.20) and (1.21), implies that
which indicates that $ u^* $ is the optimal control to problem (P).
(ii) By the same arguments as those in [16], under hypothesis (H2), we observe that
We start with the proof of the "necessity" part. Let $ p $ and $ \boldsymbol{\varphi} ( \cdot ;{T_0},{\boldsymbol{\varphi} _0}) $ (where $ \varphi_0\in \mathbb R^n $) be the unique solutions to the equations
respectively. Define
It is obvious that $ Y_{T_0} $ is closed. By arguments similar to those used to prove (1.12), we have that $ u^*- {B^\top}p\in {Y_{T_0}} $, i.e., there exists a $ q_0\in \mathbb R^n $ so that
Set $ q(\cdot)\triangleq {p}(\cdot)+\varphi(\cdot;T_0, q_0) $. Then by (1.22) and (1.23), we have that
We next turn to the "sufficiency" part. Its proof is similar to that of the "sufficiency" part in (i) and is omitted here.
In this section, we carry out two numerical tests. The tests concern the two cases considered in Theorem 1.1, where (H1) is satisfied in Test 1 and (H2) is satisfied in Test 2.
Test 1 For the framework of (i) in Theorem 1.1, we observe that the optimal control $ u^* $ and the optimal trajectory $ y^* $ solve the following equations
where $ y(T_0) = (y_1(T_0), y_2(T_0), \cdots, y_n(T_0))^\top $, $ q(T_0) = (q_1(T_0), q_2(T_0), \cdots, q_n(T_0))^\top $; $ y_0 $, $ y_d $ and $ T_0 $ are given, and $ A $ and $ B $ are chosen to satisfy $ ({\rm{H}}_1) $.
Let $ (t_l)_{l = 0, 1, \cdots, N} $ be an equidistant partition of [0, $ T_0 $] with the time step $ \Delta t = \frac{T_0}{N} $, i.e.,
For $ l = 0, 1, \cdots, N, \;\; i = 1, 2, \cdots, n $, we set
Discretizing (3.1) by an implicit finite difference scheme yields the following system
This can be reformulated as a linear system of $ (2N+1)\times n $ equations, $ MY = b $, where
and $ M $ is a $ (2N+1)n\times(2N+1)n $ matrix given by
where $ I_n $ is the $ n\times n $ identity matrix, $ \boldsymbol{0}_{n\times n} $ is the $ n\times n $ zero matrix,
Finally, we can solve (3.2) for different choices of $ N $ to obtain the numerical solution $ y^\Delta $, and then compare it with the exact solution $ y^* $ to check the convergence of the algorithm.
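The full optimality system (3.1)-(3.2) couples a forward equation for $ y $ with a backward equation for $ q $ and is not reproduced in detail above. As a simplified illustration of the same "all time steps in one linear solve" idea, the sketch below assembles an implicit-Euler discretization of $ y' = Ay $ (a special case of (1.1) with $ u \equiv \boldsymbol{0} $; the data are illustrative only) into one block system $ MY = b $ and solves it by Gaussian elimination:

```python
def solve(M, b):
    # dense Gaussian elimination with partial pivoting, then back substitution
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[piv] = M[piv], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for j in range(c, n + 1):
                M[i][j] -= f * M[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def implicit_euler_block(A, y0, T0, N):
    # assemble (y^l - y^{l-1})/dt = A y^l, l = 1..N, into one (N n) x (N n)
    # linear system M Y = b and solve for the whole trajectory at once
    n, dt, size = len(A), T0 / N, len(A) * N
    M = [[0.0] * size for _ in range(size)]
    b = [0.0] * size
    for l in range(N):
        for i in range(n):
            r = l * n + i
            M[r][l * n + i] += 1.0
            for j in range(n):
                M[r][l * n + j] -= dt * A[i][j]
            if l == 0:
                b[r] = y0[i]          # first step couples to the initial state
            else:
                M[r][(l - 1) * n + i] -= 1.0
    Y = solve(M, b)
    return [Y[(N - 1) * n + i] for i in range(n)]   # approximation of y(T0)

# scalar check: y' = -y, y(0) = 1, so y(1) = e^{-1} ~ 0.3679
print(implicit_euler_block([[-1.0]], [1.0], 1.0, 100))
```

The coupled system (3.2) has the same banded block structure, only with the backward $ q $-blocks interleaved, which is why a single sparse solve per value of $ N $ suffices in the tests below.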
We carry out the test with $ n = 2 $, $ m = 1 $, $ T_0 = 1 $,
Clearly $ A $ and $ B $ satisfy $ ({\rm{H}}_1) $, and the exact solution $ y^* = (y_1^*, y_2^*)^\top $ can be obtained by direct computation:
Taking $ N = 10, 20, 40 $, we illustrate the numerical solution $ y^\Delta $ and the exact solution $ y^* $ in the following figures:
Test 2 For the framework of (ii) in Theorem 1.1, we see that the optimal control $ u^* $ and the optimal trajectory $ y^* $ solve the following equations
where $ y(T_0) = (y_1(T_0), y_2(T_0), \cdots, y_n(T_0))^\top $; $ y_d $ and $ T_0 $ are given, and $ A $ and $ B $ are chosen to satisfy $ ({\rm{H}}_2) $.
Analogously, applying the same scheme as in Test 1, the discretization of (3.3) reads as the following system
This can be reformulated as a linear system of $ (2N+1)\times n $ equations:
Here
where $ I_n $ is the $ n\times n $ identity matrix and $ \boldsymbol{0}_{n\times n} $ is the $ n\times n $ zero matrix.
Finally, we can solve (3.4) for different choices of $ N $ to obtain the numerical solution $ y^\Delta $, and then compare it with the exact solution $ y^* $ to check the convergence of the algorithm.
We carry out the test with $ n = m = 2 $, $ T_0 = 1 $,
Clearly $ A $ and $ B $ satisfy $ ({\rm{H}}_2) $, and the exact solution $ y^* = (y_1^*, y_2^*)^\top $ can be obtained by direct computation:
Taking $ N = 10, 20, 40 $, we illustrate the numerical solution $ y^\Delta $ and the exact solution $ y^* $ in the following figures:
From these figures, we observe that the error between the numerical solution and the exact solution decreases as $ N $ increases. Hence, for $ N $ large enough, the exact solution is well approximated by this method.
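The visual comparison can be complemented by a quantitative one: for errors $ e_N $ measured at successive $ N $, the observed convergence order is $ \log_2(e_N/e_{2N}) $. A minimal sketch on a scalar model problem $ y' = -y $ (illustrative only, not the systems of Tests 1-2), using the same $ N = 10, 20, 40 $:

```python
import math

def implicit_euler_error(N):
    # implicit Euler for y' = -y, y(0) = 1 on [0, 1]; max error against e^{-t}
    dt, y, err = 1.0 / N, 1.0, 0.0
    for l in range(1, N + 1):
        y = y / (1.0 + dt)                 # (y^l - y^{l-1})/dt = -y^l
        err = max(err, abs(y - math.exp(-l * dt)))
    return err

errs = [implicit_euler_error(N) for N in (10, 20, 40)]
orders = [math.log2(errs[k] / errs[k + 1]) for k in range(2)]
print(errs)    # errors shrink roughly by half as N doubles
print(orders)  # observed order close to 1, as expected for implicit Euler
```

An observed order near 1 matches the first-order accuracy of the implicit scheme used in both tests; halving behavior of the error as $ N $ doubles is precisely what the figures display.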