Journal of Mathematics (数学杂志), 2019, Vol. 39, Issue 3: 357-369
OPTIMAL CONTROL PROBLEM FOR EXACT SYNCHRONIZATION OF ORDINARY DIFFERENTIAL SYSTEMS
WU Ke-fan    
School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
Abstract: In this paper, we study a class of optimal control problems related to the exact synchronization of a controlled linear ordinary differential system. We establish a necessary and sufficient condition for the optimal control. Moreover, we give a numerical approximation of the optimal control and present some examples to test the effectiveness of the algorithm.
Keywords: exact synchronization     necessary and sufficient conditions     ordinary differential system     numerical approximation    
Control Theory for the Synchronization of Ordinary Differential Systems
WU Ke-fan
School of Mathematics and Statistics, Wuhan University, Wuhan 430072, Hubei, China
Abstract: In this paper, we study an optimal control problem for a linear ordinary differential system subject to exact synchronization. Using a variational method together with the Hahn-Banach theorem, we obtain a necessary and sufficient condition for the optimal control, give its numerical approximation, and present examples to verify the effectiveness of the algorithm.
Keywords: exact synchronization; necessary and sufficient conditions; ordinary differential system; numerical approximation
1 Introduction
1.1 Background

Synchronization is a widespread natural phenomenon. It was first observed by Huygens in the seventeenth century [1]. For instance, pacemaker cells of the heart function simultaneously; thousands of fireflies may twinkle at the same time; audiences in a theater can applaud with a rhythmic beat; and field crickets chirp in unison [2-4]. Theoretical study of synchronization phenomena from the perspective of mathematics was initiated by Wiener in the 1950s [5].

Mathematically, the exact synchronization of a controlled system asks for a control such that the difference of any two components of the corresponding solution to the system (with a given initial state) vanishes at a fixed time and remains zero afterwards. The exact synchronization in the PDE case was first studied for a coupled system of wave equations, both in the higher-dimensional case in the framework of weak solutions [6-8] and in the one-dimensional case in the framework of classical solutions [2, 9]. A minimal time control problem for the exact synchronization of some parabolic systems was studied in [10].

In this paper, we consider an optimal control problem related to the exact synchronization of a class of linear ordinary differential systems.

1.2 Formulation of the Problem and Hypotheses

Let $ A\in \mathbb R^{n\times n} $ and $ B\in \mathbb R^{n\times m} $ be two constant matrices, where $ n\geq 2 $ and $ m\geq 1 $. Let $ y_0\in \mathbb R^n $. Consider the following controlled linear ordinary differential system

$ \begin{equation} \left\{ \begin{array}{ll} \dot{y} = Ay+Bu , \quad t>0, \\ y(0) = y_0 , \end{array} \right. \end{equation} $ (1.1)

where $ u\in L^2(0, +\infty; \mathbb R^m) $ is a control. Write

$ \begin{equation*} y(t;y_0, u) = (y_1(t;y_0, u), y_2(t;y_0, u), \cdots, y_n(t;y_0, u))^\top \end{equation*} $

for the solution of (1.1). Here and throughout this paper, we denote the transposition of a matrix $ J $ by $ J^\top $. It is well known that for each $ T>0 $, $ y(\cdot;y_0, u)\in C([0, T];\mathbb R^n) $. Given $ T_0>0 $, $ y_0\in \mathbb R^n $ and $ y_d\in L^2(0, T_0;\mathbb R^n) $, we define an optimal control problem as follows

$ \begin{equation*} (\textbf{P})\qquad\mathop {\min }\limits_{u \in \mathcal U}\frac{1}{2}\int_0^{T_0} \left(\|y(t;y_0, u)-y_d(t)\|^2_{\mathbb R^n}+\|u(t)\|_{\mathbb R^m}^2\right)\, dt, \end{equation*} $

where

$ \begin{array}{lll} \mathcal U\triangleq &\{u\in L^2(0, +\infty;\mathbb R^m):u(t) = \boldsymbol{0}\;\mbox{for a.e.}\;\;t>T_0, \\ &\mbox{and}\;y_1(\cdot;y_0, u) = \cdots = y_n(\cdot;y_0, u)\;\;\mbox{over}\;[T_0, +\infty )\}.\\ \end{array} $

Two concepts related to this problem are the null controllability and the exact synchronization. Let us recall them. First, the system (1.1) is said to be null controllable at time $ T $, if for any $ y_0\in \mathbb R^n $, there exists a control $ u\in L^2(0, +\infty;\mathbb R^m) $ with $ u(t) = \boldsymbol{0} $ over $ (T, +\infty) $, so that $ y(t;y_0, u) = \boldsymbol{0} $ for all $ t\ge T $. Second, the system (1.1) is said to be exactly synchronizable at time $ T $, if for any $ y_0\in \mathbb R^n $, there exists a control $ u\in L^2(0, +\infty;\mathbb R^m) $ with $ u(t) = \boldsymbol{0} $ over $ (T, +\infty) $, so that

$ \begin{equation*} y_1(t;y_0, u) = y_2(t;y_0, u) = \cdots = y_n(t;y_0, u)\;\; {\rm for\; \; all\; \; }t\geq T. \end{equation*} $

Clearly, exact synchronization is a weaker requirement than null controllability: any control realizing the latter also realizes the former.

Here and throughout this paper, we denote $ A = (a_{ij})_{1\leq i, j\leq n} $, $ B = (b_{ij})_{1\leq i\leq n, 1\leq j\leq m} $ and

$ \begin{equation} D\triangleq\left( \begin{array}{ccccc} 1 & -1 & 0 &\cdots & 0 \\ 0 & 1 & -1 &\cdots & 0\\ \vdots & \vdots & \ddots & \ddots &\vdots \\ 0 & 0 & \cdots&1& -1 \\ \end{array} \right)_{(n-1)\times n}. \end{equation} $ (1.2)

We shall use $ \langle\cdot, \cdot\rangle $ to denote the inner product of $ \mathbb R^n $ or $ \mathbb R^m $ if there is no risk of causing any confusion.

In this paper, we assume that $ A $ and $ B $ satisfy the following hypothesis $ ({\rm{H}}_1) $ or $ ({\rm{H}}_2) $.

$ ({\rm{H}}_1) $ The pair $ (A, B) $ satisfies that

$ \begin{equation} \sum\limits_{\ell = 1}^n {{a_{i\ell }}} = \sum\limits_{\ell = 1}^n {{a_{j\ell }}} \;\;\mbox{for all}\;\;i, j\in \{1, 2, \cdots, n\}; \end{equation} $ (1.3)

and that $ \mbox{rank}(DB, DAB, \cdots, DA^{n-2}B) = n-1 $. Recall that $ D $ is given by (1.2).

$ ({\rm{H}}_2) $ The pair $ (A, B) $ satisfies that

$ \begin{equation*} \sum\limits_{\ell = 1}^n {{a_{{i_0}\ell }}} \ne \sum\limits_{\ell = 1}^n {{a_{{j_0}\ell }}} \;\;\mbox{for some}\;\; i_0, j_0\in \{1, 2, \cdots, n\} \end{equation*} $

and that

$ \begin{equation*} \mbox{rank}(B, AB, \cdots, A^{n-1}B) = n. \end{equation*} $
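Both hypotheses are finite rank conditions and can be checked numerically. The following Python/NumPy sketch (an illustration, not part of the paper) builds the matrix $ D $ from (1.2) and tests $ ({\rm{H}}_1) $ and $ ({\rm{H}}_2) $; the example pairs are the ones used later in the numerical tests of Section 3.

```python
import numpy as np

def check_H1(A, B):
    """Check hypothesis (H1): all row sums of A are equal and
    rank(DB, DAB, ..., DA^{n-2}B) = n - 1."""
    n = A.shape[0]
    # D is the (n-1) x n difference matrix from (1.2)
    D = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)
    row_sums_equal = np.allclose(A.sum(axis=1), A.sum(axis=1)[0])
    K = np.hstack([D @ np.linalg.matrix_power(A, k) @ B for k in range(n - 1)])
    return row_sums_equal and np.linalg.matrix_rank(K) == n - 1

def check_H2(A, B):
    """Check hypothesis (H2): some row sums of A differ and the Kalman
    rank condition rank(B, AB, ..., A^{n-1}B) = n holds."""
    n = A.shape[0]
    rs = A.sum(axis=1)
    some_unequal = not np.allclose(rs, rs[0])
    K = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return some_unequal and np.linalg.matrix_rank(K) == n

# The pairs used in the numerical tests of Section 3:
A1 = np.eye(2); B1 = np.array([[1.0], [0.0]])
A2 = np.diag([1.0, 2.0]); B2 = np.eye(2)
print(check_H1(A1, B1), check_H2(A2, B2))  # both True
```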

The main result of this paper is as follows.

Theorem 1.1    Suppose that $ A $ and $ B $ satisfy either $ ({\rm{H}}_1) $ or $ ({\rm{H}}_2) $. Then problem (P) has a unique optimal control. Moreover,

(i) If $ A $ and $ B $ satisfy $ ({\rm{H}}_1) $, then $ u^* $ is the optimal control to problem (P) if and only if $ u^* \in \mathcal U $ and there exists a function $ q\in C([0, T_0];\mathbb R^n) $ so that

$ \begin{equation} u^*(t) = B^\top\mathit{\boldsymbol{q}}(t)\;\;\mbox{for}\;\;\mbox{a.e.}\;t\in(0, {T_0}) \end{equation} $ (1.4)

and

$ \begin{equation} \left\{ \begin{array}{lll} \dot{q}+A^\top{q} = y^*-y_d, \;\;\;t\in (0, T_0), \\ {q(T_0)} = q_0, \\ \end{array} \right. \end{equation} $ (1.5)

where $ q_0 = (q_{01}, q_{02}, \cdots, q_{0n})^\top $ satisfies $ \sum\limits_{i = 1}^n q_{0i} = 0 $, and $ y^* $ is the solution to (1.1) corresponding to the optimal control $ u^* $, i.e., $ y^*(\cdot) = y(\cdot;y_0, u^*) $.

(ii) If $ A $ and $ B $ satisfy $ ({\rm{H}}_2) $, then $ u^* $ is the optimal control to problem (P) if and only if $ u^* \in \mathcal U $ and there exists a function $ q\in C([0, T_0];\mathbb R^n) $ so that

$ \begin{equation*} u^*(t) = B^\top\mathit{\boldsymbol{q}}(t)\;\;\mbox{for}\;\;\mbox{a.e.}\;t\in(0, {T_0}) \end{equation*} $

and

$ \begin{equation*} \dot{q}+A^\top{q} = y^*-y_d, \;\;\;t\in (0, T_0), \end{equation*} $

where $ y^* $ is the solution to (1.1) corresponding to the optimal control $ u^* $, i.e., $ y^*(\cdot) = y(\cdot;y_0, u^*) $.

Pontryagin's maximum principle for optimal control problems governed by differential equations has been studied for decades; see [11-15] and the references therein. Recently, Pontryagin's maximum principle for optimal control problems concerning the exact synchronization of parabolic differential equations was considered in [16]. However, the sufficient condition for the above-mentioned problem was not derived in [16]. This paper is organized as follows. In Section 2, we prove Theorem 1.1. In Section 3, we give a numerical approximation of the optimal control and present some examples to test the effectiveness of the algorithm.

2 Proof of Theorem 1.1

Under hypothesis (H1) or (H2), by the same arguments as those in [16], we can show the existence and uniqueness of the optimal control of problem (P). We omit the proofs here. It remains to prove the characterizations (i) and (ii) of Theorem 1.1.

(i) We start with the proof of the necessity part. For any $ \mathit{\boldsymbol{v}}\in\mathcal U $ and $ \lambda\in(0, 1) $, we set $ {{\boldsymbol{u}}_{\lambda }}\triangleq {{\boldsymbol{u}}^{*}}+\lambda \left( \boldsymbol{v}-{{\boldsymbol{u}}^{*}} \right) $. Then $ \mathit{\boldsymbol{u}}_\lambda\in \mathcal {U} $. Denote

$ {\boldsymbol{y}_\lambda }\left( t \right) \buildrel \Delta \over = \boldsymbol{y}\left( {t,{\boldsymbol{y}_0},{\boldsymbol{u}_\lambda }} \right)\;\;{\rm{and}}\;\;\boldsymbol{z}\left( t \right) \buildrel \Delta \over = \frac{{{\boldsymbol{y}_\lambda }\left( t \right) - {\boldsymbol{y}^*}\left( t \right)}}{\lambda },\forall t \in \left[ {0,{T_0}} \right]. $

We can show that

$ \begin{equation} \left\{ \begin{array}{lll} \dot{z} = Az+ B(v-u^*), \;\;\;\;t\in (0, T_0), \\ z(0) = \boldsymbol{0}. \end{array} \right. \end{equation} $ (1.6)

Since $ u^* $ is the optimal control to problem $ (\textbf{P}) $, we get

$ \begin{equation} \begin{split} &\frac{1}{2}\int_0^{T_0}\left(\|y_\lambda-y_d\|_{\mathbb R^n}^2+\|u_\lambda\|_{\mathbb R^m}^2\right)\, dt\\ -&\frac{1}{2}\int_0^{T_0}\left(\|y^*-y_d\|_{\mathbb R^n}^2+\|u^*\|_{\mathbb R^m}^2\right)\, dt \geq 0, \;\;\forall\;\lambda\in(0, 1). \end{split} \end{equation} $ (1.7)

Dividing by $ \lambda $ and passing to the limit for $ \lambda\to 0^+ $ in (1.7), we have

$ \begin{equation} \int_0^{T_0} \langle y^*-y_d, z \rangle\, dt+ \int_0^{T_0}\langle u^*, v-u^*\rangle\, dt\geq 0, \;\;\forall\;v\in \mathcal{U}. \end{equation} $ (1.8)

Let $ p $ be the solution to the following system

$ \left\{ \begin{array}{l} \boldsymbol{\dot p} + {A^\top}\boldsymbol{p} = {\boldsymbol{y}^*} - {\boldsymbol{y}_d},\;\;\;\;\;t \in \left( {0,{T_0}} \right),\\ \boldsymbol{p}\left( {{T_0}} \right) = \boldsymbol{0}. \end{array} \right. $ (1.9)

Multiplying the first equation of (1.9) by $ z $ and integrating it over $ (0, {T_0}) $, by (1.6) and (1.9), we get

$ \int_0^{{T_0}} {\left\langle {{\boldsymbol{y}^*} - {\boldsymbol{y}_d},\boldsymbol{z}} \right\rangle dt = - \int_0^{{T_0}} {\left\langle {\boldsymbol{p},B\left( {\boldsymbol{v} - {\boldsymbol{u}^*}} \right)} \right\rangle dt.} } $

This, together with (1.8), implies that

$ \int_0^{{T_0}} {\left\langle {{\boldsymbol{u}^*} - {B^\top}\boldsymbol{p},\boldsymbol{v} - {\boldsymbol{u}^*}} \right\rangle dt = 0,\forall \boldsymbol{v} \in \mathcal {U}.} $ (1.10)

Let $ \boldsymbol{\varphi} \left( { \cdot ;{T_0},{\boldsymbol{\varphi} _0}} \right) $ be the unique solution to the following system

$ \left\{ \begin{array}{l} \boldsymbol{\dot \varphi} + {A^\top}\boldsymbol{\varphi} = \boldsymbol{0},\;\;\;t \in \left( {0,{T_0}} \right),\\ \boldsymbol{\varphi} \left( {{T_0}} \right) = {\boldsymbol{\varphi} _0}, \end{array} \right. $ (1.11)

where $ {\boldsymbol{\varphi} _0} = {({\varphi _{01}},{\varphi _{02}}, \cdots ,{\varphi _{0n}})^\top} $. Define

$ \begin{equation*} Y_{T_0}\triangleq \left\{ B^\top\varphi(\cdot;T_0, \varphi_0): \varphi_0 \in \mathbb R^{n}\;{\rm and}\; \sum\limits_{i = 1}^n \varphi_{0i} = 0\right\}\subseteq L^2(0, {T_0};\mathbb R^m). \end{equation*} $

It is obvious that $ Y_{T_0} $ is closed. We now claim that

$ {\boldsymbol{u}^*} - {B^\top}\boldsymbol{p} \in {Y_{{T_0}}}. $ (1.12)

Otherwise, we would have that $ {\boldsymbol{u}^*}- {B^\top}\boldsymbol{p} \in {L^2}\left( {0,{T_0};{\mathbb{R}^m}} \right)\backslash {Y_{T_0}} $. Since $ Y_{T_0} $ is a closed subspace, the Hahn-Banach theorem provides a $ g\in L^2(0, {T_0};\mathbb R^m) $ so that

$ \begin{equation} 0 = \int_0^{T_0}\langle g(t), f(t)\rangle\, dt< \int_0^{T_0}\langle g(t), u^*(t)- {B^\top}p(t)\rangle\, dt, \;\;\forall\;f\in {Y_{T_0}}. \end{equation} $ (1.13)

In particular, choosing $ f = {B^\top}\varphi(\cdot;T_0, \varphi_0) $ with $ \sum\limits_{i = 1}^n \varphi_{0i} = 0 $ in (1.13), we have that

$ \begin{equation} 0 = \int_0^{T_0}\langle g(t), B^\top\varphi(t;T_0, \varphi_0)\rangle\, dt = \int_0^{T_0}\langle Bg(t), \varphi(t;T_0, \varphi_0)\rangle\, dt \end{equation} $ (1.14)

for all $ \varphi_0\in \mathbb R^n $ with $ \sum\limits_{i = 1}^n \varphi_{0i} = 0 $.

On one hand, let $ w(\cdot) $ be the solution to the following system

$ \begin{equation} \left\{ \begin{array}{lll} \dot{w} = Aw+B\widetilde {g}, \;\;\;\;t>0, \\ w(0) = \boldsymbol{0}, \end{array} \right. \end{equation} $ (1.15)

where $ \widetilde {g} $ is the zero extension of $ g $ over $ (0, +\infty) $. Multiplying the first equation of (1.15) by $ \boldsymbol{\varphi} ( \cdot ;{T_0},{\boldsymbol{\varphi} _0}) $ and integrating it over $ (0, {T_0}) $, by (1.11) and (1.15), we obtain that

$ \left\langle {\boldsymbol{w}\left( {{T_0}} \right),{\boldsymbol{\varphi} _0}} \right\rangle = \int_0^{{T_0}} {\left\langle {B\boldsymbol{g}\left( t \right),\boldsymbol{\varphi} \left( {t;{T_0},{\boldsymbol{\varphi} _0}} \right)} \right\rangle dt\;\;{\rm{for}}\;\;{\rm{all}}\;\;{\boldsymbol{\varphi} _0} \in {{\mathbb{R}}^n}\;\;{\rm{with}}\;\;\sum\limits_{i = 1}^n {{\varphi _{0i}} = 0.} } $

This, together with (1.14), implies that

$ \begin{equation} w_1(T_0) = w_2(T_0) = \cdots = w_n(T_0), \end{equation} $ (1.16)

where $ w({T_0}) = (w_1(T_0), w_2(T_0), \cdots, w_n(T_0))^\top $.

On the other hand, by (1.3), all row sums of $ A $ share a common value; denote $ \sum\limits_{l = 1}^n a_{il} \triangleq \mu $ for every $ i\in\{1, 2, \cdots, n\} $ and $ \alpha \triangleq (1, 1, \cdots, 1)^\top $. Then we can directly check that

$ \begin{equation} A^k\alpha = \mu^k \alpha\;\;\;{\rm for\; \; all\; \; }\;k\in \mathbb N^+. \end{equation} $ (1.17)
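Identity (1.17) simply states that $ \alpha $ is an eigenvector of $ A $ with eigenvalue $ \mu $ whenever all row sums of $ A $ equal $ \mu $. A quick numerical sanity check (with an arbitrary illustrative matrix, not from the paper):

```python
import numpy as np

# Any matrix whose rows all sum to the same value mu has
# alpha = (1, ..., 1)^T as an eigenvector with eigenvalue mu,
# hence A^k alpha = mu^k alpha for every k >= 1.
A = np.array([[2.0, 1.0, 0.0],
              [0.5, 1.5, 1.0],
              [1.0, 1.0, 1.0]])
mu = A.sum(axis=1)[0]          # common row sum, here 3.0
alpha = np.ones(3)
for k in range(1, 6):
    assert np.allclose(np.linalg.matrix_power(A, k) @ alpha, mu**k * alpha)
print("A^k alpha = mu^k alpha verified for k = 1..5")
```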

Since $ w(t) = e^{A(t-T_0)}w(T_0) $ for all $ t\geq T_0 $, it follows from (1.16) and (1.17) that

$ \begin{equation*} w(t) = w_1(T_0)e^{\mu(t-T_0)}\alpha, \;\;\;\;t\in [T_0, +\infty). \end{equation*} $

This implies that $ \widetilde{g}+u^*\in \mathcal U $. By (1.10), we get that

$ \begin{equation*} \int_0^{T_0}\langle u^*(t)-{B^\top}p(t), \widetilde{g}(t)\rangle\, dt = 0, \end{equation*} $

which leads to a contradiction with (1.13). Hence, (1.12) follows, i.e., there exists a $ q_0 = (q_{01}, q_{02}, \cdots, q_{0n})^\top\in \mathbb R^n $ with $ \sum\limits_{i = 1}^n q_{0i} = 0 $, so that

$ \begin{equation} u^*(\cdot)-{B^\top}p(\cdot) = B^\top\varphi(\cdot;T_0, q_0). \end{equation} $ (1.18)

Set $ q(\cdot)\triangleq {p}(\cdot)+\varphi(\cdot;T_0, q_0) $. Then by (1.9) and (1.18), we have that

$ \begin{equation*} \left\{ \begin{array}{lll} \dot{q}+A^\top{q} = y^*-y_d, \;\;\;\;t\in(0, T_0), \\ {q(T_0)} = q_0, \\ \end{array} \right. \end{equation*} $

and $ u^*(t) = B^\top\mathit{\boldsymbol{q}}(t) $ a.e. $ t\in(0, {T_0}) $.

Thus, we finish the proof of the necessity.

We next turn to the proof of the sufficiency part. For any $ u\in\mathcal U $, we denote

$ \begin{equation*} y(t)\triangleq {y}(t;y_0, u)\;\; \mbox{and}\;\;z(t)\triangleq {y}(t)-y^*(t), \;\;\forall\;t\in [0, T_0], \\ \end{equation*} $

where $ z(\cdot)\triangleq(z_1(\cdot), z_2(\cdot), \cdots, z_n(\cdot))^\top $. We can easily check that

$ \begin{eqnarray} && \left\{ \begin{array}{lll} \dot{z} = Az+ B(u-u^*), \;\;t\in (0, T_0), \\ z(0) = \boldsymbol{0}, \end{array} \right. \end{eqnarray} $ (1.19)
$ \begin{eqnarray} && z_1(T_0) = z_2(T_0) = \cdots = z_n(T_0) \end{eqnarray} $ (1.20)

and

$ \begin{eqnarray} &&\frac{1}{2}\int_0^{T_0}\left(\|y-y_d\|_{\mathbb R^n}^2+\|u\|_{\mathbb R^m}^2\right)\, dt -\frac{1}{2}\int_0^{T_0}\left(\|y^*-y_d\|_{\mathbb R^n}^2+\|u^*\|_{\mathbb R^m}^2\right)\, dt\\ & = &\int_0^{T_0} \langle y^*-y_d, z \rangle\, dt+ \int_0^{T_0}\langle u^*, u-u^*\rangle\, dt+\frac{1}{2}\int_0^{T_0}\left(\|y-y^*\|_{\mathbb R^n}^2+\|u-u^*\|_{\mathbb R^m}^2\right)\, dt\\&\geq&\int_0^{T_0} \langle y^*-y_d, z \rangle\, dt+ \int_0^{T_0}\langle u^*, u-u^*\rangle\, dt. \end{eqnarray} $ (1.21)

Multiplying the first equation of (1.5) by $ z $ and integrating it over $ (0, {T_0}) $, by (1.19), (1.4) and (1.5), we obtain that

$ \begin{equation*} \langle z(T_0), q(T_0)\rangle = \int_0^{T_0} \langle y^*-y_d, z \rangle\, dt+ \int_0^{T_0}\langle u^*, u-u^*\rangle\, dt. \end{equation*} $

This, along with (1.20) and (1.21), implies that

$ \begin{equation*} \begin{split} &\frac{1}{2}\int_0^{T_0}\left(\|y-y_d\|_{\mathbb R^n}^2+\|u\|_{\mathbb R^m}^2\right)\, dt -\frac{1}{2}\int_0^{T_0}\left(\|y^*-y_d\|_{\mathbb R^n}^2+\|u^*\|_{\mathbb R^m}^2\right)\, dt\\\geq&\langle z(T_0), q(T_0)\rangle = z_1(T_0)\sum\limits_{i = 1}^n q_{0i} = 0, \ \end{split} \end{equation*} $

which indicates that $ u^* $ is the optimal control to problem (P).

(ii) By the same arguments as those in [16], under hypothesis (H2), we observe that

$ \begin{array}{lll} \mathcal U = &\{u\in L^2(0, +\infty;\mathbb R^m):u(t) = \boldsymbol{0}\;\mbox{for a.e.}\;t>T_0, \\ &\mbox{and}\;y(\cdot;y_0, u) = \boldsymbol{0}\;\;\mbox{over}\;[T_0, +\infty )\}.\\ \end{array} $

We start with the proof of the necessity part. Let $ p $ and $ \boldsymbol{\varphi} ( \cdot ;{T_0},{\boldsymbol{\varphi} _0}) $ (where $ \varphi_0\in \mathbb R^n $) be the unique solutions to the equations

$ \begin{equation} \left\{ \begin{array}{lll} \dot{p}+{A^\top}p = y^*-y_d, \;\;\;\;t\in (0, T_0), \\ p({T_0}) = \boldsymbol{0} \end{array} \right. \end{equation} $ (1.22)

and

$ \left\{ \begin{array}{l} \boldsymbol{\dot \varphi} + {A^\top}\boldsymbol{\varphi} = 0,\;\;t \in \left( {0,{T_0}} \right)\\ \boldsymbol{\varphi} \left( {{T_0}} \right) = {\boldsymbol{\varphi} _0}, \end{array} \right. $

respectively. Define

$ \begin{equation*} Y_{T_0}\triangleq \left\{ B^\top\varphi(\cdot;T_0, \varphi_0): \varphi_0 \in \mathbb R^{n} \right\}\subseteq L^2(0, {T_0};\mathbb R^m). \end{equation*} $

It is obvious that $ Y_{T_0} $ is closed. By similar arguments as those to prove (1.12), we have that $ u^*- {B^\top}p\in {Y_{T_0}}, $ i.e., there exists a $ q_0\in \mathbb R^n $, so that

$ \begin{equation} u^*(\cdot)-{B^\top}p(\cdot) = {B^\top}\varphi(\cdot;T_0, q_0). \end{equation} $ (1.23)

Set $ q(\cdot)\triangleq {p}(\cdot)+\varphi(\cdot;T_0, q_0) $. Then by (1.22) and (1.23), we have that

$ \begin{equation*} \left\{ \begin{array}{lll} \dot{q}+A^\top{q} = y^*-y_d, \;\;\;\;t\in(0, T_0), \\ {q(T_0)} = q_0 \end{array} \right. \end{equation*} $

and $ u^*(t) = B^\top\mathit{\boldsymbol{q}}(t) $ a.e. $ t\in(0, {T_0}) $.

Thus, we finish the proof of the necessity.

We next turn to the proof of the sufficiency part, which is similar to that of (i). We omit it here.

3 Numerical Tests

In this section, we carry out two numerical tests. The tests concern the two cases considered in Theorem 1.1, where (H1) is satisfied in Test 1 and (H2) is satisfied in Test 2.

Test 1   For the framework of (i) in Theorem 1.1, we observe that the optimal control $ u^* $ and the optimal trajectory $ y^* $ solve the following system

$ \begin{equation} \left\{ \begin{array}{lll} \dot{y} = Ay+Bu, \;\;t\in (0, T_0), \\ y(0) = y_0, \\ y_1(T_0) = y_2(T_0) = \cdots = y_n(T_0), \\ u(t) = B^\top\mathit{\boldsymbol{q}}(t), \;\;t\in (0, T_0), \\ \dot{q}+A^\top{q} = y-y_d, \;\;t\in (0, T_0), \\ \sum\limits_{i = 1}^n q_{i}(T_0) = 0, \\ \end{array} \right. \end{equation} $ (3.1)

where $ y(T_0) = (y_1(T_0), y_2(T_0), \cdots, y_n(T_0))^\top $, $ q(T_0) = (q_1(T_0), q_2(T_0), \cdots, q_n(T_0))^\top $, $ y_0 $, $ y_d $ and $ T_0 $ are given, and $ A $ and $ B $ will be chosen to satisfy $ ({\rm{H}}_1) $.

Let $ (t_l)_{l = 0, 1, \cdots, N} $ be an equidistant partition of [0, $ T_0 $] with the time step $ \Delta t = \frac{T_0}{N} $, i.e.,

$ \begin{equation*} 0 = t_0<t_1<\cdots<t_{N-1}<t_N = T_0. \end{equation*} $

For $ l = 0, 1, \cdots, N, \;\; i = 1, 2, \cdots, n $, we set

$ \begin{equation*} y_i^l = y_i(t_l), \;\;\mathit{\boldsymbol{y}}^l = (y_1^l, y_2^l, \cdots, y_n^l), \;\;q_i^l = q_i(t_l), \;\;\mathit{\boldsymbol{q}}^l = (q_1^l, q_2^l, \cdots, q_n^l). \end{equation*} $

Discretizing (3.1) by an implicit finite difference scheme gives the following system

$ \left\{ \begin{array}{l} \frac{{{\boldsymbol{y}^{l + 1}} - {\boldsymbol{y}^l}}}{{\Delta t}} = A{\boldsymbol{y}^{l + 1}} + B{B^\top}{\boldsymbol{q}^{l + 1}},\;\;\;l = 0,1,2, \cdots ,N - 1,\\ \frac{{{\boldsymbol{q}^{l + 1}} - {\boldsymbol{q}^l}}}{{\Delta t}} + {A^\top}{\boldsymbol{q}^{l + 1}} = {\boldsymbol{y}^{l + 1}} - \boldsymbol{y}_d^{l + 1},\;\;\;l = 0,1,2, \cdots ,N - 1,\\ y_1^N = y_2^N = \cdots = y_n^N,\\ \sum\limits_{i = 1}^{n} {q_i^N = 0} . \end{array} \right. $ (3.2)

This can be reformulated as a linear system of $ (2N+1)n $ equations $ MY = b $, where

$ \begin{equation*} Y = \left( \begin{array}{ccccccccc} y^1\\ y^2\\ \vdots \\ y^N\\ q^0\\ q^1\\ \vdots \\ q^N\\ \end{array} \right), \;\; b = \left( \begin{array}{ccccccccc} \frac{y^0}{\Delta t}\ \\ \boldsymbol{0}\\ \vdots \\ \boldsymbol{0}\\ -y_d^1\\ -y_d^2\\ \vdots \\ -y_d^N\\ \boldsymbol{0}\\ \end{array} \right), \;\; \boldsymbol{0} = \left( \begin{array}{cc} 0\\ \vdots\\ 0\\ \end{array} \right), \end{equation*} $

and $ M $ is a $ (2N+1)n\times(2N+1)n $ matrix given by

$ \begin{equation*} M = \\ \left( \begin{array}{cccccccccccc} \frac{I_n}{\Delta t}-A&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&BB^\top&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}\\ \frac{-I_n}{\Delta t}&\ddots&\ddots&\ddots&\vdots&\vdots&\boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\vdots\\ \boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots &\boldsymbol{0}_{n\times n}&\vdots&\boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}\\ \vdots&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}&\vdots&\vdots&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}\\ \boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\frac{-I_n}{\Delta t}&\frac{I_n}{\Delta t}-A&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\cdots&BB^\top\\ -I_n&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\frac{I_n}{\Delta t}+A&\frac{-I_n}{\Delta t}&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}\\ \boldsymbol{0}_{n\times n}&-I_n&\ddots&\ddots&\vdots&\boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\ddots&\vdots \\ \boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}&\vdots&\boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}\\ \vdots&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}&\vdots&\vdots&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}\\ \boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&-I_n&\vdots&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\frac{I_n}{\Delta t}+A&\frac{-I_n}{\Delta t}\\ \boldsymbol{0}_{n\times n}&\cdots&\cdots&\boldsymbol{0}_{n\times n}&F&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\cdots&\boldsymbol{0}_{n\times n}&G\\ \end{array} \right), \end{equation*} $

where $ I_n $ is the $ n\times n $ identity matrix, $ \boldsymbol{0}_{n\times n} $ is the $ n\times n $ zero matrix,

$ \begin{equation*} F = \left( \begin{array}{cccccc} 1 & -1 & 0 &\cdots & 0 & 0\\ 0 & 1 & -1 &\cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots &\vdots \\ 0 & 0 & 0 & \cdots&1& -1 \\ 0 & \cdots & \cdots & \cdots & \cdots & 0 \\ \end{array} \right)_{n\times n}\mbox{and}\;\; G = \left( \begin{array}{ccc} 0&\cdots&0\\ \vdots &\ddots &\vdots\\ 0&\cdots&0\\ 1&\cdots&1\\ \end{array} \right)_{n\times n}. \end{equation*} $
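To make the procedure concrete, here is a minimal Python/NumPy sketch (an illustration, not part of the paper) that assembles the same linear system row by row from the implicit scheme, rather than forming the displayed block matrix $ M $ explicitly, and solves it for the data of Test 1.

```python
import numpy as np

def solve_test1(N, T0=1.0):
    """Assemble and solve the discrete optimality system (3.2)
    for Test 1 (n = 2, m = 1), building the linear system row by
    row from the implicit scheme."""
    n = 2
    A = np.eye(n)
    B = np.array([[1.0], [0.0]])
    y0 = np.ones(n)
    dt = T0 / N
    I = np.eye(n)
    BBt = B @ B.T

    dim = (2 * N + 1) * n                 # unknowns: y^1..y^N, q^0..q^N
    M = np.zeros((dim, dim))
    b = np.zeros(dim)
    yidx = lambda l: (l - 1) * n          # block of y^l, l = 1..N
    qidx = lambda l: N * n + l * n        # block of q^l, l = 0..N

    row = 0
    for l in range(N):                    # state equation, implicit Euler
        M[row:row+n, yidx(l+1):yidx(l+1)+n] = I / dt - A
        M[row:row+n, qidx(l+1):qidx(l+1)+n] = -BBt
        if l == 0:
            b[row:row+n] = y0 / dt
        else:
            M[row:row+n, yidx(l):yidx(l)+n] = -I / dt
        row += n
    for l in range(N):                    # adjoint equation (here y_d = 0)
        M[row:row+n, qidx(l+1):qidx(l+1)+n] = I / dt + A.T
        M[row:row+n, qidx(l):qidx(l)+n] = -I / dt
        M[row:row+n, yidx(l+1):yidx(l+1)+n] = -I
        row += n
    # terminal conditions: y_1^N = y_2^N and q_1^N + q_2^N = 0
    M[row, yidx(N)] = 1.0; M[row, yidx(N)+1] = -1.0
    M[row+1, qidx(N):qidx(N)+n] = 1.0

    Y = np.linalg.solve(M, b)
    return Y[:N*n].reshape(N, n)          # y^1, ..., y^N

y = solve_test1(40)
print(y[-1])   # y^N: the two components coincide (enforced by the scheme)
```

The same assembly works for any $ n $, $ m $ once the terminal rows are adapted to the matrices $ F $ and $ G $ above.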

Finally, we solve (3.2) for different choices of $ N $ to obtain the numerical solution $ y^\Delta $, and then compare it with the exact solution $ y^* $ to check the convergence of the algorithm.

We carry out the test with $ n $ = 2, $ m = 1 $, $ T_0 $ = 1,

$ \begin{equation*} y_0 = \left( \begin{array}{c} 1\\ 1\\ \end{array} \right), \;\; y_d = \left( \begin{array}{c} 0\\ 0\\ \end{array} \right), \;\; A = \left( \begin{array}{cc} 1&0\\ 0&1\\ \end{array} \right)\mbox{and}\;\; B = \left( \begin{array}{c} 1\\ 0\\ \end{array} \right). \end{equation*} $

Clearly $ A $, $ B $ satisfy (H1), and the exact solution $ y^* = (y_1^*, y_2^*)^\top $ can be obtained by direct computation

$ y_1^*(t) = \frac{e-e^{-\sqrt{2}}}{e^{\sqrt{2}}-e^{-\sqrt{2}}}e^{\sqrt{2}t}+\frac{-e+e^{\sqrt{2}}}{e^{\sqrt{2}}-e^{-\sqrt{2}}}e^{-\sqrt{2}t}, \;\; y_2^*(t) = e^t. $
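As a sanity check on these formulas, one can verify numerically that $ y_1^*(0) = 1 $ and that $ y_1^*(1) = y_2^*(1) = e $, i.e., the two components indeed synchronize at $ T_0 = 1 $ (a short NumPy sketch, not part of the paper):

```python
import numpy as np

# Spot-check the closed-form optimal trajectory of Test 1:
# y1*(0) = 1 (initial condition) and y1*(1) = y2*(1) = e
# (the two components synchronize at T0 = 1).
s = np.sqrt(2.0)
e = np.e
y1 = lambda t: ((e - np.exp(-s)) * np.exp(s*t)
                + (-e + np.exp(s)) * np.exp(-s*t)) / (np.exp(s) - np.exp(-s))
y2 = lambda t: np.exp(t)
assert np.isclose(y1(0.0), 1.0)
assert np.isclose(y1(1.0), y2(1.0))
print(y1(1.0), y2(1.0))   # both equal e ≈ 2.71828
```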

Taking $ N $ = 10, 20, 40, we illustrate the numerical solution $ y^\Delta $ and the exact solution $ y^* $ in the following figures.

Figure 1: The empty circles show the numerical solution $ y^\Delta = (y_1^\Delta, y_2^\Delta)^\top $, and the solid lines show the exact solution $ y^* = (y_1^*, y_2^*)^\top $, for $ N $ = 10.

Figure 2: The empty circles show the numerical solution $ y^\Delta = (y_1^\Delta, y_2^\Delta)^\top $, and the solid lines show the exact solution $ y^* = (y_1^*, y_2^*)^\top $, for $ N $ = 20.

Figure 3: The empty circles show the numerical solution $ y^\Delta = (y_1^\Delta, y_2^\Delta)^\top $, and the solid lines show the exact solution $ y^* = (y_1^*, y_2^*)^\top $, for $ N $ = 40.

Test 2   For the framework of (ii) in Theorem 1.1, we see that the optimal control $ u^* $ and the optimal trajectory $ y^* $ solve the following system

$ \begin{equation} \left\{ \begin{array}{lll} \dot{y} = Ay+Bu, \;\;t\in (0, T_0), \\ y(0) = y_0, \\ y (T_0) = \boldsymbol{0}, \\ u(t) = B^\top\mathit{\boldsymbol{q}}(t), \;\;t\in (0, T_0), \\ \dot{q}+A^\top{q} = y-y_d, \;\;t\in (0, T_0), \\ \end{array} \right. \end{equation} $ (3.3)

where $ y(T_0) = (y_1(T_0), y_2(T_0), \cdots, y_n(T_0))^\top $, $ y_d $ and $ T_0 $ are given, and $ A $ and $ B $ will be chosen to satisfy $ ({\rm{H}}_2) $.

Analogously, applying the same scheme as in Test 1 to (3.3) gives the following discrete system

$ \left\{ \begin{array}{l} \frac{{{\boldsymbol{y}^{l + 1}} - {\boldsymbol{y}^l}}}{{\Delta t}} = A{\boldsymbol{y}^{l + 1}} + B{B^\top}{\boldsymbol{q}^{l + 1}},\;\;\;l = 0,1,2, \cdots ,N - 1,\\ \frac{{{\boldsymbol{q}^{l + 1}} - {\boldsymbol{q}^l}}}{{\Delta t}} + {A^\top}{\boldsymbol{q}^{l + 1}} = {\boldsymbol{y}^{l + 1}} - \boldsymbol{y}_d^{l + 1},\;\;\;l = 0,1,2, \cdots ,N - 1,\\ y_1^N = y_2^N = \cdots = y_n^N = 0. \end{array} \right. $ (3.4)

This can be reformulated as a linear system of $ (2N+1)n $ equations

$ M\boldsymbol{Y} = \boldsymbol{b}, $

where

$ \begin{equation*} Y = \left( \begin{array}{ccccccccc} y^1\\ y^2\\ \vdots \\ y^N\\ q^0\\ q^1\\ \vdots \\ q^N\\ \end{array} \right), \;\; b = \left( \begin{array}{ccccccccc} \frac{y^0}{\Delta t}\ \\ \boldsymbol{0}\\ \vdots \\ \boldsymbol{0}\\ -y_d^1\\ -y_d^2\\ \vdots \\ -y_d^N\\ \boldsymbol{0}\\ \end{array} \right), \;\; \boldsymbol{0} = \left( \begin{array}{cc} 0\\ \vdots\\ 0\\ \end{array} \right), \end{equation*} $

and $ M $ is a $ (2N+1)n\times(2N+1)n $ matrix given by

$ \begin{equation*} M = \\ \left( \begin{array}{cccccccccccc} \frac{I_n}{\Delta t}-A&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&BB^\top&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}\\ \frac{-I_n}{\Delta t}&\ddots&\ddots&\ddots&\vdots&\vdots&\boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\vdots\\ \boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots &\boldsymbol{0}_{n\times n}&\vdots&\boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}\\ \vdots&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}&\vdots&\vdots&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}\\ \boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\frac{-I_n}{\Delta t}&\frac{I_n}{\Delta t}-A&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\ddots&BB^\top\\ -I_n&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\frac{I_n}{\Delta t}+A&\frac{-I_n}{\Delta t}&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}\\ \boldsymbol{0}_{n\times n}&-I_n&\ddots&\ddots&\vdots&\boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\ddots&\vdots \\ \boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}&\vdots&\boldsymbol{0}_{n\times n}&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}\\ \vdots&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}&\vdots&\vdots&\ddots&\ddots&\ddots&\boldsymbol{0}_{n\times n}\\ \boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&-I_n&\vdots&\boldsymbol{0}_{n\times n}&\cdots&\boldsymbol{0}_{n\times n}&\frac{I_n}{\Delta t}+A&\frac{-I_n}{\Delta t}\\ \boldsymbol{0}_{n\times n}&\cdots&\cdots&\boldsymbol{0}_{n\times n}&I_n&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}&\cdots&\cdots&\boldsymbol{0}_{n\times n}&\boldsymbol{0}_{n\times n}\\ \end{array} \right), \end{equation*} $

where $ I_n $ is the $ n\times n $ identity matrix, and $ \boldsymbol{0}_{n\times n} $ is the $ n\times n $ zero matrix.

Finally, we solve (3.4) for different choices of $ N $ to obtain the numerical solution $ y^\Delta $, and then compare it with the exact solution $ y^* $ to check the convergence of the algorithm.

We carry out the test with $ n = m = 2 $, $ T_0 $ = 1,

$ \begin{equation*} y_0 = \left( \begin{array}{c} 1\\ 1\\ \end{array} \right), \;\; y_d = \left( \begin{array}{c} 0\\ 0\\ \end{array} \right), \;\; A = \left( \begin{array}{cc} 1&0\\ 0&2\\ \end{array} \right)\mbox{and}\;\; B = \left( \begin{array}{cc} 1&0\\ 0&1\\ \end{array} \right). \end{equation*} $

Clearly $ A $, $ B $ satisfy (H2), and the exact solution $ y^* = (y_1^*, y_2^*)^\top $ can be obtained by direct computation:

$ \begin{equation*} y_1^*(t) = \frac{1}{1-e^{2\sqrt{2}}}e^{\sqrt{2}t}+\frac{e^{2\sqrt{2}}}{e^{2\sqrt{2}}-1}e^{-\sqrt{2}t}, \;\;y_2^*(t) = \frac{1}{1-e^{2\sqrt{5}}}e^{\sqrt{5}t}+\frac{e^{2\sqrt{5}}}{e^{2\sqrt{5}}-1}e^{-\sqrt{5}t}. \end{equation*} $
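As a sanity check on these formulas, one can verify numerically that $ y^*(0) = (1, 1)^\top $ and $ y^*(1) = (0, 0)^\top $, i.e., the state is indeed steered to zero at $ T_0 = 1 $ (a short NumPy sketch, not part of the paper):

```python
import numpy as np

# Spot-check the closed-form optimal trajectory of Test 2:
# y*(0) = (1, 1)^T and y*(1) = (0, 0)^T (null controllability at T0 = 1).
def comp(c, t):
    # component with exponent rate sqrt(c): c = 2 gives y1*, c = 5 gives y2*
    s = np.sqrt(c)
    return (np.exp(s*t) / (1 - np.exp(2*s))
            + np.exp(2*s) * np.exp(-s*t) / (np.exp(2*s) - 1))

for c in (2.0, 5.0):
    assert np.isclose(comp(c, 0.0), 1.0)
    assert np.isclose(comp(c, 1.0), 0.0)
print("y*(0) = (1, 1) and y*(1) = (0, 0) confirmed")
```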

Taking $ N $ = 10, 20, 40, we illustrate the numerical solution $ y^\Delta $ and the exact solution $ y^* $ in the following figures.

Figure 4: The empty circles show the numerical solution $ y^\Delta = (y_1^\Delta, y_2^\Delta)^\top $, and the solid lines show the exact solution $ y^* = (y_1^*, y_2^*)^\top $, for $ N $ = 10.

Figure 5: The empty circles show the numerical solution $ y^\Delta = (y_1^\Delta, y_2^\Delta)^\top $, and the solid lines show the exact solution $ y^* = (y_1^*, y_2^*)^\top $, for $ N $ = 20.

Figure 6: The empty circles show the numerical solution $ y^\Delta = (y_1^\Delta, y_2^\Delta)^\top $, and the solid lines show the exact solution $ y^* = (y_1^*, y_2^*)^\top $, for $ N $ = 40.

From these figures, we observe that the error between the numerical solution and the exact solution decreases as $ N $ increases. Thus, for $ N $ large enough, the numerical solution approximates the exact solution well.

References
[1]
Huygens Ch. Oeuvres complètes[M]. Vol.15, Amsterdam: Swets & Zeitlinger B.V., 1967.
[2]
Li T-T, Rao B P, Hu L. Exact boundary synchronization for a coupled system of 1-D wave equations[J]. ESAIM: Control, Optimisation and Calculus of Variations, 2014, 20: 339-361. DOI:10.1051/cocv/2013066
[3]
Strogatz S. Sync: the emerging science of spontaneous order[M]. New York: Theia, 2003.
[4]
Wu C W. Synchronization in complex networks of nonlinear dynamical systems[M]. Singapore: World Scientific, 2007.
[5]
Wiener N. Cybernetics, or control and communication in the animal and the machine[M]. Cambridge: MIT Press, 1961.
[6]
Li T-T, Rao B P. Exact synchronization for a coupled system of wave equations with Dirichlet boundary controls[J]. Chinese Ann. Math., Series B, 2013, 34: 139-160. DOI:10.1007/s11401-012-0754-8
[7]
Li T-T, Rao B P. On the state of exact synchronization of a coupled system of wave equations[J]. Comptes Rendus Mathématique - Académie des Sciences - Paris, 2014, 352: 823-829. DOI:10.1016/j.crma.2014.08.007
[8]
Li T-T, Rao B P. On the exactly synchronizable state to a coupled system of wave equations[J]. Portugaliae Mathematica, 2015, 72: 83-100. DOI:10.4171/PM/1958
[9]
Hu L, Li T-T, Rao B P. Exact boundary synchronization for a coupled system of 1-D wave equations with coupled boundary conditions of dissipative type[J]. Commun. Pure Appl. Math., 2014, 13: 881-901.
[10]
Wang L J, Yan Q S. Minimal time control of exact synchronization for parabolic systems[J]. arXiv: 1803.00244.
[11]
Barbu V. Analysis and control of nonlinear infinite dimensional systems[M]. New York: Academic Press, 1993.
[12]
Barbu V. Optimal control of variational inequalities[M]. Boston: Pitman, 1984.
[13]
Li X J, Yong J M. Optimal control theory for infinite dimensional systems[M]. Boston: Birkhäuser, 1995.
[14]
Lou H W. Optimality conditions for semilinear parabolic equations with controls in leading term[J]. ESAIM: Control, Optimisation and Calculus of Variations, 2011, 17: 975-994. DOI:10.1051/cocv/2010034
[15]
Wang G S, Wang L J. State-constrained optimal control governed by non-well-posed parabolic differential equations[J]. SIAM J. Cont. Optim., 2002, 40: 1517-1539. DOI:10.1137/S0363012900377006
[16]
Wang L J, Yan Q S. Optimal control problem for exact synchronization of parabolic system[J]. Math. Control Relat. Fields, to appear.