数学杂志 (Journal of Mathematics), 2022, Vol. 42, Issue 3: 205-216
THE MAXIMIZATION OF THE ADMISSIBLE SAMPLING INTERVAL OF BOUNDARY PROPORTIONAL SAMPLED-DATA FEEDBACKS FOR STABILIZING PARABOLIC EQUATIONS
WANG Xiang-yu, LIU Han-bing    
School of Mathematics and Physics, China University of Geosciences, Wuhan 430074, China
Abstract: In this paper, we study the problem of boundary sampled-data feedback stabilization for parabolic equations. By using modal decomposition, we show that there exists a boundary proportional sampled-data feedback that locally exponentially stabilizes a class of parabolic equations, and that the admissible sampling interval of this feedback is maximal among all feasible proportional sampled-data feedbacks.
Keywords: parabolic equations     sampled-data control     admissible sampling interval    
1 Introduction

In this work, we study the following parabolic equation with boundary control

$ \begin{equation} \left\{\begin{array}{ll} \frac{\partial y}{\partial t}= \Delta y+ f(x, y), \ \ \mathrm{in} \ (0, +\infty)\times\Omega, \\ y=u \ \mathrm{on}\ (0, +\infty)\times\Gamma_1, \frac{\partial y}{\partial \mathbf{n}}=0 \ \mathrm{ on}\ (0, +\infty)\times\Gamma_2, \\ y(0, x)=y_0(x), \ \ \mathrm{in} \ \Omega, \end{array}\right. \end{equation} $ (1.1)

where $ \Omega $ is a bounded and open domain of $ \mathbb{R}^{d} $ with a smooth boundary $ \partial \Omega=\Gamma_{1}\cup \Gamma_{2} $, $ \Gamma_{1} $ and $ \Gamma_{2} $ being connected parts of $ \partial \Omega $, and $ \mathbf{n} $ stands for the unit outward normal on $ \partial \Omega $. $ y_{0} \in L^{2}(\Omega) $ is the initial data, and $ f $ is a nonlinear function defined on $ \bar{\Omega}\times\mathbb{R} $. The sampled-data controller $ u $ is applied on $ \Gamma_{1} $, while $ \Gamma_{2} $ is insulated. By sampled-data control, we mean that the control is a piecewise constant function in time. More exactly, it is of the form

$ \begin{equation} u(t, x)=\sum\limits_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)u_{i}, \ \ u_{i}\in L^{2}(\Gamma_{1}). \end{equation} $ (1.2)

Here $ \chi_{[t_{i}, t_{i+1})} $ is the characteristic function of the interval $ [t_{i}, t_{i+1}) $ for $ i=0, 1, 2, \cdots $, where $ 0=t_{0}<t_{1}<\cdots <t_{i}<t_{i+1}<\cdots $, with $ \lim_{i \to \infty}t_{i}=\infty $, are the sampling instants. The positive numbers $ T_{i}=t_{i+1}-t_{i} \ (i=0, 1, 2, \cdots) $ are called the sampling intervals.
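As a minimal illustration of the zero-order-hold structure in (1.2), the Python sketch below evaluates such a piecewise constant signal; the scalar values `u_vals` stand in for the functions $ u_{i}\in L^{2}(\Gamma_{1}) $, and all numerical data are hypothetical.

```python
import numpy as np

def zoh(t, t_grid, u_vals):
    """Evaluate the sampled-data control (1.2): u(t) = u_i on [t_i, t_{i+1}).

    t_grid = [t_0, t_1, ...] are the sampling instants (t_0 = 0), and
    u_vals[i] is the value held on [t_i, t_{i+1}).
    """
    i = np.searchsorted(t_grid, t, side="right") - 1  # largest i with t_i <= t
    return u_vals[i]

t_grid = np.array([0.0, 0.4, 1.0, 1.7])  # hypothetical sampling instants
u_vals = np.array([1.0, -0.3, 0.2, 0.0])
print(zoh(0.5, t_grid, u_vals))          # -0.3, the value held from t_1 = 0.4
```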

Sampled-data feedback stabilization is a well-studied topic for finite dimensional systems, since modern control systems implement controllers with digital technology. For infinite dimensional systems, however, results on sampled-data feedback stabilization are relatively few.

To stabilize infinite dimensional systems by sampled-data feedback control, most existing works directly apply the feedback designed for the continuous-time case (see [1-4]). It was shown that the continuous-time stabilizing feedback also stabilizes the sampled-data system when the sampling intervals are small enough. However, requiring sufficiently small sampling intervals is unreasonable for two reasons: smaller sampling intervals require faster, newer and more expensive hardware, and performing all control computations might not be feasible when the sampling interval is too small (see Section 1.2 in [5]). Recently, the second author and collaborators considered the stabilization of parabolic equations with periodic sampled-data control, and developed methods to design feedback laws for any known sampling period (see [6, 7]).

When the sampling intervals are variable and uncertain, it is usually impossible to design a feedback that stabilizes the system for arbitrary sampling intervals. What one can do is look for a feedback whose admissible sampling interval is as large as possible. By the admissible sampling interval of a sampled-data feedback, we mean the bound such that the feedback stabilizes the equation for all sampling instants whose sampling intervals do not exceed it (its mathematical definition will be given in Definition 2.1). In general, it is very difficult, if not impossible, to find such an optimal feedback among all feedbacks. In this work, we consider a special class of feedbacks, namely proportional feedbacks. Such feedback controls are simple and easy to implement; they were used by V. Barbu and later developed by I. Munteanu to stabilize various systems under continuous-time boundary feedback control in [8-10]. Given the decay rate and the lower bound of the sampling intervals, under the assumption proposed by V. Barbu, we shall construct an explicit sampled-data feedback that exponentially stabilizes the parabolic equations and whose admissible sampling interval is maximal among all feasible proportional feedbacks.

The main novelties of this work can be summarized as follows. Firstly, compared with [6] and [7], we consider the case where the sampling intervals are variable and uncertain, and the feedbacks depend only on the lower bound of these intervals rather than on the intervals themselves. Secondly, compared with the existing literature, this work makes the first attempt to achieve the largest admissible sampling interval.

The rest of this paper is organized as follows. In Section 2, we construct the feedback control, and prove that it stabilizes the linearized equation and maximizes the admissible sampling interval. In Section 3, we show that the feedback control constructed in Section 2 also locally exponentially stabilizes the semilinear parabolic equation.

2 Stabilization of the Linearized Equation
2.1 The Linearization of Semilinear Parabolic Equation and Notations

An equilibrium solution $ y_{e} \in C^{2}(\bar{\Omega}) $ is any solution to the equation

$ \triangle y_{e} + f(x, y_{e}) = 0 \ \mathrm{in} \ \Omega, \ \ \frac{\partial y_{e}}{\partial \mathbf{n}}=0 \ \mathrm{on} \ \Gamma_{2}. $

Translating $ y_{e} $ to zero via the substitution $ y-y_{e} \to y $, we can rewrite (1.1) as

$ \begin{equation} \left\{ \begin{array}{ll} \frac{\partial y}{\partial t}=\triangle y + f(x, y+y_{e})-f(x, y_{e}), \mathrm{in}\ (0, +\infty)\times\Omega, \\ y=u-y_{e}\ \mathrm{on}\ (0, +\infty)\times\Gamma_{1}, \ \frac{\partial y}{\partial \mathbf{n}}=0\ \mathrm{on}\ (0, +\infty)\times\Gamma_{2}, \\ y(0, x)=y_{0}(x)-y_{e}(x), \ \mathrm{in}\ \Omega, \end{array} \right. \end{equation} $ (2.1)

and the stabilization problem reduces to designing a sampled-data controller $ u=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)u_{i} $ that stabilizes the corresponding closed-loop system.

The first step toward this goal is the stabilization of the linearized system associated with (2.1), that is

$ \begin{equation} \left\{ \begin{array}{ll} \frac{\partial y}{\partial t}=\triangle y + f_{y}(x, y_{e})y, \ \mathrm{in}\ (0, +\infty)\times\Omega, \\ y=v\ \mathrm{on}\ (0, +\infty)\times\Gamma_{1}, \ \frac{\partial y}{\partial \mathbf{n}}=0\ \mathrm{on}\ (0, +\infty)\times\Gamma_{2}, \\ y(0, x)=\tilde{y}_{0}(x), \ \mathrm{in}\ \Omega, \end{array} \right. \end{equation} $ (2.2)

where $ f_{y}(x, y_{e})=\frac{\partial f}{\partial y}(x, y_{e}) $. The stabilizing feedback controller $ v=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F(y(t_{i})) $ for (2.2) will be used afterwards to locally stabilize system (2.1), and implicitly the equilibrium solution $ y_{e} $.

Everywhere in the following, we shall assume that $ f, f_{y}\in C(\bar{\Omega}\times\mathbb{R}; \mathbb{R}) $.

In particular, this implies that $ x \to f_{y}(x, y_{e}(x)) $ is continuous in $ \bar{\Omega} $. We define the linear operator $ A:D(A)\to L^{2}(\Omega) $ by $ Ay=-\triangle y-f_{y}(x, y_{e}(x))y, $ where $ D(A)=\{y \in H^{2}(\Omega);\ y=0\ \mathrm{on}\ \Gamma_{1}, \ \frac{\partial y}{\partial \mathbf{n}}=0\ \mathrm{on}\ \Gamma_{2}\} $. Since the resolvent of $ A $ is compact, $ A $ has a countable set of eigenvalues. We assume that $ A $ has at least one negative eigenvalue. Given $ \rho>0 $, let $ \{ \lambda_{j} \}_{j=1}^{\infty} $, with

$ \begin{equation} \lambda_{1} \leq \lambda_{2} \leq \cdots \leq \lambda_{N} < \rho \leq \lambda_{N+1} \leq \cdots \end{equation} $ (2.3)

be the family of all eigenvalues of $ A $, and let $ \{ \phi_{j} \}_{j=1}^{\infty} $ be the family of corresponding eigenfunctions, which forms an orthonormal basis of $ L^{2}(\Omega) $. For each $ M \in \mathbb{N}^{+} $, let $ X_{M}=\mathrm{span}\{\phi_{j}\}_{j=1}^{M} $, and let $ P_{M} $ be the orthogonal projection from $ L^{2}(\Omega) $ onto $ X_{M} $. We denote by $ Q_{N}:L^{2}(\Omega)\to \mathbb{R}^{N} $ the operator $ Q_{N}(y)=(\langle y, \phi_{1} \rangle, \cdots, \langle y, \phi_{N} \rangle )^{'} $. Here $ B' $ stands for the transpose of a matrix $ B $. In the rest of the paper, we shall denote by $ \Vert\cdot\Vert, \;\Vert\cdot\Vert_{s}, \;|\cdot|_{0} $ and $ |\cdot|_{N} $ the norms of $ L^{2}(\Omega), \;H^{s}(\Omega), \;L^{2}(\Gamma_{1}) $ and $ \mathbb{R}^{N} $, respectively. The inner products in $ L^{2}(\Omega), \;L^{2}(\Gamma_{1}) $ and the Euclidean space $ \mathbb{R}^{N} $ will be denoted by $ \langle \cdot, \cdot\rangle, \ \langle \cdot, \cdot\rangle_{0} $ and $ \langle \cdot, \cdot\rangle_{N} $.

The notion of the solution to the equation (2.2) with sampled-data Dirichlet boundary condition was given in [6]. It is known that, for each $ v=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)v_{i} $ with $ v_i\in L^2(\Gamma_1) $, there exists a unique solution $ y\in C([0, +\infty); L^2(\Omega)) $ (see Theorem 2.1 in [6]).

2.2 The Stabilization of the Linearized Equation

To get the stabilization result, we make the following assumption on $ \{\frac{\partial \phi_{j}}{\partial \mathbf{n}}\}_{j=1}^{N} $, where $ \frac{\partial \phi_{j}}{\partial \mathbf{n}} $ is the normal derivative of $ \phi_{j} $ on $ \partial \Omega $.

($ H_{n} $) The system $ \{ (\frac{\partial \phi_{j}}{\partial \mathbf{n}}), \; 1 \leq j \leq N \} $ is linearly independent on $ \Gamma_{1} $.

It should be mentioned that ($ H_{n} $) is a standard hypothesis in the boundary stabilization theory of parabolic-like equations (see e.g. [8, 9, 11]). We note that ($ H_{n} $) holds with $ N=1 $ when $ d=1 $; for $ d>1 $, it usually holds as well.

In practice, the sampling intervals should not tend to zero. Hence, we make the following assumption throughout this work:

$ (H_{s})\;\;T_{i}=t_{i+1}-t_{i} \geq \underline{T}, \;i=0, 1, 2, \cdots, $ where $ \underline{T} > 0 $ is given.

We introduce firstly the proportional feedbacks we shall consider. Let

$ \begin{equation} \begin{aligned} \mathcal{F}=&\{F:L^{2}(\Omega) \to L^{2}(\Gamma_{1});\ F(w)= \mathbb{1}_{\Gamma_{1}}\langle B_{0}^{-1}\Lambda Q_{N}(w), \frac{\partial \Phi^{N}}{\partial \mathbf{n}} \rangle_{N}, \ \Lambda \in S\}, \end{aligned} \end{equation} $ (2.4)

where $ B_{0}=(\langle \frac{\partial \phi_{i}}{\partial \mathbf{n}}, \frac{\partial \phi_{j}}{\partial \mathbf{n}} \rangle_{0} )_{1 \leq i, j \leq N} $, $ \frac{\partial \Phi^{N}}{\partial \mathbf{n}}=(\frac{\partial \phi_{1}}{\partial \mathbf{n}}, \frac{\partial \phi_{2}}{\partial \mathbf{n}}, \cdots, \frac{\partial \phi_{N}}{\partial \mathbf{n}})^{'} \in (L^{2}(\partial\Omega))^{N} $, $ S=\{ \Lambda \in \mathbb{R}^{N\times N};\;\Lambda={\rm diag}(\gamma_{j})_{1\leq j\leq N}, \gamma_{j}\in \mathbb{R} \} $, and $ \mathbb{1}_{\Gamma_{1}}:L^{2}(\partial\Omega) \to L^{2}(\Gamma_{1}) $ is the restriction operator. By assumption ($ H_{n} $), the matrix $ B_{0} $ is invertible. From (2.4), we see that the feedback $ F \in \mathcal{F} $ is determined by the diagonal matrix $ \Lambda={\rm diag}(\gamma_{j})_{1 \leq j \leq N} $, so designing $ F $ amounts to choosing the $ N $ real numbers $ \gamma_{j}, 1 \leq j \leq N $.

The equation (2.2) with sampled-data feedback control

$ \begin{equation} v(t)=\sum\limits_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F(y(t_{i})), \end{equation} $ (2.5)

where $ F \in\mathcal{F} $, can be written as

$ \begin{equation} \left\{ \begin{array}{ll} \frac{\partial y}{\partial t}=\triangle y + f_{y}(x, y_{e})y\ \mathrm{in}\ (0, +\infty)\times\Omega, \\ y=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F(y(t_{i}))\ \mathrm{on}\ (0, +\infty)\times\Gamma_{1}, \\ \frac{\partial y}{\partial \mathbf{n}}=0\ \mathrm{on}\ (0, +\infty)\times\Gamma_{2}, \ y(0, x)=\tilde{y}_{0}(x), \ \mathrm{in}\ \Omega. \end{array} \right. \end{equation} $ (2.6)

Let $ \rho $ be given by (2.3) and $ \underline{T} $ be given in ($ H_s $). We give the notion of the admissible sampling interval of a feedback.

Definition 2.1   (i) We call $ F \in \mathcal{F} $ a feasible feedback if $ F \in \mathcal{F}_{\rho} $, where $ \mathcal{F}_{\rho} $ is given by

$ \begin{equation} \begin{aligned} \mathcal{F}_{\rho}=\{ F \in \mathcal{F}\ |\ &\exists\ \{t_{i}\}_{i=0}^{\infty}\ {\rm satisfying}\ T_{i}=t_{i+1}-t_{i} \geq \underline{T}, \ {\rm such\ that}\\ &{\rm equation}\ (2.6)\ {\rm with}\ F\ {\rm is\ exponentially\ stable\ with\ decay\ rate}\ \rho \}. \end{aligned} \end{equation} $ (2.7)

(ii) For each $ F \in \mathcal{F}_{\rho} $, we define

$ \begin{equation} \begin{aligned} \overline{T}_{F}=\sup\{\ \widehat{T}\ |\ &\forall\ \{t_{i}\}_{i=0}^{\infty}\ {\rm satisfying}\ \underline{T}\leq t_{i+1}-t_{i} \leq \widehat{T}, \\ &{\rm equation}\ (2.6)\ {\rm with}\ F\ {\rm is\ exponentially\ stable\ with\ decay\ rate}\ \rho \}, \end{aligned} \end{equation} $ (2.8)

and call $ \overline{T}_{F} $ the admissible sampling interval of $ F $.

Remark 2.2   For any given $ \overline{T} < \overline{T}_{F} $, we see from the above definition that, $ \forall\ \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \overline{T}] $, the system (2.6) can be exponentially stabilized by the boundary sampled-data feedback $ F $.

Then the problem we shall study in this section can be formulated as follows.

Problem ($ P_{\underline{T}, \rho} $)   Find an optimal feedback $ F^{*}\in \mathcal{F}_{\rho} $ that maximizes the admissible sampling interval, that is,

$ \begin{equation} \overline{T}_{F^{*}}=\overline{T}^{*}:=\max\limits_{F\in \mathcal{F}_{\rho}}\{\overline{T}_{F}\}. \end{equation} $ (2.9)

The following theorem contains the main results of this section, which give the optimal value and an optimal solution of Problem ($ P_{\underline{T}, \rho} $).

Theorem 2.3    Assume that $ \tilde{y}_{0} \in L^{2}(\Omega) $. Under assumptions $ (H_{n}) $ and $ (H_{s}) $, the following results hold:

(i) The maximal value $ \overline{T}^{*} $ satisfies $ \overline{T}^{*} < +\infty $, and it is given by

$ \begin{equation} \overline{T}^{*}=T^{(j_{0})}=\min\{T^{(j)}, j=1, 2, \cdots, N\}, \end{equation} $ (2.10)

where, for each $ j=1, 2, \cdots, N $, $ T^{(j)} $ is uniquely determined by the following equation

$ \begin{equation} \alpha_{j}(\underline{T})=\beta_{j}(T^{(j)}). \end{equation} $ (2.11)

Here, $ \alpha_{j}(t) $ and $ \beta_{j}(t) $ are real functions defined by

$ \begin{equation} \begin{aligned} &\alpha_{j}(t)=\frac{-e^{-\rho t}+e^{-\lambda_{j}t}}{\int_{0}^{t}e^{-\lambda_{j}s}ds}, \ \beta_{j}(t)=\frac{e^{-\rho t}+e^{-\lambda_{j}t}}{\int_{0}^{t}e^{-\lambda_{j}s}ds}, \ t>0, \;j=1, 2, \cdots, N. \end{aligned} \end{equation} $ (2.12)

(ii) An optimal solution of $ (P_{\underline{T}, \rho}) $ is given by

$ \begin{equation} F^{*}(w)= \mathbb{1}_{\Gamma_{1}}\langle B_{0}^{-1}\Lambda^{*} Q_{N}(w), \frac{\partial \Phi^{N}}{\partial \mathbf{n}} \rangle_{N}, \end{equation} $ (2.13)

where

$ \begin{equation} \Lambda^{*}={\rm diag}(\gamma_{j}^{*})_{1\leq j\leq N}, \;\gamma_{j}^{*} \in [\alpha_{j}(\underline{T}), \beta_{j}(T^{(j_{0})})]. \end{equation} $ (2.14)

Moreover, $ \forall\ \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \overline{T}^{*}] $, the system (2.6) can be exponentially stabilized with decay rate $ \rho $ by the boundary sampled-data feedback $ F^{*} $.
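Since (2.10), (2.11) and (2.14) are fully explicit, $ \overline{T}^{*} $ and $ \Lambda^{*} $ can be computed numerically. The following Python sketch does so by root-finding on (2.11); the eigenvalues `lams`, the decay rate `rho` and the lower bound `T_low` are hypothetical data, and `brentq` is merely a convenient bracketing root-finder (it applies because $ \beta_{j} $ is strictly decreasing, so the root of (2.11) is unique).

```python
import numpy as np
from scipy.optimize import brentq

def int_exp(lam, t):
    """int_0^t e^{-lam*s} ds, with the lam -> 0 limit handled."""
    return t if abs(lam) < 1e-12 else (1.0 - np.exp(-lam * t)) / lam

def alpha(lam, rho, t):   # alpha_j in (2.12)
    return (np.exp(-lam * t) - np.exp(-rho * t)) / int_exp(lam, t)

def beta(lam, rho, t):    # beta_j in (2.12)
    return (np.exp(-rho * t) + np.exp(-lam * t)) / int_exp(lam, t)

def T_j(lam, rho, T_low):
    """The unique root T^(j) of beta_j(T) = alpha_j(T_low), see (2.11)."""
    target = alpha(lam, rho, T_low)
    g = lambda t: beta(lam, rho, t) - target
    hi = 2.0 * T_low
    while g(hi) > 0.0:    # beta_j decreases to a limit below alpha_j(T_low)
        hi *= 2.0
    return brentq(g, T_low, hi)

# hypothetical data: lambda_1 <= ... <= lambda_N < rho, and lower bound T_low
lams, rho, T_low = [-2.0, -0.5, 0.3], 0.5, 0.1
T_star = min(T_j(lam, rho, T_low) for lam in lams)                 # (2.10)
gammas = [(alpha(lam, rho, T_low), beta(lam, rho, T_star)) for lam in lams]
print(T_star, gammas)  # admissible intervals for gamma_j^*, see (2.14)
```

Any choice of $ \gamma_{j}^{*} $ from the printed intervals yields, via (2.13), a feedback realizing $ \overline{T}_{F^{*}}=\overline{T}^{*} $.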

Remark 2.4    (i) From the results in Theorem 2.3, we know that $ \gamma_{j_{0}}^{*} = \alpha_{j_{0}}(\underline{T})=\beta_{j_{0}}(\overline{T}^{*}) $. For $ j \in \{ 1, 2, \cdots, N \} \setminus \{j_{0}\} $, we have $ \gamma_{j}^{*} \in [\alpha_{j}(\underline{T}), \ \beta_{j}(\overline{T}^{*})] $. It might happen that $ \beta_{j}(\overline{T}^{*})>\alpha_{j}(\underline{T}) $, so the optimal solution of $ (P_{\underline{T}, \rho}) $ is not necessarily unique.

(ii) We can see from (2.10) and (2.11) that $ \overline{T}^{*} $ depends on $ \underline{T} $ and $ \rho $.

● When the decay rate $ \rho $ is fixed, the larger $ \underline{T} $ is, the larger $ \overline{T}^{*} $ is, but the smaller the difference between $ \overline{T}^{*} $ and $ \underline{T} $. Moreover, $ \overline{T}^{*}-\underline{T} \to 0 $ as $ \underline{T} \to \infty $.

● When the lower bound $ \underline{T} $ of the sampling intervals is fixed, the smaller the decay rate $ \rho $ is, the larger $ \overline{T}^{*} $ is, and $ \overline{T}^{*} \to \infty $ as $ \rho \to 0 $.

(iii) The key point in solving Problem $ (P_{\underline{T}, \rho}) $ can be explained roughly as follows. We first reduce the stabilization of system (2.6) to solving the following inequalities with respect to $ \gamma_{j} $, $ j=1, 2, \cdots, N $:

$ |e^{-\lambda_{j}T_{i}}-\int_{0}^{T_{i}}e^{-\lambda_{j}s}ds\gamma_{j}|\leq e^{-\rho T_{i}}. $

For each $ j $, there is a largest $ T^{(j)} $, exactly the one given by (2.11), such that for any $ T_{i} \in [\underline{T}, T^{(j)}] $ the above inequality has a solution (a short derivation is given after this remark). Then $ \overline{T}^{*} $ is the minimum of $ \{ T^{(j)} \}_{j=1}^{N} $.

(iv) The above design applies as well to equation (2.6) with a homogeneous Dirichlet condition on $ \Gamma_2 $ and Dirichlet actuation on $ \Gamma_1 $ (that is, $ y=u $ on $ \Gamma_1 $ and $ y=0 $ on $ \Gamma_2 $), or to Neumann boundary control, but we omit the details.
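To spell out the reduction in (iii) above: since $ \int_{0}^{T_{i}}e^{-\lambda_{j}s}ds>0 $, the displayed inequality is equivalent to

$ \begin{equation*} \alpha_{j}(T_{i})=\frac{e^{-\lambda_{j}T_{i}}-e^{-\rho T_{i}}}{\int_{0}^{T_{i}}e^{-\lambda_{j}s}ds} \leq \gamma_{j} \leq \frac{e^{-\lambda_{j}T_{i}}+e^{-\rho T_{i}}}{\int_{0}^{T_{i}}e^{-\lambda_{j}s}ds}=\beta_{j}(T_{i}). \end{equation*} $

Since $ \alpha_{j} $ and $ \beta_{j} $ are decreasing, one $ \gamma_{j} $ serves all $ T_{i} \in [\underline{T}, T^{(j)}] $ exactly when $ \gamma_{j} \in \bigcap_{T \in [\underline{T}, T^{(j)}]}[\alpha_{j}(T), \beta_{j}(T)]=[\alpha_{j}(\underline{T}), \beta_{j}(T^{(j)})] $, and this intersection is nonempty precisely when $ \alpha_{j}(\underline{T}) \leq \beta_{j}(T^{(j)}) $, which by (2.11) fails beyond $ T^{(j)} $.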

To prove Theorem 2.3, we shall use the Dirichlet map to lift the boundary condition and transform the nonhomogeneous problem into a homogeneous one. This preliminary result is presented in Lemma 2.5; it will be used not only in the proof of Theorem 2.3, but also in the proof of the stabilization of the nonlinear equation (1.1).

To state Lemma 2.5, we need to introduce the so-called Dirichlet map. It is well-known that for sufficiently large $ k>0 $, the solution to the equation

$ \begin{equation} \left\{ \begin{array}{ll} \sum\limits_{j=1}^{N}(k-\lambda_{j})\langle \psi, \phi_{j} \rangle \phi_{j}-\triangle \psi - f_{y}(x, y_{e})\psi=0, \ \mathrm{in}\ \Omega, \\ \psi=w\ \mathrm{on}\ \Gamma_{1}, \ \ \frac{\partial \psi}{\partial \mathbf{n}}=0\ \mathrm{on}\ \Gamma_{2}, \end{array} \right. \end{equation} $ (2.15)

exists for any given $ w \in L^{2}(\Gamma_{1}) $. We shall denote by $ D $ the map $ w \rightarrow \psi $, i.e., $ \psi=Dw $. It is known that $ \psi \in H^{\frac{1}{2}+s}(\Omega) $ and $ \|\psi\|_{\frac{1}{2}+s} \leq C\|w\|_{s}, \;\forall s \geq 0 $ (see [12]).

Lemma 2.5   For any given $ F \in \mathcal{F}_{\rho} $, the solution to equation (2.6) is exponentially stable with decay rate $ \rho $, if and only if the solution to equation

$ \begin{equation} \left\{ \begin{array}{ll} \frac{d z}{d t}+Az(t)=R(h)\;{\rm{in}}\;\mathbb{R}^{+}\setminus \cup_{i=0}^{\infty}\{t_{i}\}, \\ z(t_{i})=z(t_{i}^{-})+h(t_{i-1})-h(t_{i}), i=0, 1, 2, \cdots, \\ z(0)=y(0)-h(0), \end{array} \right. \end{equation} $ (2.16)

is exponentially stable with decay rate $ \rho $, where $ h $ is given by

$ \begin{equation} h=D\left(\sum\limits_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F(y(t_{i}))\right), \end{equation} $ (2.17)

and

$ \begin{equation} R(h)=\sum\limits_{j=1}^{N}(k-\lambda_{j})\langle h, \phi_{j} \rangle \phi_{j}. \end{equation} $ (2.18)

Proof   We will divide the proof into two steps.

Step 1    We establish the relation between $ y $ and $ z $ by lifting the boundary condition in equation (2.6).

Setting $ \tilde{z}(t, x)=y(t, x)-h(t, x) $, by (2.6), (2.15) and (2.17), it is not difficult to verify that $ \tilde{z} $ satisfies exactly equation (2.16) (the second identity of (2.16) holds because $ y $ is continuous). Hence $ \tilde{z}(t, x)=z(t, x) $ and

$ \begin{equation} z(t, x)=y(t, x)-h(t, x). \end{equation} $ (2.19)

Moreover, by the definition of $ h $ and the continuity of $ y(t) $, we know that $ h(t_{i})=DFy(t_{i}^{-})=DF[z(t_{i}^{-})+h(t_{i-1})] $.

Step 2    We prove the equivalence of the two stability properties. Suppose that $ \exists C_{1}>0 $, s.t.

$ \begin{equation} \|z(t)\|\leq C_{1}e^{-\rho t}\| z(0) \|, \forall t>0. \end{equation} $ (2.20)

Firstly, we rewrite the control $ v(t)=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F(y(t_{i})) $ in feedback form with respect to $ z $. Denote by $ \mathbf{y}^{N} $, $ \mathbf{z}^{N} $ and $ \mathbf{h}^{N} $ the vectors $ Q_{N}y $, $ Q_{N}z $ and $ Q_{N}h $, respectively. Using equation (2.15) and the definition of $ \phi_{j} $, a simple calculation gives, for $ 1 \leq j \leq N $,

$ \begin{equation} \langle h(t), \phi_{j} \rangle =-\frac{1}{k}\langle v(t), \frac{\partial \phi_{j}}{\partial \mathbf{n}} \rangle_{0}. \end{equation} $ (2.21)
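For completeness, here is the calculation behind (2.21): testing (2.15) (with $ \psi=h(t) $ and $ w=v(t) $) against $ \phi_{j} $, and using Green's formula together with $ \phi_{j}=0 $ on $ \Gamma_{1} $, $ \frac{\partial \phi_{j}}{\partial \mathbf{n}}=0 $ on $ \Gamma_{2} $ and $ \triangle \phi_{j}+f_{y}(x, y_{e})\phi_{j}=-\lambda_{j}\phi_{j} $, we get

$ \begin{equation*} 0=(k-\lambda_{j})\langle h, \phi_{j} \rangle-\langle \triangle h+f_{y}(x, y_{e})h, \phi_{j} \rangle =(k-\lambda_{j})\langle h, \phi_{j} \rangle+\lambda_{j}\langle h, \phi_{j} \rangle+\langle v, \frac{\partial \phi_{j}}{\partial \mathbf{n}} \rangle_{0} =k\langle h, \phi_{j} \rangle+\langle v, \frac{\partial \phi_{j}}{\partial \mathbf{n}} \rangle_{0}, \end{equation*} $

which is (2.21).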

By the definition of $ v(t) $, it follows from (2.21) that

$ \begin{equation} \mathbf{h}^{N}(t)=-\sum\limits_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t){\rm diag}(\frac{\gamma_{j}}{k})_{1\leq j \leq N}\mathbf{y}^{N}(t_{i}). \end{equation} $ (2.22)

By (2.22) and the relation (2.19) between $ y $ and $ z $, evaluating at $ t=t_{i} $ gives $ \mathbf{y}^{N}(t_{i})=\mathbf{z}^{N}(t_{i})+\mathbf{h}^{N}(t_{i})=\mathbf{z}^{N}(t_{i})-{\rm diag}(\frac{\gamma_{j}}{k})_{1\leq j \leq N}\mathbf{y}^{N}(t_{i}) $; solving for $ \mathbf{y}^{N}(t_{i}) $, one obtains

$ \begin{equation} \mathbf{y}^{N}(t_{i})={\rm diag}(\frac{k}{k+\gamma_{j}})_{1 \leq j \leq N}\mathbf{z}^{N}(t_{i}), \end{equation} $ (2.23)

and

$ \begin{equation} \mathbf{h}^{N}=-\sum\limits_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t){\rm diag}(\frac{\gamma_{j}}{k+\gamma_{j}})_{1\leq j \leq N}\mathbf{z}^{N}(t_{i}). \end{equation} $ (2.24)

Moreover, by the definition of $ v $, we have

$ \begin{equation} \begin{aligned} v(t)=\sum\limits_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F(y(t_{i}))=\sum\limits_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)\widetilde{F}(z(t_{i})), \end{aligned} \end{equation} $ (2.25)

where $ \widetilde{F}:L^{2}(\Omega) \to L^{2}(\Gamma_{1}) $ is the operator given by

$ \begin{equation} \begin{aligned} \widetilde{F}(w)= \mathbb{1}_{\Gamma_{1}}\langle B_{0}^{-1}\Lambda {\rm diag}(\frac{k}{k+\gamma_{j}})_{1 \leq j \leq N}Q_{N}w, \frac{\partial \Phi^{N}}{\partial \mathbf{n}} \rangle_{N}. \end{aligned} \end{equation} $ (2.26)

By (2.17), (2.20) and (2.25), it follows that $ \|h(t)\| \leq C_{2}e^{-\rho t}\|z(0)\|. $ From the relations among $ y $, $ h $ and $ z $, we get

$ \|y(t)\| \leq \|z(t)\| +\|h(t)\| \leq C_{3}e^{-\rho t}\|z(0)\| \leq C_{3}e^{-\rho t}(\|y(0)\|+\|h(0)\|). $

Using (2.17), we have $ \|h(0)\| \leq C_{4}\|y(0)\| $. In summary, one can obtain that, $ \exists C>0 $, s.t.

$ \|y(t)\|\leq Ce^{-\rho t}\| y(0) \|. $

On the other hand, suppose that $ \exists C_{5}>0 $, s.t. $ \|y(t)\|\leq C_{5}e^{-\rho t}\| y(0) \| $. By the same argument, one can obtain that $ \exists C>0 $, s.t. $ \|z(t)\|\leq Ce^{-\rho t}\| z(0) \| $.

This completes the proof of Lemma 2.5.

Now, we give the proof of Theorem 2.3.

Proof of Theorem 2.3    We prove Theorem 2.3 in three steps: firstly, we show that $ \overline{T}_{F^{*}} \geq T^{(j_{0})} $ by proving that the feedback $ F^{*} $ stabilizes equation (2.6) for all sampling intervals less than or equal to $ T^{(j_{0})} $; secondly, we prove that $ \overline{T}^{*} = T^{(j_{0})} $ by contradiction; finally, from these two steps and the fact that $ \overline{T}^{*} \geq \overline{T}_{F^{*}} $, we conclude that $ \overline{T}^{*} = \overline{T}_{F^{*}}=T^{(j_{0})} $, and the optimality of $ F^{*} $ follows.

Step 1    We prove $ \overline{T}_{F^{*}} \geq T^{(j_{0})} $. It suffices to prove the stability of equation (2.6) under the feedback $ F^{*} $ for all sampling intervals less than or equal to $ T^{(j_{0})} $. To this aim, we first show the stability of equation (2.16) with $ F=F^{*} $ and $ T_{i} \in [\underline{T}, T^{(j_{0})}], \ i=0, 1, 2, \cdots $.

Multiplying equation (2.16) by $ \phi_{1}, \;\phi_{2}, \cdots, \phi_{N} $ respectively in $ L^2(\Omega) $, and using identities (2.23) and (2.24), one obtains the equation satisfied by $ \mathbf{z}^{N} $ as follows:

$ \begin{equation} \left\{ \begin{array}{ll} \frac{d}{dt}\mathbf{z}^{N}(t)+A_{N}\mathbf{z}^{N}(t)=-\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t){\rm diag}(\frac{(k-\lambda_{j})\gamma_{j}^{*}}{k+\gamma_{j}^{*}})_{1 \leq j \leq N}\mathbf{z}^{N}(t_{i}), \\ \ \ \ \ \ \ \ \ \ \ t\in(0, \infty) \setminus \cup_{i=0}^{\infty}\{ t_{i}\}, \\ \mathbf{z}^{N}(t_{i})={\rm diag}(\frac{k+\gamma_{j}^{*}}{k})_{1 \leq j \leq N}\mathbf{z}^{N}(t_{i}^{-})\ -{\rm diag}(\frac{\gamma_{j}^{*}}{k})_{1 \leq j \leq N}\mathbf{z}^{N}(t_{i-1}), \ i=1, 2, \cdots, \\ \mathbf{z}^{N}(0)={\rm diag}(\frac{k+\gamma_{j}^{*}}{k})_{1 \leq j \leq N}\mathbf{y}^{N}(0), \end{array} \right. \end{equation} $ (2.27)

where $ A_{N}={\rm diag}(\lambda_{j})_{1 \leq j \leq N} $. Then, for each $ i \in \mathbb{N} $, solving the first equation in (2.27) on the interval $ [t_{i}, t_{i+1}) $, we obtain by a direct calculation that

$ \begin{equation} \begin{aligned} \mathbf{z}^{N}(t_{i+1}^{-})=e^{-A_{N}T_{i}}\mathbf{z}^{N}(t_{i})- \int_{0}^{T_{i}}e^{-A_{N}s}ds {\rm diag}(\frac{(k-\lambda_{j})\gamma_{j}^{*}}{k+\gamma_{j}^{*}})_{1 \leq j \leq N}\mathbf{z}^{N}(t_{i}). \end{aligned} \end{equation} $ (2.28)

It follows from the above equation and the second equation in (2.27) that

$ \begin{equation} \mathbf{z}^{N}(t_{i+1})=G(T_{i})\mathbf{z}^{N}(t_{i}), \end{equation} $ (2.29)

where

$ \begin{equation} G(T_{i})=e^{-A_{N}T_{i}}-\int_{0}^{T_{i}}e^{-A_{N}s}ds\Lambda^{*}, \; T_{i} \in [\underline{T}, T^{(j_{0})}]. \end{equation} $ (2.30)

Since the matrix $ G(T_{i}) $ is a diagonal matrix, its eigenvalues are

$ \begin{equation} \sigma_{j}=e^{-\lambda_{j}T_{i}}-\int_{0}^{T_{i}}e^{-\lambda_{j}s}ds\gamma_{j}^{*}, \;j=1, 2, \cdots, N. \end{equation} $ (2.31)

It is not difficult to prove that $ \alpha_{j}(t) $ and $ \beta_{j}(t) $, defined in (2.12), are decreasing with respect to $ t $. Therefore, $ \forall T_{i} \in [\underline{T}, T^{(j_{0})}] $ and $ j=1, 2, \cdots, N $, we have

$ \begin{equation} \alpha_{j}(T_{i}) \leq \alpha_{j}(\underline{T}), \;\beta_{j}(T_{i}) \geq \beta_{j}(T^{(j_{0})}). \end{equation} $ (2.32)

By (2.14) and (2.31), for each $ j=1, 2, \cdots, N $, we have

$ \begin{aligned} e^{-\lambda_{j}T_{i}}-\int_{0}^{T_{i}}e^{-\lambda_{j}s}ds\beta_{j}(T^{(j_{0})}) \leq \sigma_{j} \leq e^{-\lambda_{j}T_{i}}-\int_{0}^{T_{i}}e^{-\lambda_{j}s}ds\alpha_{j}(\underline{T}). \end{aligned} $

It follows by the above inequalities and (2.32) that

$ \begin{equation*} \begin{aligned} e^{-\lambda_{j}T_{i}}-\int_{0}^{T_{i}}e^{-\lambda_{j}s}ds\beta_{j}(T_{i}) \leq \sigma_{j} \leq e^{-\lambda_{j}T_{i}}-\int_{0}^{T_{i}}e^{-\lambda_{j}s}ds\alpha_{j}(T_{i}). \end{aligned} \end{equation*} $

The above inequalities, together with (2.12), imply that

$ \begin{equation} |\sigma_{j}| \leq e^{-\rho T_{i}}. \end{equation} $ (2.33)

Therefore, we have $ |\lambda_{\mathrm{max}}(G(T_{i}))|\leq e^{-\rho T_{i}} $. It follows that

$ \begin{equation*} |\mathbf{z}^{N}(t_{i+1})|_{N} \leq e^{-\rho T_{i}} |\mathbf{z}^{N}(t_{i})|_{N}. \end{equation*} $

Hence, for $ i=1, 2, \cdots $, it follows that

$ \begin{equation} \begin{aligned} |\mathbf{z}^{N}(t_{i})|_{N} &\leq e^{-\rho T_{i-1}} |\mathbf{z}^{N}(t_{i-1})|_{N} \leq e^{-\rho (T_{i-1}+T_{i-2})} |\mathbf{z}^{N}(t_{i-2})|_{N}\\ &\leq \cdots \leq e^{-\rho \sum\limits_{k=0}^{i-1}T_{k}} |\mathbf{z}^{N}(0)|_{N} =e^{-\rho t_{i}} |\mathbf{z}^{N}(0)|_{N}. \end{aligned} \end{equation} $ (2.34)
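The contraction estimate just derived is easy to check numerically. The sketch below, reusing `int_exp`, `alpha`, `beta` and the hypothetical data `lams`, `rho`, `T_low`, `T_star`, `gammas` from the sketch following Theorem 2.3, iterates the diagonal map $ G(T_{i}) $ from (2.29)-(2.30) with randomly drawn $ T_{i}\in[\underline{T}, \overline{T}^{*}] $ and verifies the decay estimate (2.34).

```python
import numpy as np
rng = np.random.default_rng(0)

lam = np.array(lams)
gam = np.array([a for (a, b) in gammas])   # gamma_j^* = alpha_j(T_low), cf. (2.14)
z = np.ones(len(lam))                      # z^N(t_0): any initial modal vector
z0_norm, t = np.linalg.norm(z), 0.0
for i in range(50):
    T = rng.uniform(T_low, T_star)         # an arbitrary T_i in [T_low, T_star]
    # eigenvalues sigma_j of the diagonal matrix G(T_i), see (2.31)
    sigma = np.exp(-lam * T) - np.array([int_exp(l, T) for l in lam]) * gam
    z, t = sigma * z, t + T                # (2.29): z^N(t_{i+1}) = G(T_i) z^N(t_i)
    assert np.linalg.norm(z) <= np.exp(-rho * t) * z0_norm + 1e-12   # (2.34)
```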

Now, for any $ t>0 $, there exists $ i \in \mathbb{N} $, such that $ t \in [t_{i}, t_{i+1}) $, and by (2.27), we know that

$ \begin{equation} \begin{aligned} |\mathbf{z}^{N}(t)|_{N}=&|e^{-A_{N}(t-t_{i})}\mathbf{z}^{N}(t_{i})-\int_{t_{i}}^{t}e^{-A_{N}(t-s)}ds \; {\rm diag}(\frac{(k-\lambda_{j})\gamma_{j}^{*}}{k+\gamma_{j}^{*}})_{1 \leq j \leq N}\mathbf{z}^{N}(t_{i})|_{N}\\ \leq &C_{0}|\mathbf{z}^{N}(t_{i})|_{N} \leq C_{0}e^{-\rho t_{i}} |\mathbf{z}^{N}(0)|_{N} \leq Ce^{-\rho t} |\mathbf{z}^{N}(0)|_{N}. \end{aligned} \end{equation} $ (2.35)

Hence, the first $ N $ modes of $ z $ are stable. It follows immediately from (2.17) and (2.25) that

$ \begin{equation} \|h(t)\| \leq Ce^{-\rho t} |\mathbf{z}^{N}(0)|_{N}, \ \forall t >0. \end{equation} $ (2.36)

Now we consider the high-frequency part $ \mathbf{z}^{S}(t)=(I-P_{N})z $. Set $ A_{S}=(I-P_{N})A $. Given $ t>0 $, there exists $ i \in \mathbb{N} $ such that $ t \in [t_{i}, t_{i+1}) $; then we can see from equation (2.16) and the fact that $ (I-P_{N})R=0 $ that

$ \begin{equation} \begin{aligned} \mathbf{z}^{S}(t)=&e^{-A_{S}(t-t_{i})}\mathbf{z}^{S}(t_{i})+\int_{t_{i}}^{t}e^{-A_{S}(t-s)}(I-P_{N})Rds\\ =&e^{-A_{S}(t-t_{i-1})}\mathbf{z}^{S}(t_{i-1})+e^{-A_{S}(t-t_{i})}[(I-P_{N})(h(t_{i-1})-h(t_{i}))]. \end{aligned} \end{equation} $ (2.37)

Proceeding step by step in this way, it follows that

$ \begin{equation} \begin{aligned} \mathbf{z}^{S}(t)=&e^{-A_{S}t}\mathbf{z}^{S}(0)+\sum\limits_{j=1}^{i}e^{-A_{S}(t-t_{j})}[(I-P_{N})(h(t_{j-1})-h(t_{j}))]. \end{aligned} \end{equation} $ (2.38)

Using (2.36) and noticing that $ \rho < \lambda_{N+1} $, we obtain

$ \begin{equation} \begin{aligned} \|\mathbf{z}^{S}(t)\| \leq& e^{-\lambda_{N+1}t}\|\mathbf{z}^{S}(0)\|+Ce^{-\rho t}|\mathbf{z}^{N}(0)|_{N}\leq Ce^{-\rho t}\|z(0)\|, \;\forall t>0. \end{aligned} \end{equation} $ (2.39)

Now we can see from (2.35) and (2.39) that

$ \begin{equation} \|z(t)\| \leq Ce^{-\rho t}\|z(0)\|, \;\forall t>0. \end{equation} $ (2.40)

Finally, by Lemma 2.5, we obtain that

$ \begin{equation} \|y(t)\| \leq Ce^{-\rho t}\|\tilde{y}_{0}\|, \;\forall t>0. \end{equation} $ (2.41)

This implies that $ \overline{T}_{F^{*}} \geq T^{(j_{0})} $.

Step 2    We prove $ \overline{T}^{*} = T^{(j_{0})} $ by contradiction. Obviously, $ \overline{T}^{*} \geq \overline{T}_{F^{*}} \geq T^{(j_{0})} $. Suppose that $ \overline{T}^{*} \neq T^{(j_{0})} $; then $ \overline{T}^{*} > T^{(j_{0})} $, and there exists

$ F^{'}(w)= \mathbb{1}_{\Gamma_{1}}\langle B_{0}^{-1}\Lambda^{'} Q_{N}(w), \frac{\partial \Phi^{N}}{\partial \mathbf{n}} \rangle_{N} \in \mathcal{F}_{\rho}, $

where $ \Lambda^{'}={\rm diag}(\gamma_{j}^{'})_{1\leq j\leq N} $, such that $ \overline{T}_{F^{'}} > T^{(j_{0})} $. We choose $ \widetilde{T} \in (T^{(j_{0})}, \overline{T}_{F^{'}}) $. Then, $ \forall \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \widetilde{T}] $, equation (2.6) with $ F=F^{'} $ is exponentially stable with decay rate $ \rho $. Take $ \tilde{y}_{0} $ satisfying $ z(0)= \tilde{y}_{0}-h(0)=\tilde{y}_{0}-DF^{'}\tilde{y}_{0}=\phi_{j_{0}} $ (such a $ \tilde{y}_{0} $ exists because $ I-DF^{'} $ is invertible; see the proof of Theorem 3.1 in [13]). Then, by Lemma 2.5, for any $ T_{i} \in [\underline{T}, \widetilde{T}] $, the solution to equation (2.16) with $ F=F^{'} $ and $ z(0)=\phi_{j_{0}} $ satisfies

$ \begin{equation} \begin{aligned} |z_{j_{0}}(t)|& \leq \|z(t)\| \leq Ce^{-\rho t}\|z(0)\|= Ce^{-\rho t}\|\phi_{j_{0}}\|, \; \forall t>0, \end{aligned} \end{equation} $ (2.42)

where $ z_{j}(t)=\langle z(t), \ \phi_{j} \rangle $, for $ j=1, 2, \cdots $.

On the other hand, we claim that, at least one of the following two inequalities holds

$ \begin{equation} |e^{-\lambda_{j_{0}}\underline{T}}-\int_{0}^{\underline{T}}e^{-\lambda_{j_{0}}s}ds \gamma_{j_{0}}^{'}| > e^{-\rho \underline{T}}, \end{equation} $ (2.43)
$ \begin{equation} |e^{-\lambda_{j_{0}}\widetilde{T}}-\int_{0}^{\widetilde{T}}e^{-\lambda_{j_{0}}s}ds \gamma_{j_{0}}^{'}| > e^{-\rho \widetilde{T}}. \end{equation} $ (2.44)

Indeed, if neither (2.43) nor (2.44) holds, then solving the opposite inequalities of (2.43) and (2.44) yields

$ \begin{equation} \alpha_{j_{0}}(\underline{T}) \leq \gamma_{j_{0}}^{'} \leq \beta_{j_{0}}(\underline{T}), \end{equation} $ (2.45)

and

$ \begin{equation} \alpha_{j_{0}}(\widetilde{T}) \leq \gamma_{j_{0}}^{'} \leq \beta_{j_{0}}(\widetilde{T}). \end{equation} $ (2.46)

By (2.45) and (2.46), we see that $ \alpha_{j_{0}}(\underline{T}) \leq \beta_{j_{0}}(\widetilde{T}) $. On the other hand, by the definition of $ T^{(j_0)} $ (see (2.10) and (2.11)) and the fact that $ \alpha_{j_{0}}(t) $ and $ \beta_{j_{0}}(t) $ are decreasing in $ t $, we know that $ \alpha_{j_{0}}(\underline{T})=\beta_{j_{0}}(T^{(j_{0})})>\beta_{j_{0}}(\widetilde{T}) $. This is a contradiction, and the claim is proved.

Without loss of generality, we suppose that (2.44) holds. Take $ T_{i}=\widetilde{T}, i=0, 1, 2, \cdots $. By (2.44), we have $ |\sigma_{j_{0}}^{'}|>e^{-\rho \widetilde{T}}, $ where $ \sigma_{j_{0}}^{'} $ is the real number given by (2.31) with $ \gamma_{j_{0}}^{*} $ replaced by $ \gamma_{j_{0}}^{'} $. Now, we can find $ 0<\widetilde{\rho}<\rho $ such that

$ |\sigma_{j_{0}}^{'}|>e^{-\widetilde{\rho} \widetilde{T}}>e^{-\rho \widetilde{T}}. $

For any $ i=1, 2, \cdots $, by (2.29), we have

$ \begin{equation} \begin{aligned} |z_{j_{0}}(t_{i})|=&|\sigma_{j_{0}}^{'}||z_{j_{0}}(t_{i-1})|>e^{-\widetilde{\rho} \widetilde{T}}|z_{j_{0}}(t_{i-1})| >\cdots>e^{-\widetilde{\rho}i\widetilde{T}}|z_{j_{0}}(0)|=e^{-\widetilde{\rho}i\widetilde{T}}\|\phi_{j_{0}}\|. \end{aligned} \end{equation} $ (2.47)

It follows by (2.42) and (2.47) that, for any $ i \in \mathbb{N} $,

$ \begin{equation*} \label{e234} e^{-\widetilde{\rho}i\widetilde{T}}\|\phi_{j_{0}}\|<|z_{j_{0}}(t_{i})| \leq Ce^{-\rho t_{i}}\|\phi_{j_{0}}\|=Ce^{-\rho i \widetilde{T}}\|\phi_{j_{0}}\|. \end{equation*} $

Multiplying both sides of the above inequality by $ e^{\rho i \widetilde{T}} $, we get

$ e^{(\rho-\widetilde{\rho})i\widetilde{T}}\|\phi_{j_{0}}\| < C\|\phi_{j_{0}}\|, \forall i \in \mathbb{N}. $

For $ i $ large enough, this is a contradiction. So we obtain $ \overline{T}^{*} = T^{(j_{0})} $.

Step 3    We prove the optimality of $ F^{*} $. Obviously, $ \overline{T}^{*} \geq \overline{T}_{F^{*}} $. By Steps 1 and 2, we see that $ \overline{T}^{*} = \overline{T}_{F^{*}}=T^{(j_{0})} $, which implies the optimality of $ F^{*} $. Moreover, from Step 1, $ \forall\ \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \overline{T}^{*}] $, the system (2.6) can be exponentially stabilized with decay rate $ \rho $ by the boundary sampled-data feedback $ F^{*} $.

This completes the proof of Theorem 2.3.

3 Stabilization of Nonlinear Equation

Let $ \varepsilon \in (0, \frac{1}{2}) $ be an arbitrarily given number. Assume that

$ (H_{f}) $ $ |f(x, y+y_{e})-f(x, y_{e})-f_{y}(x, y_{e})y| \leq C\sum_{i=1}^{l}|y|^{p_{i}} $, where $ l $ is a positive integer, $ p_{1} \leq p_{2} \leq \cdots \leq p_{l} $, and $ p_{i}, \;i=1, 2, \cdots, l $, satisfy that

$ 0 < p_{i} <\frac{1}{\varepsilon}, \ \mathrm{if}\ d=1;\ \ 0 < p_{i} < \frac{d+1+2\varepsilon}{d-1+2\varepsilon}, \ \mathrm{if}\ d>1. $

For instance, when $ d=1 $ and $ \varepsilon<\frac{1}{3} $, the function $ f(x, y)=y-y^{3} $ satisfies $ (H_{f}) $ with $ l=2 $, $ p_{1}=2 $, $ p_{2}=3 $, since $ f(x, y+y_{e})-f(x, y_{e})-f_{y}(x, y_{e})y=-3y_{e}y^{2}-y^{3} $.

For the stabilization of the semilinear parabolic equation (1.1), we have the following result.

Theorem 3.1    Let $ 0 < \mu < \rho $, where $ \rho $ is given by (2.3). Under assumptions ($ H_{n} $), ($ H_{s} $) and ($ H_{f} $), when $ T_{i} \in [\underline{T}, \overline{T}^{*}] $, $ y_{0} \in H^{\frac{1}{2}-\varepsilon}(\Omega) $ and $ \|y_{0}-y_{e}\|_{\frac{1}{2}-\varepsilon} $ is small enough, the feedback $ F^{*} $ given by (2.13) locally stabilizes equation (1.1). More exactly, there exist constants $ C>0 $ and $ \delta>0 $, such that for all $ \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \overline{T}^{*}] $ and all $ y_{0} \in H^{\frac{1}{2}-\varepsilon}(\Omega) $ satisfying $ \|y_{0}-y_{e}\|_{\frac{1}{2}-\varepsilon} \leq \delta $, the solution to the equation

$ \begin{equation*} \label{e301} \left\{ \begin{array}{ll} \frac{\partial y}{\partial t}=\triangle y + f(x, y), \ \mathrm{in}\ (0, +\infty)\times\Omega, \\ y=\sum\limits_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F^{*}(y(t_{i})-y_{e})+y_{e}\; \; \mathrm{on}\ (0, +\infty)\times\Gamma_{1}, \\ \frac{\partial y}{\partial \mathbf{n}}=0\ \; \; \mathrm{on}\ (0, +\infty)\times\Gamma_{2}, \ y(0, x)=y_{0}(x), \ \mathrm{in}\ \Omega, \end{array} \right. \end{equation*} $

satisfies

$ \begin{equation*} \label{e302} \|y(t)-y_{e}\|_{\frac{1}{2}-\varepsilon} \leq Ce^{-\mu t}\|y_{0}-y_{e}\|_{\frac{1}{2}-\varepsilon}, \;\forall t>0. \end{equation*} $

The proof of this theorem is similar to that of Theorem 3.1 in [6], so we omit it here.

References
[1]
Logemann H, Rebarber R, Townley S. Stability of infinite-dimensional sampled-data systems[J]. Trans. Amer. Math. Soc., 2003, 355(8): 3301-3328. DOI:10.1090/S0002-9947-03-03142-8
[2]
Karafyllis I, Krstic M. Sampled-data boundary feedback control of 1-d parabolic PDEs[J]. Autom. J. IFAC, 2018, 87: 226-237. DOI:10.1016/j.automatica.2017.10.006
[3]
Fridman E, Blighovsky A. Robust sampled-data control of a class of semilinear parabolic systems[J]. Automatica, 2012, 48: 826-836. DOI:10.1016/j.automatica.2012.02.006
[4]
Fridman E, Bar Am N. Sampled-data distributed $ H_{\infty} $ control of transport reaction systems[J]. SIAM J. Control Optim., 2013, 51(2): 1500-1527. DOI:10.1137/120867639
[5]
Chen T W, Francis B. Optimal sampled-data control systems[M]. London: Springer-Verlag, 1996.
[6]
Liu H B, Hu P. Boundary sampled-data feedback stabilization for parabolic equations[J]. Systems Control Lett., 2020, 136: 104618. DOI:10.1016/j.sysconle.2019.104618
[7]
Lin P, Liu H B, Wang G S. Output feedback stabilization for heat equations with sampled-data controls[J]. J. Differential Equations, 2020, 268(10): 5823-5854. DOI:10.1016/j.jde.2019.11.019
[8]
Barbu V. Boundary stabilization of equilibrium solutions to parabolic equations[J]. IEEE Trans. Automat. Control, 2013, 58(9): 2416-2420. DOI:10.1109/TAC.2013.2254013
[9]
Barbu V. Stabilization of Navier-Stokes equations by oblique boundary feedback controllers[J]. SIAM J. Control Optim., 2012, 50(4): 2288-2307. DOI:10.1137/110837164
[10]
Munteanu I. Boundary stabilization of parabolic equations[M]. Cham: Birkhäuser/Springer, 2019.
[11]
Triggiani R. Boundary feedback stabilization of parabolic equations[J]. Appl. Math. Optim., 1980, 6: 201-220.
[12]
Lasiecka I, Triggiani R. Control theory for partial differential equations: Continuous and approximation theories[M]. Cambridge: Cambridge University Press, 2000.
[13]
Liu H B, Hu P, Munteanu I. Boundary feedback stabilization of Fisher's equation[J]. Systems Control Lett., 2016, 97: 55-60. DOI:10.1016/j.sysconle.2016.09.003