In this work, we study the following parabolic equation with boundary control
where $ \Omega $ is a bounded and open domain of $ \mathbb{R}^{d} $ with a smooth boundary $ \partial \Omega=\Gamma_{1}\cup \Gamma_{2} $, $ \Gamma_{1} $ and $ \Gamma_{2} $ being connected parts of $ \partial \Omega $, and $ \mathbf{n} $ stands for the unit outward normal on the boundary $ \partial \Omega $. Here $ y_{0} \in L^{2}(\Omega) $ is the initial data, and $ f $ is a nonlinear function defined on $ \bar{\Omega}\times\mathbb{R} $. The sampled-data controller $ u $ is applied on $ \Gamma_{1} $, while $ \Gamma_{2} $ is insulated. By sampled-data control, we mean that the control is a piecewise constant function of time. More precisely, it is of the form
Here $ \chi_{[t_{i}, t_{i+1})} $ is the characteristic function of the interval $ [t_{i}, t_{i+1}) $ for $ i=0, 1, 2, \ldots $, where $ 0=t_{0}<t_{1}<\cdots <t_{i}<t_{i+1}<\cdots $, with $ \lim_{i \to \infty}t_{i}=\infty $, are the sampling instants, and the positive numbers $ T_{i}=t_{i+1}-t_{i} $ $ (i=0, 1, 2, \cdots) $ are called the sampling intervals.
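To fix ideas, evaluating such a piecewise constant control numerically amounts to locating the active sampling interval; the following is a minimal sketch, with hypothetical sampling instants and control values:

```python
import bisect

def sampled_data_control(t, t_grid, u_samples):
    """Evaluate u(t) = sum_i chi_{[t_i, t_{i+1})}(t) * u_i.

    t_grid    : increasing sampling instants t_0 = 0 < t_1 < t_2 < ...
    u_samples : values u_i, held constant on [t_i, t_{i+1})
    """
    i = bisect.bisect_right(t_grid, t) - 1   # index with t_grid[i] <= t < t_grid[i+1]
    return u_samples[i]

# Hypothetical non-uniform sampling instants and control values
t_grid = [0.0, 0.5, 1.2, 2.0]
u_samples = [1.0, -0.3, 0.7, 0.0]
print(sampled_data_control(0.9, t_grid, u_samples))   # -0.3, held on [0.5, 1.2)
```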
Sampled-data feedback stabilization is a well-studied topic for finite dimensional systems, owing to the fact that modern control systems employ digital technology for the implementation of the controller. However, for sampled-data feedback stabilization of infinite dimensional systems, the available results are relatively few.
To stabilize infinite dimensional systems by sampled-data feedback control, most existing works directly applied the feedback designed for the continuous-time case (see [1-4]). It was shown that the continuous-time stabilizing feedback can stabilize the sampled-data system when the sampling intervals are small enough. However, the requirement that the sampling intervals be sufficiently small is not reasonable, for two reasons: smaller sampling intervals require faster, newer and more expensive hardware; and performing all control computations might not be feasible when the sampling interval is too small (see Section 1.2 in [5]). Recently, the second author and collaborators considered the stabilization of parabolic equations with periodic sampled-data control, and developed methods to design feedback laws for any known sampling period (see [6, 7]).
For the case when the sampling intervals are variable and uncertain, it is usually impossible to design a feedback that stabilizes the system for arbitrary sampling intervals. What we can do is look for a feedback whose admissible sampling interval is largest. By the admissible sampling interval of a sampled-data feedback, we mean that for all sampling instants whose sampling intervals are less than or equal to it, the feedback stabilizes the equation (its mathematical definition will be given in Definition 2.1). In general, it is very difficult, if not impossible, to find such an optimal feedback among all feedbacks. In this work, we consider a special class of feedbacks, namely proportional feedbacks. Such feedback controls are simple and easy to implement, and they have been used by V. Barbu and later developed by I. Munteanu to stabilize various systems under continuous-time boundary feedback control in [8-10]. Given the decay rate and the lower bound of the sampling intervals, under the assumption proposed by V. Barbu, we shall construct an explicit sampled-data feedback that exponentially stabilizes the parabolic equations, whose admissible sampling interval is maximal among all feasible proportional feedbacks.
The main novelties of this work can be summarized as follows. First, compared with [6] and [7], we consider the case where the sampling intervals are variable and uncertain, and the feedbacks depend not on the sampling intervals themselves but only on their lower bound. Second, compared with the existing literature, this work makes the first attempt to achieve the largest admissible sampling interval.
The rest of this paper is organized as follows. In Section 2, we construct the feedback control and prove that it stabilizes the linearized equation and maximizes the admissible sampling interval. In Section 3, we show that the feedback control constructed in Section 2 also locally exponentially stabilizes the semilinear parabolic equation.
An equilibrium solution $ y_{e} \in C^{2}(\bar{\Omega}) $ is any solution to the equation
Translating $ y_{e} $ to zero via the substitution $ y-y_{e} \to y $, we can rewrite (1.1) as
and the stabilization problem reduces to designing a sampled-data controller $ u=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)u_{i} $ that stabilizes the corresponding closed-loop system.
The first step toward this goal is the stabilization of the linearized system associated with (2.1), that is
where $ f_{y}(x, y_{e})=\frac{\partial f}{\partial y}(x, y_{e}) $. The stabilizing feedback controller $ v=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F(y(t_{i})) $ for (2.2) will be used afterwards to locally stabilize system (2.1), and implicitly the equilibrium solution $ y_{e} $.
Everywhere in the following, we shall assume that $ f, f_{y}\in C(\bar{\Omega}\times\mathbb{R}) $.
In particular, this implies that $ x \mapsto f_{y}(x, y_{e}(x)) $ is continuous on $ \bar{\Omega} $. We define the linear operator $ A:D(A)\to L^{2}(\Omega) $ by $ Ay=-\triangle y-f_{y}(x, y_{e}(x))y, $ where $ D(A)=\{y \in H^{2}(\Omega): y=0 \text{ on } \Gamma_{1}, \ \frac{\partial y}{\partial \mathbf{n}}=0 \text{ on } \Gamma_{2}\} $. Since the resolvent of $ A $ is compact, $ A $ has a countable set of eigenvalues. We assume that the operator $ A $ has at least one negative eigenvalue. Given $ \rho>0 $, let $ \{ \lambda_{j} \}_{j=1}^{\infty} $, with
be the family of all eigenvalues of $ A $, and let $ \{ \phi_{j} \}_{j=1}^{\infty} $ be the family of the corresponding eigenfunctions, which form an orthonormal basis of $ L^{2}(\Omega) $. For each $ M \in \mathbb{N}^{+} $, let $ X_{M}=\mathrm{span}\{\phi_{j}\}_{j=1}^{M} $, and let $ P_{M} $ be the orthogonal projection from $ L^{2}(\Omega) $ onto $ X_{M} $. We denote by $ Q_{N}:L^{2}(\Omega)\to \mathbb{R}^{N} $ the operator $ Q_{N}(y)=(\langle y, \phi_{1} \rangle, \cdots, \langle y, \phi_{N} \rangle )^{'} $. Here $ B' $ stands for the transpose of a matrix $ B $. In the rest of the paper, we shall denote by $ \Vert\cdot\Vert, \;\Vert\cdot\Vert_{s}, \;|\cdot|_{0} $ and $ |\cdot|_{N} $ the norms of $ L^{2}(\Omega), \;H^{s}(\Omega), \;L^{2}(\Gamma_{1}) $ and $ \mathbb{R}^{N} $, respectively. The inner products in $ L^{2}(\Omega), \;L^{2}(\Gamma_{1}) $ and the Euclidean space $ \mathbb{R}^{N} $ will be denoted by $ \langle \cdot, \cdot\rangle, \ \langle \cdot, \cdot\rangle_{0} $ and $ \langle \cdot, \cdot\rangle_{N} $.
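For numerical purposes, $ Q_{N}(y) $ is just the vector of the first $ N $ Fourier coefficients of $ y $, computable by quadrature once the eigenfunctions are available on a grid. A minimal sketch on a one-dimensional stand-in domain (the sine basis below is only a placeholder for the actual eigenfunctions of $ A $):

```python
import numpy as np

def Q_N(y_vals, phi_vals, weights):
    """Q_N(y) = (<y, phi_1>, ..., <y, phi_N>)', approximated by quadrature.

    y_vals   : values of y at the grid points
    phi_vals : shape (N, n_points), eigenfunction values at the grid points
    weights  : quadrature weights approximating the L^2(Omega) inner product
    """
    return phi_vals @ (weights * y_vals)

# Stand-in example on (0, pi) with an orthonormal sine basis
x = np.linspace(0.0, np.pi, 401)
w = np.gradient(x)                                   # trapezoid-like weights
phis = np.array([np.sqrt(2/np.pi)*np.sin((j+1)*x) for j in range(3)])
y = phis[0] + 0.5*phis[2]                            # y = phi_1 + 0.5*phi_3
print(Q_N(y, phis, w))                               # ~ (1.0, 0.0, 0.5)
```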
The notion of the solution to the equation (2.2) with sampled-data Dirichlet boundary condition was given in [6]. It is known that, for each $ v=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)v_{i} $ with $ v_i\in L^2(\Gamma_1) $, there exists a unique solution $ y\in C([0, +\infty); L^2(\Omega)) $ (see Theorem 2.1 in [6]).
To obtain the stabilization result, we make the following assumption on $ \{\frac{\partial \phi_{j}}{\partial \mathbf{n}}\}_{j=1}^{N} $, where $ \frac{\partial \phi_{j}}{\partial \mathbf{n}} $ is the normal derivative of $ \phi_{j} $ on $ \partial \Omega $.
($ H_{n} $) The system $ \{ (\frac{\partial \phi_{j}}{\partial \mathbf{n}}), \; 1 \leq j \leq N \} $ is linearly independent on $ \Gamma_{1} $.
It should be mentioned that ($ H_{n} $) is a standard hypothesis in the boundary stabilization theory of parabolic-like equations (see, e.g., [8, 9, 11]). We note that ($ H_{n} $) holds with $ N=1 $ when $ d=1 $; for $ d>1 $, ($ H_{n} $) usually holds as well.
In practice, the sampling intervals should not tend to zero. Hence, we make the following assumption throughout this work:
$ (H_{s})\;\;T_{i}=t_{i+1}-t_{i} \geq \underline{T}, \;i=0, 1, 2, \cdots, $ where $ \underline{T} > 0 $ is given.
We first introduce the proportional feedbacks we shall consider. Let
where $ B_{0}=(\langle \frac{\partial \phi_{i}}{\partial \mathbf{n}}, \frac{\partial \phi_{j}}{\partial \mathbf{n}} \rangle_{0} )_{1 \leq i, j \leq N} $, $ \frac{\partial \Phi^{N}}{\partial \mathbf{n}}=(\frac{\partial \phi_{1}}{\partial \mathbf{n}}, \frac{\partial \phi_{2}}{\partial \mathbf{n}}, \cdots, \frac{\partial \phi_{N}}{\partial \mathbf{n}})^{'} \in (L^{2}(\partial\Omega))^{N} $, $ S=\{ \Lambda \in \mathbb{R}^{N\times N}:\;\Lambda={\rm diag}(\gamma_{j})_{1\leq j\leq N}, \gamma_{j}\in \mathbb{R} \} $, and $ \mathbb{1}_{\Gamma_{1}}:L^{2}(\partial\Omega) \to L^{2}(\Gamma_{1}) $ is the restriction operator. By assumption ($ H_{n} $), the matrix $ B_{0} $ is invertible. From (2.4), we see that the feedback $ F \in \mathcal{F} $ depends on the diagonal matrix $ \Lambda $, so designing $ F $ amounts to designing $ \Lambda:={\rm diag}(\gamma_{j})_{1 \leq j \leq N} $, where $ \gamma_{j}, 1 \leq j \leq N $, are $ N $ real numbers to be designed.
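Since the feedback law (2.4) involves $ B_{0}^{-1} $, it is worth checking numerically that the Gram matrix $ B_{0} $ is invertible, which is exactly what ($ H_{n} $) guarantees. A minimal sketch, assuming the traces $ \frac{\partial \phi_{j}}{\partial \mathbf{n}}|_{\Gamma_{1}} $ are available at boundary quadrature points (the traces below are hypothetical placeholders):

```python
import numpy as np

def gram_B0(dphi_dn, bweights):
    """B_0 = (<dphi_i/dn, dphi_j/dn>_0)_{1<=i,j<=N}, by boundary quadrature.

    dphi_dn  : shape (N, n_bpoints), normal-derivative traces on Gamma_1
    bweights : boundary quadrature weights for the L^2(Gamma_1) inner product
    """
    return dphi_dn @ np.diag(bweights) @ dphi_dn.T

# Hypothetical traces of N = 2 eigenfunctions at 100 boundary points
rng = np.random.default_rng(0)
traces = rng.standard_normal((2, 100))
B0 = gram_B0(traces, np.full(100, 0.01))
print(np.linalg.cond(B0))     # moderate condition number indicates B0 is invertible, i.e. (H_n)
Lam = np.diag([0.8, 1.1])     # a candidate Lambda = diag(gamma_j)
# B0^{-1} and Lambda both enter the feedback; the precise combination is specified in (2.4)
gain = np.linalg.solve(B0, Lam)
```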
The equation (2.2) with sampled-data feedback control
where $ F \in\mathcal{F} $, can be written as
Let $ \rho $ be given by (2.3) and $ \underline{T} $ be given in ($ H_s $). We give the notion of the admissible sampling interval of a feedback.
Definition 2.1 (ⅰ) We call $ F \in \mathcal{F} $ a feasible feedback if $ F \in \mathcal{F}_{\rho} $, where $ \mathcal{F}_{\rho} $ is given by
(ⅱ) For each $ F \in \mathcal{F}_{\rho} $, we define
and call $ \overline{T}_{F} $ the admissible sampling interval of $ F $.
Remark 2.2 For any given $ \overline{T} < \overline{T}_{F} $, it follows from the above definition that, for all $ \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \overline{T}] $, the system (2.6) can be exponentially stabilized by the boundary sampled-data feedback $ F $.
Then the problem we shall study in this section can be formulated as follows.
Problem ($ P_{\underline{T}, \rho} $) Find the optimal feedback $ F^{*}\in \mathcal{F}_{\rho} $, such that the feedback $ F^{*} $ maximizes the admissible sampling interval, that is
The following theorem contains the main results of this section, which give the optimal value and the optimal solution of Problem ($ P_{\underline{T}, \rho} $).
Theorem 2.3 Assume $ \tilde{y}_{0} \in L^{2}(\Omega) $. Under the assumptions $ (H_{n}) $ and $ (H_{s}) $, the following results hold:
(ⅰ) The maximal value $ \overline{T}^{*} $ satisfies $ \overline{T}^{*} < +\infty $, and it is given by
where, for each $ j=1, 2, \cdots, N $, $ T^{(j)} $ is uniquely determined by the following equation
Here, $ \alpha_{j}(t) $ and $ \beta_{j}(t) $ are real functions defined by
(ⅱ) The optimal solution of $ (P_{\underline{T}, \rho}) $ is given by
where
Moreover, $ \forall\ \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \overline{T}^{*}] $, the system (2.6) can be exponentially stabilized with decay rate $ \rho $ by the boundary sampled-data feedback $ F^{*} $.
Remark 2.4 (i) From the above results in Theorem 2.3, we know that $ \gamma_{j_{0}}^{*} = \alpha_{j_{0}}(\underline{T})=\beta_{j_{0}}(\overline{T}^{*}) $. For $ j \in \{ 1, 2, \cdots, N \} \setminus \{j_{0}\} $, we have $ \gamma_{j}^{*} \in [\alpha_{j}(\underline{T}), \ \beta_{j}(\overline{T}^{*})] $. It might happen that $ \beta_{j}(\overline{T}^{*})>\alpha_{j}(\underline{T}) $, so the optimal solution of $ (P_{\underline{T}, \rho}) $ is not necessarily unique.
(ⅱ) We can see from (2.10) and (2.11) that $ \overline{T}^{*} $ depends on $ \underline{T} $ and $ \rho $.
● When the decay rate $ \rho $ is fixed, the larger $ \underline{T} $ is, the larger $ \overline{T}^{*} $ is, but the smaller the difference between $ \overline{T}^{*} $ and $ \underline{T} $. Moreover, $ \overline{T}^{*}-\underline{T} \to 0 $ as $ \underline{T} \to \infty $.
● When the lower bound $ \underline{T} $ of the sampling intervals is fixed, the smaller the decay rate $ \rho $ is, the larger $ \overline{T}^{*} $ is, and $ \overline{T}^{*} \to \infty $ as $ \rho \to 0 $.
(ⅲ) The key point in solving Problem $ (P_{\underline{T}, \rho}) $ can be explained roughly as follows. We first reduce the stabilization of system (2.6) to solving the following inequalities with respect to $ \gamma_{j} $, $ j=1, 2, \cdots, N $,
For each $ j $, there is a largest $ T^{(j)} $, which is exactly the one given by (2.11), such that for any $ T_{i} \in [\underline{T}, T^{(j)}] $, the above inequality has a solution. Then $ \overline{T}^{*} $ is the minimum of $ \{ T^{(j)} \}_{j=1}^{N} $; a numerical sketch is given below.
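Since $ \alpha_{j} $ and $ \beta_{j} $ are decreasing in $ t $ (see Step 1 of the proof), and since, as the identity $ \alpha_{j_{0}}(\underline{T})=\beta_{j_{0}}(T^{(j_{0})}) $ used later in the proof suggests, each $ T^{(j)} $ solves $ \beta_{j}(T)=\alpha_{j}(\underline{T}) $, every $ T^{(j)} $ can be computed by bisection. A sketch under these assumptions, with $ \alpha_{j}, \beta_{j} $ supplied as callables implementing (2.12):

```python
def T_j(alpha_j, beta_j, T_lower, T_hi=100.0, tol=1e-10):
    """Solve beta_j(T) = alpha_j(T_lower) by bisection on [T_lower, T_hi].

    Relies on beta_j being decreasing, so beta_j(T) - alpha_j(T_lower)
    changes sign exactly once (the root is assumed bracketed by T_hi).
    """
    target = alpha_j(T_lower)
    lo, hi = T_lower, T_hi
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if beta_j(mid) >= target:   # still above the target: root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

def T_star(alphas, betas, T_lower):
    """Optimal admissible sampling interval: the minimum of T^(j) over j, cf. (2.10)."""
    return min(T_j(a, b, T_lower) for a, b in zip(alphas, betas))
```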
(ⅳ) The above design applies as well to equation (2.6) with a homogeneous Dirichlet condition on $ \Gamma_2 $ and Dirichlet actuation on $ \Gamma_1 $, that is, $ y=u $ on $ \Gamma_1 $ and $ y=0 $ on $ \Gamma_2 $, or to Neumann boundary control; we omit the details.
To prove Theorem 2.3, we shall use the Dirichlet map to lift the boundary condition and transform the nonhomogeneous problem into a homogeneous one. This preliminary result is presented in Lemma 2.5; it will be used not only in the proof of Theorem 2.3, but also in the proof of stabilization for the nonlinear equation (1.1).
To state Lemma 2.5, we need to introduce the so-called Dirichlet map. It is well-known that for sufficiently large $ k>0 $, the solution to the equation
exists for any given $ w \in L^{2}(\Gamma_{1}) $. We shall denote by $ D $ the map $ w \mapsto \psi $, i.e., $ \psi=Dw $. It is known that $ \psi \in H^{\frac{1}{2}+s}(\Omega) $ and $ \|\psi\|_{\frac{1}{2}+s} \leq C\|w\|_{s}, \;\forall s \geq 0 $ (see [12]).
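For intuition, here is a one-dimensional finite-difference sketch of the map $ w \mapsto \psi=Dw $, assuming the lifting problem has the form $ k\psi-\psi''-c(x)\psi=0 $ on $ (0, L) $ with $ \psi=w $ at the actuated end and an insulated (Neumann) condition at the other end; this form is an assumption modeled on the operator $ A $, not a quotation of the displayed equation above:

```python
import numpy as np

def dirichlet_map_1d(w, k, c, n=200, L=1.0):
    """1-d sketch of the Dirichlet map D: w -> psi.

    Solves  k*psi - psi'' - c(x)*psi = 0  on (0, L),
    with psi(0) = w (actuated boundary) and psi'(L) = 0 (insulated end),
    by second-order central finite differences.
    """
    h = L / n
    x = np.linspace(0.0, L, n + 1)
    A = np.zeros((n + 1, n + 1)); b = np.zeros(n + 1)
    A[0, 0] = 1.0; b[0] = w                    # psi(0) = w
    for i in range(1, n):
        A[i, i - 1] = A[i, i + 1] = -1.0/h**2
        A[i, i] = 2.0/h**2 + k - c(x[i])       # interior equation
    A[n, n - 1], A[n, n] = -1.0, 1.0           # psi'(L) = 0 (one-sided)
    return x, np.linalg.solve(A, b)

# k large enough makes the discrete system diagonally dominant, hence solvable
x, psi = dirichlet_map_1d(w=1.0, k=25.0, c=lambda s: 0.0*s)
```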
Lemma 2.5 For any given $ F \in \mathcal{F}_{\rho} $, the solution to equation (2.6) is exponentially stable with decay rate $ \rho $, if and only if the solution to equation
is exponentially stable with decay rate $ \rho $, where $ h $ is given by
and
Proof We will divide the proof into two steps.
Step 1 We establish the relation between $ y $ and $ z $ by lifting the boundary condition in equation (2.6).
Setting $ \tilde{z}(t, x)=y(t, x)-h(t, x) $, by (2.6), (2.15) and (2.17), it is not difficult to verify that $ \tilde{z} $ satisfies exactly equation (2.16) (the second identity of (2.16) holds because $ y $ is continuous). Hence, $ \tilde{z}(t, x)=z(t, x) $ and
Moreover, by the definition of $ h $ and the continuity of $ y(t) $, we know that $ h(t_{i})=DFy(t_{i}^{-})=DF[z(t_{i}^{-})+h(t_{i-1})] $.
Step 2 We prove the equivalence of the two stability properties. Suppose that $ \exists C_{1}>0 $, s.t.
First, we rewrite the control $ v(t)=\sum_{i=0}^{\infty}\chi_{[t_{i}, t_{i+1})}(t)F(y(t_{i})) $ in feedback form with respect to $ z $. Denote by $ \mathbf{y}^{N} $, $ \mathbf{z}^{N} $ and $ \mathbf{h}^{N} $ the vectors $ Q_{N}y $, $ Q_{N}z $ and $ Q_{N}h $, respectively. Using equation (2.15) and the definition of $ \phi_{j} $, a simple calculation gives, for $ 1 \leq j \leq N $,
By the definition of $ v(t) $, it follows from the above identity that
By (2.22) and the relation between $ y $ and $ z $, one can obtain that
Moreover, by the definition of $ v $, we have
where $ \widetilde{F}:L^{2}(\Omega) \to L^{2}(\Gamma_{1}) $ is the operator given by
By (2.17), (2.20) and (2.25), it follows that $ \|h(t)\| \leq C_{2}e^{-\rho t}\|z(0)\|. $ From the relation of $ y $, $ h $, and $ z $, we can get
Using (2.17), we have $ \|h(0)\| \leq C_{4}\|y(0)\| $. In summary, one can obtain that, $ \exists C>0 $, s.t.
On the other hand, suppose that $ \exists C_{5}>0, \rho>0 $, s.t. $ \|y(t)\|\leq C_{5}e^{-\rho t}\| y(0) \| $. By the same argument, one can obtain that $ \exists C>0 $, s.t. $ \|z(t)\|\leq Ce^{-\rho t}\| z(0) \| $.
This completes the proof of Lemma 2.5.
Now, we give the proof of Theorem 2.3.
Proof of Theorem 2.3 We shall prove Theorem 2.3 in three steps: first, we show that $ \overline{T}_{F^{*}} \geq T^{(j_{0})} $ by proving that the feedback $ F^{*} $ stabilizes equation (2.6) for all sampling intervals less than or equal to $ T^{(j_{0})} $; second, we prove that $ \overline{T}^{*} = T^{(j_{0})} $ by contradiction; finally, from the above two steps and the fact that $ \overline{T}^{*} \geq \overline{T}_{F^{*}} $, we conclude that $ \overline{T}^{*} = \overline{T}_{F^{*}}=T^{(j_{0})} $. Hence, the optimality of $ F^{*} $ follows.
Step 1 We prove $ \overline{T}_{F^{*}} \geq T^{(j_{0})} $. It suffices to prove the stability of equation (2.6) under the feedback $ F^{*} $ for all sampling intervals less than or equal to $ T^{(j_{0})} $. To this end, we first show the stability of equation (2.16), where $ F=F^{*} $, $ T_{i} \in [\underline{T}, T^{(j_{0})}], \ i=0, 1, 2, \cdots $.
Multiplying equation (2.16) by $ \phi_{1}, \;\phi_{2}, \cdots, \phi_{N} $ in $ L^2(\Omega) $, and using identities (2.23) and (2.24), one obtains the equation satisfied by $ \mathbf{z}^{N} $:
where $ A_{N}=\mathrm{diag}\{\lambda_{j}\}_{1 \leq j \leq N} $. Then, for each $ i \in \mathbb{N} $, applying the first equation in (2.27) on the interval $ [t_{i}, t_{i+1}) $, a direct calculation shows that
It follows from the above equation and the second equation in (2.27) that
Since the matrix $ G(T_{i}) $ is a diagonal matrix, its eigenvalues are
It is not difficult to prove that $ \alpha_{j}(t) $ and $ \beta_{j}(t) $, defined in (2.12), are decreasing with respect to $ t $. Therefore, $ \forall T_{i} \in [\underline{T}, T^{(j_{0})}] $, $ j=1, 2, \cdots, N $, we have
By (2.14) and (2.31), for each $ j=1, 2, \cdots, N $, we have
It follows from the above inequalities and (2.32) that
The above inequalities, together with (2.12), imply that
Therefore, we have $ |\lambda_{\mathrm{max}}(G(T_{i}))|\leq e^{-\rho T_{i}} $. It follows that
Hence, for $ i=1, 2, \cdots $, it follows that
Now, for any $ t>0 $, there exists $ i \in \mathbb{N} $ such that $ t \in [t_{i}, t_{i+1}) $, and by (2.27), we know that
Hence, the first $ N $ modes of $ z $ are stable. It follows immediately from (2.17) and (2.25) that
Now we consider the high-frequency part $ \mathbf{z}^{S}(t)=(I-P_{N})z $. Denote $ A_{S}=(I-P_{N})A $. Given $ t>0 $, there exists $ i \in \mathbb{N} $ such that $ t \in [t_{i}, t_{i+1}) $; then, from equation (2.27) and the fact that $ (I-P_{N})R=0 $, we see that
Proceeding step by step as above, it follows that
Using (2.36) and noticing that $ \rho < \lambda_{N+1} $, we obtain
Now we can see from (2.35) and (2.39) that
Finally, by Lemma 2.5, we obtain that
This implies that $ \overline{T}_{F^{*}} \geq T^{(j_{0})} $.
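The decay mechanism behind Step 1 is easy to reproduce numerically: iterate the diagonal map $ \mathbf{z}^{N}(t_{i+1})=G(T_{i})\mathbf{z}^{N}(t_{i}) $ and compare with $ e^{-\rho t_{i}} $. A sketch with a hypothetical diagonal $ G(T) $ satisfying the bound $ |\lambda_{\mathrm{max}}(G(T))|\leq e^{-\rho T} $ established above:

```python
import numpy as np

def check_decay(G_diag, rho, T_seq, z0):
    """Iterate z^N(t_{i+1}) = G(T_i) z^N(t_i) and check |z^N(t_i)| <= e^{-rho t_i} |z^N(0)|.

    G_diag : callable T -> diagonal entries (sigma_1(T), ..., sigma_N(T)) of G(T)
    T_seq  : sampling intervals T_0, T_1, ...
    """
    z, t, ok = np.array(z0, dtype=float), 0.0, True
    for T in T_seq:
        sig = np.asarray(G_diag(T))
        assert np.max(np.abs(sig)) <= np.exp(-rho*T) + 1e-12   # |lambda_max(G(T))| <= e^{-rho T}
        z, t = sig*z, t + T
        ok &= np.linalg.norm(z) <= np.exp(-rho*t)*np.linalg.norm(z0) + 1e-12
    return ok

# Hypothetical sigma_j(T) satisfying the bound with margin
rho = 1.0
print(check_decay(lambda T: 0.9*np.exp(-rho*T)*np.ones(3), rho,
                  [0.5, 0.7, 0.6], [1.0, -2.0, 0.5]))          # True
```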
Step 2 We prove $ \overline{T}^{*} = T^{(j_{0})} $ by contradiction. Obviously, $ \overline{T}^{*} \geq \overline{T}_{F^{*}} \geq T^{(j_{0})} $. Suppose $ \overline{T}^{*} \neq T^{(j_{0})} $; then $ \overline{T}^{*} > T^{(j_{0})} $. Moreover, there exists
where $ \Lambda^{'}={\rm diag}(\gamma_{j}^{'})_{1\leq j\leq N} $, such that $ \overline{T}_{F^{'}} > T^{(j_{0})} $. We choose $ \widetilde{T} \in (T^{(j_{0})}, \overline{T}_{F^{'}}) $. Then, $ \forall \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \widetilde{T}] $, equation (2.6) with $ F=F^{'} $ is exponentially stable with decay rate $ \rho $. Take $ \tilde{y}_{0} $ satisfying $ z(0)= \tilde{y}_{0}-h(0)=\tilde{y}_{0}-DF\tilde{y}_{0}=\phi_{j_{0}} $ (such a $ \tilde{y}_{0} $ exists because $ I-DF $ is invertible; see the proof of Theorem 3.1 in [13]). Then, by Lemma 2.5, for any $ T_{i} \in [\underline{T}, \widetilde{T}] $, the solution to equation (2.16) with $ F=F^{'} $, $ z(0)=\phi_{j_{0}} $ satisfies
where $ z_{j}(t)=\langle z(t), \ \phi_{j} \rangle $, for $ j=1, 2, \cdots $.
On the other hand, we claim that, at least one of the following two inequalities holds
Indeed, if neither (2.43) nor (2.44) holds, then solving the opposite inequalities of (2.43) and (2.44) yields
By (2.45) and (2.46), we see that $ \alpha_{j_{0}}(\underline{T}) \leq \beta_{j_{0}}(\widetilde{T}) $. By the definition of $ T^{(j_0)} $ (see (2.10) and (2.11)) and the monotone decrease of $ \alpha_{j_{0}}(t) $ and $ \beta_{j_{0}}(t) $ with respect to $ t $, we know that $ \alpha_{j_{0}}(\underline{T})=\beta_{j_{0}}(T^{(j_{0})})>\beta_{j_{0}}(\widetilde{T}) $. This leads to a contradiction, and our claim is proved.
Without loss of generality, we suppose that (2.44) holds. Take $ T_{i}=\widetilde{T}, i=0, 1, 2, \cdots $. By (2.44), we have $ |\sigma_{j_{0}}^{'}|>e^{-\rho \widetilde{T}}, $ where $ \sigma_{j_{0}}^{'} $ is the real number given by (2.31) with $ \gamma_{j_{0}}^{*} $ replaced by $ \gamma_{j_{0}}^{'} $. Now, we can find $ 0<\widetilde{\rho}<\rho $ such that
For any $ i=1, 2, \cdots $, by (2.29), we have
It follows by (2.42) and (2.47) that, for any $ i \in \mathbb{N} $,
Multiplying both sides of the above equation by $ e^{\rho i \widetilde{T}} $, we have that
When $ i $ is large enough, this leads to a contradiction. Hence, we obtain that $ \overline{T}^{*} = T^{(j_{0})} $.
Step 3 We prove the optimality of $ F^{*} $. Obviously, $ \overline{T}^{*} \geq \overline{T}_{F^{*}} $. By Steps 1 and 2, we see that $ \overline{T}^{*} = \overline{T}_{F^{*}}=T^{(j_{0})} $. This implies the optimality of $ F^{*} $. Moreover, for all $ \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \overline{T}^{*}] $, the system (2.6) can be exponentially stabilized with decay rate $ \rho $ by the boundary sampled-data feedback $ F^{*} $.
This completes the proof of Theorem 2.3.
Let $ \varepsilon \in (0, \frac{1}{2}) $ be an arbitrarily given number. Assume that
$ (H_{f}) $ $ |f(x, y+y_{e})-f(x, y_{e})-f_{y}(x, y_{e})y| \leq C\sum_{i=1}^{l}|y|^{p_{i}} $, where $ l $ is a positive integer, $ p_{1} \leq p_{2} \leq \cdots \leq p_{l} $, and $ p_{i}, \;i=1, 2, \cdots, l $, satisfy that
For the stabilization of the semilinear parabolic equation (1.1), we have the following result.
Theorem 3.1 Let $ 0 < \mu < \rho $ be given, where $ \rho $ is given by (2.3). Under assumptions ($ H_{n} $), ($ H_{s} $) and ($ H_{f} $), when $ T_{i} \in [\underline{T}, \overline{T}^{*}] $, $ y_{0} \in H^{\frac{1}{2}-\varepsilon}(\Omega) $ and $ \|y_{0}-y_{e}\|_{\frac{1}{2}-\varepsilon} $ is small enough, the feedback $ F^{*} $ given by (2.13) locally stabilizes equation (1.1). More precisely, there exist constants $ C>0 $ and $ \delta>0 $ such that for all $ \{t_{i}\}_{i=0}^{\infty} $ satisfying $ T_{i}=t_{i+1}-t_{i} \in [\underline{T}, \overline{T}^{*}] $, and all $ y_{0} \in H^{\frac{1}{2}-\varepsilon}(\Omega) $ satisfying $ \|y_{0}-y_{e}\|_{\frac{1}{2}-\varepsilon} \leq \delta $, the solution to the equation
satisfies
The proof of this theorem is similar to that of Theorem 3.1 in [6], and is therefore omitted.