In this paper we discuss the existence and nonexistence of solutions to the following MEMS equation
and the following MEMS system
where $ \Omega $ is a bounded domain in $ \mathbb{R}^N $ with $ N\geq 3 $, $ p>1 $ and $ \lambda, \mu , \delta, P $ are all positive parameters. Problems related to (1.1) and (1.2) have been studied in various areas such as mathematics and physics.
In fact, problems like equation (1.1) originally arose from the so-called MEMS (Micro-Electro-Mechanical Systems) model. This model describes the motion of an elastic membrane suspended above a fixed ground plate and has been studied extensively for decades. Here we recall some of these results; more details can be found in [1-8] and the references therein.
A MEMS device consists of an elastic membrane suspended above a fixed ground plate. When a voltage $ \lambda $ is applied, the elastic membrane deflects toward the ground plate. Once the voltage exceeds a critical value $ \lambda^{*} $, the two plates touch and no longer separate; this state is called unstable, and the MEMS device no longer works properly. However, for convenience, most researchers regard the parallel plates as infinite in length and ignore the electrostatic field at the edges of the plates. After this approximation, the authors of [6, 7, 8] introduced the following model
where $ \Omega \subset \mathbb{R}^N $ is a bounded domain and $ f(x)\in C(\overline{\Omega}) $ is a non-negative function. In [6], the authors applied analytical and numerical techniques to establish upper and lower bounds for the critical value $ \lambda^{*} $ of (1.3). They also obtained properties of the stable and semistable solutions such as regularity, uniqueness and multiplicity. In [7], the authors proved the existence of a second solution by a variational approach and obtained compactness along branches of unstable solutions. In [8], the authors applied an extended Pohozaev identity and showed that, when the voltage $ \lambda $ is small and the domain $ \Omega $ is bounded and star-shaped, the stable solution is the unique solution.
In [9], Cassani, Marcos and Ghoussoub investigated the existence of solutions to the following problem of biharmonic type:
where $ B_R $ is a ball in $ \mathbb{R}^N $ centered at the origin with radius $ R $, $ 0\leq f(x)\leq 1 $ and $ \eta $ denotes the unit outward normal to $ \partial B_R $. They proved that there exists a $ \lambda_* = \lambda_*(R, f)>0 $ such that for $ 0<\lambda<\lambda_* $, problem (1.4) possesses a minimal positive stable solution $ u_\lambda $.
Since the approximation in (1.3) introduces errors in some cases, the model (1.3) needs to be corrected in several ways. In [10, 11], the authors took into account the effect of the edges of the plates and added a corner-corrected term to (1.3). For instance, the authors in [11] studied the following equation
where $ \delta>0 $, $ p>1 $ and $ \lambda>0 $. They obtained existence and nonexistence results depending on $ \delta $ and on a critical value $ \lambda^{*}_\delta>0 $.
If instead the model (1.3) is corrected by an external force or pressure, it reduces to
where $ P>0 $ is a parameter. In [12], Guo, Zhang and Zhou obtained existence and nonexistence results for (1.6) depending on $ \lambda $ and $ P $.
Inspired by the research in [11-13], we study problem (1.1) and obtain our first result.
Theorem 1.1 For any $ \delta>0 $, we have
(i) There exists a $ P^{*}>0 $ such that for any $ P\geq P^{*} $, (1.1) has no solution in $ H_0^1(\Omega) $.
(ii) For any $ 0<P<P^{*} $, there exists a critical constant $ \lambda^{*}_P>0 $ such that for $ 0<\lambda < \lambda^{*}_P $, (1.1) has at least two positive solutions in $ H_0^1(\Omega) $. Moreover, if $ \lambda = \lambda^{*}_P $ then (1.1) has only one positive solution in $ H_0^1(\Omega) $ and has no solution for $ \lambda > \lambda^{*}_P $.
Since our equation (1.1) contains both the corner-corrected term and the external pressure term, we will use the upper and lower solution method to find the first solution. After showing that the first solution is in fact a local minimum of the energy functional corresponding to (1.1), we look for the second solution with the help of the Mountain Pass Lemma. However, the nonlinearity does not satisfy the Ambrosetti-Rabinowitz condition [14] (i.e., the existence of an $ a>0 $ such that $ G(x, z)+H(x, z)\leq \theta (g(x, z)z+h(x, z)z) $ for $ |z|\geq a $, where $ \theta\in[0, \frac{1}{2}) $, $ G(x, z) = \int_0^z g(x, t)\, dt $ and $ H(x, z) = \int_0^z h(x, t)\, dt $). Therefore we will use the monotonicity trick ([15]) to find a bounded $ (PS)_c $ sequence, from which we obtain our result.
Recently, the following Lane-Emden system was considered by do Ó and Clemente ([16]):
They obtained a curve $ \Gamma $ that separates the positive quadrant of the $ (\lambda, \mu) $-plane into two connected components $ O_1 $ and $ O_2 $. For $ (\lambda, \mu)\in O_1 $, problem (1.7) has a positive classical minimal solution $ (u_\lambda, v_\lambda) $. If $ (\lambda, \mu)\in O_2 $, there is no solution.
Motivated by the result in [16], we consider the system (1.2) in the second part. With the help of the upper and lower solution method we obtain the following result.
Theorem 1.2 There exists a curve $ \Gamma $ which splits the first quadrant of the $ (\lambda, \mu) $-plane into two connected parts $ D_1 $ and $ D_2 $. When $ (\lambda, \mu) \in D_1 $, (1.2) has at least one solution, while (1.2) has no solution if $ (\lambda, \mu) \in D_2 $.
The rest of the paper is organized as follows. In Section 2, we introduce a function transformation for (1.1) and some auxiliary results. In Section 3, we give the proof of Theorem 1.1. In Section 4, we prove Theorem 1.2.
Let $ \Omega $ be a bounded domain in $ \mathbb{R}^N $ for $ N\geq 3 $. Throughout the paper we use standard Sobolev space $ H_0^1(\Omega) $ with the usual norm
and the usual Lebesgue space $ L^p(\Omega) $, whose norm is denoted by $ |u|_p $. Since it is hard to write down the energy functional of (1.1) in a concrete form, we introduce a function transformation to overcome this difficulty. Set
for $ u\in (0, 1) $. Then $ v\in (0, \infty) $ and $ \hat{f}^{'}(u) = e^{\frac{\lambda\delta} {(p-1)(1-u)^{p-1}}}>0. $
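For later use we also record how $ \hat{f} $ interacts with the Laplacian. For a smooth function $ u $ with values in $ (0, 1) $ and $ v = \hat{f}(u) $, the chain rule gives $ \nabla v = \hat{f}'(u)\nabla u $ and $ \Delta v = \hat{f}'(u)\Delta u+\hat{f}''(u)|\nabla u|^2 $, and since $ \hat{f}''(u) = \hat{f}'(u)\frac{\lambda\delta}{(1-u)^{p}} $, this can be rewritten as $ -\Delta v = \hat{f}'(u)\big(-\Delta u-\frac{\lambda\delta}{(1-u)^{p}}|\nabla u|^{2}\big). $ This identity is the computation behind the equivalence with (2.1) stated below.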
Since $ \hat{f}'(u)>0 $, the function $ \hat{f} $ is strictly increasing and therefore has an inverse function $ f $ such that $ u = f(v) $. Thus we have
Together with (1.1), we know that the existence of a solution to (1.1) is equivalent to the existence of a solution to the following equation
where $ g(v) = e^{\frac{\lambda\delta} {(p-1)(1-f(v))^{p-1}}} \frac{\lambda} {(1-f(v))^p} $ and $ h(v) = P e^{\frac{\lambda\delta} {(p-1)(1-f(v))^{p-1}}} $. Then it is easy to see that the energy functional associated to problem (2.1) can be written as
for $ v\in H_0^1(\Omega) $, where $ G(v), H(v) $ are defined by
and
We will give some properties satisfied by $ g(s) $, $ h(s) $ and $ G(s) $, $ H(s) $ defined in (2.1), (2.3) and (2.4). In the sequel, $ C, C', C'', C_i $ $ (i \in N^+) $ denote positive constants which may differ from line to line.
Proposition 2.1 (i) We have $ g(s) \in \, C^{2}(\mathbb{R}) $ with $ g(s)>0 $, $ h(s)>0 $, $ g{'}(s)>0 $, $ h{'}(s)>0 $ and $ g{''}(s)>0 $. Moreover, $ g{''}(s) $ is bounded in $ \mathbb{R}. $
(ii) There exist positive constants $ C_1 $, $ C_2 $ such that
(iii) There exist positive constants $ C_1 $, $ C_2 $ such that
(iv) There exist positive constants $ C_1 $, $ C_2 $ such that
(v) There exist positive constants $ C_1 $, $ C_2 $ such that
(vi) For any $ s \in \mathbb{R} $, we have $ |G(s)|\leq C_1 (1+|s|^{q+1}) $ and $ |H(s)|\leq C_2(1+|s|^{\frac{q+1}{2}}) $ for $ q>1 $.
Proof Here we only prove $ \rm(iv) $–$ \rm(vi) $, which concern $ h(s) $ and $ H(s) $; the detailed proofs of $ \rm(i) $–$ \rm(iii) $, which concern $ g(s) $ and $ G(s) $, can be found in [11]. From the proof of Lemma 2.2 in [11], we know
Therefore, we obtain
Integrating by parts, we obtain $ C_5s^2(\log s)^{\frac{p}{p-1}}\leq H(s)\leq C_6s^2(\log s)^{\frac{p}{p-1}} $ for $ s $ large. Since $ (\log s)^{\frac{p}{p-1} }\le s^{\frac{q-1}{2}} $ for $ s $ large enough and any $ q>1 $, we conclude that $ H(s)\leq C(1+|s|^{\frac{q+1}{2}}) $ for any $ q>1 $.
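For the reader's convenience, the integration by parts behind this two-sided bound can be sketched as follows. Since $ H $ is the primitive of $ h $ defined in (2.4) and, as recalled in the proof of Proposition 2.2 below, $ C_3t(\log t)^{\frac{p}{p-1}}\le h(t)\le C_4t(\log t)^{\frac{p}{p-1}} $ for $ t>e $, it suffices to observe that, with $ \alpha = \frac{p}{p-1} $, $ \int_e^s t(\log t)^{\alpha}\, dt = \frac{s^2}{2}(\log s)^{\alpha}-\frac{e^2}{2}-\frac{\alpha}{2}\int_e^s t(\log t)^{\alpha-1}\, dt $, where the last integral is of lower order than $ s^2(\log s)^{\alpha} $; this yields the stated behaviour of $ H(s) $ for $ s $ large.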
Proposition 2.2 For any $ m\in N $, there exists $ s_m>0 $ such that
where $ C_{i, m}>0 $ $ (i = 1, 2, 3, 4) $ are positive constants depending on $ m $.
Proof We know that when $ s>e $, $ C_1s(\log s)^\frac{2p}{p-1}\le g(s)\le C_2s(\log s)^\frac{2p}{p-1} $ and $ C_3s(\log s)^\frac{p}{p-1} $ $ \le h(s)\le C_4s(\log s)^\frac{p}{p-1} $. We can choose $ n\in N $ such that $ C_1(m+n-1)^\frac{2p}{p-1}-C_2m^\frac{2p}{p-1}+C_3(m+n-1)^\frac{p}{p-1}-C_4m^\frac{p}{p-1}>0 $. Let
Set $ Y(s): = C_{1, m}s(\log s)^\frac{2p}{p-1}+C_{2, m}s+C_{3, m}s(\log s)^\frac{p}{p-1}+C_{4, m}s $. Then we get
This implies $ Y(e^m)> g(e^m)+h(e^m) $, and we can see
Let $ \tilde{Y}(s) = g(s)+h(s)-Y(s) $, and then $ \tilde{Y}(e^m)<0<\tilde{Y}(e^{m+n-1}) $. According to the continuity of function $ \tilde{Y}(s) $, we can find $ e^m<s_m<e^{m+n-1} $ satisfying (2.5) and (2.6).
Since we will use the upper and lower solution method to obtain the first solution of (1.1), we give the definition of upper and lower solutions.
Definition 2.1 A function $ \overline{v}\in H_0^1(\Omega) $ is an upper solution to problem (1.1) if the following
holds. Accordingly, if the first inequality in (2.7) is reversed for some $ \underline{v} $, we call $ \underline{v} $ a lower solution of problem (1.1). Upper and lower solutions for (2.1) are defined in the same way.
In [17], T. Kusano established the existence of positive solutions in $ C^2(\mathbb{R}^N) $ to the following problem
via the upper and lower solution method.
Proposition 2.3 ([17]) If there exist an upper solution $ \bar{u} $ and a lower solution $ \underline{u} $ of (2.8) with $ \bar{u} \geq \underline{u} $, and $ f(x, u) $ is locally Lipschitz continuous, then (2.8) has a solution $ u $ with $ \underline{u} \leq u \leq \bar{u} $.
If we consider (2.8) on some bounded domain $ \Omega \subset \mathbb{R}^N $ with $ u = 0 $ on $ \partial\Omega $, then we quote the following result.
Proposition 2.4 ([18]) Assume that $ \overline{u} $ is an upper solution and $ \underline{u} $ is a lower solution of (2.8), and let $ I(u) $ be the energy functional of (2.8). Let $ U = \{u\in H_0^1(\Omega)\, |\, \underline{u}\leq u \leq \overline{u} $ a.e. on $ \Omega \} $. If $ a(u) \in L_{loc}^1(0, \infty) $ for any $ u \in U $, $ b(x) \in L^1(\Omega) $ and $ f(x, u)\leq a(u)b(x) $, then there exists a solution $ u $ of (2.8) in the set $ U $ which is a minimum point of $ I(u) $ in $ U $.
Proposition 2.5 ([19]) Let $ u\in H_0^1(\Omega) $ be a solution of the following equation
where $ L $ is a linear elliptic operator, $ t\geq 0 $ is a constant and $ J $ is the first eigenfunction of $ L $. The function $ f(x, u) $ is a continuous and nonnegative function defined on $ \overline{\Omega} \times [0, \infty) $. Moreover, if we assume
(i) $ \varliminf_{u \to \infty}\frac{f(x, u)}{u}>\lambda_1 $ for any $ x \in \overline{\Omega} $, where $ \lambda_1 $ is the first eigenvalue of $ L $;
(ii) $ \lim_{u \to \infty}\frac{f(x, u)}{u^\beta} = 0 $, for any $ x \in \overline{\Omega} $ and $ \beta = \frac{N+1}{N-1} $; then there is a constant $ K $ such that $ |u|_{L^\infty}\leq K $.
Lemma 2.1 For any solution $ v $ of (2.1), there exists a constant $ m_0 >0 $ such that $ |v|_{L^\infty}\leq m_0 $.
Proof We prove this by Proposition 2.5. Let $ L = -\Delta $, $ t = 0 $ and $ f(x, v) = g(v)+h(v) $. We only need to verify that the conditions in Proposition 2.5 are satisfied. In fact, we have
and then, applying L'Hôpital's rule $ n $ times, we get
Hence $ \lim_{v \to \infty}\frac{g(v)+h(v)}{v^\beta} = 0 $, and by Proposition 2.5, we know that there exists a constant $ m_0 $ such that $ |v|_{L^\infty}\leq m_0 $.
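Alternatively, the limit can be seen directly from the growth estimates recalled in the proof of Proposition 2.2 below: for $ v>e $ we have $ g(v)+h(v)\leq C v(\log v)^{\frac{2p}{p-1}} $, and since $ \beta = \frac{N+1}{N-1}>1 $ while every power of $ \log v $ grows more slowly than any positive power of $ v $, it follows that $ \frac{g(v)+h(v)}{v^{\beta}}\leq C\frac{(\log v)^{\frac{2p}{p-1}}}{v^{\beta-1}}\rightarrow 0 $ as $ v\rightarrow\infty $.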
We will use the variational method to find the second solution of (1.1), and therefore we recall some basic results about this method.
Definition 2.2 ([20]) Given a real Banach space $ X $, we say that a functional $ I: X\rightarrow \mathbb{R} $ of class $ C^1 $ satisfies the mountain pass geometry if there exist $ u_0, u_1 \in X $ and $ 0<r<\|u_1 -u_0\| $ such that
We define the Palais-Smale sequence at level $ c $ ($ (PS)_c $ sequence for short) and $ (PS)_c $ conditions in $ X $ for $ I $ as follows.
Definition 2.3
(i) For $ c \in \mathbb{R} $, a sequence $ \{u_n\} $ is a $ (PS)_c $ sequence in $ X $ for $ I $ if $ I(u_n)\rightarrow c $, $ I'(u_n)\rightarrow 0 $ as $ n\rightarrow \infty $.
(ii) $ I $ satisfies the $ (PS)_c $ condition in $ X $ if every $ (PS)_c $ sequence in $ X $ for $ I $ contains a convergent subsequence.
Now we quote the Mountain Pass Lemma.
Proposition 2.6 ([20]) If the functional $ I $ satisfies the mountain pass geometry, then it has a $ (PS)_c $ sequence $ \{u_n\} $ in $ X $, where $ c = \underset{\gamma\in\Gamma}{\inf}\ \underset{t\in[0, 1]}{\max}I(\gamma(t)) $ and $ \Gamma = \{\gamma \in C([0, 1], X);\gamma(0) = u_0, \gamma(1) = u_1\} $. Moreover, if $ I $ satisfies the $ (PS)_c $ condition in $ X $, then it admits a critical point $ u $ such that $ I(u) = c, \, \, I'(u) = 0. $
In order to find a bounded $ (PS)_c $ sequence for the functional $ I $ in $ H_0^1(\Omega) $, we recall the following monotonicity trick.
Proposition 2.7 ([21]) Let $ X $ be a Banach space equipped with a norm $ \|\cdot\|_X $ and $ J\subset \mathbb{R}^+ $ be an interval. We consider a family $ \{I_\mu\}_{\mu\in J} $ of $ C^1 $-functionals on $ X $ of the form
such that $ A(u)\rightarrow \infty $ as $ \|u\|_X\rightarrow \infty $. We assume there are two points $ v_1, v_2 $ in $ X $ such that
where $ \Gamma = \{\gamma \in C([0, 1], X)\, \, |\, \, \gamma(0) = v_1, \gamma(1) = v_2\}. $ Then, for almost every $ \mu \in J $, there is a sequence $ \{v_n\}\subset X $ such that $ \{v_n\} $ is bounded in $ X $, $ I_\mu(v_n)\rightarrow c_\mu $ and $ I'_\mu(v_n)\rightarrow 0 $. Moreover, the map $ \mu\rightarrow c_\mu $ is continuous from the left.
In the sequel we will take $ J = [\frac{1}{2}, 1] $ and prove that our functional $ I_\mu $ satisfies the conditions in Proposition 2.7, and then $ I_{\mu} $ has a bounded $ (PS)_{c_\mu} $ sequence $ \{u_{\mu, n}\} $ at level $ c_{\mu} $ as $ \mu \rightarrow 1 $.
In this section, we will give the proof of Theorem 1.1. First we will show the nonexistence result for (1.1).
Proof of Theorem 1.1 (i) We argue by contradiction. Let $ \varphi \in H_0^1(\Omega) $ be the positive solution of
By the Lax-Milgram theorem we know the existence and uniqueness of the solution to (3.1), and $ \varphi, \nabla\varphi \in L^{\infty}(\Omega) $ follow from elliptic regularity results. Then we define $ P^{*} = \frac{1}{|\varphi|_{\infty}} $. Suppose that (1.1) has a solution $ u $ when $ P\geq P^{*} $; then for any $ \lambda >0 $ the function $ u $ satisfies the following equation
By the maximum principle for elliptic equations we obtain $ P\varphi<u<1 $. However, the definition of $ P^{*} $ shows that $ P{|\varphi|_{\infty}} = \frac{P}{P^{*}}\geq 1 $, which contradicts $ P\varphi<u<1 $ in $ \Omega $. Therefore when $ P\geq P^{*} $, (1.1) has no solution.
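Let us sketch the comparison used above. Although we do not restate the displayed problems, the definitions of $ g $ and $ h $ in Section 2 suggest that (1.1) is of the form $ -\Delta u = \frac{\lambda(1+\delta|\nabla u|^2)}{(1-u)^p}+P $ in $ \Omega $ with $ u = 0 $ on $ \partial\Omega $, and that (3.1) reads $ -\Delta\varphi = 1 $ in $ \Omega $ with $ \varphi = 0 $ on $ \partial\Omega $. Under this reading, $ -\Delta(u-P\varphi) = \frac{\lambda(1+\delta|\nabla u|^2)}{(1-u)^p}>0 $ in $ \Omega $ and $ u-P\varphi = 0 $ on $ \partial\Omega $, so the maximum principle gives $ u>P\varphi $ in $ \Omega $; evaluating at a maximum point of $ \varphi $ and using $ u<1 $ then contradicts $ P|\varphi|_{\infty}\geq 1 $.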
In the following we show the relationship between the existence of solutions of (1.1) and the parameter $ \lambda $.
Lemma 3.1 For any $ \delta>0 $, if (1.1) has a solution for $ \lambda = \lambda_{1} $ with any fixed $ P>0 $, then (1.1) has a solution for $ 0<\lambda_2<\lambda_1 $.
Proof Suppose that (1.1) has a solution $ u_{\lambda_1} $ for $ \lambda = \lambda_1 $, and then for any $ 0<\lambda_2<\lambda_1 $ we have
This implies that $ u_{\lambda_1} $ is an upper solution and $ 0 $ is a lower solution of (1.1) with $ \lambda = \lambda_2 $. By Proposition 2.3, we know that (1.1) has a solution for any $ 0<\lambda_2<\lambda_1 $.
Now we discuss the relationship between $ \lambda $ and $ P $.
Lemma 3.2 There exists a $ \lambda_P>0 $ such that (1.1) has at least one solution when $ 0<\lambda<\lambda_P $ for any fixed $ 0<P<P^{*} $.
Proof For any fixed $ 0<P<P^{*} $, we choose $ \frac{P}{P^{*}}<s<1 $ and set $ \overline{u} = sP^{*}\varphi $. One can easily verify that $ 0<\overline{u}<1 $ and that $ \overline{u} $ satisfies
Let $ s = \frac{P^{*}+P}{2P^{*}} $ and $ \lambda_P = \frac{P^{*}}{1+\delta{P^{*}}^2|\nabla \varphi|_{\infty}^2}\big(1-\frac{P^{*}+P}{2P^{*}}\big)^p\big(\frac{P^{*}+P}{2P^{*}}-\frac{P}{P^{*}}\big) $. Then $ \overline{u} = sP^{*}\varphi $ is an upper solution and $ 0 $ is a lower solution of (1.1) with $ \lambda = \lambda_P $. By Proposition 2.3 we conclude that (1.1) has at least one solution in $ H_0^1(\Omega) $. By Lemma 3.1 we know that (1.1) has at least one solution for any $ 0<\lambda<\lambda_P $. We get the assertion.
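Let us indicate where the value $ \lambda_P $ comes from, under the same reading of (1.1) and (3.1) as in the remark following the proof of part $ \rm(i) $. Since $ 0<s<1 $ and $ P^{*}\varphi\leq 1 $, we have $ |\nabla\overline{u}|^{2}\leq {P^{*}}^{2}|\nabla\varphi|_{\infty}^{2} $ and $ 1-\overline{u}\geq 1-s $, so $ \frac{\lambda(1+\delta|\nabla\overline{u}|^{2})}{(1-\overline{u})^{p}}+P\leq \frac{\lambda(1+\delta{P^{*}}^{2}|\nabla\varphi|_{\infty}^{2})}{(1-s)^{p}}+P $, while $ -\Delta\overline{u} = sP^{*} $. Hence $ \overline{u} $ is an upper solution whenever $ \lambda\leq \frac{(1-s)^{p}(sP^{*}-P)}{1+\delta{P^{*}}^{2}|\nabla\varphi|_{\infty}^{2}} $, and the choice $ s = \frac{P^{*}+P}{2P^{*}} $ turns the right-hand side into exactly the value $ \lambda_P $ above.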
In order to prove Theorem 1.1 $ \rm{(ii)} $, we define $ \lambda^{*}_P $ as follows.
Now we show that $ \lambda^{*}_P $ is well-defined.
Lemma 3.3 $ \lambda^{*}_P $ defined in (3.2) is bounded.
Proof Suppose that $ 0<u<1 $ is a solution of (1.1) for some $ \lambda_P > 0 $ and any fixed $ 0<P< P^{*} $. Multiplying (1.1) by $ \varphi $ and integrating over $ \Omega $, we have
where $ |\Omega| $ denotes the measure of $ \Omega $. Therefore we get $ \lambda_P \le \frac{|\Omega|-P\int_\Omega \varphi\, dx}{\int_{\Omega}\varphi\, dx} $ and $ \frac{|\Omega|-P\int_\Omega \varphi\, dx}{\int_{\Omega}\varphi\, dx} $ is positive and finite. This implies $ \lambda^{*}_P $ is bounded.
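In more detail, under the same reading of (1.1) and (3.1) as above, multiplying (1.1) by $ \varphi $ and integrating by parts twice gives $ \int_\Omega u\, dx = \int_\Omega u(-\Delta\varphi)\, dx = \int_\Omega \nabla u\cdot\nabla\varphi\, dx = \int_\Omega\frac{\lambda_P(1+\delta|\nabla u|^2)}{(1-u)^p}\varphi\, dx+P\int_\Omega\varphi\, dx\geq (\lambda_P+P)\int_\Omega\varphi\, dx $, and since $ 0<u<1 $ the left-hand side is at most $ |\Omega| $, which is precisely the bound stated above.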
Lemma 3.4 For any $ \delta>0 $, $ 0<P<P^* $ and $ 0<\lambda<\lambda_P^* $, the equation (1.1) has at least one upper solution and then at least one solution in $ H_0^1(\Omega). $
Proof According to Lemma 3.2 and Lemma 3.3, we know that when $ 0<\lambda_1<\lambda_2<\lambda_P^* $, the equation (1.1) has at least one solution $ u_{\lambda_1, P} $ for $ \lambda = \lambda_1 $ and $ u_{\lambda_2, P} $ for $ \lambda = \lambda_2 $, respectively, for any fixed $ 0<P<P^* $. Moreover, we can regard $ u_{\lambda_2, P} $ as an upper solution of equation (1.1) with $ \lambda = \lambda_1 $. Therefore, for any fixed $ 0<P<P^* $ and any $ 0<\lambda<\lambda_P^* $, the equation (1.1) has a solution $ u_\lambda $, an upper solution $ \overline{u}_\lambda $ and a lower solution $ \underline{u}_\lambda = 0 $.
There is a corresponding solution $ v = \hat{f}(u_\lambda) $, an upper solution $ \overline{v} = \hat{f}(\overline{u}_\lambda) $ and a lower solution $ \underline{v} = \hat{f}(\underline{u}_\lambda) = 0 $ to the equation (2.1). By Lemma 2.1 we know $ \overline{v} = \hat{f}(\overline{u}_\lambda)\leq C $. Define $ U = \{v\in H_0^1(\Omega)\, | \, 0 = \underline{v}< v<\overline{v} = \hat{f}(\overline{u}_\lambda) \leq C \, \, a.e. \, \, \rm{on} \, \, \Omega\}. $ If we set $ a(x, v): = g(v)+h(v) = e^{\frac{\lambda\delta} {(p-1)(1-f(v))^{(p-1)}}} \frac{\lambda} {(1-f(v))^p}+P e^{\frac{\lambda\delta} {(p-1)(1-f(v))^{(p-1)}}}, \, \, b(x) = 1, $ in Proposition 2.4, where $ f(v) $ is bounded owing to the boundedness of $ v $, then we have $ a(x, v)\in L_{loc}^1(\Omega) $, and $ v $ is the minimum point of the functional $ I $ in $ U $, where $ I $ is defined in (2.2). In the following, we will show that $ v $ is the local minimum point of $ I $ in $ H_0^1(\Omega) $.
Lemma 3.5 The solution $ v $ is the local minimum point of the functional $ I $ in $ H_0^1(\Omega) $.
Proof We follow the idea in [22] and argue by contradiction. Suppose that $ v $ is not a local minimum point of $ I $ on $ H_0^1(\Omega) $; then there exists a sequence $ \{v_n\}\subset H_0^1(\Omega) $ such that $ \left\|v_n-v\right\|\to 0 $ and $ I(v_n)<I(v) $ as $ n\to \infty $.
Let $ v_{n, 0} = \max\{\underline{v}, \min\{{\overline{v}, v_n}\}\}, \, \, v_{n, +} = \max\{v_n-\overline{v}, 0\}, \, \, v_{n, -} = \max\{\underline{v}-v_n, 0\}, $ where $ \overline{v} $ is the upper solution and $ \underline{v} $ is the lower solution of (2.1). This implies $ v_n = v_{n, 0}+v_{n, +}-v_{n, -} $. Define $ \Omega_n^0 = \{x\in \Omega\, | \, \underline{v}\le v_n\le\overline{v}\}, \, \, \Omega_n^+ = \rm{supp}\{v_{n, +}\}, \, \, \Omega_n^- = \rm{supp}\{v_{n, -}\}, $ and $ F(v_n): = G(v_n)+H(v_n), \, \, F'(v_n) = g(v_n)+h(v_n), $ where "supp" denotes the support of a function in $ \Omega $. Then
In $ \Omega_n^+ $, we have
Similarly we obtain
Since $ \underline{v}\le v_n = v_{n, 0} \le \overline{v} $ in $ \Omega_n^0 $, we obtain
Therefore we conclude
Because $ \underline{v} $ is a lower solution, we know $ -\Delta{\underline{v}}\le g(\underline{v})+h(\underline{v}) $ and
Similarly we have
Summing up, we get
Since $ F({\overline{v}+v_{n, +}})-F(\overline{v})-f(\overline{v})v_{n, +} = G({\overline{v}+v_{n, +}})+H({\overline{v}+v_{n, +}})-G(\overline{v})-H(\overline{v})-g(\overline{v})v_{n, +}-h(\overline{v})v_{n, +}, $ by Sobolev embedding theorem, Proposition 2.1 $ \rm{(vi)} $ and Hölder inequality, it follows that
Here $ \beta $ satisfies $ \frac{q+1}{2^\ast}+\frac{1}{\beta} = 1 $ for any $ q>1 $. Since $ v<\overline{v} $, for every $ \varepsilon>0 $ there exists $ \theta >0 $ such that meas$ \{x\, |\, v(x)+\theta >\overline{v}(x)\}<\varepsilon $, where "meas" denotes the Lebesgue measure of a set.
Since $ \Omega_n^+\subset \{x\, |\, v(x)+\theta >\overline{v}(x)\}\cup \{x\, |\, v(x)+\theta \le {\overline{v}(x)}\le v_n(x) \} $ and $ v_n \to v $ in $ H^1_0(\Omega) $ as $ n\rightarrow \infty $, by the Poincaré inequality, we obtain
This implies $ \lim\limits_{n \rightarrow \infty}|\Omega_n^+| = 0 $. Then for $ n $ large enough, we have
By Proposition 2.1 $ \rm{(vi)} $, Sobolev embedding theorem and Hölder inequality, we have
for $ n $ large enough. Therefore we conclude $ \int_{\Omega_n^+}[F({\overline{v}+v_{n, +}})-F(\overline{v})-f(\overline{v})v_{n, +}]\, dx < C_5 \varepsilon +C_6 \varepsilon<C' \varepsilon. $ By an analogous argument, we obtain
With $ I(v_n)\leq I(v_{n, 0}) $ and (3.4), we have
However, since $ \Omega_n^+\bigcap \Omega_n^- = \emptyset $, we conclude
for any $ \varepsilon>0 $. This can happen only if $ v_{n, +}(x) = v_{n, -}(x) = 0 $ a.e. $ x\in \Omega $. This implies $ v_n = v_{n, 0}\in U $ a.e. on $ \Omega $ and $ I(v)\leq I(v_n) $, which contradicts the hypothesis $ I(v)>I(v_n) $. So $ v $ is a local minimum point of $ I $ on $ H_0^1(\Omega) $.
Lemma 3.6 The energy functional $ I $ in (2.2) has a mountain pass geometry.
Proof First we take an arbitrary point $ x_0\in \Omega $ and choose $ R>0 $ such that $ B_R(x_0)\subset \Omega $. Let $ \psi\in C_0^{\infty}{(\Omega)} $ be a cut-off function satisfying
Then by Proposition 2.1 $ \rm(iii) $, we know
Therefore we can take a large enough $ t $ such that $ w = t\psi $ satisfies $ \|w\|> \|v\| $, but $ I(w)<I(v) $. From Lemma 3.5 we know $ v $ is a local minimum point of $ I $, and so we can choose a proper $ 0<r<\|v -w\| $ such that
which implies the functional $ I $ has the mountain pass geometry as in Definition 2.2.
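The divergence of $ I $ along $ t\psi $ can be quantified. The functional $ I $ in (2.2) has the form $ \frac{1}{2}\int_\Omega|\nabla v|^2\, dx-\int_\Omega G(v)\, dx-\int_\Omega H(v)\, dx $ (compare the splitting $ A, B $ used in the proof of Lemma 3.9 below). If we assume, as is standard for such cut-offs, that $ 0\leq\psi\leq 1 $ with $ \psi\equiv 1 $ on $ B_{R/2}(x_0) $, then the lower bound for $ G $ in Proposition 2.1 $ \rm(iii) $ (which, in view of the growth of $ g $ recalled in the proof of Proposition 2.2, is of order $ s^2(\log s)^{\frac{2p}{p-1}} $ for $ s $ large) gives $ \int_\Omega G(t\psi)\, dx\geq \int_{B_{R/2}(x_0)}G(t)\, dx\geq Ct^2(\log t)^{\frac{2p}{p-1}}|B_{R/2}(x_0)| $ for $ t $ large, while $ \int_\Omega H(t\psi)\, dx\geq 0 $. Hence $ I(t\psi)\leq \frac{t^2}{2}\int_\Omega|\nabla\psi|^2\, dx-Ct^2(\log t)^{\frac{2p}{p-1}}|B_{R/2}(x_0)|\rightarrow -\infty $ as $ t\rightarrow\infty $, which is the estimate used to choose $ w = t\psi $.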
Define $ c = \underset{\gamma\in\Gamma}{\inf}\ \underset{t\in[0, 1]}{\max}I(\gamma(t)), $ where $ \Gamma = \{\gamma \in C([0, 1], H_0^1(\Omega))\, | \, \gamma(0) = v, \gamma(1) = w\} $. According to Proposition 2.6, there exists a $ (PS)_c $ sequence $ \{v_n\}\subset H_0^1(\Omega) $ such that
Now we show that if $ \{v_n\} $ is bounded, then $ \{v_n\} $ satisfies $ (PS)_c $ condition.
Lemma 3.7 If the $ (PS)_c $ sequence $ \{v_n\} $ of the functional $ I $ is bounded in $ H_0^1(\Omega) $, then it has a convergent subsequence in $ H_0^1(\Omega) $.
Proof Since $ \{v_n\} $ is bounded in $ H_0^1(\Omega) $, it has a weakly convergent subsequence (still denoted as $ \{v_n\} $). We may assume that as $ n \to \infty $,
for some $ v\in H_0^1(\Omega) $. Moreover, there also exists an $ m(x)\in L^{q+1}(\Omega) $ for $ q \in [1, \frac{N+2}{N-2}) $ such that $ |v(x)|\leq m(x) $ a.e. $ x \in \Omega $. From $ {I}{'}(v_n) \to 0 $ as $ n \to \infty $, we obtain
for any $ \varphi \in H_0^1(\Omega) $. Since $ g(s), h(s)\in C^1(\mathbb{R}) $, it follows that
By Proposition 2.1 $ \rm{(ii)}, \rm{(iv)} $, we have
Then by Lebesgue Dominated Convergence Theorem, we obtain
Moreover, $ v_n \rightharpoonup v $ in $ H_0^1(\Omega) $ implies $ \int_\Omega \nabla{v_n} \nabla{\varphi} \, dx \to \int_\Omega \nabla{v} \nabla{\varphi} \, dx $ and
Then from (3.7) and (3.8) we get $ \int_\Omega \nabla{v} \nabla{\varphi} \, dx-\int_\Omega g(v)\varphi\, dx- \int_\Omega h(v)\varphi \, dx = 0 $. Setting $ \varphi = v $, we have
Now by (3.7) and (3.9), we obtain
This means $ v_n\rightarrow v $ in $ H_0^1(\Omega) $ as $ n \rightarrow \infty $ and $ I'(v) = 0 $. We get the assertion.
In order to obtain the existence of a mountain pass solution to (1.1), it suffices to show the boundedness of the $ (PS)_c $ sequence $ \{v_n\} $. However it is difficult to prove it directly since the functions $ g(s) $ and $ h(s) $ do not satisfy any Ambrosetti-Rabinowitz type conditions. Therefore we will apply the monotonicity trick as in [15]. First, we modify the nonlinear terms $ g $ and $ h $ as follows:
where $ m_0<e^m<s_m<e^{m+n-1} $ $ (m, n \in N^{+}) $ such that $ \tilde{g}(s)+\tilde{h}(s) = g(s)+h(s) $ for $ s = s_m $ and $ \tilde{g}(s)+\tilde{h}(s)<g(s)+h(s) $ for $ s>s_m $. Here $ s_m $ and $ C_{i, m} $ $ (i = 1, 2, 3, 4) $ are defined in Proposition 2.2.
We consider the equation
where $ \mu \in [\frac{1}{2}, 1]. $ The energy functional of (3.12) is $ \tilde{I}_\mu:H_0^1(\Omega) \to \mathbb{R} $ defined by
where $ \tilde{G}(v), \tilde{H}(v) $ are defined by
Lemma 3.8 There exists an interval $ J\subset[\frac{1}{2}, 1] $ such that the family of functionals $ \{\tilde{I}_\mu\}_{\mu\in J} $ has a mountain pass geometry.
Proof From Lemma 3.6, we know that $ v $ is a local minimum point of $ I $ and that $ I(v)>I(t\psi) $ for $ t $ large enough, where $ v $ is the solution of (2.1). By Lemma 2.1, we know $ |v|_{L^\infty}\leq m_0 $. This implies that when $ \mu = 1 $ we have $ \tilde{I}_\mu(v) = I(v) $, and hence for $ \mu = 1 $ the functional $ \tilde{I}_\mu $ has a mountain pass geometry by Lemma 3.6. Furthermore, when $ \frac{1}{2}<\mu<1 $, we have $ \tilde{I}_\mu(v)>I(v) $ for any positive $ v \in H_0^1(\Omega) $. Because $ \tilde{I}_\mu(v) $ is continuous with respect to $ \mu $, there exists $ \frac{1}{2}<\mu_0<1 $ such that $ \tilde{I}_\mu(v)>\tilde{I}_\mu (t\psi) $ for any $ \mu \in [\mu_0, 1] $.
From Lemma 3.5, we may assume that there exists $ \phi_\mu \in H_0^1(\Omega)\bigcap C^1(\overline{\Omega}) $ such that $ I(v+\phi_\mu)>I(v) $. According to the continuity of $ \tilde{I}_\mu $ with respect to $ \mu $, we can choose $ \mu_0 $ sufficiently close to $ 1 $ such that $ \tilde{I}_\mu(v)<I(v+\phi_\mu) $ for $ \mu \in [\mu_0, 1]. $ Since $ I(v+\phi_\mu)<\tilde{I}_\mu(v+\phi_\mu) $, we get $ \tilde{I}_\mu(v+\phi_\mu)>\tilde{I}_\mu(v) $. Again by Lemma 3.5, $ v $ is also a local minimum point of $ \tilde{I}_\mu $ on $ H_0^1(\Omega) $ for any $ \mu \in [\mu_0, 1]. $
Define $ c_\mu = \underset{\gamma\in\Gamma}{\inf}\ \underset{t\in[0, 1]}{\max}\tilde{I}_\mu(\gamma(t)), \, $ where $ \Gamma = \{\gamma \in C([0, 1], H_0^1(\Omega))\, |\, \gamma(0) = v, \gamma(1) = t\psi\} $. According to Proposition 2.6, for almost every $ \mu \in [\mu_0, 1] $, we can find a $ (PS)_{c_\mu} $ sequence $ \{v_{\mu, n}\}\subset H_0^1(\Omega) $ such that
Lemma 3.9 For almost every $ \mu \in [\mu_0, 1] $, the $ (PS)_{c_\mu} $ sequence $ \{v_{\mu, n}\} $ of functional $ \tilde{I}_\mu $ satisfies $ (PS)_{c_\mu} $ condition.
Proof Set $ A(u) = \frac{1}{2}\int_\Omega |\nabla u|^2\, dx $ and $ B(u) = \int_\Omega \tilde{G}(u)\, dx+\int_\Omega \tilde{H}(u)\, dx $. We see that $ A(u)\rightarrow \infty $ as $ \|u\| \rightarrow \infty $ in $ H_0^1(\Omega) $. Therefore by Proposition 2.7, we obtain a $ (PS)_{c_\mu} $ sequence $ \{v_{\mu, n}\} $ which is bounded in $ H_0^1(\Omega) $. Then with the help of Lemma 3.7, we get that there exists a function $ v_\mu \in H_0^1(\Omega) $ such that $ v_{\mu, n} \to v_\mu $ in $ H_0^1(\Omega) $ and $ \tilde{I}_\mu^{'}(v_\mu) = 0, $ which implies that the $ (PS)_{c_\mu} $ condition holds.
Now we choose a sequence $ \{v_{\mu_j}\}_{j\in N} $ such that $ \tilde{I}_{\mu_j}(v_{\mu_j}) = c_{\mu_j} $, $ \tilde{I}'_{\mu_j}(v_{\mu_j}) = 0 $ as $ \mu_j\nearrow 1 $. For simplicity, we denote $ \{v_{\mu_j}\} $ by $ \{v_j\} $. Now we show that $ \{v_j\}_{j\in N} $ is bounded.
Lemma 3.10 The sequence $ \{v_j\}_{j\in N} $ is uniformly bounded in $ H_0^1(\Omega) $.
Proof We argue by contradiction. Suppose that $ \|v_j\| \to \infty $ as $ j\rightarrow \infty $. Let $ w_j = \frac {v_j}{\|v_j\|} $, and then $ \|w_j\| = 1 $. Up to a subsequence if necessary, there exists a $ w_0 \in H_0^1({\Omega}) $ such that
We can also find a $ d(x) \in L^{q+1}(\Omega) $ for $ q \in [1, \frac{N+2}{N-2}) $ such that $ \left|{w_0}\right| \leq d(x) $ a.e. $ x \in \Omega. $ Now we separate our proof into four steps.
Step 1 We claim $ w_0 \equiv 0 $. Define $ \Omega_0 = \{x \in \Omega:w_0 \ne 0\} $. In fact, if $ \Omega_0 $ is not empty, then $ v_j(x) = w_j(x)\|v_j\| \to \infty $ for $ x \in \ \Omega_0 $. Therefore by Proposition 2.1 $ \rm{(iii)} $, $ \rm{(iv)} $, we have
However since $ \tilde{I}_{\mu_j}(v_j) = c_{\mu_j} $, we know
which contradicts (3.15). Therefore $ w_0 \equiv 0. $
Step 2 Define $ \tilde{F}(s) = s\tilde{g}(s)-2\tilde{G}(s)+s\tilde{h}(s)-2\tilde{H}(s) $. We claim that there exists a constant $ C>0 $ such that $ \tilde{F}(t)<\tilde{F}(s)+C $ for any $ 0<t<s $. Actually, for $ s>s_m>e $, we have
This implies that $ \tilde{F}(s) $ is increasing when $ s>s_m $. Therefore we have
(i) If $ s_m \leq t<s $, then $ \tilde{F}(t)<\tilde{F}(s)+C_1 $ for any $ C_1 \geq 0 $.
(ii) If $ 0<t<s_m \leq s $ and denote $ C_2 = \max _{s \in [0, s_m]}|\tilde{F}(s)| $, then $ \tilde{F}(t)\leq \tilde{F}(s)+C_2 $.
(iii) If $ 0<t<s<s_m $, then $ \tilde{F}(t)\leq \tilde{F}(s)+2C_2 $.
We take $ C\geq \max\{C_1, 2C_2\} $ large enough such that $ C+c_\frac{1}{2}>0 $. Then we get $ \tilde{F}(t)<\tilde{F}(s)+C $ for any $ 0<t<s $.
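The monotonicity claimed in Step 2 can also be checked directly. If, as the construction of $ \tilde{g} $ and $ \tilde{h} $ suggests, $ \tilde{g}(s)+\tilde{h}(s) $ coincides for $ s\geq s_m $ with the function $ Y(s) = C_{1, m}s(\log s)^\frac{2p}{p-1}+C_{2, m}s+C_{3, m}s(\log s)^\frac{p}{p-1}+C_{4, m}s $ of Proposition 2.2, then, since $ \tilde{G}, \tilde{H} $ are the primitives of $ \tilde{g}, \tilde{h} $, we get $ \tilde{F}'(s) = s(\tilde{g}+\tilde{h})'(s)-(\tilde{g}+\tilde{h})(s) = \frac{2p}{p-1}C_{1, m}s(\log s)^{\frac{2p}{p-1}-1}+\frac{p}{p-1}C_{3, m}s(\log s)^{\frac{p}{p-1}-1}>0 $ for $ s>s_m>e $, so $ \tilde{F} $ is indeed increasing there.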
Step 3 Let $ t_j\in [0, 1] $ be such that $ \tilde{I}_{\mu_j}(t_j v_j) = \max_{t\in[0, 1]}\tilde{I}_{\mu_j}(t v_j) $. We claim that $ 2\tilde{I}_{\mu_j}(t_j v_j)\leq 2 c_{\frac{1}{2}}+C $. We consider the following cases.
(i) If $ t_j = 0 $, then $ 2\tilde{I}_{\mu_j}(t_j v_j) = 0<2c_\frac{1}{2}+C $.
(ii) If $ t_j = 1 $, then $ 2\tilde{I}_{\mu_j}(t_j v_j) = 2\tilde{I}_{\mu_j}(v_j) = 2c_{\mu_j}\leq 2c_{\frac{1}{2}}<2c_\frac{1}{2}+C $.
(iii) If $ 0<t_j<1 $, then $ \tilde{I}'_{\mu_j}(t_j v_j)t_j v_j = t_j\frac{d}{dt}\tilde{I}_{\mu_j}(t v_j)\big|_{t = t_j} = 0 $ since $ t_j $ is an interior maximum point of $ t\mapsto\tilde{I}_{\mu_j}(t v_j) $ on $ [0, 1] $.
Therefore we get
The last inequality follows from the monotonicity of the map $ \mu \rightarrow c_\mu. $
Step 4 We show that if $ \|v_j\| \to \infty $, a contradiction occurs. For any constant $ T>0 $, by Proposition 2.1 $ \rm{(i)} $ we know $ \tilde{G}(T w_j)+\tilde{H}(T w_j)\rightarrow \tilde{G}(T w_0)+\tilde{H}(T w_0) \, \, a.e. \, \, x \in \Omega $. By Proposition 2.1 $ \rm{(vi)} $, we have
where $ q \in [1, \frac{N+2}{N-2}) $. Thus $ C(1+|T d(x)|^{q+1}+|T d(x)|^{\frac{q+1}{2}})\in L^1(\Omega) $. Since $ w_0\equiv 0 $, by Lebesgue Dominated Convergence Theorem, we obtain
Therefore
Denote $ t = \frac{T}{\|v_j\|} $. We may assume that $ 0<t<1 $ for $ j $ large due to the hypothesis $ \|v_j\| \rightarrow \infty $. Since $ T>0 $ is finite and arbitrary, we can choose a suitable $ T $ such that
Obviously (3.17) and (3.18) contradict each other. Hence the hypothesis is false, and the sequence $ \{v_j\} $ is uniformly bounded in $ H_0^1(\Omega) $.
By an analogous argument as in Lemma 3.7, we conclude that the $ (PS)_{c_{\mu_j}} $ sequence $ \{v_j\}_{j\in N} $ has a strongly convergent subsequence in $ H_0^1(\Omega) $, still denoted by $ \{v_j\} $, with $ v_j \rightarrow v_1 $ as $ j\rightarrow \infty $ for some $ v_1 \in H_0^1(\Omega). $ In fact, we can show that $ v_1 $ is the second solution of (2.1).
Proof of Theorem 1.1 (ii) By Lemma 3.4, we get the first solution $ v $ of (2.1), so that $ u = f(v) $ is the first solution of (1.1). Now we claim that $ u_1 = f(v_1) $ is the second solution of equation (1.1) for any $ 0<P<P^* $ and $ 0<\lambda<\lambda_P^* $. In fact, if $ |v_1|_{L^\infty}\leq m_0 $, then $ v_1 $ is a solution of (2.1). We argue by contradiction. Suppose that there exists an $ x_0\in\Omega $ such that $ v_1(x_0)>m_0 $. Then it is easy to see that $ v_1 $ is a lower solution of (2.1) due to $ -\, \Delta v_1 = \tilde{g}(v_1)+\tilde{h}(v_1)\leq g(v_1)+h(v_1). $ By Lemma 3.4 and Proposition 2.3 we get that (2.1) has a solution $ v_2 $ such that $ v_2\geq v_1 $ and $ v_2(x_0)>m_0. $ But this contradicts the bound $ |v_2|_{L^\infty}\leq m_0 $ from Lemma 2.1. Therefore we obtain $ |v_1|_{L^\infty}\leq m_0 $, and $ v_1 $ is a solution of (2.1), so that $ u_1 = f(v_1) $ is the second solution of (1.1).
Now we show the second assertion of Theorem 1.1 (ii). For this we define the set
Then it is clear that $ W $ is bounded in $ H_0^1(\Omega) $ since
where $ q>1 $. Here we use the fact that $ v_{\lambda_P} $ is a solution of (2.1) and Proposition 2.1 $ \rm{(vi)}. $ Then $ v_{\lambda^*_P}: = \lim_{\lambda\rightarrow \lambda^*_P}v_{\lambda_P} $ is a solution of (2.1) for $ \lambda = \lambda^*_P. $ Since $ h(s) = P e^{\frac{\lambda\delta} {(p-1)(1-f(s))^{(p-1)}}}>0 $, and $ h'(s) = h(s)\frac{\lambda\delta}{(1-f(s))^p}f'(s), $ we get
This implies that $ h(s) $ is convex. By Proposition 2.1 $ \rm{(i)} $, we know that $ g(s) $ is also convex. Therefore, by the convexity of $ g(s)+h(s) $, it is classical (see [23] or [11]) that $ v_{\lambda^*_P} $ is the unique solution of equation (2.1) for $ \lambda = \lambda^*_P $, and hence $ u_{\lambda^*_P} = f(v_{\lambda^*_P}) $ is the unique solution of (1.1). From the definition of $ \lambda^*_P $, it is clear that (1.1) has no solution for $ \lambda>\lambda^*_P. $
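For completeness we note that, since $ f $ is the inverse of $ \hat{f} $, the inverse function theorem gives $ f'(s) = e^{-\frac{\lambda\delta} {(p-1)(1-f(s))^{p-1}}} $, so that $ h(s)f'(s) = P $ and therefore $ h'(s) = \frac{P\lambda\delta}{(1-f(s))^p} $ and $ h''(s) = \frac{p\, P\lambda\delta\, f'(s)}{(1-f(s))^{p+1}}>0 $, which is the convexity of $ h $ used in the argument above.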
In this section, we focus on the equation (1.2), i.e.,
where $ \lambda, \mu, \delta $ are positive parameters and $ p>1. $ It is interesting that the existence of solutions to (1.2) depends on the location of $ (\lambda, \mu) $ in the first quadrant. We will again apply the upper and lower solution method to prove Theorem 1.2.
Definition 4.1 If a pair $ (\overline{u}, \overline{v}) $ satisfies
then we say that the pair $ (\overline{u}, \overline{v}) $ is an upper solution of (1.2). If the first two inequalities in (4.1) are reversed for some $ (\underline{u}, \underline{v}) $, we call $ (\underline{u}, \underline{v}) $ a lower solution of (1.2).
The following result is important for the existence of solution of (1.2). For the proof we refer to Section 3.3 in [24].
Proposition 4.1 ([24]) Let $ \Omega $ be a bounded domain in $ \mathbb{R}^N $ with $ N\geq 2 $. Consider the following equation
Assume that $ \overline{u} $ and $ \underline{u} $ are an upper and a lower solution of this equation (defined analogously to Definition 2.1). Denote $ \underline{c} = \min \underline{u} $ and $ \overline{c} = \max\overline{u} $ in $ \overline{\Omega} $. If there exists a continuous function $ \psi $ such that $ |f(x, u, \nabla u)|\leq \psi(u)(1+|\nabla u|^2) $ for any $ x\in \overline{\Omega} $ and $ u \in [\underline{c}, \overline{c}] $, then this equation has a classical solution $ u $ with $ \underline{u}\leq u \leq \overline{u} $.
Lemma 4.1 If the system (1.2) has an upper solution $ (\overline{u}, \overline{v}) $, then (1.2) must have a classical solution $ (u, v) $ with $ 0<u\leq \overline{u}<1, 0<v \leq \overline{v}<1 $.
Proof We prove the assertion by an iterative method. Let $ (u_1, v_1) = (\overline{u}, \overline{v}) $. For $ n\geq 2 $, by Proposition 4.1, we construct two sequences $ \{u_n\} $ and $ \{v_n\} $ as follows:
For $ n = 2 $, since $ \overline{u} $ is an upper solution of the following equation
and $ 0 $ is a lower solution of (4.3), by Proposition 4.1, we get a solution $ u_2 $ of (4.3) with $ 0<u_2\leq u_1 $. Analogously we obtain a solution $ v_2 $ of $ {-\Delta v_2 = \frac{\lambda(1+\delta{\left|\nabla v_{2}\right|}^2)}{(1-u_1)^p}} $ such that $ 0<v_2\leq v_1. $ By induction, we suppose that $ 0<u_n\leq u_{n-1} $, $ 0<v_n \leq v_{n-1} $ and $ u_{n-1} = u_n = v_{n-1} = v_n = 0 $ on $ \partial\Omega $. Now we claim that $ 0<u_{n+1}(x)\leq u_n(x) $ and $ 0<v_{n+1}(x)\leq v_n(x) $ for $ x \in \Omega $. We argue by contradiction. Suppose that there exists a point $ x' \in \Omega $ such that $ u_n(x')-u_{n+1}(x')<0 $. By the maximum principle, $ u_n-u_{n+1} $ has a minimum point $ x_0 \in \Omega $ such that $ \nabla(u_n-u_{n+1})(x_0) = 0 $ and $ \Delta(u_n-u_{n+1})(x_0)>0 $. However we have
This implies $ \Delta (u_n-u_{n+1})(x_0)\leq 0 $, which contradicts $ \Delta(u_n-u_{n+1})(x_0)>0 $. Hence we conclude $ 0<u_{n+1}\leq u_n\leq \overline{u} $. In the same way we can get $ 0<v_{n+1}\leq v_n\leq \overline{v}. $ This means that the sequences $ \{u_n\} $ and $ \{v_n\} $ are decreasing and bounded in $ \Omega $. Therefore $ \{u_n\} $ and $ \{v_n\} $ have limit functions $ u, v $ such that $ \lim\limits_{n\to \infty}u_n = u, \lim\limits_{n\to \infty}v_n = v $. By a standard compactness argument we know that $ (u, v) $ is a classical solution of (1.2) and, in particular, is not identically zero.
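To make the comparison at $ x_0 $ explicit, note that the iteration (as the displayed equation for $ v_2 $ above suggests for the $ u $-component) solves $ -\Delta u_{n+1} = \frac{\mu(1+\delta|\nabla u_{n+1}|^2)}{(1-v_n)^p} $ and $ -\Delta u_{n} = \frac{\mu(1+\delta|\nabla u_{n}|^2)}{(1-v_{n-1})^p} $. At the minimum point $ x_0 $ we have $ \nabla u_n(x_0) = \nabla u_{n+1}(x_0) $, so $ -\Delta(u_n-u_{n+1})(x_0) = \mu\big(1+\delta|\nabla u_n(x_0)|^2\big)\big(\frac{1}{(1-v_{n-1}(x_0))^p}-\frac{1}{(1-v_n(x_0))^p}\big)\geq 0 $ by the induction hypothesis $ v_n\leq v_{n-1} $, which yields the inequality $ \Delta(u_n-u_{n+1})(x_0)\leq 0 $ used above.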
Define the set $ D_1 = \{(\lambda, \mu)\in\mathbb{R^+}\times\mathbb{R^+}\, \, |\, \, $(1.2) has a classical solution $ (u, v)\}. $ We need to show that this set is not empty. It suffices to show that (1.2) has an upper solution $ (\overline{u}, \overline{v}). $
Lemma 4.2 The set $ D_1 $ is not empty when $ \lambda $ and $ \mu $ are sufficiently small.
Proof Let $ B_R $ be a ball of radius $ R $ centered at $ 0 $ such that $ \Omega \subset B_R $. Denote by $ \beta_1>0 $ the first eigenvalue of the Laplace operator in $ B_R $ with Dirichlet boundary condition and by $ \phi $ the corresponding eigenfunction, which satisfies $ 0<\phi\leq 1 $ in $ B_R $. We can choose a constant $ \theta \in (0, 1) $ such that $ 0<\psi = \theta\phi<1 $ in $ B_R $. We want $ (\psi, \psi) $ to be an upper solution of (1.2), which means
This holds if $ \lambda \le \beta_1\theta\phi(1-\theta\phi)^p $ and $ \mu \le \beta_1\theta\phi(1-\theta\phi)^p $. Since $ 0<\theta\phi<1 $ in $ \Omega $, we can choose $ \lambda, \mu >0 $ small enough such that (4.5) holds, and then $ (\psi, \psi) $ is an upper solution of (1.2). Due to Lemma 4.1, the system (1.2) must then have a solution.
Lemma 4.3 The set $ D_1 $ is contained in a bounded region.
Proof If the system (1.2) has a solution $ (u, v) $, then
This implies $ \lambda\leq \beta_1 $ and $ \mu\leq \beta_1 $. Hence $ D_1\subset (0, \beta_1] \times (0, \beta_1] $ is bounded.
Lemma 4.4 If the system (1.2) has a solution with the parameter pair $ (\lambda, \mu) \in D_1 $, then $ (\lambda{'}, \mu{'}) $ is still in $ D_1 $ for any $ \lambda{'}\leq \lambda $, $ \mu{'}\leq \mu. $
Proof It is easy to verify that the solution $ (u, v) $ corresponding to the pair $ (\lambda, \mu) $ is an upper solution of the system (1.2) with the pair $ (\lambda{'}, \mu{'}) $. Then by Lemma 4.1 we know that the system (1.2) with parameters $ (\lambda{'}, \mu{'}) $ must have at least one solution.
Based on the above arguments, we can find a curve $ \Gamma $ in the first quadrant of the $ (\lambda, \mu) $-plane such that the existence of solutions to (1.2) depends on which of the two regions determined by $ \Gamma $ the pair $ (\lambda, \mu) $ lies in. More precisely, for any $ \sigma>0 $, we define
It is obvious that $ \{(\lambda, \sigma\lambda)\in \mathbb{R^+}\times \mathbb{R^+}\, \, |\, \, 0<\lambda\leq\lambda^{*}(\sigma)\}\subset D_1 $ and $ \{(\lambda, \sigma\lambda)\in \mathbb{R^+}\times \mathbb{R^+}\, \, |\, \, \lambda> \lambda^{*}(\sigma)\}\cap D_1 = \varnothing $. We also can define $ \mu^{*}(\sigma) = \sigma\lambda^{*}(\sigma) $.
Lemma 4.5 The curve $ \Gamma(\sigma) = (\lambda^{*}{(\sigma)}, \mu^{*}(\sigma)) $ is continuous.
Proof We prove this by contradiction. Suppose that $ \Gamma(\sigma) $ is not continuous at some $ \sigma_0>0 $. Then there exists $ \varepsilon_0>0 $ such that for any $ \eta>0 $, when $ 0<|\sigma -\sigma_0|<\eta $ we have $ |\Gamma(\sigma)-\Gamma(\sigma_0)|>\varepsilon_0. $ This implies either the case $ \lambda^{*}(\sigma)>\lambda^{*}(\sigma_0) $, $ \mu^{*}(\sigma)>\mu^{*}(\sigma_0) $ or the case $ \lambda^{*}(\sigma)<\lambda^{*}(\sigma_0) $, $ \mu^{*}(\sigma)<\mu^{*}(\sigma_0) $ appears. Without loss of generality, we just discuss the first case. Let $ \lambda_1, \lambda_2>0 $ such that $ \lambda^{*}(\sigma)>\lambda_2>\lambda_1>\lambda^{*}(\sigma_0) $, $ \mu^{*}(\sigma)>\sigma\lambda_2>\sigma_0\lambda_1>\mu^{*}(\sigma_0) $. By the definition of $ \lambda^{*}(\sigma) $, we obtain
then we have a solution $ (u_{\lambda_2}, v_{\lambda_2}) $. Obviously it is an upper solution of the system (1.2) when the parameter pair equals $ (\lambda_1, \sigma_0\lambda_1 ). $ This implies $ \lambda_1\leq \lambda^{*}(\sigma_0) $, which contradicts the assumption $ \lambda_1>\lambda^{*}(\sigma_0) $.
Lemma 4.6 $ \lambda^{*}(\sigma) $ is decreasing and $ \mu^{*}(\sigma) $ is increasing with respect to $ \sigma $.
Proof $ \rm(i) $ We first show that $ \lambda^{*}(\sigma) $ is decreasing. We argue by contradiction. Suppose that $ \lambda^{*}(\sigma_1)<\lambda^{*}(\sigma_2) $ for some $ \sigma_1<\sigma_2 $. Then $ \mu^{*}(\sigma_1) = \sigma_1\lambda^{*}(\sigma_1)<\sigma_2\lambda^{*}(\sigma_2) = \mu^{*}(\sigma_2) $. We can choose two constants $ \lambda_1, \lambda_2 $ such that $ \lambda^{*}(\sigma_1)<\lambda_1<\lambda_2<\lambda^{*}(\sigma_2) $ and $ \sigma_1\lambda^{*}(\sigma_1)<\sigma_1\lambda_1<\sigma_2\lambda_2<\sigma_2\lambda^{*}(\sigma_2) $. Arguing as in the proof of Lemma 4.5, we obtain $ \lambda^{*}(\sigma_1)\geq \lambda_1 $, which contradicts $ \lambda^{*}(\sigma_1)<\lambda_1 $. Hence the hypothesis is not valid and $ \lambda^{*}(\sigma) $ is decreasing.
$ \rm(ii) $ We next show that $ \mu^{*}(\sigma) $ is increasing. We argue again by contradiction. Suppose that $ \mu^{*}(\sigma_1) = \sigma_1 \lambda^{*}(\sigma_1)>\mu^{*}(\sigma_2) = \sigma_2\lambda^{*}(\sigma_2) $ for some $ \sigma_1<\sigma_2 $. By $ \rm(i) $, we know $ \lambda^{*}(\sigma_1)>\lambda^{*}(\sigma_2) $. Therefore we can choose two suitable constants $ \lambda_1, \lambda_2 >0 $ such that $ \lambda^{*}(\sigma_1)>\lambda_1>\lambda_2>\lambda^{*}(\sigma_2) $ and $ \sigma_1 \lambda^{*}(\sigma_1)>\sigma_2\lambda_1>\sigma_2 \lambda_2>\sigma_2\lambda^{*}(\sigma_2) $. By Lemma 4.4, we conclude that the system (1.2) has a solution with parameter pair $ (\lambda_2, \sigma_2 \lambda_2) $. This implies $ \lambda_2\leq\lambda^{*}(\sigma_2) $, which contradicts $ \lambda_2>\lambda^{*}(\sigma_2) $. We finish the proof.
Now we can give the proof of Theorem 1.2.
Proof of Theorem 1.2 By Lemmas 4.2, 4.3, 4.5 and 4.6, if we take $ \mu^{*}(\sigma) $ as the horizontal axis and $ \lambda^{*}(\sigma) $ as the vertical axis, then the curve $ \Gamma(\sigma) = (\lambda^{*}{(\sigma)}, \mu^{*}(\sigma)) $ splits the first quadrant of the $ (\mu^{*}(\sigma), \lambda^{*}{(\sigma)}) $-plane into two connected parts. When the parameter pair lies above the curve $ \Gamma $, (1.2) has no solution, while when it lies below the curve, (1.2) has at least one solution.