The purpose of this work is to study the local well-posedness of the Cauchy problem associated with the Korteweg-de Vries-Kuramoto-Sivashinsky (KdV-KS) equation
where $x\in\mathbb{R}$, $t\in\mathbb{R}_+$, $u$ is a real-valued function and $\delta, \mu$ and $\alpha$ are constants such that $\mu>0$, $\delta\neq 0$ and $\alpha\neq 0$. The KdV-KS equation arises in interesting physical situations, for example as a model for long waves on a viscous fluid flowing down an inclined plane [2] and for deriving drift waves in a plasma [3].
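For the reader's convenience, a commonly used form of equation (1.1) is recorded below; this is stated only as an assumed reference form, since sign and normalization conventions vary across the literature:

```latex
u_t + u u_x + \alpha u_{xx} + \delta u_{xxx} + \mu u_{xxxx} = 0, \qquad u(x,0)=\varphi(x),
```

where, with $\mu>0$, the fourth-order term is dissipative, the $\delta$ term carries the KdV dispersion, and the $\alpha$ term is the destabilizing Kuramoto-Sivashinsky term.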
In [1], using the dissipative effect of the linear part, Biagioni, Bona, Iorio and Scialom showed that the Cauchy problem associated to (1.1) is globally well-posed in $H^s(\mathbb{R})~ (s\geqslant 1).$ They also proved that the solutions of the KdV-KS equation converge to those of the Kuramoto-Sivashinsky equation as the dispersive parameter $\delta$ tends to zero. A generalization of the KdV-KS equation is the following dispersive-dissipative equation
where the linear operator $L$ is defined via the Fourier transform by $\widehat{Lf}(\xi)=-\Phi(\xi)\widehat{f}(\xi)$. The Fourier symbol $\Phi(\xi)$ is of the form
where $p\in\mathbb{R}^+$ and $|\Phi(\xi)|\leqslant C(1+|\xi|^q)$ with $0\leqslant q < p$. In [12], Carvajal and Panthee introduced time-weighted spaces to derive multilinear estimates and used them in a contraction mapping argument to prove local well-posedness; they also proved ill-posedness for this type of model and showed that the local well-posedness results are sharp in some particular cases. We remark that the method of [12] does not work here. To overcome this difficulty, we use the $[k;Z]$ multiplier norm method of Tao [4] and obtain new bilinear estimates in suitable Bourgain spaces.
Before presenting the precise statement of our main result, we give the definition of the working space of this paper. Without loss of generality, we will suppose that $\delta=\mu=\alpha=1$ in the rest of this paper.
Definition 1.1 For $s, b\in\mathbb{R}$, let $X^{s, b}$ denote the completion of the Schwartz functions with respect to the norm
where $\langle\cdot\rangle=(1+|\cdot|^2)^{\frac{1}{2}}$. For $T>0$, we consider the localized spaces $X^{s, b}_T$ endowed with the norm
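For orientation, in the pure KdV case $\phi(\xi)=\xi^3$ the classical Bourgain norm and its time-localized version read as follows; the norm used in this paper may in addition weight the dissipative part of the symbol:

```latex
\|u\|_{X^{s,b}} = \big\| \langle\xi\rangle^{s}\,\langle\tau-\xi^{3}\rangle^{b}\,\widehat{u}(\xi,\tau) \big\|_{L^{2}_{\xi,\tau}},
\qquad
\|u\|_{X^{s,b}_{T}} = \inf\big\{ \|\tilde{u}\|_{X^{s,b}} : \tilde{u}\equiv u \ \text{on} \ [0,T] \big\}.
```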
As a consequence of this definition, we immediately have, for $b > 1/2$, that $X^{s, b}$ is embedded in $C(\mathbb{R};H^s(\mathbb{R}))$.
In the sequel, we state the main results of this work.
Theorem 1.1 Let $s>-1$ and $\varphi\in H^s(\mathbb{R})$. Then there exist $b=b(s)\in(1/2, 1)$ and $T=T(\|\varphi\|_{H^s(\mathbb{R})})>0$ such that the Cauchy problem (1.1) has a unique solution $u$ on $[0, T]$ satisfying $u\in C([0, T];H^s(\mathbb{R}))\cap X^{s, b}_T$. Moreover, the solution map
is smooth.
For any positive numbers $x$ and $y$, the notation $x\lesssim y$ means that there exists a positive constant $c$ such that $x\leqslant cy$; we write $x\thicksim y$ when $x\lesssim y$ and $y\lesssim x$. We denote by $\hat{f}$ the Fourier transform of $f$.
Now we consider the initial value problem associated with the linear part of (1.1),
The unique solution of (2.1) is given by the semigroup $V(t)$ defined as follows
By Duhamel's principle, solving (1.1) is equivalent to solving the integral equation
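Concretely, assuming the nonlinearity of (1.1) is the usual transport term $uu_x=\tfrac{1}{2}\partial_x(u^2)$, the Duhamel formulation takes the form

```latex
u(t) = V(t)\varphi - \frac{1}{2}\int_{0}^{t} V(t-t')\,\partial_x\big(u^{2}(t')\big)\,\mathrm{d}t'.
```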
Actually, to prove the local existence result, we shall apply a fixed point argument to the following truncated version of (2.3)
where $t\in\mathbb{R}$ and $\theta$ is a time cutoff function satisfying
and denote for given $T>0$, $\theta_T(\cdot)=\theta(\cdot/T)$. Indeed, if $u$ solves (2.4) then $u$ is a solution of (2.3) on $[0, T]$.
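A standard choice of cutoff, assumed here for definiteness, is a smooth bump

```latex
\theta\in C_{0}^{\infty}(\mathbb{R}), \qquad 0\leqslant\theta\leqslant 1, \qquad
\theta\equiv 1 \ \text{on} \ [-1,1], \qquad \operatorname{supp}\theta\subset(-2,2),
```

so that $\theta_T(t)=\theta(t/T)$ satisfies $\theta_T\equiv 1$ on $[-T, T]$.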
Here are some fundamental estimates for the operator $V(t)$. Since these estimates are standard, we omit their proofs; similar results for related operators can be found in [8].
Lemma 2.1 (Homogeneous linear estimate) Let $s\in\mathbb{R}$, $\frac{1}{2} < b < 1$. There exists $C>0$ such that
Lemma 2.2 (Non-homogeneous linear estimate) Let $s\in\mathbb{R}$. Then there exists $C>0$ such that, for any $f\in X^{s, b-1}$,
In this section, we derive the crucial bilinear estimate needed for the local existence result, using Tao's multiplier norm estimates for the KdV equation [4].
Let $Z$ be any abelian additive group with an invariant measure ${\rm d} \xi$. For any integer $k\geqslant2$, let $\Gamma_k(Z)$ denote the hyperplane
endowed with the measure
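Concretely, following [4], the hyperplane and its measure are given by

```latex
\Gamma_{k}(Z) := \big\{ (\xi_1,\dots,\xi_k)\in Z^{k} : \xi_1+\cdots+\xi_k = 0 \big\},
\qquad
\int_{\Gamma_k(Z)} f := \int_{Z^{k-1}} f\big(\xi_1,\dots,\xi_{k-1},-\xi_1-\cdots-\xi_{k-1}\big)\,\mathrm{d}\xi_1\cdots\mathrm{d}\xi_{k-1}.
```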
Following Tao [4], we define a $[k;Z]$-multiplier to be a function $m:\Gamma_k(Z)\rightarrow\mathbb{C}$. The multiplier norm $\|m\|_{[k;Z]}$ is the best constant such that
holds for all test functions $f_i$ on $Z$.
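Explicitly (cf. [4]), this means that $\|m\|_{[k;Z]}$ is the least constant in the inequality

```latex
\bigg| \int_{\Gamma_k(Z)} m(\xi) \prod_{i=1}^{k} f_i(\xi_i) \bigg|
\leqslant \|m\|_{[k;Z]} \prod_{i=1}^{k} \|f_i\|_{L^{2}(Z)}.
```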
Meanwhile, we recall some of Tao's notation. Any summations over capitalized variables such as $N_i, L_i, H$ are presumed to be dyadic. For $N_1, N_2, N_3>0$, it is convenient to define the quantities $N_{\max}\geqslant N_{{\rm med}}\geqslant N_{\min}$ to be the maximum, median, and minimum of $N_1, N_2, N_3$, respectively. Likewise, we define $L_{\max}\geqslant L_{{\rm med}}\geqslant L_{\min}$ whenever $L_1, L_2, L_3>0$. We adopt the following summation convention: any summation of the form $L_{\max} \thicksim \cdots$ is a sum over the three dyadic variables $L_1, L_2, L_3 \gtrsim 1$; thus, for instance,
Similarly, any summation of the form $N_{\max}\thicksim\cdots$ is a sum over the three dyadic variables $N_1, N_2, N_3>0$; thus, for instance,
Given $\tau$ and $\xi$, we write $\lambda:=\tau-\phi(\xi)$, where $\phi$ denotes the dispersion relation; similarly, $\lambda_i:=\tau_i-\phi(\xi_i)$, where $\tau_1+\tau_2+\tau_3=0$. We refer to $h:\Gamma_3(Z)\rightarrow \mathbb{R}$ as the resonance function, which is defined by
By the dyadic decomposition of each variable $\xi_i$ or $\lambda_i$, as well as the function $h(\xi)$, we are led to consider
where $X_{N_1, N_2, N_3;H;L_1, L_2, L_3}$ is the multiplier
From the identities $\xi_1+\xi_2+\xi_3=0$ and
on the support of the multiplier, we see that $X_{N_1, N_2, N_3;H;L_1, L_2, L_3}$ vanishes unless
and
For the KdV group, where $\phi(\xi)=\xi^3$, from the resonance identity
we may assume that
since the multiplier in (3.3) vanishes otherwise.
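For the record, the resonance identity referred to above follows from the elementary algebraic fact

```latex
\xi_1+\xi_2+\xi_3 = 0 \;\Longrightarrow\; \xi_1^{3}+\xi_2^{3}+\xi_3^{3} = 3\,\xi_1\xi_2\xi_3,
```

so that on $\Gamma_3(\mathbb{R})$ one has $|h(\xi)|\thicksim|\xi_1\xi_2\xi_3|\thicksim N_{\max}^{2}N_{\min}$, since the two largest frequencies are necessarily comparable.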
Lemma 3.1 (see [6, Proposition 6.1]) Let $H, N_1, N_2, N_3, L_1, L_2, L_3>0$ obey (3.5)-(3.7), and let the dispersion relations be given by (3.4).
(ⅰ) If $N_{\max}\thicksim N_{\min}$ and $L_{\max}\thicksim H$, then we have
(ⅱ) If $N_2\thicksim N_3\gg N_1$ and $H\thicksim L_1\gtrsim L_2, L_3$, then
Similarly for permutations.
(ⅲ) In all other cases, we have
Proposition 3.1 Let $s>-1$. Then there exists $b\in(1/2, 1)$ such that, for all $u, v\in X^{s, b}$, the following bilinear estimate holds
where the implicit constant depends only on $s$ and $b$.
Proof By Plancherel's formula and duality, it suffices to show that
By dyadic decomposition of the variables $\xi_j, \lambda_j, h(\xi)$, $j=1, 2, 3$, we may assume that $|\xi_j|\thicksim N_j$, $|\lambda_j|\thicksim L_j$ and $|h(\xi)| \thicksim H$. Using the translation invariance of the $[3;Z]$-multiplier norm, we may restrict the estimate to the region $L_j\gtrsim1$ and $\max(N_1, N_2, N_3)=N\gtrsim1$. The comparison principle and orthogonality reduce the multiplier norm estimate (3.12) to showing that
for all $N\gtrsim1$. Estimates (3.13) and (3.14) will be established by means of the fundamental estimate in Lemma 3.1 and some delicate summations.
Fix $N\gtrsim1$; this implies (3.7). We first prove (3.14). By (3.10), we reduce to
By symmetry we only need to consider two cases: $N_1\sim N_2\sim N, N_3=N_{\min}$ and $N_1\sim N_3\sim N, N_2=N_{\min}$.
(ⅰ) In the first case $N_1\sim N_2\sim N, N_3=N_{\min}$, estimate (3.15) can be further reduced to
Then, performing the $L$ summations, we reduce to
which is true if $2s+2>0$. So (3.15) is true if $s>-1$.
(ⅱ) In the second case $N_1\sim N_3\sim N, N_2=N_{\min}$, estimate (3.15) can be reduced to
Before performing the $L$ summations, we need to pay a little more attention to the summation over $N_{\min}$. We thus reduce to
which is obviously true if $s>-\frac{3}{2}$. So (3.15) is true if $s>-\frac{3}{2}$.
Now we turn to the low modulation case (3.13). We may assume $L_{\max}\thicksim N^2N_{\min}$. We first deal with the contribution where (3.8) holds. In this case, we have $N_1, N_2, N_3\thicksim N\gtrsim 1$, so we reduce to
which holds if $s>-\frac{9}{4}$.
Now we deal with the cases where (3.9) applies. By symmetry we only need to consider two cases
In the first case, we reduce by (3.9) to
Performing the $N_3$ summation, we reduce to
which holds if $s>-1$.
In the second case, we simplify using (3.9) to
Performing the $L$ summation, we reduce to
which holds if $s>-\frac{3}{2}$.
To finish the proof of (3.13), it remains to deal with the cases where (3.10) holds. This reduces to
To estimate (3.18), by symmetry we need only consider two cases: $N_1\sim N_2\sim N, N_3=N_{\min}$ and $N_1\sim N_3\sim N, N_2=N_{\min}$.
(ⅰ) If $N_1\sim N_2\sim N, N_3=N_{\min}$, then estimate (3.18) can be further reduced to
Performing the $L$ summations, we reduce to
which is true if $s>-1$.
(ⅱ) If $N_1\sim N_3\sim N, N_2=N_{\min}$, then estimate (3.18) can be reduced to
which is obviously true if $s>-\frac{3}{2}$.
In this section, we use the linear and nonlinear estimates above to prove the local well-posedness result stated in Theorem 1.1.
Proof of Theorem 1.1 Let $s>-1$ and $\varphi\in H^s(\mathbb{R})$. We prove the existence of a solution $u$ of the integral formulation (2.3) on some interval $[0, T]$ for $T < 1$ small enough. Define
We want to use the Picard fixed point theorem to find a solution of
in the space $X^{s, b}$.
Using (2.6), (2.7) and (3.11), we deduce that there exists a constant $C>0$ such that
Since
the same computation leads to
We define
with $M=2C\|\varphi\|_{H^s}$. Then, if we choose $T$ such that
(4.3) and (4.4) imply that $\Gamma_T$ is a contraction on the Banach space $X^{s, \frac{1}{2}}(M)$. Thus, by the fixed point theorem, there exists a unique solution $u\in X^{s, \frac{1}{2}}(M)$ of (4.2).