Journal of Mathematics (数学杂志), 2018, Vol. 38, Issue (4): 633-642
ON THE LOCAL WELL-POSEDNESS FOR THE KDVKS EQUATION
WANG Hong-wei1, ZHANG Yuan-yuan2    
1. School of Mathematics and Statistics, Anyang Normal University, Anyang 455000, China;
2. Teaching and Research Department of Mathematics, Kaifeng University, Kaifeng 475000, China
Abstract: In this paper, we consider the Cauchy problem for the KdVKS equation $u_t+\delta \partial _x^3u+\mu(\partial_x^4u+\partial_x^2u)+\alpha (\partial_x u)^2=0$. By means of the $[k;Z]$ multiplier norm method of Tao, we prove that the associated initial value problem is locally well-posed in the Sobolev spaces $H^s(\mathbb{R})$ for $s>-1$, which improves the conclusion drawn by Biagioni et al.
Key words: KdVKS equation     local well-posedness     Cauchy problem    
1 Introduction

The purpose of this work is to study the local well-posedness of the Cauchy problem associated to the Korteweg-de Vries-Kuramoto-Sivashinsky (KdV-KS) equation

$ \begin{equation} \begin{cases}u_t+\delta \partial _x^3u+\mu(\partial_x^4u+\partial_x^2u)+\alpha (\partial_x u)^2=0, \\ u(0)=\varphi, \end{cases} \end{equation} $ (1.1)

where $x\in\mathbb{R}$, $t\in\mathbb{R}_+$, $u$ is a real-valued function and $\delta, \mu$ and $\alpha$ are constants such that $\mu>0$, $\delta\neq 0$ and $\alpha\neq 0$. The KdV-KS equation arises in interesting physical situations, for example as a model for long waves on a viscous fluid flowing down an inclined plane [2] and for deriving drift waves in a plasma [3].

In [1], using the dissipative effect of the linear part, Biagioni, Bona, Iorio and Scialom showed that the Cauchy problem associated to (1.1) is globally well-posed in $H^s(\mathbb{R})~ (s\geqslant 1)$. They also proved that the solutions of the KdV-KS equation converge to the solutions of the Kuramoto-Sivashinsky equation as the dispersive parameter $\delta$ goes to zero. A generalization of the KdV-KS equation is the following dispersive-dissipative equation

$ v_t+v_{xxx}-\eta Lv=(v_x)^{k+1}, \ \ \ x\in\mathbb{R}, \ \ \ t\geqslant0, \ \ \ k\in\mathbb{N}, \ \ \ k\geqslant1, $ (1.2)

where the linear operator $L$ is defined via the Fourier transform by $\widehat{Lf}(\xi)=-\Phi(\xi)\widehat{f}(\xi)$. The Fourier symbol $\Phi(\xi)$ is of the form

$ \Phi(\xi)=-|\xi|^p+\Phi_1(\xi), $ (1.3)

where $p\in\mathbb{R}^+$ and $|\Phi_1(\xi)|\leqslant C(1+|\xi|^q)$ with $0\leqslant q < p$. In [12], Carvajal and Panthee introduced some time-weighted spaces to derive multilinear estimates and used them in a contraction mapping argument to prove local well-posedness; they also proved ill-posedness for this type of model and showed that the local well-posedness results are sharp in some particular cases. We remark that the method presented in [12] will not work here. To overcome this difficulty, we use the $[k;Z]$ multiplier norm method of Tao [4] and obtain new bilinear estimates in a suitable Bourgain space.

Before presenting the precise statement of our main result, we give the definition of the working space of this paper. Without loss of generality, we will suppose that $\delta=\mu=\alpha=1$ in the rest of this paper.

Definition 1.1  For $s, b\in\mathbb{R}$, let $X^{s, b}$ denote the completion of the Schwartz functions with respect to the norm

$ \|u\|_{X^{s, b}}= \left\|{\langle i(\tau-\xi^3)+(\xi^4-\xi^2) \rangle^b\langle\xi\rangle^s\widehat{u}(\xi, \tau)}\right\|_{L^2_{\xi, \tau}}, $

where $\langle\cdot\rangle=(1+|\cdot|^2)^{\frac{1}{2}}$. For $T>0$, we consider the localized spaces $X^{s, b}_T$ endowed with the norm

$ \|u\|_{X^{s, b}_T}=\inf\limits_{w\in X^{s, b}}\{\|w\|_{X^{s, b}} : w(t)=u(t)\ \ {\rm on} \ [0, T]\}. $

As a consequence of this definition, we immediately have for $b > 1/2$, that $X^{s, b}$ is embedded in $C(\mathbb{R};H^s(\mathbb{R}))$.
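This embedding is standard; for the reader's convenience, here is a sketch of the argument (Fourier inversion in $t$ followed by Cauchy-Schwarz in $\tau$), with the abbreviation $z(\xi, \tau):=i(\tau-\xi^3)+\xi^4-\xi^2$:

```latex
% Since |z(\xi,\tau)| \ge |\tau - \xi^3|, the weight <z>^{-2b} is integrable
% in \tau uniformly in \xi as soon as b > 1/2.
\begin{aligned}
\langle\xi\rangle^{s}\,\bigl|\widehat{u(t)}(\xi)\bigr|
 &\lesssim \int_{\mathbb{R}} \langle\xi\rangle^{s}\,\bigl|\widehat{u}(\xi,\tau)\bigr|\,{\rm d}\tau\\
 &\leqslant \Bigl(\int_{\mathbb{R}} \langle \tau-\xi^{3}\rangle^{-2b}\,{\rm d}\tau\Bigr)^{\frac{1}{2}}
            \Bigl(\int_{\mathbb{R}} \langle z(\xi,\tau)\rangle^{2b}\langle\xi\rangle^{2s}
            \bigl|\widehat{u}(\xi,\tau)\bigr|^{2}\,{\rm d}\tau\Bigr)^{\frac{1}{2}}.
\end{aligned}
```

Taking the $L^2_\xi$ norm then yields $\sup_t\|u(t)\|_{H^s}\lesssim\|u\|_{X^{s, b}}$, and continuity in $t$ follows by dominated convergence.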

In the sequel, we state the main result of this work.

Theorem 1.1  Let $s>-1$ and $\varphi\in H^s(\mathbb{R})$. Then there exist $b=b(s)\in(1/2, 1)$ and $T=T(\|\varphi\|_{H^s(\mathbb{R})})>0$ such that the Cauchy problem (1.1) has a unique solution $u$ on $[0, T]$ satisfying $u\in C([0, T];H^s(\mathbb{R}))\cap X^{s, b}_T$. Moreover, the solution map

$ \Phi:H^s(\mathbb{R})\mapsto C([0, T];H^s(\mathbb{R}))\cap X^{s, b}_T, \ \ \ \varphi\mapsto u $

is smooth.

2 Notations and Preliminary Results

For any positive numbers $x$ and $y$, the notation $x\lesssim y$ means that there exists a positive constant $c$ such that $x\leqslant cy$; and we denote $x\thicksim y$ when $x\lesssim y$ and $y\lesssim x$. We shall denote by $\hat{f}$ the Fourier transform of $f$.

Now we consider the initial value problem associated to the linear parts of (1.1),

$ \begin{cases}u_t+\partial _x^3u+(\partial_x^4u+\partial_x^2u)=0, \\ u(0)=\varphi_0.\end{cases} $ (2.1)

The unique solution of (2.1) is given by the semigroup $V(t)$ defined by

$ \widehat{V(t)\varphi_0}(\xi)=e^{t(i\xi^3-\xi^4+\xi^2)}\widehat{\varphi_0}(\xi). $ (2.2)
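As a quick sanity check (ours, not part of the paper), one can verify symbolically that the exponent in (2.2) is exactly the Fourier symbol of the linear equation, using $\widehat{\partial_x^k u}=(i\xi)^k\widehat{u}$:

```python
import sympy as sp

xi = sp.symbols('xi', real=True)

# u_t + u_xxx + (u_xxxx + u_xx) = 0 becomes, on the Fourier side,
# d/dt u-hat = -((i xi)^3 + (i xi)^4 + (i xi)^2) u-hat.
symbol = sp.expand(-((sp.I*xi)**3 + (sp.I*xi)**4 + (sp.I*xi)**2))

# Exponent appearing in the semigroup formula (2.2):
expected = sp.I*xi**3 - xi**4 + xi**2

print(sp.simplify(symbol - expected))  # prints 0
```

In particular, the real part $-\xi^4+\xi^2$ of the symbol is what produces the parabolic smoothing for high frequencies exploited throughout the paper.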

By Duhamel's principle, the Cauchy problem (1.1) is equivalent to the integral equation

$ u(t)=V(t)\varphi-\int_0^tV(t-t')(u_x^2(t')){\rm d} t'. $ (2.3)
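Duhamel's principle here is the PDE analogue of variation of constants for ODEs. As an illustration (ours, with a concrete rate $\lambda=-2$ and forcing $\sin t$, neither taken from the paper), the scalar version can be checked symbolically:

```python
import sympy as sp

t, s = sp.symbols('t s', real=True)
u0 = sp.symbols('u0')
lam = -2  # concrete dissipative rate, chosen only for illustration

# Variation of constants for u' = lam*u + sin(t), u(0) = u0:
u = sp.exp(lam*t)*u0 + sp.integrate(sp.exp(lam*(t - s))*sp.sin(s), (s, 0, t))

# The formula satisfies both the equation and the initial condition.
print(sp.simplify(sp.diff(u, t) - lam*u - sp.sin(t)))  # prints 0
print(sp.simplify(u.subs(t, 0) - u0))                  # prints 0
```

The minus sign in (2.3) appears because the nonlinearity $(\partial_x u)^2$ sits on the left-hand side of (1.1).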

Actually, to prove the local existence result, we shall apply a fixed point argument to the following truncated version of (2.3)

$ u(t)=\theta_T(t)\left[V(t)\varphi-\int_0^tV(t-t')(u_x^2(t')){\rm d} t'\right], $ (2.4)

where $t\in\mathbb{R}$ and $\theta$ is a time cutoff function satisfying

$ \theta\in C_0^{\infty}(\mathbb{R}), \ \ \ 0\leqslant\theta\leqslant1, \ \ \ {\rm supp}(\theta)\subset[-2, 2], \ \ \ \theta=1 \ \ {\rm on} \ \ [-1, 1], $ (2.5)

and denote, for given $T>0$, $\theta_T(\cdot)=\theta(\cdot/T)$. Indeed, if $u$ solves (2.4), then $u$ is a solution of (2.3) on $[0, T]$.
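Cutoffs satisfying (2.5) are standard; a numerical sketch of one concrete construction (the helper names `g`, `theta`, `theta_T` are ours, not the paper's):

```python
import math

def g(x):
    """Smooth building block: e^{-1/x} for x > 0, and 0 otherwise.
    All derivatives vanish at x = 0, so quotients of g are C^infinity."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def theta(x):
    """Even cutoff satisfying (2.5): theta = 1 on [-1, 1], supp(theta) in [-2, 2]."""
    a = g(2.0 - abs(x))  # positive exactly when |x| < 2
    b = g(abs(x) - 1.0)  # positive exactly when |x| > 1
    return a / (a + b)   # a + b > 0 for every x, so this is well defined

def theta_T(x, T):
    """Rescaled cutoff theta(./T) from the text; equals 1 on [-T, T]."""
    return theta(x / T)
```

For instance, `theta(0.5)` returns 1 and `theta(3.0)` returns 0, while on $1<|x|<2$ the values interpolate smoothly between 1 and 0.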

Here are some fundamental estimates for the operator $V(t)$. Since the proofs of these estimates are standard, we omit them. The reader can find similar results for related operators in [8].

Lemma 2.1  (Homogeneous linear estimate) Let $s\in\mathbb{R}$, $\frac{1}{2} < b < 1$. There exists $C>0$ such that

$ \left\|\theta_T(t)V(t)\varphi\right\|_{X^{s, b}}\leqslant CT^{\frac{1-2b}{2}}\|\varphi\|_{H^s}, \ \ \ \forall \varphi\in H^s(\mathbb{R}). $ (2.6)

Lemma 2.2  (Non-homogeneous linear estimate) Let $s\in\mathbb{R}$ and $\frac{1}{2} < b < 1$. There exists $C>0$ such that, for any $f\in X^{s, b-1}$,

$ \left\|\theta_T(t)\int_0^tV(t-t')f(t'){\rm d} t'\right\|_{X^{s, b}}\leqslant CT^{\frac{1-2b}{2}}\|f\|_{X^{s, b-1}}. $ (2.7)
3 Bilinear Estimate

In this section, we derive the crucial bilinear estimate needed to prove the local existence result, starting from Tao's multiplier norm estimates for the KdV equation [4].

Let $Z$ be any abelian additive group with an invariant measure ${\rm d} \xi$. For any integer $k\geqslant2$, let $\Gamma_k(Z)$ denote the hyperplane

$ \Gamma_k(Z):=\{(\xi_1, \cdots, \xi_k)\in Z^k : \xi_1+\cdots+\xi_k=0\} $

endowed with the measure

$ \int_{\Gamma_k(Z)}f:=\int_{Z^{k-1}}f(\xi_1, \cdots, \xi_{k-1}, -\xi_1-\cdots-\xi_{k-1}){\rm d} \xi_1\cdots {\rm d} \xi_{k-1}. $

Following Tao [4], we define a $[k;Z]$-multiplier to be a function $m:\Gamma_k(Z)\rightarrow\mathbb{C}$. The multiplier norm $\|m\|_{[k;Z]}$ is the best constant such that

$ \left|\int_{\Gamma_k(Z)}m(\xi)\prod\limits_{j=1}^kf_j(\xi_j)\right|\leqslant\|m\|_{[k;Z]}\prod\limits_{j=1}^k\|f_j\|_{L^2(Z)} $

holds for all test functions $f_j$ on $Z$.

Meanwhile, we need to review some of Tao's notation. Any summations over capitalized variables such as $N_i, L_i, H$ are presumed to be dyadic. Let $N_1, N_2, N_3>0$. It will be convenient to define the quantities $N_{\max}\geqslant N_{{\rm med}}\geqslant N_{\min}$ to be the maximum, median, and minimum of $N_1, N_2, N_3$, respectively. Likewise, we define $L_{\max}\geqslant L_{{\rm med}}\geqslant L_{\min}$ whenever $L_1, L_2, L_3>0$. We adopt the following summation convention. Any summation of the form $L_{\max} \thicksim \cdots$ is a sum over the three dyadic variables $L_1, L_2, L_3 \gtrsim 1$; thus for instance

$ \sum\limits_{L_{\max}\thicksim H}:=\sum\limits_{L_1, L_2, L_3\gtrsim1:L_{\max}\thicksim H}. $

Similarly, any summation of the form $N_{\max}\thicksim\cdots$ is a sum over the three dyadic variables $N_1, N_2, N_3>0$; thus for instance

$ \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}:=\sum\limits_{N_1, N_2, N_3>0:N_{\max}\thicksim N_{{\rm med}}\thicksim N}. $
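The dyadic summations performed in this section ("performing the $L$ summations") ultimately rest on the fact that a dyadic sum $\sum_{L} L^{-\delta}$ over $L\in\{1, 2, 4, \ldots\}$ is a convergent geometric series whenever $\delta>0$. A quick numerical illustration (ours, with an arbitrary sample exponent):

```python
# Dyadic variables range over powers of two, so for delta > 0 the sum of
# L^{-delta} over L in {1, 2, 4, 8, ...} is geometric with ratio 2^{-delta}.
delta = 0.25
partial = sum((2.0 ** j) ** (-delta) for j in range(200))
closed_form = 1.0 / (1.0 - 2.0 ** (-delta))

print(abs(partial - closed_form) < 1e-10)  # prints True
```

This is why, in the estimates below, it suffices to extract any negative power of the dyadic variable being summed.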

Given $\tau_j$ and $\xi_j$, $j=1, 2, 3$, with $\tau_1+\tau_2+\tau_3=0$ and $\xi_1+\xi_2+\xi_3=0$, we write $\lambda_j:=\tau_j-\phi(\xi_j)$, where $\phi$ denotes the dispersion relation. The resonance function $h:\Gamma_3(Z)\rightarrow \mathbb{R}$ is defined by

$ h(\xi):=\phi(\xi_1)+\phi(\xi_2)+\phi(\xi_3)=-\lambda_1-\lambda_2-\lambda_3. $ (3.1)

By the dyadic decomposition of each variable $\xi_i$ or $\lambda_i$, as well as the function $h(\xi)$, we are led to consider

$ \left\|X_{N_1, N_2, N_3;H;L_1, L_2, L_3}\right\|_{[3;\mathbb{R}\times\mathbb{R}]}, $ (3.2)

where $X_{N_1, N_2, N_3;H;L_1, L_2, L_3}$ is the multiplier

$ X_{N_1, N_2, N_3;H;L_1, L_2, L_3}(\xi, \tau)=\chi_{|h(\xi)|\thicksim H}\prod\limits_{j=1}^3\chi_{|\xi_j|\thicksim N_j}\chi_{|\lambda_j|\thicksim L_j}. $ (3.3)

From the identities $\xi_1+\xi_2+\xi_3=0$ and

$ \lambda_1+\lambda_2+\lambda_3+h(\xi)=0 $ (3.4)

on the support of the multiplier, we see that $X_{N_1, N_2, N_3;H;L_1, L_2, L_3}$ vanishes unless

$ N_{\max}\thicksim N_{{\rm med}} $ (3.5)

and

$ L_{\max}\thicksim \max(H, L_{{\rm med}}). $ (3.6)

For the KdV group, where $\phi(\xi)=\xi^3$, from the resonance identity

$ h(\xi)=\xi_1^3+\xi_2^3+\xi_3^3=3\xi_1\xi_2\xi_3, $

we may assume that

$ H\thicksim N_{\max}^2N_{\min}, $ (3.7)

since the multiplier in (3.3) vanishes otherwise.
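The resonance identity above is an algebraic consequence of the constraint $\xi_1+\xi_2+\xi_3=0$ defining $\Gamma_3(\mathbb{R})$; a symbolic check (ours, not part of the paper):

```python
import sympy as sp

x1, x2 = sp.symbols('xi1 xi2', real=True)
x3 = -x1 - x2  # the hyperplane constraint xi1 + xi2 + xi3 = 0

# Resonance function for the KdV dispersion phi(xi) = xi^3:
h = x1**3 + x2**3 + x3**3

print(sp.expand(h - 3*x1*x2*x3))  # prints 0
```

Since $|3\xi_1\xi_2\xi_3|\thicksim N_1N_2N_3\thicksim N_{\max}^2N_{\min}$ when $N_{\max}\thicksim N_{\rm med}$, this is exactly the heuristic behind (3.7).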

Lemma 3.1 (see [6, Proposition 6.1])  Let $H, N_1, N_2, N_3, L_1, L_2, L_3>0$ obey (3.5)-(3.7) and let the dispersion relations be given by (3.4).

(ⅰ) If $N_{\max}\thicksim N_{\min}$ and $L_{\max}\thicksim H$, then we have

$ (3.2)\lesssim L_{\min}^{\frac{1}{2}}N_{\max}^{-\frac{1}{4}}L_{{\rm med}}^{\frac{1}{4}}. $ (3.8)

(ⅱ) If $N_2\thicksim N_3\gg N_1$ and $H\thicksim L_1\gtrsim L_2, L_3$, then

$ (3.2)\lesssim L_{\min}^{\frac{1}{2}}N_{\max}^{-1}\min\left(H, \frac{N_{\max}}{N_{\min}}L_{{\rm med}}\right)^{\frac{1}{2}}. $ (3.9)

Similarly for permutations.

(ⅲ) In all other cases, we have

$ (3.2)\lesssim L_{\min}^{\frac{1}{2}}N_{\max}^{-1}\min(H, L_{{\rm med}})^{\frac{1}{2}}. $ (3.10)

Proposition 3.1  Let $s>-1$. Then there exists $b=b(s)\in(1/2, 1)$ such that, for all $u, v\in X^{s, b}$,

$ \left\|u_xv_x\right\|_{X^{s, b-1}}\lesssim \|u\|_{X^{s, b}}\|v\|_{X^{s, b}}, $ (3.11)

where the implicit constant depends only on $s$ and $b$.

Proof  By Plancherel's formula and duality, it suffices to show that

$ \left\|\frac{\langle\xi_1\rangle^{-s}\langle\xi_2\rangle^{-s}\langle\xi_3\rangle^s\xi_1\xi_2}{\langle i(\tau_1-\xi_1^3)+\xi_1^4-\xi_1^2\rangle^{b}\langle i(\tau_2-\xi_2^3)+\xi_2^4-\xi_2^2\rangle^{b}\langle i(\tau_3-\xi_3^3)+\xi_3^4-\xi_3^2\rangle^{1-b}}\right\|_{[3;\mathbb{R}\times\mathbb{R}]}\lesssim1. $ (3.12)

By dyadic decomposition of the variables $\xi_j, \lambda_j$, $j=1, 2, 3$, and of $h(\xi)$, we may assume that $|\xi_j|\thicksim N_j$, $|\lambda_j|\thicksim L_j$, $|h(\xi)| \thicksim H$. Using the translation invariance of the $[3;Z]$-multiplier norm, we may restrict the estimate to $L_j\gtrsim1$ and $\max(N_1, N_2, N_3)=N\gtrsim1$. The comparison principle and orthogonality reduce the multiplier norm estimate (3.12) to showing that

$ \begin{align} &\sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_1, L_2, L_3\gtrsim1}\frac{\langle N_1\rangle^{-s}\langle N_2\rangle^{-s}\langle N_3\rangle^{s}N_1N_2}{\langle L_1+N_1^4-N_1^2\rangle^{b}\langle L_2+N_2^4-N_2^2\rangle^{b}\langle L_3+N_3^4-N_3^2\rangle^{1-b}}\nonumber\\ &\times\left\|\chi_{N_1, N_2, N_3;L_{\max};L_1, L_2, L_3}\right\|_{[3;\mathbb{R}\times\mathbb{R}]}\lesssim1 \end{align} $ (3.13)

and

$ \begin{align} &\sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_{\max}\thicksim L_{{\rm med}}}\sum\limits_{H\ll L_{\max}}\frac{\langle N_1\rangle^{-s}\langle N_2\rangle^{-s}\langle N_3\rangle^{s}N_1N_2}{\langle L_1+N_1^4-N_1^2\rangle^{b}\langle L_2+N_2^4-N_2^2\rangle^{b}\langle L_3+N_3^4-N_3^2\rangle^{1-b}}\nonumber\\ &\times\left\|\chi_{N_1, N_2, N_3;L_{\max};L_1, L_2, L_3}\right\|_{[3;\mathbb{R}\times\mathbb{R}]}\lesssim1 \end{align} $ (3.14)

for all $N\gtrsim1$. Estimates (3.13) and (3.14) will be accomplished by means of the fundamental estimate Lemma 3.1 and some delicate summation.

Fix $N\gtrsim1$; this implies (3.7). We first prove (3.14). By (3.10), we reduce to

$ \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_{\max}\thicksim L_{{\rm med}}\gtrsim N^2N_{\min}}\frac{\langle N_1\rangle^{-s}\langle N_2\rangle^{-s}\langle N_3\rangle^{s}N_1N_2L_{\min}^{\frac{1}{2}}N_{\min}^{\frac{1}{2}}}{\langle L_1+N_1^4-N_1^2\rangle^{b}\langle L_2+N_2^4-N_2^2\rangle^{b}\langle L_3+N_3^4-N_3^2\rangle^{1-b}}\lesssim1. $ (3.15)

By symmetry we only need to consider two cases: $N_1\sim N_2\sim N, N_3=N_{\min}$ and $N_1\sim N_3\sim N, N_2=N_{\min}$.

(ⅰ) In the first case $N_1\sim N_2\sim N, N_3=N_{\min}$, estimate (3.15) can be further reduced to

$ \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_{\max}\thicksim L_{{\rm med}}\gtrsim N^2N_{\min}}\frac{N^{-2s+2}\langle N_{\min}\rangle^{s}L_{\min}^{\frac{1}{2}}N_{\min}^{\frac{1}{2}}}{\langle L_1+N^4\rangle^{b}\langle L_2+N^4\rangle^{b}\langle L_3\rangle^{1-b}}\lesssim1, $

then performing the $L$ summations, we reduce to

$ \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\frac{N^{-2s+2}\langle N_{\min}\rangle^{s}N_{\min}^{\frac{1}{2}}}{\langle L_1+N^4\rangle^{b}\langle L_2+N^4\rangle^{1-b}}\lesssim1, $

which is true if $2s+2>0$. So (3.15) is true if $s>-1$.

(ⅱ) In the second case $N_1\sim N_3\sim N, N_2=N_{\min}$, estimate (3.15) can be reduced to

$ \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_{\max}\thicksim L_{{\rm med}}\gtrsim N^2N_{\min}}\frac{NN_{\min}\langle N_{\min}\rangle^{s}L_{\min}^{\frac{1}{2}}N_{\min}^{\frac{1}{2}}}{\langle L_1+N^4\rangle^{b}\langle L_2\rangle^{b}\langle L_3+N^4\rangle^{1-b}}\lesssim1. $

Before performing the $L$ summations, we need to pay a little more attention to the summation over $N_{\min}$. So we reduce to

$ \begin{align*} &\sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N, N_{\min}\leqslant 1}\sum\limits_{L_{\max}\thicksim L_{{\rm med}}\gtrsim N^2N_{\min}}\frac{N}{L_{\min}^{b-\frac{1}{2}}N^4}\\ &+\sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N, 1\leqslant N_{\min}\leqslant N}\sum\limits_{L_{\max}\thicksim L_{{\rm med}}\gtrsim N^2N_{\min}}\frac{NN_{\min}^{\frac{3}{2}+s}}{L_{\min}^{b-\frac{1}{2}}N^4}\lesssim1, \end{align*} $

which is obviously true if $s>-\frac{3}{2}$. So (3.15) is true if $s>-\frac{3}{2}$.

Now we turn to the low modulation case (3.13). We may assume $L_{\max}\thicksim N^2N_{\min}$. We first deal with the contribution where (3.8) holds. In this case, we have $N_1, N_2, N_3\thicksim N\gtrsim 1$, so we reduce to

$ \begin{align*} &\sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_{\max}\thicksim N^3}\frac{N^{2-s}}{\langle L_1+N^4\rangle^{b}\langle L_2+N^4\rangle^{b} \langle L_3+N^4\rangle^{1-b}}L_{\min}^{\frac{1}{2}}N^{-\frac{1}{4}}L_{{\rm med}}^{\frac{1}{4}}\\ \lesssim&\sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\frac{N^{2-s}N^{-\frac{1}{4}}}{N^4}\lesssim1, \end{align*} $

which holds if $s>-\frac{9}{4}$.

Now we deal with the cases where (3.9) applies. By symmetry we only need to consider two cases

$ \begin{align} N\thicksim N_1\thicksim N_2\gg N_3 &\qquad\mbox{with}\quad H\thicksim L_3\gtrsim L_1, L_2, \\ N\thicksim N_1\thicksim N_3\gg N_2 &\qquad\mbox{with}\quad H\thicksim L_2\gtrsim L_3, L_1. \end{align} $ (3.16), (3.17)

In the first case, we reduce by (3.9) to

$ \sum\limits_{N\gg N_3}\sum\limits_{1\lesssim L_1, L_2\lesssim N^2N_3}\frac{N^{-2s+2}\langle N_{\min}\rangle^{s}L_{\min}^{\frac{1}{2}}N^{-1}\min\left(N^2N_3, \frac{N}{N_3}L_{{\rm med}}\right)^{\frac{1}{2}}}{\langle L_1+N^4\rangle^{b}\langle L_2+N^4\rangle^{b}\langle L_3\rangle^{1-b}}\lesssim1. $

Performing the $N_3$ summation, we reduce to

$ \begin{align*} \sum\limits_{1\lesssim L_1, L_2\lesssim N^2N_3}\frac{N^{-2s+2}L_{\min}^{\frac{1}{2}}N^{-1}N^{\frac{3}{4}}L_{{\rm med}}^{\frac{1}{4}}}{\langle L_1+N^4\rangle^{b}\langle L_2+N^4\rangle^{b}\langle L_3\rangle^{1-b}} \lesssim\sum\limits_{1\lesssim L_1, L_2\lesssim N^2N_3}\frac{N^{-2s+2}L_{\min}^{\frac{1}{2}}N^{-\frac{1}{4}}L_{{\rm med}}^{\frac{1}{4}}}{\langle L_1+N^4\rangle^{b}\langle L_2+N^4\rangle^{1-b}}\lesssim1, \end{align*} $

which holds if $s>-1$.

In the second case, we simplify using (3.9) to

$ \sum\limits_{N\gg N_2}\sum\limits_{1\lesssim L_1, L_3\lesssim N^2N_2}\frac{NN_2\langle N_2\rangle^{-s}L_{\min}^{\frac{1}{2}}N_2^{\frac{1}{2}}}{\langle L_2+N^4\rangle^{b}\langle L_1\rangle^{b}\langle L_3+N^4\rangle^{1-b}}\lesssim1. $

Performing the $L$ summation, we reduce to

$ \begin{align*} \sum\limits_{N\gg N_2}\frac{\langle N_2\rangle^{-s}NN_2^{\frac{3}{2}}}{N^4} \lesssim \sum\limits_{N\gg N_2\geqslant1}\frac{\langle N\rangle^{-s}NN^{\frac{3}{2}}}{N^4}+\sum\limits_{N_2\leqslant1}\frac{\langle N_2\rangle^{-s}NN_2^{\frac{3}{2}}}{N^4}\lesssim 1, \end{align*} $

which holds if $s>-\frac{3}{2}$.

To finish the proof of (3.13), it remains to deal with the cases where (3.10) holds. This reduces to

$ \begin{equation} \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_1, L_2, L_3\gtrsim1}\frac{\langle N_1\rangle^{-s}\langle N_2\rangle^{-s}\langle N_3\rangle^{s}N_1N_2L_{\min}^{\frac{1}{2}}N^{-1}\min(H, L_{{\rm med}})^{\frac{1}{2}}}{\langle L_1+N_1^4-N_1^2\rangle^{b}\langle L_2+N_2^4-N_2^2\rangle^{b}\langle L_3+N_3^4-N_3^2\rangle^{1-b}} \lesssim1. \end{equation} $ (3.18)

To estimate (3.18), by symmetry we only need to consider two cases: $N_1\sim N_2\sim N, N_3=N_{\min}$ and $N_1\sim N_3\sim N, N_2=N_{\min}$.

(ⅰ) If $N_1\sim N_2\sim N, N_3=N_{\min}$, then estimate (3.18) can be further reduced to

$ \begin{equation*} \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_1, L_2, L_3\gtrsim1}\frac{N^{-2s+2}\langle N_{\min}\rangle^{s}L_{\min}^{\frac{1}{2}}N_{\min}^{\frac{1}{2}}}{\langle L_1+N^4\rangle^{b}\langle L_2+N^4\rangle^{b}\langle L_3\rangle^{1-b}}\lesssim1, \end{equation*} $

performing the $L$ summations, we reduce to

$ \begin{equation*} \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\frac{N^{-2s+2}}{\langle L_1+N^4\rangle^{b}\langle L_2+N^4\rangle^{1-b}}\lesssim1, \end{equation*} $

which is true if $s>-1$.

(ⅱ) If $N_1\sim N_3\sim N, N_2=N_{\min}$, then estimate (3.18) can be reduced to

$ \begin{equation*} \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\sum\limits_{L_1, L_2, L_3\gtrsim1}\frac{NN_{\min}\langle N_{\min}\rangle^{s}L_{\min}^{\frac{1}{2}}N_{\min}^{\frac{1}{2}}}{\langle L_1+N^4\rangle^{b}\langle L_2\rangle^{b}\langle L_3+N^4\rangle^{1-b}}\lesssim1, \end{equation*} $

performing the $L$ summations, we reduce to

$ \begin{equation*} \sum\limits_{N_{\max}\thicksim N_{{\rm med}}\thicksim N}\frac{N^{-s+\frac{5}{2}}}{N^4}\lesssim1, \end{equation*} $

which is obviously true if $s>-\frac{3}{2}$. This completes the proof of Proposition 3.1.

4 Proof of Main Result

In this section, we use the linear and bilinear estimates to prove the local well-posedness result stated in Theorem 1.1.

Proof of Theorem 1.1  Let $s>-1$ and $\varphi\in H^s(\mathbb{R})$. We prove the existence of a solution $u$ of the integral formulation (2.3) on some interval $[0, T]$ for $T < 1$ small enough. Define

$ \Gamma_T(u)(t)=\theta_T(t)\left[V(t)\varphi-\int_0^tV(t-t')(u_x^2(t')){\rm d} t'\right]. $ (4.1)

We want to use the Picard fixed point theorem to find a solution of

$ \Gamma_T(u)=u $ (4.2)

in the space $X^{s, b}$.

Using (2.6), (2.7) and (3.11), we deduce that there exists a constant $C>0$ such that

$ \left\|\Gamma_T(u)\right\|_{X^{s, b}}\leqslant CT^{\frac{1-2b}{2}} \left(\|\varphi\|_{H^s(\mathbb{R})}+\|u\|^2_{X^{s, b}}\right). $ (4.3)

Since

$ u^2_x-v^2_x=(u_x+v_x)(u_x-v_x), $

the same computation leads to

$ \left\|\Gamma_T(v)-\Gamma_T(u)\right\|_{X^{s, b}}\leqslant CT^{\frac{1-2b}{2}}\|u-v\|_{X^{s, b}}\|u+v\|_{X^{s, b}}. $ (4.4)

We define

$ X^{s, b}(M)=\{v\in X^{s, b}:\|v\|_{X^{s, b}}\leqslant M\} $

with $M=2C\|\varphi\|_{H^s}$. Then if we choose $T$ such that

$ CT^{\frac{1-2b}{2}}\left(\frac{M}{2C}+M^2\right) < \frac{1}{2}M, $ (4.5)

(4.3) and (4.4) imply that $\Gamma_T$ is a contraction map on the Banach space $X^{s, b}(M)$. Thus we deduce by the fixed point theorem that there exists a unique solution $u\in X^{s, b}(M)$ of (4.2).

References
[1] Biagioni H A, Bona J L, Iorio R, Scialom M. On the Korteweg-de Vries-Kuramoto-Sivashinsky equation[J]. Adv. Diff. Eq., 1996, 1: 1–20.
[2] Topper J, Kawahara T. Approximate equations for long nonlinear waves on a viscous fluid[J]. J. Phys. Soc. Japan, 1978, 44: 663–666. DOI:10.1143/JPSJ.44.663
[3] Cohen B I, Krommes J A, Tang W M, Rosenbluth M N. Nonlinear saturation of the dissipative trapped-ion mode by mode coupling[J]. Nuclear Fusion, 1976, 16: 971–992. DOI:10.1088/0029-5515/16/6/009
[4] Tao T. Multilinear weighted convolution of L2 functions and applications to nonlinear dispersive equations[J]. Amer. J. Math., 2001, 123: 839–908. DOI:10.1353/ajm.2001.0035
[5] Alvarez B. The Cauchy problem for a nonlocal perturbation of the KdV equation[J]. Diff. Integ. Equ., 2003, 16(10): 1249–1280.
[6] Carvajal X, Scialom M. On the well-posedness for the generalized Ostrovsky, Stepanyams and Tsimring equation[J]. Nonl. Anal., 2005, 62: 1277–1287. DOI:10.1016/j.na.2005.04.032
[7] Zhao X. On low regularity of the Ostrovsky, Stepanyams and Tsimring equation[J]. J. Math. Anal. Appl., 2011, 378: 687–699. DOI:10.1016/j.jmaa.2011.01.021
[8] Molinet L, Ribaud F. On the low regularity of the Korteweg-de Vries-Burgers equation[J]. Int. Math. Res. Not., 2002, 37: 1979–2005.
[9] Molinet L, Vento S. Sharp ill-posedness and well-posedness results for the KdV-Burgers equation:the real line case[J]. Ann. Scuola Norm. Sup. Pisa Cl. Sci., 2011, X(5): 531–560.
[10] Esfahani A. Sharp well-posedness of the Ostrovsky, Stepanyams and Tsimring equation[J]. Math. Commun., 2013, 18: 323–335.
[11] Dix D B. Nonuniqueness and uniqueness in the initial-value problem for Burgers' equation[J]. SIAM J. Math. Anal., 1996, 27: 708–724. DOI:10.1137/0527038
[12] Carvajal X, Panthee M. On the well-posedness of higher order viscous Burgers' equations[J]. J. Math. Anal. Appl., 2014, 417: 1–22. DOI:10.1016/j.jmaa.2014.02.056
[13] Guo Z, Wang B. Global well-posedness and inviscid limit for the Korteweg-de Vries-Burgers equation[J]. J. Diff. Equa., 2009, 246: 3864–3901. DOI:10.1016/j.jde.2009.03.006
[14] Chen W, Li J. On the low regularity of the modified Korteweg-de Vries equation with a dissipative term[J]. J. Diff. Equa., 2007, 240: 125–144. DOI:10.1016/j.jde.2007.05.030
[15] Pastran R A. On a perturbation of the Benjamin-Ono equation[J]. Nonl. Anal., 2013, 93: 273–296. DOI:10.1016/j.na.2013.07.014
[16] Vento S. Global well-posedness for dissipative Korteweg-de Vries equations[J]. Funk. Ekvac., 2011, 54: 119–138. DOI:10.1619/fesi.54.119
[17] Yang Yang, Zhang Jihui, Shang Xudong, Shao Yixin. Multiple solutions for symmetric and nonsymmetric quasilinear elliptic equation:an Orlicz-Sobolev space setting[J]. J. Math., 2015, 35(4): 779–788.
[18] Zhang Zhongxiang. On singular integral equations with translations[J]. J. Math., 2001, 21(2): 161–167.