数学杂志 (Journal of Mathematics), 2015, Vol. 35, Issue 5: 1017-1025
A NOVEL AND PRECISE SEVENTH-ORDER NEWTON'S ITERATIVE METHOD FOR SOLVING NONLINEAR EQUATIONS
WANG Xiao-feng1,2, SHI Dong-yang1    
1. School of Math. and Statistics, Zhengzhou University, Zhengzhou 450001, China;
2. Department of Math., Henan Institute of Science and Technology, Xinxiang 453003, China
Abstract: In this paper, we study the problem of solving nonlinear equations. By using Taylor expansions and a coupling technique, we obtain a novel and robust three-step seventh-order iterative scheme. The proposed method is without memory, does not require the computation of higher-order derivatives, and has a large radius of convergence and high computational efficiency.
Key words: iterative method; nonlinear equations; extended computational index; efficiency index; convergence radius
A Seventh-Order Convergent Modified Newton Iterative Scheme for Solving Nonlinear Equations
WANG Xiao-feng1,2, SHI Dong-yang1
1. School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, China;
2. School of Mathematical Sciences, Henan Institute of Science and Technology, Xinxiang 453003, China
Abstract: In this paper we study the solution of nonlinear equations. Using Taylor's formula and a coupling technique, we obtain an improved seventh-order iterative scheme with accelerated convergence for solving nonlinear equations. The scheme does not require the computation of higher-order derivatives, has a larger radius of convergence, and greatly improves computational efficiency.
Key words: iterative method; nonlinear equations; extended index; efficiency index; convergence radius
1 Introduction

In numerical analysis and scientific computing, one of the most important problems is the solution of nonlinear equations. To solve such equations, we can use iterative methods such as Newton's method and its variants. In this paper, we consider iterative methods for finding a real simple root $x^{*}$ of a nonlinear equation $f(x)=0$, where $f: D\rightarrow R$ is a scalar function that is sufficiently smooth in a neighborhood of $x^{*}$ and $D$ is an open interval.

Being quadratically convergent, Newton's method is probably the best-known and most widely used algorithm. To improve the local order of convergence of Newton's method, many modified methods have been proposed. One of the third-order modifications of Newton's method [1] is given by

$ \begin{eqnarray} x_{n+1}=x_{n}-\frac{f(x_{n})+f(y_{n})}{f'(x_{n})}, \end{eqnarray} $ (1.1)

where $y_{n}=x_{n}-f(x_{n})/f'(x_{n}), ~n\geq0$. Another third-order method [2] is defined by

$ \begin{eqnarray} x_{n+1}=x_{n}-\frac{f^{2}(x_{n})}{f(x_{n})-f(y_{n})}\cdot\frac{1}{f'(x_{n})}. \end{eqnarray} $ (1.2)

Combining eqs. (1.1) and (1.2), we have

$ \begin{eqnarray*} x_{n+1}=\alpha\Big[x_{n}-\frac{f(x_{n})+f(y_{n})}{f'(x_{n})}\Big]+\beta\Big[x_{n}-\frac{f^{2}(x_{n})}{f(x_{n})-f(y_{n})}\cdot\frac{1}{f'(x_{n})}\Big]. \end{eqnarray*} $

Taking $\alpha=-1, \beta=2$, we can obtain the following fourth-order iterative method [3]

$ \begin{eqnarray} x_{n+1}=x_{n}-\frac{f^{2}(x_{n})+f^{2}(y_{n})}{f'(x_{n})[f(x_{n})-f(y_{n})]}. \end{eqnarray} $ (1.3)

This method requires two evaluations of the function and one evaluation of its first derivative per iteration. We adopt the definition of the efficiency index [3] as $p^{1/m}$, where $p$ is the order of the method and $m$ is the number of function evaluations per iteration required by the method. The method defined by (1.3) has efficiency index $\sqrt[3]{4}\approx1.587$, which is better than the $\sqrt[3]{3}\approx1.442$ of the third-order methods mentioned above and the $\sqrt{2}\approx1.414$ of Newton's method.
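To make the iteration concrete, the following minimal Python sketch implements one step of method (1.3); the test equation $\cos x - x = 0$, the starting point, and the helper name `f4_step` are our own illustrative choices, not part of the original paper.

```python
# A minimal sketch of the fourth-order method (1.3); the test function,
# starting point, and tolerance are illustrative choices.
import math

def f4_step(f, fp, x):
    """One step of (1.3): x <- x - (f(x)^2 + f(y)^2) / (f'(x)[f(x) - f(y)])."""
    fx = f(x)
    y = x - fx / fp(x)          # Newton predictor y_n
    fy = f(y)
    return x - (fx**2 + fy**2) / (fp(x) * (fx - fy))

f = lambda x: math.cos(x) - x   # simple root near 0.7390851332
fp = lambda x: -math.sin(x) - 1.0

x = 1.0
for _ in range(5):
    if abs(f(x)) < 1e-14:       # stop before f(x) - f(y) degenerates to 0
        break
    x = f4_step(f, fp, x)
    print(x, abs(f(x)))
```

Each pass roughly quadruples the number of correct digits, consistent with fourth-order convergence.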

In order to derive new high-order iterative methods, we present and analyze a new three-step iterative method for solving nonlinear equations, obtained by approximating the first derivative of the function in the third step. According to Theorem 3.1 below, the local order of convergence of the composed method is at least seven, while each iteration requires only three evaluations of the function and one evaluation of its first derivative, so that its efficiency index is $\sqrt[4]{7}\approx1.627$. Several numerical examples are given to show the performance of the new method.

2 Description of the Method

In this section, to construct the new high-order iterative method, we start from the iterative method given by eq. (1.3).

Theorem 2.1  Let $\psi_{1}(x)$ and $\psi_{2}(x)$ be two iterative methods with orders of convergence $P$ and $Q$, respectively. Then the order of convergence of the composed iterative method $\psi(x)=\psi_{2}(\psi_{1}(x))$ is $PQ$ [4].

We consider the iteration scheme of the form

$ \begin{equation} \left\{ \begin {array}{lll} \displaystyle y_{n}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}, \\ \displaystyle z_{n}=x_{n}-\frac{f^{2}(x_{n})+f^{2}(y_{n})}{f'(x_{n})[f(x_{n})-f(y_{n})]}, \\ \displaystyle x_{n+1}=z_{n}-\frac{f(z_{n})}{f'(z_{n})}. \end {array}\right. \end{equation} $ (2.1)

From Theorem 2.1, scheme (2.1) has order of convergence $4\times 2=8$, but it requires five function evaluations per iteration, so its efficiency index is only $\sqrt[5]{8}\approx1.516$. To improve the efficiency index, we approximate the derivative $f'(z_{n})$ appearing in the last step by a combination of already-evaluated function values, using divided differences.

We consider the Taylor polynomial of degree 2 for $f$ about $y_{n}$, evaluated at $z_{n}$:

$ \begin{equation} f({z_{n}})\approx f(y_{n})+f'(y_{n})(z_{n}-y_{n})+\frac{1}{2}f''(y_{n})(z_{n}-y_{n})^{2}, \end{equation} $ (2.2)

which implies

$ \begin{equation} f'({y_{n}})\approx f[z_{n}, y_{n}]-\frac{f''({y_{n}})}{2}(z_{n}-y_{n}). \end{equation} $ (2.3)

In turn, $f''({y_{n}})$ can be approximated as [4]

$ \begin{equation} f''({y_{n}})\approx\frac{2f[z_{n}, x_{n}]-2f'(x_{n})}{z_{n}-x_{n}}=2f[z_{n}, x_{n}, x_{n}]. \end{equation} $ (2.4)

Expanding $f'(z_{n})$ about $y_{n}$ to first order gives $f'(z_{n})\approx f'(y_{n})+f''(y_{n})(z_{n}-y_{n})$; substituting eqs. (2.3) and (2.4) into this expansion, the $f''$ terms combine to give

$ \begin{equation} f'(z_{n})\approx f[z_{n}, y_{n}]+f[z_{n}, x_{n}, x_{n}](z_{n}-y_{n}). \end{equation} $ (2.5)
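As a quick numerical sanity check of eq. (2.5) (our own illustration, not from the paper), the following sketch compares the divided-difference approximation with the exact derivative for $f(x)=e^{x}$ at three nearby points:

```python
# Sketch comparing approximation (2.5) with the exact f'(z) for f = exp,
# using illustrative points x, y, z clustered near a common value.
import math

f, fp = math.exp, math.exp
x, y, z = 1.00, 1.02, 1.03                             # hypothetical sample points

dd_zy = (f(z) - f(y)) / (z - y)                        # f[z, y]
dd_zxx = ((f(z) - f(x)) / (z - x) - fp(x)) / (z - x)   # f[z, x, x], cf. (2.4)
approx = dd_zy + dd_zxx * (z - y)                      # right-hand side of (2.5)

print(approx, fp(z), abs(approx - fp(z)))              # error shrinks as the points cluster
```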

Consequently, our proposed iterative method takes the following three-step form

$ \begin{equation} \left\{ \begin {array}{lll} \displaystyle y_{n}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}, \\ \displaystyle z_{n}=x_{n}-\frac{f^{2}(x_{n})+f^{2}(y_{n})}{f'(x_{n})[f(x_{n})-f(y_{n})]}, \\ \displaystyle x_{n+1}=z_{n}-\frac{f(z_{n})}{f[z_{n}, y_{n}]+f[z_{n}, x_{n}, x_{n}](z_{n}-y_{n})}. \end {array}\right. \end{equation} $ (2.6)

In the next section, we will show that the newly proposed method (2.6) has seventh-order convergence. Per iteration, method (2.6) requires three evaluations of the function and one evaluation of its first derivative, so its efficiency index reaches $\sqrt[4]{7}\approx1.627$; this is the main motivation of our paper.
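Putting the three steps together, here is a minimal runnable Python sketch of method (2.6); the test problem (function $f_{3}$ from Section 4) and the stopping tolerance are our own illustrative choices.

```python
# A sketch of method (2.6): three function values and one derivative per
# iteration; the test problem and tolerance below are illustrative choices.
import math

def step_26(f, fp, x):
    fx = f(x)
    y = x - fx / fp(x)                                   # step 1: Newton
    fy = f(y)
    z = x - (fx**2 + fy**2) / (fp(x) * (fx - fy))        # step 2: method (1.3)
    fz = f(z)
    dd_zy = (fz - fy) / (z - y)                          # f[z, y]
    dd_zxx = ((fz - fx) / (z - x) - fp(x)) / (z - x)     # f[z, x, x]
    return z - fz / (dd_zy + dd_zxx * (z - y))           # step 3: (2.5) in place of f'(z)

f = lambda x: math.exp(x) - 4 * x**2    # f_3 from Section 4, root near 0.7148
fp = lambda x: math.exp(x) - 8 * x

x = 1.0
while abs(f(x)) > 1e-13:
    x = step_26(f, fp, x)
    print(x)
```

Starting from $x_{0}=1.0$, a couple of iterations suffice to reach machine precision, in line with seventh-order convergence.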

3 Convergence Analysis

Theorem 3.1  Let $x^{*}\in D$ be a simple zero of a sufficiently differentiable function $f:D\rightarrow R$ on an open interval $D$. Then the new three-step method defined by eq. (2.6) has seventh-order convergence and satisfies the following error equation:

$ \begin{eqnarray*} e_{n+1}=4c_{2}c_{3}(4c_{2}c_{3}-c_{4}-4c_{2}^{3})e_{n}^{7}+O(e_{n}^{8}), \end{eqnarray*} $

where $e_{n}=x_{n}-x^{*}$, $c_{k}=(1/k!)f^{(k)}(x^{*})/f'(x^{*})$.

Proof  Let $e_{n}=x_{n}-x^{*}$, where $x^{*}$ is a simple zero of $f$. Using Taylor expansions and taking into account $f(x^{*})=0$, we have

$ \begin{eqnarray} f(x_{n})=f'(x^{*})\Big[e_{n}+c_{2}e_{n}^{2}+c_{3}e_{n}^{3}+c_{4}e_{n}^{4}+c_{5}e_{n}^{5}+O(e_{n}^{6})\Big] \end{eqnarray} $ (3.1)

and

$ \begin{eqnarray} f'(x_{n})=f'(x^{*})\Big[1+2c_{2}e_{n}+3c_{3}e_{n}^{2}+4c_{4}e_{n}^{3}+5c_{5}e_{n}^{4}+6c_{6}e_{n}^{5}+O(e_{n}^{6})\Big]. \end{eqnarray} $ (3.2)

Dividing eq. (3.1) by eq. (3.2) gives us

$ \begin{eqnarray} \displaystyle \frac{f(x_{n})}{f'(x_{n})}&=&\displaystyle e_{n}-c_{2}e_{n}^{2}+2(c_{2}^{2}-c_{3})e_{n}^{3}+(7c_{2}c_{3}-4c_{2}^{3}-3c_{4})e_{n}^{4}\nonumber\\ &&+2(4c_{2}^{4}-10c_{2}^{2}c_{3}+3c_{3}^{2}+5c_{2}c_{4}-2c_{5})e_{n}^{5}+O(e_{n}^{6}), \end{eqnarray} $ (3.3)

and hence

$ \begin{eqnarray} y_{n}&=& x^{*}+c_{2}e_{n}^{2}-2(c_{2}^{2}-c_{3})e_{n}^{3}-(7c_{2}c_{3}-3c_{4}-4c_{2}^{3})e_{n}^{4}\nonumber\\ &&-2(4c_{2}^{4}-10c_{2}^{2}c_{3}+3c_{3}^{2}+5c_{2}c_{4}-2c_{5})e_{n}^{5}+O(e_{n}^{6}). \end{eqnarray} $ (3.4)

Again, expanding $f(y_{n})$ about $x^{*}$ and then using eq. (3.4), we have

$ \begin{eqnarray} f(y_{n})&=& f'(x^{*})\Big[c_{2}e_{n}^{2}-2(c_{2}^{2}-c_{3})e_{n}^{3}-(7c_{2}c_{3}-3c_{4}-5c_{2}^{3})e_{n}^{4}\nonumber\\ &&-2(6c_{2}^{4}-12c_{2}^{2}c_{3}+3c_{3}^{2}+5c_{2}c_{4}-2c_{5})e_{n}^{5}+O(e_{n}^{6})\Big]. \end{eqnarray} $ (3.5)

After an elementary calculation from eqs. (3.1) and (3.5), we obtain

$ \begin{eqnarray} f(x_{n})-f(y_{n})&=& f'(x^{*})\Big[e_{n}+(2c_{2}^{2}-c_{3})e_{n}^{3}+(7c_{2}c_{3}-5c_{2}^{3}-2c_{4})e_{n}^{4}\nonumber\\ &&+2(6c_{2}^{4}-12c_{2}^{2}c_{3}+3c_{3}^{2}+5c_{2}c_{4}-1.5c_{5})e_{n}^{5}+O(e_{n}^{6})\Big] \end{eqnarray} $ (3.6)

and

$ \begin{eqnarray} f^{2}(x_{n})+f^{2}(y_{n})&=& f'^{2}(x^{*})\Big[e_{n}^{2}+2c_{2}e_{n}^{3}+2(c_{3}+c_{2}^{2})e_{n}^{4}+2(c_{4}+3c_{2}c_{3}-2c_{2}^{3})e_{n}^{5}\nonumber\\ &&+(14c_{2}^{4}-22c_{2}^{2}c_{3}+5c_{3}^{2}+8c_{2}c_{4}+2c_{5})e_{n}^{6}+O(e_{n}^{7})\Big]. \end{eqnarray} $ (3.7)

From eqs. (3.2), (3.6) and (3.7), we can obtain

$ \begin{eqnarray} \frac{f^{2}(x_{n})+f^{2}(y_{n})}{f'(x_{n})[f(x_{n})-f(y_{n})]}&=& e_{n}+2(4c_{2}c_{3}-c_{4}-4c_{2}^{3})e_{n}^{4}\nonumber\\ &&+(40c_{2}^{4}-58c_{2}^{2}c_{3}+8c_{3}^{2}+16c_{2}c_{4}-3c_{5})e_{n}^{5}+O(e_{n}^{6}). \end{eqnarray} $ (3.8)

From eq. (3.8), we obtain

$ \begin{eqnarray} z_{n}-x^{*}&=&-2(4c_{2}c_{3}-c_{4}-4c_{2}^{3})e_{n}^{4}-(40c_{2}^{4}-58c_{2}^{2}c_{3}+8c_{3}^{2}+16c_{2}c_{4}-3c_{5})e_{n}^{5}\nonumber\\ &&+O(e_{n}^{6}) \end{eqnarray} $ (3.9)

and

$ \begin{eqnarray} f(z_{n})=f'(x^{*})\Big[\alpha_{n}+O(\alpha_{n}^{2})\Big], ~~\alpha_{n}=z_{n}-x^{*}. \end{eqnarray} $ (3.10)

Thus,

$ \begin{eqnarray} \displaystyle \frac{f(z_{n})}{f[z_{n}, y_{n}]+f[z_{n}, x_{n}, x_{n}](z_{n}-y_{n})}=\alpha_{n}\Big[1+2c_{2}c_{3}e_{n}^{3}+O(e_{n}^{4})\Big]. \end{eqnarray} $ (3.11)

Hence, we obtain the following error equation

$ \begin{eqnarray} e_{n+1}&=& \alpha_{n}-\frac{f(z_{n})}{f[z_{n}, y_{n}]+f[z_{n}, x_{n}, x_{n}](z_{n}-y_{n})}\nonumber\\ &=&4c_{2}c_{3}(4c_{2}c_{3}-c_{4}-4c_{2}^{3})e_{n}^{7}+O(e_{n}^{8}). \end{eqnarray} $ (3.12)

This means that the method defined by eq. (2.6) has seventh-order convergence, which completes the proof.
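The error equation can also be checked symbolically. The following SymPy sketch (our own verification aid, assuming SymPy is available; the expansion is heavy and may take a while) Taylor-expands one iteration of (2.6) about a simple root at $x^{*}=0$ with the normalization $f'(x^{*})=1$, so that $c_{k}=f^{(k)}(0)/k!$:

```python
# Sketch re-deriving the error equation of method (2.6) by series expansion
# about a simple root at x* = 0, normalized so that f'(x*) = 1.
import sympy as sp

e = sp.symbols('e')                                   # e_n = x_n - x*
c2, c3, c4, c5, c6, c7 = sp.symbols('c2 c3 c4 c5 c6 c7')

def f(t):   # truncated Taylor series of f about the root
    return t + c2*t**2 + c3*t**3 + c4*t**4 + c5*t**5 + c6*t**6 + c7*t**7

def fp(t):  # corresponding truncated series of f'
    return 1 + 2*c2*t + 3*c3*t**2 + 4*c4*t**3 + 5*c5*t**4 + 6*c6*t**5 + 7*c7*t**6

x = e
y = x - f(x)/fp(x)
z = x - (f(x)**2 + f(y)**2) / (fp(x)*(f(x) - f(y)))
dd_zy = (f(z) - f(y)) / (z - y)                       # f[z, y]
dd_zxx = ((f(z) - f(x)) / (z - x) - fp(x)) / (z - x)  # f[z, x, x]
e_new = z - f(z) / (dd_zy + dd_zxx*(z - y))

# Expect 4*c2*c3*(4*c2*c3 - c4 - 4*c2**3)*e**7 + O(e**8).
print(sp.series(e_new, e, 0, 8))
```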

Remark 1  The efficiency index of our scheme defined by eq. (2.6) is $\sqrt[4]{7}\approx1.627$, which is bigger than the $\sqrt{2}\approx1.414$ of Newton's method. Although the new scheme has a lower efficiency index than the $\sqrt[4]{8}\approx1.682$ of the optimal eighth-order methods in [6, 7], its convergence radius and its accuracy for starting points that are not very close to the root are better than those of the optimal eighth-order methods; see Section 4.

4 Numerical Examples

To show the reliability of the new method (2.6), we compare its results with those of Zhang's seventh-order methods [5] and the eighth-order methods in [6, 7]. Zhang's seventh-order methods are expressed as

$ \begin{eqnarray} \left\{ \begin {array}{lll} \displaystyle y_{n}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}, \\ \displaystyle z_{n}=y_{n}-\frac{1}{1-2\alpha+\beta\alpha^{2}+\gamma\alpha^{3}}\frac{f(y_{n})}{f'(x_{n})}, \\ \displaystyle x_{n+1}=z_{n}-\frac{f(z_{n})}{\lambda}, \end {array}\right.\end{eqnarray} $ (4.1)
$\begin{eqnarray} \left\{ \begin {array}{lll} \displaystyle y_{n}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}, \\ \displaystyle z_{n}=y_{n}-(1+2\alpha)\frac{f(y_{n})}{f'(x_{n})}, \\ \displaystyle x_{n+1}=z_{n}-\frac{f(z_{n})}{\lambda}, \end {array}\right.\end{eqnarray} $ (4.2)
$\begin{eqnarray} \left\{ \begin {array}{lll} \displaystyle y_{n}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}, \\ \displaystyle z_{n}=y_{n}-(1+\alpha)^{2}\frac{f(y_{n})}{f'(x_{n})}, \\ \displaystyle x_{n+1}=z_{n}-\frac{f(z_{n})}{\lambda}, \end {array}\right. \end{eqnarray} $ (4.3)

where $\alpha=f(y_{n})/f(x_{n})$, $\lambda=f[z_{n}, y_{n}]+f[z_{n}, x_{n}, x_{n}](z_{n}-y_{n})$, and $\beta, \gamma\in R$. The optimal eighth-order method of Thukral and Petkovic [6] and the optimal eighth-order method of Neta and Petkovic [7] are expressed as

$ \begin{eqnarray} \left\{ \begin {array}{lll} \displaystyle y_{n}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}, \\ \displaystyle z_{n}=y_{n}-\frac{f(x_{n})+bf(y_{n})}{f(x_{n})+(b-2)f(y_{n})}\cdot\frac{f(y_{n})}{f'(x_{n})}, \\ \displaystyle x_{n+1}=z_{n}-\Big[\varphi(n)+\frac{f(z_{n})}{f(y_{n})-af(z_{n})}+\frac{4f(z_{n})}{f(x_{n})}\Big]\cdot\frac{f(z_{n})}{f'(x_{n})} \end {array}\right. \end{eqnarray} $ (4.4)

and

$ \begin{eqnarray} \left\{ \begin {array}{lll} \displaystyle y_{n}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}, \\ \displaystyle z_{n}=y_{n}-\frac{f(x_{n})+tf(y_{n})}{f(x_{n})+(t-2)f(y_{n})}\cdot\frac{f(y_{n})}{f'(x_{n})}, \\ \displaystyle x_{n+1}=y_{n}+c[f(x_{n})]^{2}-d[f(x_{n})]^{3}, \end {array}\right. \end{eqnarray} $ (4.5)

where $a, b, t\in R$ and

$ \begin{eqnarray} \left\{ \begin {array}{lll} \displaystyle\varphi(n)=1+2\kappa_{n}+(5-2b)(\kappa_{n})^{2}+2(6-6b+b^{2})(\kappa_{n})^{3}, \\ ~~\kappa_{n}=f(y_{n})/f(x_{n}), \\ \displaystyle c=\frac{1}{f(y_{n})-f(x_{n})}\Big(\frac{1}{f[y_{n}, x_{n}]}-\frac{1}{f'(x_{n})}\Big)-d\Big(f(y_{n})-f(x_{n})\Big), \\ \displaystyle d=\frac{1}{f(y_{n})-f(x_{n})}\cdot\frac{1}{f(y_{n})-f(z_{n})}\Big(\frac{1}{f[y_{n}, x_{n}]}-\frac{1}{f'(x_{n})}\Big)\\ \displaystyle ~~~~~~~~-\frac{1}{f(z_{n})-f(x_{n})}\cdot\frac{1}{f(y_{n})-f(z_{n})}\Big(\frac{1}{f[z_{n}, x_{n}]}-\frac{1}{f'(x_{n})}\Big). \end {array}\right. \end{eqnarray} $ (4.6)

All computations were performed in MATLAB 7.0 with 200-digit floating-point arithmetic (VPA = 200). When the stopping criterion

$ \begin{eqnarray} |x_{n+1}-x_{n}|+|f(x_{n+1})|<\varepsilon \end{eqnarray} $ (4.7)

is satisfied, $x_{n+1}$ is taken as an approximation of the exact root $x^{*}$. We use the following test functions (a high-precision sketch reproducing this setup on one of them follows the list):

$\bullet$ $\displaystyle~~f_{1}(x)=\sqrt{x^{4}+8}\sin(\frac{\pi}{x^{2}+2})+\frac{x^{3}}{x^{4}+1}-\sqrt{6}+\frac{8}{17}, ~~x^{*}=-2$,

$\bullet$ $\displaystyle~~f_{2}(x)=\sin x-0.5, ~~x^{*}=0.523598775598299$,

$\bullet$ $\displaystyle~~f_{3}(x)=e^{x}-4x^{2}, ~~x^{*}=0.7148059123627778$,

$\bullet$ $\displaystyle~~f_{4}(x)=(\sin x-\frac{\sqrt{2}}{2})^{2}(x+1), ~~x^{*}=-1$,

$\bullet$ $\displaystyle~~f_{5}(x)=\cos x-x, ~~x^{*}=0.739085133215161$.
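The experiments use MATLAB 7.0 with 200-digit arithmetic; as a stand-in (our own sketch, assuming the mpmath library), the following Python code runs method (2.6) on $f_{5}$ at 200 significant digits with the stopping criterion (4.7):

```python
# High-precision sketch of the experimental setup: mpmath stands in for
# MATLAB's VPA; digits, test function f_5 and epsilon follow Section 4.
from mpmath import mp, mpf, cos, sin

mp.dps = 200                           # 200 significant digits, as in the paper

f = lambda x: cos(x) - x               # f_5, root near 0.7390851332
fp = lambda x: -sin(x) - 1

def step_26(x):                        # one iteration of method (2.6)
    fx = f(x)
    y = x - fx / fp(x)
    fy = f(y)
    z = x - (fx**2 + fy**2) / (fp(x) * (fx - fy))
    fz = f(z)
    dd_zy = (fz - fy) / (z - y)
    dd_zxx = ((fz - fx) / (z - x) - fp(x)) / (z - x)
    return z - fz / (dd_zy + dd_zxx * (z - y))

eps = mpf(10) ** -15
x = mpf(1)
while True:
    x_new = step_26(x)
    if abs(x_new - x) + abs(f(x_new)) < eps:   # stopping criterion (4.7)
        break
    x = x_new
print(x_new)
```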

Displayed in Table 1 are the number of iterations $(n)$ and the number of function evaluations (NFE) required for the stopping criterion to be satisfied, where $\varepsilon$ is taken to be $10^{-15}$. Moreover, displayed in Table 2 are the distance between two consecutive approximations $\delta=|x_{n+1}-x_{n}|$, the value of $|f(x_{n+1})|$ after the required iterations, and the computational order of convergence (COC), which can be approximated using the formula [8]

$ \begin{eqnarray} COC=\frac{\ln|(x_{n+1}-x_{n})(x_{n}-x_{n-1})^{-1}|}{\ln|(x_{n}-x_{n-1})(x_{n-1}-x_{n-2})^{-1}|}. \end{eqnarray} $ (4.8)
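For reference, the COC estimate (4.8) needs only the last four iterates of a run; a small sketch (our own, with Newton's method on $f_{5}$ merely supplying illustrative iterates) is:

```python
# Sketch of the COC estimate (4.8); the Newton run on f_5 below supplies
# illustrative iterates, for which COC should come out near 2.
import math

def coc(xs):
    x0, x1, x2, x3 = xs[-4:]                 # last four iterates
    num = math.log(abs((x3 - x2) / (x2 - x1)))
    den = math.log(abs((x2 - x1) / (x1 - x0)))
    return num / den

f = lambda x: math.cos(x) - x
fp = lambda x: -math.sin(x) - 1.0
xs = [1.0]
for _ in range(4):
    xs.append(xs[-1] - f(xs[-1]) / fp(xs[-1]))
print(coc(xs))                               # approximately 2
```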
Table 1
Comparison of various methods in terms of the number of iterations needed to obtain the root $x^{*}$

Table 2
Comparison of various iterative methods

The results in Table 1 and Table 2 show that, for most of the functions tested, the present method improves on the computational efficiency of Newton's method and of the modified method defined by eq. (1.2); the initial guesses for $f_{1}, f_{2}, f_{3}, f_{4}$ in Table 2 are $-4.0$, $1.4$, $-0.42$ and $-1.9$, respectively. Note that the eighth-order methods defined by eqs. (4.4)-(4.5), with $a=b=t=0$, have very small convergence radii; this is why, in most cases, they diverge when the starting point lies in the vicinity of the zero but not very close to it. The practical applicability of eighth-order methods such as eqs. (4.4)-(4.5), whose convergence radii are small, is therefore restricted.

We also provide the total number of evaluations (TNE) required by each method to obtain the root to 15 decimal places in Table 3 and Table 4. As we can see, the contributed method (2.6) is robust and accurate in comparison with the other efficient schemes. Comparing with eqs. (4.4)-(4.5), we may claim that the method competes with the optimal eighth-order schemes in [6, 7], while its computational complexity is lower and its convergence radius is bigger. Of course, if the initial guess is close enough (very close) to the sought zero, then the optimal eighth-order methods (4.4)-(4.5) will perform better than (2.6).

Table 3
Comparison of the distance between two consecutive approximations δ for various methods finding the root x*, with the same total number of evaluations (TNE = 3)

Table 4
Comparison of the values of |f(xn+1)| for various methods finding the root x*, with the same total number of evaluations (TNE = 3)

Remark 2  Let $r$ be the total number of operations (additions, subtractions, multiplications, divisions, and so on) required per iteration of an iterative method; then the (extended) computational index (also known as the extended operational index) is defined by $\sqrt[r]{p}$, where $p$ is the order of convergence. We can now compare the computational index of some well-known high-order methods with that of our scheme. The computational index of our method is $\sqrt[25]{7}\approx1.081$, which is bigger than the $\sqrt[39]{8}\approx1.054$ of (4.4) and the $\sqrt[58]{8}\approx1.036$ of (4.5).
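A one-line check of these values (our own; the operation counts $r=25, 39, 58$ are taken from the remark itself):

```python
# Check of the computational-index values p**(1/r) quoted in Remark 2.
for p, r, label in [(7, 25, "method (2.6)"), (8, 39, "method (4.4)"), (8, 58, "method (4.5)")]:
    print(label, f"{p ** (1.0 / r):.4f}")   # 1.0809, 1.0548, 1.0365
```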

5 Conclusions

In this work we presented an efficient Newton-type method with seventh-order convergence, obtained by combining previous methods for solving nonlinear equations. The efficiency analysis shows that the new algorithm is more efficient and performs better than classical Newton's method and some other known methods.

References
[1] Chun C. Iterative methods improving Newton's methods by the decomposition method[J]. Comput. Math. Appl., 2005, 50(10-12): 1559–1568. DOI:10.1016/j.camwa.2005.08.022
[2] Chun C. Construction of Newton-like iteration methods for solving non-linear equations[J]. Numer. Math., 2006, 104(3): 297–315. DOI:10.1007/s00211-006-0025-2
[3] Kou Jisheng, Wang Xiuhua, Li Yitian. Some eight-order root-finding three-step methods[J]. Commun. Nonlinear Sci. Numer. Simulat., 2010, 15(3): 536–544. DOI:10.1016/j.cnsns.2009.04.013
[4] Al-Subaihi I A. A ninth-order iterative method free from second derivative for solving nonlinear equations[J]. Int. J. Math. Anal., 2011, 5(47): 2337–2347.
[5] Zhang Y X, Ding H F, He W S, Yang X Y. A new family of methods for nonlinear equations[C]. ICICA 2010, LNCS 6377, 2010: 387–394.
[6] Thukral R, Petkovic M S. A family of three-point methods of optimal order for solving nonlinear equations[J]. J. Comput. Appl. Math., 2010, 233(9): 2278–2284. DOI:10.1016/j.cam.2009.10.012
[7] Neta B, Petkovic M S. Construction of optimal order nonlinear solvers using inverse interpolation[J]. Appl. Math. Comput., 2010, 217(6): 2448–2455.
[8] Al-Subaihi I A. A ninth-order iterative method free from second derivative for solving nonlinear equations[J]. Int. J. Math. Anal., 2011, 5(47): 2337–2347.