数学杂志 (Journal of Mathematics), 2014, Vol. 34, Issue (2): 205-213
OPTIMAL EIGHTH-ORDER STEFFENSEN TYPE METHODS FOR NONLINEAR EQUATIONS
WANG Xiao-feng1,2, ZHANG Tie2    
1. School of Mathematics and Physics, Bohai University, Jinzhou 121013, China;
2. College of Sciences, Northeastern University, Shenyang 110819, China
Abstract: In this paper, we present a family of three-step eighth-order Steffensen type methods for solving nonlinear equations by using weight function methods. Numerical experiments show that these methods require less time to converge than the other optimal methods.
Key words: Steffensen's method; derivative free; eighth-order convergence
1 Introduction

In this paper, we consider iterative methods to find a simple root of a nonlinear equation $f(x)=0, $ where $f:I\subset R\to R$ is a scalar function on an open interval $I$. The classical Newton's method [1] with second-order convergence is written as

${x_{n + 1}} = {x_n} - f({x_n})f'{({x_n})^{ - 1}}.$ (1.1)

However, when the first-order derivative of the function $f(x)$ is unavailable or expensive to compute, Newton's method is of limited use in practical applications. In order to avoid computing the first-order derivative, Steffensen [2] proposed the following second-order method

${x_{n + 1}} = {x_n} - f({x_n})f{[{x_n},{z_n}]^{ - 1}},$ (1.2)

where $z_n =x_n +f(x_n ), $ and $f[\cdot \mbox{ }, \mbox{ }\cdot]$ denotes the first-order divided difference. To improve the local order of convergence, many high-order methods have been proposed in the open literature; see [3-12] and the references therein. Ren et al. [3] proposed the following fourth-order methods

$\begin{equation} \label{eq3} \left\{ \begin{aligned} y_n =&x_n -f(x_n )f[x_n, z_n]^{-1}, \\ x_{n+1} =&y_n -f(y_n )[f[x_n, y_n]+f[y_n, z_n]-f[x_n, z_n]+\beta (y_n -x_n )(y_n -z_n )]^{-1}, \end{aligned} \right. \end{equation}$ (1.3)

where $z_n =x_n +f(x_n )$ and $\beta \in R$ is a constant. Zheng et al. [4] presented an eighth-order method by using the direct Newtonian interpolation, which is given by

$\begin{equation} \label{eq4} \left\{ \begin{aligned} y_n =&x_n -f(x_n )f[x_n, z_n]^{-1}, \\ u_n =&y_n -f(y_n )\{f[x_n, y_n]+f[z_n, x_n, y_n](y_n -x_n )\}^{-1}, \\ x_{n+1} =&u_n -f(u_n )\{f[u_n, y_n]+f[u_n, x_n, y_n](u_n -y_n )\\ &+f[u_n, z_n, x_n, y_n](u_n -y_n )(u_n -x_n )\}^{-1}, \end{aligned} \right. \end{equation}$ (1.4)

where $z_n =x_n +\gamma f(x_n )$ and $\gamma \in R$ is a constant. Furthermore, Soleymani et al. [5] also presented the following eighth-order method

$\begin{equation} \label{eq5} \left\{ \begin{aligned} y_n =&x_n -f(x_n )f[x_n, z_n]^{-1}, \\ u_n =&y_n -f(y_n )f[x_n, y_n]^{-1}\left\{ {1+t_n +t_n^2 -t_n^3 /2} \right\}, \\ x_{n+1} =&u_n -f(u_n )f[u_n, y_n]^{-1}\left\{ {1-(f[x_n, z_n]-1)^{-1}t_n^2 +(2-f[x_n, z_n])\lambda _n } \right\}, \end{aligned} \right. \end{equation}$ (1.5)

where $z_n =x_n -f(x_n ), t_n =f(y_n )/f(z_n )$ and $\lambda _n =f(u_n )/f(z_n ).$ Other Steffensen type methods and their applications were discussed in [6-12]. All these methods are derivative-free per full iteration.
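As a concrete illustration, the basic Steffensen scheme (1.2) can be sketched in a few lines of Python; the test equation, starting point, tolerance and iteration cap below are illustrative choices, not taken from the paper.

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Derivative-free second-order iteration (1.2)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        z = x + fx                       # z_n = x_n + f(x_n)
        dd = (f(z) - fx) / (z - x)       # divided difference f[x_n, z_n]
        x = x - fx / dd                  # x_{n+1} = x_n - f(x_n) / f[x_n, z_n]
    return x

root = steffensen(lambda x: x**2 - 2.0, 1.0)   # converges to sqrt(2)
```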

The purpose of this paper is to develop a new family of eighth-order derivative-free methods and give the convergence analysis. This paper is organized as follows. In Section 2, we present a family of three-step eighth-order iterative methods for solving nonlinear equations. The new methods are free from any derivatives and require four evaluations of the function $f(x), $ therefore the new methods have the efficiency index of $\sqrt[4]{8}\approx 1.682$. The new methods agree with the conjecture of Kung and Traub [13] for the case $n=4$. We prove that the order of convergence of the new methods is eight for nonlinear equations. In Section 3, we give some specific iterative methods which can be used in practical computations. Numerical examples are given in Section 4 to illustrate convergence behavior of our methods for simple roots. Section 5 is a short conclusion.

2 The Methods and Analysis of Convergence

Now, we consider the iteration scheme of the form,

$\begin{equation} \label{eq6} \left\{ \begin{aligned} y_n =&x_n -f(x_n )f[x_n, z_n]^{-1}, \\ u_n =&y_n -K(s_n, t_n )f(y_n )f[x_n, z_n]^{-1}, \\ x_{n+1} =&u_n -H(\lambda _n )f(u_n ){f}'(u_n )^{-1}, \end{aligned} \right. \end{equation}$ (2.1)

where $z_n =x_n +\gamma f(x_n ), \mbox{ }s_n =f(y_n )/f(x_n ), \mbox{ }t_n =f(y_n )/f(z_n ), \mbox{ }\lambda _n =f(u_n )/f(z_n )$ and $\gamma \in R$ is a constant. $H(\lambda ) \mbox{ and } K(s, t)$ are some functions of one and two variables, respectively. It is quite obvious that formula (2.1) requires five functional evaluations per iteration. To derive a scheme with a higher efficiency index and reduce the number of functional evaluations, we approximate ${f}'(u_n )$ by considering the approximation of $f(x)$ by a rational nonlinear function of the form

$\phi (x) = \{ {a_1} + {a_2}(x - {x_n})\} {\{ 1 + {a_3}(x - {x_n})\} ^{ - 1}},$ (2.2)

where the parameters $a_1, a_2 $ and $a_3 $ are determined by the condition that $f$ and $\phi $ coincide at $x_n, y_n $ and $u_n $. That means $\phi (x)$ satisfies the conditions

$\phi ({x_n}) = f({x_n}),{\rm{ }}\phi ({y_n}) = f({y_n}),{\rm{ }}\phi ({u_n}) = f({u_n}).$ (2.3)

From (2.2) and (2.3), we can obtain

${a_1} = f({x_n}), $ (2.4)
${a_2} = \{ f[{x_n}, {u_n}]f({y_n}) - f[{x_n}, {y_n}]f({u_n})\} {\{ f({y_n}) - f({u_n})\} ^{ - 1}}, $ (2.5)
${a_3} = f[{x_n}, {u_n}] - f[{x_n}, {y_n}]{\{ f({y_n}) - f({u_n})\} ^{ - 1}}.{\rm{ }}$ (2.6)

Differentiation of (2.2) gives

$\phi '(x) = ({a_2} - {a_1}{a_3}){(1 + {a_3}(x - {x_n}))^{ - 2}}.$ (2.7)

We can now approximate the derivative ${f}'(x)$ with the derivative ${\phi }'(x)$ and obtain

$f'({u_n}) \approx \phi '({u_n}).$ (2.8)

From (2.4) and (2.5)-(2.8), it follows that

$f'({u_n}) \approx f({x_n})f[{x_n},{u_n}]f[{u_n},{y_n}]{(f({x_n}) - f({y_n}))^{ - 1}}f{[{x_n},{z_n}]^{ - 1}}.$ (2.9)
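A quick numerical sanity check of (2.9): since $y_n =x_n -f(x_n )f[x_n, z_n]^{-1}$, the right-hand side of (2.9) coincides with $\phi '(u_n )$, and it should be close to $f'(u_n )$ once the points are near the root. The function $f$, the point $x_n$ and the choice $\gamma =1$ below are illustrative, not from the paper.

```python
import math

f = lambda x: math.cos(x) - x                  # illustrative test function
dd = lambda a, b: (f(a) - f(b)) / (a - b)      # divided difference f[a, b]

gamma = 1.0
xn = 0.7
zn = xn + gamma * f(xn)
yn = xn - f(xn) / dd(xn, zn)                   # first step of (2.1)
un = yn - f(yn) / dd(xn, zn)                   # a point close to the root

# right-hand side of (2.9)
approx = f(xn) * dd(xn, un) * dd(un, yn) / ((f(xn) - f(yn)) * dd(xn, zn))
exact = -math.sin(un) - 1.0                    # true f'(u_n)
```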

Substituting (2.9) into (2.1), we obtain a new family of eighth-order methods:

$\begin{equation} \label{eq15} \left\{ \begin{aligned} y_n =&x_n -f(x_n )f[x_n, z_n]^{-1}, \\ u_n =&y_n -K(s_n, t_n )f(y_n )f[x_n, z_n]^{-1}, \\ x_{n+1} =&u_n -H(\lambda _n )f[x_n, z_n]f(u_n )(1-s_n )\{f[u_n, x_n]f[u_n, y_n]\}^{-1}. \end{aligned} \right. \end{equation}$ (2.10)

The functions $H(\lambda ) \mbox{ and } K(s, t)$ should be determined so that the iterative method (2.10) is of order eight. To this end, we expand $H(\lambda )$ in a Taylor series about $0$ and $K(s, t)$ about $(0, 0)$:

$H(\lambda ) = H(0) + H'(0)\lambda + \cdots ,$ (2.11)
$K(s,t) = K(0,0) + {K_s}s + {K_t}t + [{K_{ss}}{s^2} + 2{K_{st}}st + {K_{tt}}{t^2}]/2 + \cdots .$ (2.12)

Here the subscripts denote partial derivatives evaluated at the origin; for example, $K_{st} =\left. {\frac{\partial ^2K(s, t)}{\partial s\partial t}} \right|_{(s, t)=(0, 0)}, $ etc. Petković et al. [10] proved that the first two steps of (2.10) constitute a fourth-order method with $K(0, 0)=1, \mbox{ }K_s =1$ and $K_t =1.$ The remaining coefficients $H(0), \ {H}'(0), \ K_{ss}, K_{st}, \ K_{tt} $ are determined in Theorem 2.1.

Theorem 2.1 Let $a\in I$ be a simple zero of a sufficiently differentiable function $f:\mbox{ }I\subset R\to R$ for an open interval $I.$ If $x_0 $ is sufficiently close to $a$, then the sequence $\{x_n \}$ generated by any method of the family (2.10) converges to $a$. If $H(0)=1, \ {H}'(0)=1, \ \left| {{H}''(0)} \right|<\infty, K(0, 0)=1, K_s =1, $ $K_t =1, \ K_{ss} =2, \ K_{tt} =2, K_{st} =2, $ then the family of methods defined by (2.10) is of order eight.

Proof Let $e_n =x_n -a$, $c_n =(1/n!)f^{(n)}(a)/{f}'(a)$, $n=2, 3, \cdots $. Using the Taylor expansion and taking into account $f(a)=0, $ we have

$f({x_n}) = f'(a)[{e_n} + {c_2}e_n^2 + {c_3}e_n^3 + {c_4}e_n^4 + {c_5}e_n^5 + {c_6}e_n^6 + O(e_n^7)].$ (2.13)

From (2.13), noting that $z_n =x_n +\gamma f(x_n )$, we can obtain

$\begin{array}{l} f({z_n}) = f'(a)[(1 + \gamma f'(a)){e_n} + {c_2}(\gamma f'(a) + {(1 + \gamma f'(a))^2})e_n^2\\ + ({c_3}(\gamma f'(a) + {(1 + \gamma f'(a))^3}) + 2\gamma f'(a)(1 + \gamma f'(a))c_2^2)e_n^3 + (c_2^3{\gamma ^2}f'{(a)^2}\\ + {c_2}{c_3}\gamma f'(a)(1 + \gamma f'(a))(5 + 3\gamma f'(a))\\ + {c_4}(\gamma f'(a) + {(1 + \gamma f'(a))^4}))e_n^4 + O(e_n^5)], \end{array}$ (2.14)
$\begin{array}{l} f[{x_n}, {z_n}] = f'(a) + (2 + \gamma f'(a)){c_2}f'(a){e_n} + (3{c_3} + (c_2^2 + 3{c_3})\gamma f'(a)\\ + {c_3}{\gamma ^2}f'{(a)^2})f'(a)e_n^2 + ({c_4}(4 + 6\gamma f'(a) + 4{\gamma ^2}f'{(a)^2} + {\gamma ^3}f'{(a)^3})\\ + {c_2}{c_3}(4\gamma f'(a) + 2{\gamma ^2}f'{(a)^2}))f'(a)e_n^3 + O(e_n^4), \end{array}$ (2.15)

then,

${y_n} - a = Ae_n^2 + Be_n^3 + Ce_n^4 + O(e_n^5), $ (2.16)

where

$A = {c_2}(1 + \gamma f'(a)), $ (2.17)
$B = {c_3}(2 + 3\gamma f'(a) + {\gamma ^2}f'{(a)^2}) - c_2^2(2 + 2\gamma f'(a) + {\gamma ^2}f'{(a)^2}), $ (2.18)
$\begin{array}{l} C = {c_4}(3 + 6\gamma f'(a) + 4{\gamma ^2}f'{(a)^2} + {\gamma ^3}f'{(a)^3}) - {c_2}{c_3}(7 + 10\gamma f'(a)\\ + 7{\gamma ^2}f'{(a)^2} + 2{\gamma ^3}f'{(a)^3}) + c_2^3(4 + 5\gamma f'(a) + 3{\gamma ^2}f'{(a)^2} + {\gamma ^3}f'{(a)^3}).{\rm{ }} \end{array}$ (2.19)

With (2.13), (2.14) and (2.16), we have

$\begin{array}{l} f({y_n}) = f'(a)[{y_n}-a + {c_2}{({y_n}-a)^2} + O(e_n^5)]\\ = f'(a)[Ae_n^2 + Be_n^3 + (C + {A^2})e_n^4 + O(e_n^5)], \end{array}$ (2.20)
$\begin{array}{l} {s_n} = f'(a) + f'(a){c_2}{e_n} + f'(a)(c_2^2 + {c_3} + \gamma f'(a)c_2^2)e_n^2 + f'(a)( - 2c_2^3 + 3{c_2}{c_3} + {c_4}\\ - 2\gamma f'(a)c_2^3 + 4\gamma f'(a){c_2}{c_3} + ({c_2}{c_3} - c_2^3){\gamma ^2}f'{(a)^2})e_n^3 + f'(a)(4c_2^4 - 8c_2^2{c_3} + 2c_3^2\\ + 4c_2^4 + {c_5} + \gamma f'(a)(5c_2^4 - 10c_2^2{c_3} + 3c_3^2 + 7{c_2}{c_4}) + {\gamma ^2}f'{(a)^2}(3c_2^4 - 7c_2^2{c_3}\\ + c_3^2 + 4{c_2}{c_4}) + {\gamma ^3}f'{(a)^3}(c_2^4 - 2c_2^2{c_3} + {c_2}{c_4}))e_n^4 + O(e_n^5), \end{array}$ (2.21)
$\begin{array}{l} {t_n} = (1 + \gamma f'(a)){c_2}{e_n} + ( - 3c_2^2 + 2{c_3} + \gamma f'(a)(3{c_3} - 3c_2^2)\\ + {\gamma ^2}f'{(a)^2}({c_3} - c_2^2))e_n^2 + (8c_2^3 - 10{c_2}{c_3} + 3{c_4} + \gamma f'(a)(10c_2^3 - 14{c_2}{c_3} + 6{c_4})\\ + (5c_2^3 - 8{c_2}{c_3} + 4{c_4}){\gamma ^2}f'{(a)^2} + {\gamma ^3}f'{(a)^3}(c_2^3 - 2{c_2}{c_3} + {c_4}))e_n^3\\ + ( - 20c_2^4 + 37c_2^2{c_3} - 8c_3^2 - 14{c_2}{c_4} + 4{c_5} + \gamma f'(a)\\ (60c_2^2{c_3} - 30c_2^4 - 15c_3^2 - 25{c_2}{c_4} + 10{c_5}) + {\gamma ^2}f'{(a)^2}(44c_2^2{c_3} - 20c_2^4 - 13c_3^2 - 20{c_2}{c_4} + 10{c_5})\\ + {\gamma ^3}f'{(a)^3}(17c_2^2{c_3} - 7c_2^4 - 6c_3^2 - 9{c_2}{c_4} + 5{c_5})\\ + {\gamma ^4}f'{(a)^4}(3c_2^2{c_3} - c_2^4 - c_3^2 - 2{c_2}{c_4} + {c_5}))e_n^4 + O(e_n^5).{\rm{ }} \end{array}$ (2.22)

Using the Taylor expansion (2.12) with $K(0, 0)=K_s =K_t =1, $ we get

$K({s_n},{t_n}) = 1 + {s_n} + {t_n} + [{K_{ss}}s_n^2 + 2{K_{st}}{s_n}{t_n} + {K_{tt}}t_n^2]/2 + O(e_n^5).$ (2.23)

Together with (2.15)-(2.23), we have

${u_n} - a = De_n^4 + Ee_n^5 + O(e_n^6),$ (2.24)

where

$\begin{array}{l} D = - 1/2{c_2}(1 + \gamma f'(a))(2{c_3}(1 + \gamma f'(a)) + c_2^2( - 10 + {K_{tt}} + 2{K_{st}} + {K_{ss}}\\ + \gamma f'(a)( - 10 + 2{K_{st}} + 2{K_{ss}}) + {\gamma ^2}f'{(a)^2}( - 2 + {K_{ss}})), \end{array}$ (2.25)
$\begin{array}{l} E = 1/2( - {\rm{2}}{c_2}{c_4}{({\rm{1}} + \gamma f'(a))^2}({\rm{2}} + \gamma f'(a)) - {\rm{2}}c_3^2({\rm{2}} + \gamma f'(a)){(1 + \gamma f'(a))^2}\\ - c_2^2{c_3}(1 + \gamma f'(a))( - {\rm{64}} + {\rm{6}}{K_{tt}} + \gamma f'(a)(15{K_{ss}} - {\rm{92}}) + {\gamma ^2}f'{(a)^2}( - {\rm{44}} + {\rm{12}}{K_{ss}})\\ + {\gamma ^3}f'{(a)^3}( - {\rm{6}} + {\rm{3}}{K_{ss}}) + {\rm{3}}{K_{tt}}({\rm{2}} + \gamma f'(a)) + {\rm{6}}{K_{st}}({\rm{2}} + {\rm{3}}\gamma f'(a) + {\gamma ^2}f'{(a)^2}))\\ + c_2^4( - {\rm{72}} + {\rm{1}}0{K_{ss}} + \gamma f'(a)( - {\rm{16}}0 + {\rm{31}}{K_{ss}}) + {\gamma ^2}f'{(a)^2}( - {\rm{132}} + {\rm{36}}{K_{ss}})\\ + {\gamma ^3}f'{(a)^3}( - {\rm{48}} + {\rm{19}}{K_{ss}}) + {\gamma ^4}f'{(a)^4}( - {\rm{6}} + {\rm{4}}{K_{ss}}) + {K_{tt}}({\rm{1}}0 + {\rm{15}}\gamma f'(a) + {\rm{6}}{\gamma ^2}f'{(a)^2})\\ + {\rm{2}}{K_{st}}({\rm{1}}0 + {\rm{23}}\gamma f'(a) + {\rm{18}}{\gamma ^2}f'{(a)^2} + {\rm{5}}{\gamma ^3}f'{(a)^3})))e_n^5 + O(e_n^6).{\rm{ }} \end{array}$ (2.26)

Using (2.13), (2.14), (2.16), (2.20) and (2.24), we have

$f({u_n}) = f'(a)(De_n^4 + Ee_n^5 + O(e_n^6)), $ (2.27)
$f[{x_n}, {u_n}] = f'(a)[1 + {c_2}{e_n} + {c_3}e_n^2 + {c_4}e_n^3 + ({c_5} + {c_2}D)e_n^4 + O(e_n^5)], $ (2.28)
$f[{u_n}, {y_n}] = f'(a)[1 + {c_2}Ae_n^2 + {c_2}Be_n^3 + ((C + D){c_2} + {c_3}{A^2})e_n^4 + O(e_n^5)], $ (2.29)
$\begin{array}{l} f[{u_n}, a] = f'(a) + f''(a)({u_n} - a)/2 + O(e_n^6)\\ = f'(a)[1 + {c_2}De_n^4 + {c_2}Ee_n^5 + O(e_n^6)], \end{array}$ (2.30)
$\begin{array}{l} {\lambda _n} = D{(1 + \gamma f'(a))^{ - 1}}e_n^3 + [E(1 + \gamma f'(a))\\ -{c_2}D(1 + 3\gamma f'(a) + {\gamma ^2}f'{(a)^2})]{(1 + \gamma f'(a))^{ - 2}}e_n^4 + O(e_n^5).{\rm{ }} \end{array}$ (2.31)

With (2.15), (2.21), and (2.28)-(2.30), we have

$\begin{array}{l} \frac{{(1 - {s_n})f[{u_n}, a]f[{x_n}, {z_n}]}}{{f[{u_n}, {x_n}]f[{u_n}, {y_n}]}} = 1 + ({c_3} - c_2^2)Ae_n^3 + (c_2^3(2 + 3\gamma f'(a) + {\gamma ^2}f'{(a)^2})\\ + {c_2}({c_4} - D + {c_4}\gamma f'(a)) + c_2^4(3 + 3\gamma f'(a) + {\gamma ^2}f'{(a)^2})\\ - c_2^2{c_3}(6 + 7\gamma f'(a) + 2{\gamma ^2}f'{(a)^2}))e_n^4 + O(e_n^5).{\rm{ }} \end{array}$ (2.32)

Hence, together with (2.11), (2.24) and (2.32), we obtain

$\begin{array}{l} {e_{n + 1}} = ({u_n} - a)\left( {1 - H(0) + (({c_3} - c_2^2)AH(0) - DH'(0){{\{ 1 + \gamma f'(a)\} }^{ - 1}})e_n^3} \right.\\ + ( - ({c_2}({c_4} - D + {c_4}\gamma f'(a)) + c_3^2(2 + 3\gamma f'(a) + {\gamma ^2}f'{(a)^2}) + c_2^4(3 + 3\gamma f'(a)\\ + {\gamma ^2}f'{(a)^2}) - c_2^2{c_3}(6 + 7\gamma f'(a) + 2{\gamma ^2}f'{(a)^2}))H(0) + H'(0)[{c_2}D(1 + 3\gamma f'(a)\\ \left. { + {\gamma ^2}f'{{(a)}^2})-(1 + \gamma f'(a))E]{{(1 + \gamma f'(a))}^{ - 2}})e_n^4 + O(e_n^5)} \right).{\rm{ }} \end{array}$ (2.33)

Now, using (2.33) with $H(0)=1, \mbox{ }{H}'(0)=1, K(0, 0)=1, K_s =1, K_t =1, K_{ss} =2, K_{tt} =2$ and $K_{st} =2, $ we obtain the error equation

$\begin{array}{l} {e_{n + 1}} = {(1 + \gamma f'(a))^2}c_2^2({c_3} - c_2^2)[(4{c_2}{c_3}-{c_4}){(1 + \gamma f'(a))^2}\\ + c_2^3(5 + 6\gamma f'(a) + 3{\gamma ^2}f'{(a)^2} + {\gamma ^3}f'{(a)^3})]e_n^8 + O(e_n^9).{\rm{ }} \end{array}$ (2.34)

This means that the convergence order of any method of the family (2.10) is eight when $K(0, 0)=K_s =K_t =1, \mbox{ }K_{ss} =K_{tt} =K_{st} =2, \mbox{ }H(0)={H}'(0)=1, \mbox{ and }\left| {{H}''(0)} \right|<\infty .$ The proof is completed.

3 The Concrete Iterative Methods

The functions $H(\lambda )\mbox{ and }K(s, t)$ can take many forms satisfying the conditions of Theorem 2.1. In order to reduce the computational cost, we should choose $H(\lambda )\mbox{ and }K(s, t)$ as simple as possible. In what follows, we give some iterative forms:

$\begin{array}{l} {H_1}(\lambda ) = 1 + \lambda + \eta {\lambda ^2}, {\rm{ }}{H_2}(\lambda ) = \{ 1 + \lambda (\eta + 1)\} {(1 + \eta \lambda )^{ - 1}}, \eta \in R, \\ {K_1}(s, t) = {(1 - s - t)^{ - 1}}, {\rm{ }}{K_2}(s, t) = 1 + s + t + {(s + t)^2}.{\rm{ }} \end{array}$

Method 1  Taking $H_1 (\lambda )(\eta =0)\mbox{ and }K_1 (s, t)$ into the iterative formula (2.10), we get a family of eighth-order methods

$\begin{equation} \label{eq40} \left\{ \begin{aligned} y_n =&x_n -f(x_n )f[x_n, z_n]^{-1}, \\ u_n =&y_n -\left( {1-s_n -t_n } \right)^{-1}f(y_n )f[z_n, x_n]^{-1}, \\ x_{n+1} =&u_n -\left( {1+\lambda _n } \right)\left( {1-s_n } \right)f[x_n, z_n]f(u_n )\{f[u_n, x_n]f[u_n, y_n]\}^{-1}, \end{aligned} \right. \end{equation}$ (3.1)

where $z_n =x_n +\gamma f(x_n ), s_n =\frac{f(y_n )}{f(x_n )}, t_n =\frac{f(y_n )}{f(z_n )}, \lambda _n =\frac{f(u_n )}{f(z_n )}, $ and $\gamma \in R$ is a constant.
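A runnable sketch of Method 1, formula (3.1), with $\gamma =1$; the test equation and starting point follow $f_1$ and $x_0$ from Section 4, while the tolerance and iteration cap are illustrative choices.

```python
import math

def method1(f, x0, gamma=1.0, tol=1e-12, max_iter=20):
    """Eighth-order derivative-free method (3.1)."""
    dd = lambda a, b: (f(a) - f(b)) / (a - b)    # divided difference f[a, b]
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        z = x + gamma * fx
        fxz = dd(x, z)                           # f[x_n, z_n]
        y = x - fx / fxz                         # first step
        fy = f(y)
        if abs(fy) < tol:
            return y
        s, t = fy / fx, fy / f(z)
        u = y - fy / ((1.0 - s - t) * fxz)       # second step, K_1(s, t)
        fu = f(u)
        if abs(fu) < tol:
            return u
        lam = fu / f(z)
        # third step with H_1(lambda) = 1 + lambda
        x = u - (1.0 + lam) * (1.0 - s) * fxz * fu / (dd(u, x) * dd(u, y))
    return x

root = method1(lambda x: math.cos(x) - x * math.exp(x) + x**2, 0.5)
# root ≈ 0.639154, the value of a for f_1 in Section 4
```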

Method 2  Taking $H_2 (\lambda )(\eta =-1) \mbox{ and } K_2 (s, t)$ into the iterative formula (2.10), we get another family of eighth-order methods

$\begin{equation} \label{eq41} \left\{ \begin{aligned} y_n =&x_n -f(x_n )f[x_n, z_n]^{-1}, \\ u_n =&y_n -\left( {1+s_n +t_n +\left( {s_n +t_n } \right)^2} \right)f(y_n )f[z_n, x_n]^{-1}, \\ x_{n+1} =&u_n -f(z_n )f(u_n )\left( {1-s_n } \right)f[x_n, z_n]\{f[u_n, x_n]f[u_n, y_n](f(z_n )-f(u_n ))\}^{-1}, \end{aligned} \right. \end{equation}$ (3.2)

where $z_n =x_n +\gamma f(x_n ), s_n =\frac{f(y_n )}{f(x_n )}, \ t_n =\frac{f(y_n )}{f(z_n )}, \lambda _n =\frac{f(u_n )}{f(z_n )}, $ and $\gamma \in R$ is a constant.

In terms of computational cost, the developed methods require four function evaluations per iteration. Consider the definition of the efficiency index [14] as $p^{1/w}$, where $p$ is the order of the method and $w$ is the number of function evaluations per iteration required by the method. The new methods have the efficiency index $\sqrt[4]{8}\approx 1.682$, which is higher than $\sqrt 2 \approx 1.414$ for Steffensen's method (1.2) and $\sqrt[3]{4}\approx 1.587$ for Ren's method (1.3).
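The efficiency indices quoted above can be checked directly; the orders $p$ and evaluation counts $w$ are those stated in the text.

```python
# efficiency index p**(1/w): order p, w function evaluations per iteration
indices = {
    "Steffensen (1.2)":   2 ** (1 / 2),   # p = 2, w = 2
    "Ren (1.3)":          4 ** (1 / 3),   # p = 4, w = 3
    "new methods (2.10)": 8 ** (1 / 4),   # p = 8, w = 4
}
```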

4 Numerical Results

In this section, all experiments were carried out on a personal computer equipped with an Intel(R) Celeron(R) 430 CPU (1.79 GHz) and the Windows XP 32-bit operating system, using symbolic computation in the programming package Matlab 7.0. Now, Method 1 (M81, (3.1)) and Method 2 (M82, (3.2)) are employed to solve some nonlinear equations and compared with Steffensen's method (S2, (1.2)), Ren's method (R4, (1.3)) with $\beta =1$, Zheng's method (Z8, (1.4)) with $\gamma =1$ and Soleymani's method (S8, (1.5)). Table 1 shows the absolute values $\vert x_k -x_{k-1} \vert \ (k=1, 2, \cdots, 7)$ and the approximation $x_n $ to $a$, where $a$ is computed with 2400 significant digits and $x_n $ is calculated using the same total number of function evaluations (TNFE) for all methods. The absolute value of the function $\left| {f(x_n )} \right|$ and the computational order of convergence $\rho $ are also shown in Table 1. Here, the TNFE for all methods is 12. The computational order of convergence $\rho $ [15] is defined by

$ \rho \approx \ln (\left| {(x_{n+1} -x_n )/(x_n -x_{n-1} )} \right|)\{\ln (\left| {(x_n -x_{n-1} )/(x_{n-1} -x_{n-2} )} \right|)\}^{-1}. $
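The formula above can be sketched as a short function of four successive iterates; the iterates below come from Newton's method on $x^2-2$ (an illustrative choice, not data from Table 1), for which $\rho$ should come out close to 2.

```python
import math

def comp_order(x_nm2, x_nm1, x_n, x_np1):
    """Computational order of convergence rho from four successive iterates."""
    return (math.log(abs((x_np1 - x_n) / (x_n - x_nm1)))
            / math.log(abs((x_n - x_nm1) / (x_nm1 - x_nm2))))

# generate iterates with Newton's method on x^2 - 2 from x_0 = 1
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

rho = comp_order(xs[1], xs[2], xs[3], xs[4])   # close to 2 (quadratic)
```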

The following test functions are used:

$\begin{array}{l} f_1(x) = \cos (x) - xe^x + x^2, \quad a \approx 0.639154, \ x_0 = 0.5, \\ f_2(x) = \sqrt x - 1/x - 3, \quad a \approx 9.633596, \ x_0 = 8, \\ f_3(x) = xe^{x^3} - 4x - 2, \quad a \approx -0.622256, \ x_0 = -0.5, \\ f_4(x) = \ln (-x^2 + x + 2) - x + 1, \quad a \approx 1.384123, \ x_0 = 1. \end{array}$

Table 2 reports the mean elapsed time over 100 runs of the program. The stopping criterion used is $\left| {x_{k+1} -x_k } \right|+\left| {f(x_k )} \right|<10^{-300}$.

5 Conclusions

By theoretical analysis and numerical experiments, we confirm that the new Steffensen type methods use only four function evaluations per iteration to achieve eighth-order convergence for simple roots of nonlinear functions. The new methods are free from any derivatives. The efficiency index of the new methods is $\sqrt[4]{8}\approx 1.682$. Table 1 shows that the results of the new methods are comparable to those of the other optimal eighth-order methods. Table 2 shows that the new methods require less time to converge than the other optimal methods. Thus, the methods in this contribution can be considered improvements of Steffensen's method.

Table 1
Comparison of various iterative methods (TNFE=12)

Table 2
Mean elapsed time over 100 runs of the program.
References
[1] Ortega J M, Rheinboldt W C. Iterative solution of nonlinear equations in several variables[M]. New York: Academic Press, 1970.
[2] Traub J F. Iterative methods for the solution of equations[M]. New Jersey: Prentice-Hall, 1964.
[3] Ren Hongmin, Wu Qingbiao, Bi Weihong. A class of two-step Steffensen type methods with fourth-order convergence[J]. Appl. Math. Comput., 2009, 209(2): 206–210.
[4] Zheng Quan, Li Jingya, Huang Fengxi. An optimal Steffensen-type family for solving nonlinear equations[J]. Appl. Math. Comput., 2011, 217(23): 9592–9597.
[5] Soleymani F, Karimi V S. Optimal Steffensen-type methods with eighth order of convergence[J]. Comp. Math. Appl., 2011, 62(12): 4619–4626. DOI:10.1016/j.camwa.2011.10.047
[6] Liu Zhongli, Zheng Quan, Zhao Peng. A variant of Steffensen's method of fourth-order convergence and its applications[J]. Appl. Math. Comput., 2010, 216(7): 1978–1983.
[7] Wang Xiaofeng, Zhang Tie. A family of Steffensen type methods with seventh-order convergence[J]. Numer. Algor., 2013, 62(3): 429–444. DOI:10.1007/s11075-012-9597-3
[8] Wang Xiaofeng, Džunić J, Zhang Tie. On an efficient family of derivative free three-point methods for solving nonlinear equations[J]. Appl. Math. Comp., 2012, 219(4): 1749–1760. DOI:10.1016/j.amc.2012.05.069
[9] Petković M S. Remarks on "On a general class of multipoint root-finding methods of high computational efficiency"[J]. SIAM J. Numer. Anal., 2011, 49(3): 1317–1319. DOI:10.1137/100805340
[10] Petković M S, Ilić S, Džunić J. Derivative free two-point methods with and without memory for solving nonlinear equations[J]. Appl. Math. Comp., 2010, 217(5): 1887–1895. DOI:10.1016/j.amc.2010.06.043
[11] Džunić J, Petković M S. On generalized multipoint root-solvers with memory[J]. J. Comput. Appl. Math., 2012, 236(11): 2909–2920. DOI:10.1016/j.cam.2012.01.035
[12] Peng Yehui, Feng Heying, Li Qiyong, Zhang Xiaoqing. A fourth-order derivative-free algorithm for nonlinear equations[J]. J. Comput. Appl. Math., 2011, 235(8): 2551–2559. DOI:10.1016/j.cam.2010.11.007
[13] Kung H T, Traub J F. Optimal order of one-point and multipoint iteration[J]. J. Assoc. Comput. Mach., 1974, 21(4): 634–651.
[14] Gautschi W. Numerical analysis: an introduction[M]. Boston: Birkhäuser, 1997.
[15] Cordero A, Torregrosa J R. Variants of Newton's method using fifth-order quadrature formulas[J]. Appl. Math. Comput., 2007, 190(1): 686–698.