In this paper, we consider iterative methods for finding a simple root of a nonlinear equation $f(x)=0$, where $f:I\subset R\to R$ is a scalar function on an open interval $I$. The classical Newton's method [1], with second-order convergence, is written as
However, when the first-order derivative of the function $f(x)$ is unavailable or expensive to compute, Newton's method is of limited use in practical applications. To avoid computing the first-order derivative, Steffensen [2] proposed the following second-order method
where $z_n =x_n +f(x_n ), $ and $f[\cdot \mbox{ }, \mbox{ }\cdot]$ is the first-order divided difference. To improve the local order of convergence, many high-order methods have been proposed in the literature; see [3-12] and the references therein. Ren et al. [3] proposed the following family of fourth-order methods
where $z_n =x_n +f(x_n )$ and $\beta \in R$ is a constant. Zheng et al. [4] presented an eighth-order method by using the direct Newtonian interpolation, which is given by
where $z_n =x_n +\gamma f(x_n )$ and $\gamma \in R$ is a constant. Furthermore, Soleymani et al. [5] also presented the following eighth-order method
where $z_n =x_n -f(x_n ), t_n =f(y_n )/f(z_n )$ and $\lambda _n =f(u_n )/f(z_n ).$ Other Steffensen-type methods and their applications were discussed in [6-12]. All of these methods are derivative-free per full iteration.
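For concreteness, the two second-order baselines can be sketched in Python: Newton's step $x_{n+1} =x_n -f(x_n )/{f}'(x_n )$ and Steffensen's derivative-free step $x_{n+1} =x_n -f(x_n )^2/(f(z_n )-f(x_n ))$ with $z_n =x_n +f(x_n )$. This is a minimal illustration; the test function, starting point, and tolerance below are our own choices, not from the paper.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen's derivative-free method:
    z_n = x_n + f(x_n),  x_{n+1} = x_n - f(x_n)^2 / (f(z_n) - f(x_n))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx  # equals f[x_n, z_n] * f(x_n)
        if denom == 0:
            break
        x -= fx * fx / denom
    return x

# Illustrative use: simple root sqrt(2) of f(x) = x^2 - 2
r1 = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
r2 = steffensen(lambda x: x * x - 2, 1.5)
```

Both iterations reach the root quadratically, but Steffensen's variant trades the derivative evaluation for a second function evaluation per step.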
The purpose of this paper is to develop a new family of eighth-order derivative-free methods and to give their convergence analysis. The paper is organized as follows. In Section 2, we present a family of three-step eighth-order iterative methods for solving nonlinear equations. The new methods are free from any derivatives and require four evaluations of the function $f(x)$ per iteration; therefore, they have an efficiency index of $\sqrt[4]{8}\approx 1.682$ and agree with the conjecture of Kung and Traub [13] for the case $n=4$. We prove that the order of convergence of the new methods is eight. In Section 3, we give some specific iterative methods that can be used in practical computations. Numerical examples are given in Section 4 to illustrate the convergence behavior of our methods for simple roots. Section 5 is a short conclusion.
Now, we consider the iteration scheme of the form,
where $z_n =x_n +\gamma f(x_n ), \mbox{ }s_n =f(y_n )/f(x_n ), \mbox{ }t_n =f(y_n )/f(z_n ), \mbox{ }\lambda _n =f(u_n )/f(z_n ),$ and $\gamma \in R$ is a constant. $H(\lambda )$ and $K(s, t)$ are weight functions of one and two variables, respectively. Clearly, formula (2.1) requires five functional evaluations per iteration. To derive a scheme with a higher efficiency index and reduce the number of functional evaluations, we approximate ${f}'(u_n )$ by considering the approximation of $f(x)$ by a rational nonlinear function of the form
where the parameters $a_1, a_2 $ and $a_3 $ are determined by the condition that $f$ and $\phi $ coincide at $x_n, y_n $ and $u_n $. That means $\phi (x)$ satisfies the conditions
From (2.2) and (2.3), we can obtain
Differentiation of (2.2) gives
We can now approximate the derivative ${f}'(x)$ with the derivative ${\phi }'(x)$ and obtain
From (2.4) and (2.5)-(2.8), it follows that
Substituting (2.9) into (2.1), we obtain a new family of eighth-order methods:
The functions $H(\lambda )$ and $K(s, t)$ should be determined so that the iterative method (2.10) is of order eight. To do so, we expand $H(\lambda )$ in a Taylor series about $\lambda =0$ and $K(s, t)$ about $(s, t)=(0, 0)$; thus,
Here the subscripts denote the respective partial derivatives evaluated at the origin; for example, $K_{st} =\left. {\frac{\partial ^2K(s, t)}{\partial s\partial t}} \right|_{(s, t)=(0, 0)}, $ etc. Petković et al. [10] proved that the first two steps of (2.10) form a fourth-order method when $K(0, 0)=1, \mbox{ }K_s =1$ and $K_t =1.$ The remaining coefficients $H(0), \ {H}'(0), \ K_{ss}, \ K_{st}, \ K_{tt} $ are determined in Theorem 2.1.
Theorem 2.1 Let $a\in I$ be a simple zero of a sufficiently differentiable function $f:I\subset R\to R$ for an open interval $I.$ If $x_0 $ is sufficiently close to $a$, then the sequence $\{x_n \}$ generated by any method of the family (2.10) converges to $a$. If $H(0)=1, \ {H}'(0)=1, \ \left| {{H}''(0)} \right|<\infty, \ K(0, 0)=1, \ K_s =1, $ $K_t =1, \ K_{ss} =2, \ K_{tt} =2, \ K_{st} =2, $ then the family of methods defined by (2.10) is of order eight.
Proof Let $e_n =x_n -a$, $c_n =(1/n!)f^{(n)}(a)/{f}'(a)$, $n=2, 3, \cdots $. Using the Taylor expansion and taking into account $f(a)=0, $ we have
From (2.13), noting that $z_n =x_n +\gamma f(x_n )$, we can obtain
then,
where
With (2.13), (2.14) and (2.16), we have
Using the Taylor expansion (2.12) with $K(0, 0)=K_s =K_t =1, $ we get
Together with (2.15)-(2.23), we have
Using (2.13), (2.14), (2.16), (2.20) and (2.24), we have
With (2.15), (2.21), and (2.28)-(2.30), we have
Hence, together with (2.11), (2.24) and (2.32), we obtain
Now, using (2.33) with $H(0)=1, \mbox{ }{H}'(0)=1, K(0, 0)=1, K_s =1, K_t =1, K_{ss} =2, K_{tt} =2$ and $K_{st} =2, $ we obtain the error equation
This shows that any method of the family (2.10) converges with order eight when $K(0, 0)=K_s =K_t =1, \mbox{ }K_{ss} =K_{tt} =K_{st} =2, \mbox{ }H(0)={H}'(0)=1, \mbox{ and }\left| {{H}''(0)} \right|<\infty .$ The proof is completed.
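As a sanity check on the coefficient conditions, the polynomial pair $H(\lambda )=1+\lambda$ and $K(s, t)=1+s+t+s^2+2st+t^2$ (an illustrative candidate constructed from the Taylor conditions, not necessarily the $H_1, K_1$ chosen below) satisfies Theorem 2.1. The following Python sketch verifies the conditions by central finite differences, which are exact for quadratics up to roundoff:

```python
def H(lam):
    # Candidate weight: H(0) = 1, H'(0) = 1, H''(0) finite
    return 1 + lam

def K(s, t):
    # Candidate weight: K(0,0) = 1, K_s = K_t = 1, K_ss = K_tt = K_st = 2
    return 1 + s + t + s * s + 2 * s * t + t * t

h = 1e-4
# Central finite-difference approximations of the required derivatives at 0
Hp  = (H(h) - H(-h)) / (2 * h)
Ks  = (K(h, 0) - K(-h, 0)) / (2 * h)
Kt  = (K(0, h) - K(0, -h)) / (2 * h)
Kss = (K(h, 0) - 2 * K(0, 0) + K(-h, 0)) / h ** 2
Ktt = (K(0, h) - 2 * K(0, 0) + K(0, -h)) / h ** 2
Kst = (K(h, h) - K(h, -h) - K(-h, h) + K(-h, -h)) / (4 * h ** 2)

print(H(0), Hp, K(0, 0), Ks, Kt, Kss, Ktt, Kst)
```

Any pair whose Taylor coefficients match these values yields an eighth-order member of the family.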
The functions $H(\lambda )\mbox{ and }K(s, t)$ can take many forms satisfying the conditions of Theorem 2.1. In order to reduce the computational cost, we should choose $H(\lambda )\mbox{ and }K(s, t)$ as simple as possible. In what follows, we give some iterative forms:
Method 1 Substituting $H_1 (\lambda )$ (with $\eta =0$) and $K_1 (s, t)$ into the iterative formula (2.10), we get a family of eighth-order methods
where $z_n =x_n +\gamma f(x_n ), s_n =\frac{f(y_n )}{f(x_n )}, t_n =\frac{f(y_n )}{f(z_n )}, \lambda _n =\frac{f(u_n )}{f(z_n )}, $ and $\gamma \in R$ is a constant.
Method 2 Substituting $H_2 (\lambda )$ (with $\eta =-1$) and $K_2 (s, t)$ into the iterative formula (2.10), we get another family of eighth-order methods
where $z_n =x_n +\gamma f(x_n ), s_n =\frac{f(y_n )}{f(x_n )}, \ t_n =\frac{f(y_n )}{f(z_n )}, \lambda _n =\frac{f(u_n )}{f(z_n )}, $ and $\gamma \in R$ is a constant.
In terms of computational cost, the developed methods require four function evaluations per iteration. Recall the definition of the efficiency index [14] as $p^{1/w}$, where $p$ is the order of the method and $w$ is the number of function evaluations per iteration required by the method. The new methods have an efficiency index of $\sqrt[4]{8}\approx 1.682$, which is higher than the $\sqrt 2 \approx 1.414$ of Steffensen's method (1.2) and the $\sqrt[3]{4}\approx 1.587$ of Ren's method (1.3).
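The efficiency-index comparison above can be reproduced directly (a small Python illustration; the labels are ours):

```python
# Efficiency index p**(1/w): p = convergence order,
# w = function evaluations per iteration
methods = {
    "Steffensen (1.2)": (2, 2),
    "Ren (1.3)":        (4, 3),
    "New methods":      (8, 4),
}
for name, (p, w) in methods.items():
    print(f"{name}: {p ** (1 / w):.3f}")
# Steffensen (1.2): 1.414
# Ren (1.3):        1.587
# New methods:      1.682
```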
In this section, all experiments have been carried out on a personal computer equipped with an Intel(R) Celeron(R) 430 CPU (1.79 GHz) running the Windows XP 32-bit operating system, using symbolic computation in the programming package Matlab 7.0. Method 1 (M81, (3.1)) and Method 2 (M82, (3.2)) are employed to solve some nonlinear equations and compared with Steffensen's method (S2, (1.2)), Ren's method (R4, (1.3)) with $\beta =1$, Zheng's method (Z8, (1.4)) with $\gamma =1$ and Soleymani's method (S8, (1.5)). Table 1 shows the absolute values $\vert x_k -x_{k-1} \vert \ (k=1, 2, \cdots, 7)$ and the approximation $x_n $ to $a$, where $a$ is computed with 2400 significant digits and $x_n $ is calculated using the same total number of function evaluations (TNFE) for all methods. The absolute value of the function $\left| {f(x_n )} \right|$ and the computational order of convergence $\rho $ are also shown in Table 1. Here, the TNFE for all methods is 12. The computational order of convergence $\rho $ [15] is defined by
The following test functions are used:
On the other hand, Table 2 reports the mean elapsed time over 100 runs of each program. The stopping criterion used is $\left| {x_{k+1} -x_k } \right|+\left| {f(x_k )} \right|<10^{-300}$.
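The computational order of convergence is commonly estimated from successive iterates as $\rho \approx \ln (\left| {x_{k+1} -x_k } \right|/\left| {x_k -x_{k-1} } \right|)/\ln (\left| {x_k -x_{k-1} } \right|/\left| {x_{k-1} -x_{k-2} } \right|)$; assuming this formula, a Python sketch (the Newton iterates for $x^2-2$ serve only as illustrative data):

```python
import math

def coc(xs):
    """Computational order of convergence estimated from the
    last four iterates x_{k-2}, x_{k-1}, x_k, x_{k+1}."""
    e1 = abs(xs[-1] - xs[-2])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-3] - xs[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)

# Illustrative data: Newton iterates for f(x) = x^2 - 2 from x0 = 1.5
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
rho = coc(xs)
print(rho)  # close to 2, Newton's theoretical order
```

An eighth-order method applied to the same problem would drive this estimate toward 8 once the iterates enter the region of fast convergence.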
By theoretical analysis and numerical experiments, we confirm that the new Steffensen-type methods use only four function evaluations per iteration to achieve eighth-order convergence when finding a simple root of a nonlinear equation. The new methods are free from any derivatives. The efficiency index of the new methods is $\sqrt[4]{8}\approx 1.682$. Table 1 shows that the results of the new methods are similar to those of the other optimal eighth-order methods. Table 2 shows that the new methods require less time to converge than the other optimal methods. Thus, the methods in this contribution can be considered improvements of Steffensen's method.