In numerical analysis and scientific computing, one of the most important problems is solving nonlinear equations. To solve such equations, we can use iterative methods such as Newton's method and its variants. In this paper, we consider iterative methods for finding a real simple root $x^{*}$ of a nonlinear equation $f(x)=0$, where $f: D\rightarrow R$ is a scalar function that is sufficiently smooth in a neighborhood of $x^{*}$ and $D$ is an open interval.
Being quadratically convergent, Newton's method is probably the best known and most widely used algorithm. To improve its local order of convergence, many modified methods have been proposed. One third-order modification of Newton's method [1] is given by
where $y_{n}=x_{n}-f(x_{n})/f'(x_{n})$, $n\geq0$. Another third-order method [2] is defined by
Combining eqs. (1.1) and (1.2), we have
Taking $\alpha=-1, \beta=2$, we can obtain the following fourth-order iterative method [3]
The present method requires two evaluations of the function and one evaluation of its first derivative per iteration. We recall the definition of the efficiency index [3] as $p^{1/m}$, where $p$ is the order of the method and $m$ is the number of function evaluations per iteration required by the method. The method defined by eq. (1.3) has efficiency index $\sqrt[3]{4}\approx1.587$, which is better than that of the third-order methods mentioned above, $\sqrt[3]{3}\approx1.442$, and that of Newton's method, $\sqrt{2}\approx1.414$.
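These index values are immediate to check; below is a minimal sketch (the method labels are ours, the $(p, m)$ pairs are taken from the text):

```python
# Efficiency index p**(1/m): p = order of convergence,
# m = function evaluations per iteration (values from the text).
methods = {
    "Newton (p=2, m=2)":              (2, 2),
    "third-order methods (p=3, m=3)": (3, 3),
    "method (1.3) (p=4, m=3)":        (4, 3),
    "proposed method (p=7, m=4)":     (7, 4),
}
for name, (p, m) in methods.items():
    print(f"{name}: {p ** (1.0 / m):.3f}")
# prints 1.414, 1.442, 1.587, 1.627 respectively
```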
In order to derive new higher-order iterative methods, we present and analyze a new three-step iterative method for solving nonlinear equations, obtained by approximating the first derivative of the function in the third step. By the theorem below, the local order of convergence of the composed method is at least seven, and one iteration of this method requires three evaluations of the function and one evaluation of its first derivative, giving the efficiency index $\sqrt[4]{7}\approx1.627$. Several numerical examples are given to show the performance of the new method.
In this section, to construct the new high-order iterative method, we use the iterative method given by eq. (1.3).
Theorem 2.1 Let $\psi_{1}(x)$ and $\psi_{2}(x)$ be two iterative methods with orders of convergence $P$ and $Q$, respectively. Then the order of convergence of the composed iterative method $\psi(x)=\psi_{2}(\psi_{1}(x))$ is $PQ$ [4].
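A quick numerical illustration of Theorem 2.1 (our own check, with $f(x)=x^{3}-2$ as an arbitrary test problem): composing Newton's method with itself gives a map of order $2\cdot2=4$, so the ratio $\ln e_{n+1}/\ln e_{n}$ of successive errors should approach 4.

```python
# Numerical check of Theorem 2.1: two Newton steps per iteration
# (Newton composed with itself) should behave as a fourth-order method.
from mpmath import mp, mpf, log

mp.dps = 200  # high precision so several iterations remain resolvable

f  = lambda x: x**3 - 2          # test problem (our choice)
df = lambda x: 3 * x**2
root = mpf(2) ** (mpf(1) / 3)

newton = lambda x: x - f(x) / df(x)

x = mpf("1.5")
errs = []
for _ in range(5):
    x = newton(newton(x))        # psi = psi1(psi1(x)): two order-2 maps composed
    errs.append(abs(x - root))

# ln e_{n+1} / ln e_n tends to the order of convergence (here ~4)
for e0, e1 in zip(errs, errs[1:]):
    if e1 > 0:
        print(log(e1) / log(e0))
```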
We consider the iteration scheme of the form
From Theorem 2.1, we can see that the above formulas have an efficiency index of $\sqrt[5]{8}\approx1.516$. To improve the efficiency index, we approximate the derivative $f'(z_{n})$ that appears in the last step by a combination of already evaluated function values, using divided differences; the standard definitions are sketched below.
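The divided differences used here, and in Zhang's $\lambda$ in Section 4, follow the standard definitions, with a repeated node handled via the first derivative; a minimal sketch:

```python
# Standard divided differences; a repeated argument is handled via the
# first derivative:
#   f[a, b]    = (f(a) - f(b)) / (a - b)
#   f[a, a]    = f'(a)
#   f[z, x, x] = (f[z, x] - f[x, x]) / (z - x)

def dd1(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)

def dd2_repeated(f, df, z, x):
    """Second-order divided difference f[z, x, x] with a repeated node."""
    return (dd1(f, z, x) - df(x)) / (z - x)
```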
We consider the Taylor polynomial of degree 2 for the function $f(z_{n})$:
which implies
But $f''({y_{n}})$ can be expressed as [4]
It is easy to see that
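The displayed formulas of this derivation are not reproduced above; one consistent reading, offered as a reconstruction rather than the paper's verbatim equations, is:

```latex
% A reconstruction of the omitted displays (an assumption, not the paper's
% verbatim formulas). Taylor polynomial of degree two for f(z_n) about y_n:
f(z_n) \approx f(y_n) + f'(y_n)(z_n - y_n) + \tfrac{1}{2} f''(y_n)(z_n - y_n)^{2},
% which, together with the Taylor expansion of f' about y_n, implies
f'(y_n) \approx f[z_n, y_n] - \tfrac{1}{2} f''(y_n)(z_n - y_n),
\qquad
f'(z_n) \approx f'(y_n) + f''(y_n)(z_n - y_n).
% With the repeated-node divided difference f''(y_n) \approx 2 f[y_n, x_n, x_n],
% the two relations combine into an approximation built only from
% already-computed values:
f'(z_n) \approx f[z_n, y_n] + f[y_n, x_n, x_n](z_n - y_n).
```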
Consequently, the proposed iterative method is given by the following three-step scheme
In the next section, we show that the new proposed method (2.6) has seventh-order convergence. Each iteration of method (2.6) requires three evaluations of the function and one evaluation of its first derivative, so its efficiency index reaches $\sqrt[4]{7}\approx1.627$; this is the main motivation of our paper.
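Since eqs. (1.3) and (2.6) are displayed in the original and not reproduced here, the following sketch is only a stand-in built from stated assumptions: Ostrowski's classical fourth-order method (which has the same evaluation count as eq. (1.3)) supplies the first two steps, and the reconstructed approximation $f'(z_{n})\approx f[z_{n},y_{n}]+f[y_{n},x_{n},x_{n}](z_{n}-y_{n})$ supplies the third. It illustrates the three-function-evaluations-plus-one-derivative structure, not necessarily the paper's exact formula.

```python
# Sketch of the three-step structure (assumptions: Ostrowski's classical
# fourth-order method stands in for eq. (1.3); the third step uses the
# reconstructed approximation
#   f'(z) ~= f[z, y] + f[y, x, x] * (z - y);
# the paper's eq. (2.6) may differ in its exact coefficients).
from mpmath import mp, mpf

mp.dps = 200

def three_step(f, df, x0, tol=mpf("1e-50"), maxit=20):
    x = mpf(x0)
    for _ in range(maxit):
        fx, dfx = f(x), df(x)                       # evaluations 1 and 2 (f, f')
        y = x - fx / dfx                            # Newton step
        fy = f(y)                                   # evaluation 3
        z = y - fy * fx / (dfx * (fx - 2 * fy))     # Ostrowski fourth-order step
        fz = f(z)                                   # evaluation 4
        fzy = (fz - fy) / (z - y)                   # f[z, y]
        fyxx = ((fy - fx) / (y - x) - dfx) / (y - x)  # f[y, x, x]
        x_new = z - fz / (fzy + fyxx * (z - y))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = cos(x) - x, root near 0.739
from mpmath import cos, sin
print(three_step(lambda x: cos(x) - x, lambda x: -sin(x) - 1, 1.0))
```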
Theorem 3.1 Let $x^{*}\in D$ be a simple zero of a sufficiently differentiable function $f:D\rightarrow R$ on an open interval $D$. Then the new three-step method defined by eq. (2.6) has seventh-order convergence and satisfies the following error equation,
where $e_{n}=x_{n}-x^{*}$, $c_{k}=(1/k!)f^{(k)}(x^{*})/f'(x^{*})$.
Proof Let $e_{n}=x_{n}-x^{*}$, where $x^{*}$ is a simple zero of $f$. Using Taylor expansion and taking into account $f(x^{*})=0$, we have
and
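The two expansions displayed above are not reproduced in this extraction; in the notation of Theorem 3.1 they take the standard form (a reconstruction, truncated here, while the proof carries terms through $e_{n}^{7}$):

```latex
% Standard Taylor expansions about the simple zero x*, with
% c_k = f^{(k)}(x*) / (k! f'(x*)):
f(x_n)  = f'(x^{*})\bigl[e_n + c_2 e_n^{2} + c_3 e_n^{3} + c_4 e_n^{4} + O(e_n^{5})\bigr],
\qquad
f'(x_n) = f'(x^{*})\bigl[1 + 2 c_2 e_n + 3 c_3 e_n^{2} + 4 c_4 e_n^{3} + O(e_n^{4})\bigr].
```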
Dividing eq. (3.1) by eq. (3.2) gives us
and hence
Again, expanding $f(y_{n})$ about $x^{*}$ and then using eq. (3.4), we have
After an elementary calculation from eqs. (3.1) and (3.5), we obtain
From eqs. (3.2), (3.6) and (3.7), we can obtain
From eq. (3.8), we obtain
Thus,
Hence, we obtain the following error equation
This means that the method defined by eq. (2.6) has seventh-order convergence.
Remark 1 The efficiency index of our scheme defined by eq. (2.6) is $\sqrt[4]{7}\approx1.627$, which is larger than the $\sqrt{2}\approx1.414$ of Newton's method. Although the new scheme has a lower efficiency index than the $\sqrt[4]{8}\approx1.682$ of the optimal eighth-order methods in [6-7], its convergence radius and its accuracy for starting points that are not very close to the root are better than those of the optimal eighth-order methods; see Section 4.
To show the reliability of the new method (2.6), we compare its results with those of Zhang's seventh-order methods [5] and the eighth-order methods in [6-7]. Zhang's seventh-order methods are expressed as
where $\alpha=f(y_{n})/f(x_{n})$, $\lambda=f[z_{n}, y_{n}]+f[z_{n}, x_{n}, x_{n}](z_{n}-y_{n})$, and $\beta, \gamma\in R$. The optimal eighth-order method of Thukral and Petkovic [6] and the optimal eighth-order method of Neta and Petkovic [7] are expressed as
where $a, b, t\in R$ and
All computations were performed in MATLAB 7.0 with 200-digit floating-point arithmetic (VPA = 200). When the stopping criterion
is satisfied, $x_{n+1}$ is taken as an approximation of the exact root $x^{*}$. We use the following test functions, implemented in the sketch after the list:
$\bullet$ $\displaystyle~~f_{1}(x)=\sqrt{x^{4}+8}\sin(\frac{\pi}{x^{2}+2})+\frac{x^{3}}{x^{4}+1}-\sqrt{6}+\frac{8}{17}, ~~x^{*}=-2$,
$\bullet$ $\displaystyle~~f_{2}(x)=\sin x-0.5, ~~x^{*}=0.523598775598299$,
$\bullet$ $\displaystyle~~f_{3}(x)=e^{x}-4x^{2}, ~~x^{*}=0.7148059123627778$,
$\bullet$ $\displaystyle~~f_{4}(x)=(\sin x-\frac{\sqrt{2}}{2})^{2}(x+1), ~~x^{*}=-1$,
$\bullet$ $\displaystyle~~f_{5}(x)=\cos x-x, ~~x^{*}=0.739085133215161$.
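A sketch implementing the five test functions with mpmath (our stand-in for MATLAB's variable-precision arithmetic; the 200-digit precision mirrors the VPA = 200 setting reported above):

```python
# The five test functions and their roots as given in the text.
from mpmath import mp, mpf, sqrt, sin, cos, exp, pi

mp.dps = 200

tests = [
    (lambda x: sqrt(x**4 + 8) * sin(pi / (x**2 + 2))
               + x**3 / (x**4 + 1) - sqrt(6) + mpf(8) / 17,  mpf(-2)),
    (lambda x: sin(x) - mpf("0.5"),                 mpf("0.523598775598299")),
    (lambda x: exp(x) - 4 * x**2,                   mpf("0.7148059123627778")),
    (lambda x: (sin(x) - sqrt(2) / 2)**2 * (x + 1), mpf(-1)),
    (lambda x: cos(x) - x,                          mpf("0.739085133215161")),
]

for i, (f, root) in enumerate(tests, start=1):
    # residual should be zero up to the quoted digits of the root
    print(f"f_{i}(x*) = {f(root)}")
```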
Displayed in Table 1 are the number of iterations $(n)$ and the number of function evaluations (NFE) required to satisfy the stopping criterion, where $\varepsilon$ is taken to be $10^{-15}$. Moreover, displayed in Table 2 are the distance between two consecutive approximations $\delta=|x_{n+1}-x_{n}|$, the value of $|f(x_{n+1})|$ after the required iterations, and the computational order of convergence (COC), which can be approximated using the formula [8]
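The COC formula of [8] is not reproduced above; we assume the standard estimate computed from the last four iterates,
\[
\rho\approx\frac{\ln\left|\left(x_{n+1}-x_{n}\right)/\left(x_{n}-x_{n-1}\right)\right|}{\ln\left|\left(x_{n}-x_{n-1}\right)/\left(x_{n-1}-x_{n-2}\right)\right|},
\]
sketched below:

```python
# Computational order of convergence (COC) from the last four iterates,
# using the standard estimate (assumed to be the formula referenced as [8]):
#   rho ~= ln|(x3 - x2)/(x2 - x1)| / ln|(x2 - x1)/(x1 - x0)|
from mpmath import log

def coc(xs):
    """Estimate the order of convergence from a list of iterates (needs >= 4)."""
    x0, x1, x2, x3 = xs[-4:]
    return log(abs((x3 - x2) / (x2 - x1))) / log(abs((x2 - x1) / (x1 - x0)))
```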
The results in Table 1 and Table 2 show that, for most of the functions we tested, the present method improves on the computational efficiency of Newton's method and of the modified Jarratt method defined by eq. (1.2); the initial guesses for $f_{1}, f_{2}, f_{3}, f_{4}$ in Table 2 are $-4.0$, $1.4$, $-0.42$ and $-1.9$, respectively. Note that the eighth-order methods defined by eqs. (4.4)-(4.5), with $a=b=t=0$, have very small convergence radii; this is why, in most cases, they diverge when the starting points are in the vicinity of the zero but not close to it. The practical application of eighth-order methods such as eqs. (4.4)-(4.5), whose convergence radii are small, is therefore restricted.
We also provide the total number of evaluations (TNE) needed by each method to obtain the root up to 15 decimal places in Table 3 and Table 4. As we can see, the contributed method (2.6) is robust and accurate in comparison with the other efficient schemes. From the comparisons with eqs. (4.4)-(4.5), we can claim that the method competes with the optimal eighth-order schemes in [6-7], while its computational complexity is lower and its convergence radius is larger. Of course, if the initial guesses are very close to the sought zeros, then the optimal eighth-order methods (4.4)-(4.5) will perform better than (2.6).
Remark 2 Let $r$ be the total number of operations (additions, subtractions, multiplications, divisions and so on) of an iterative method per iteration; then the (extended) computational index (also known as the extended operational index) is defined by $\sqrt[r]{p}$, where $p$ is the order of convergence. We can now compare the computational index of some well-known high-order methods with that of our scheme. The computational index of our method is $\sqrt[25]{7}\approx1.081$, which is larger than the $\sqrt[39]{8}\approx1.054$ of (4.4) and the $\sqrt[58]{8}\approx1.036$ of (4.5).
In this work we have presented an efficient Newton-type method with seventh-order convergence for solving nonlinear equations, obtained by combining previously known methods. The efficiency analysis shows that the new algorithm performs better than the classical Newton method and several other existing methods.