Among its applications, the EMA is often used in reflectometry and ellipsometry to model the surface roughness of a film, or to model voids and other kinds of inclusions.
In this context the Bruggeman Effective Medium Approximation (BEMA) is a specific recipe to compute the approximation of \(\epsilon\). The equation that \(\epsilon\) must satisfy is:\[\sum_i f_i \frac{\epsilon_i - \epsilon}{\epsilon_i + 2 \epsilon}=0\]where \(f_i\) are the fractions of each material and \(\epsilon_i\) the corresponding dielectric constants. The fractions \(f_i\) satisfy the constraint\[\sum_i f_i = 1\]
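To make the equation concrete, here is a minimal Python sketch of the left-hand side of the Bruggeman equation; the function name is mine, for illustration only:

```python
def bema_residual(eps, fractions, eps_list):
    """Left-hand side of the Bruggeman equation: zero at the solution.

    fractions -- the f_i, summing to one
    eps_list  -- the corresponding (possibly complex) dielectric constants
    """
    return sum(f * (e - eps) / (e + 2 * eps)
               for f, e in zip(fractions, eps_list))
```

For a single material (\(f_1 = 1\)) the residual vanishes at \(\epsilon = \epsilon_1\), as expected.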
The \(\epsilon\) of the Bruggeman model is defined implicitly by this first equation, but computing the solution is not as simple as it may appear.
The problem is that the unknown \(\epsilon\) appears both in the numerator and in the denominator of each fraction, which makes the equation non-trivial to solve. In attempting to solve it, one can observe that it can be transformed into a polynomial equation of degree n, where n is the number of fractions. If one considers that the terms \(\epsilon_i\) can have an imaginary part, it turns out that the polynomial has complex coefficients.
One problem with the polynomial form is that the coefficients are difficult to compute in the general case. In addition, the numerical solution of a complex polynomial of degree n is computationally expensive, and it raises a further difficulty: the polynomial equation has n complex roots, while we are looking for just one solution, the only one that is physically meaningful.
So the question is: how can the equation be solved practically, in a reasonable way, and how can we be sure to obtain the physical solution?
My first guess was to use a simple iterative approach. I make the ansatz that the solution will be close to the weighted average of the \(\epsilon_i\), i.e. \(\epsilon = \sum f_i \epsilon_i\), which is the solution the equation would have if the denominators were all equal. Since we know this \(\epsilon\) will not actually be a solution, we can treat it as an approximation and solve the equation above with the \(\epsilon\) in the denominators replaced by our guess. In this way we obtain a new estimate of \(\epsilon\), and we can iterate using the following scheme:\[\epsilon^{(i+1)} = \frac{\sum \frac{f_i \epsilon_i}{\epsilon_i + 2 \epsilon^{(i)}}}{\sum \frac{f_i}{\epsilon_i + 2 \epsilon^{(i)}}}\]
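The scheme above can be sketched in a few lines of Python; this is an illustrative implementation under my own naming, not the Regress Pro code:

```python
def bema_fixed_point(fractions, eps_list, tol=1e-12, max_iter=500):
    """Fixed-point iteration for the Bruggeman equation.

    Starts from the weighted average of the eps_i and repeatedly solves
    the equation with the denominators frozen at the current estimate.
    """
    eps = sum(f * e for f, e in zip(fractions, eps_list))  # initial ansatz
    for _ in range(max_iter):
        num = sum(f * e / (e + 2 * eps) for f, e in zip(fractions, eps_list))
        den = sum(f / (e + 2 * eps) for f, e in zip(fractions, eps_list))
        eps_new = num / den
        if abs(eps_new - eps) < tol:
            return eps_new
        eps = eps_new
    return eps  # may not have converged
```

For a mixture of two ordinary dielectrics this converges in a handful of iterations.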
It turns out that the idea is not so bad, and it converges to the solution in many cases. The iteration above can also be replaced by the Newton-Raphson method, which may converge more rapidly.
The problem with this approach is that in some cases, when mixing a dielectric with an absorbing material like a metal, or silicon at some wavelengths, the iterative procedure converges very slowly or converges to the wrong solution, so the method is not reliable.
What happens is that when mixing in a highly absorbing material, the initial estimate we use, the weighted average of the epsilons, is a really bad estimate of the actual solution, and this explains the convergence problems of the iterative approach.
The requirement I had for Regress Pro, an ellipsometry and reflectometry application, was a reliable computation of epsilon that converges in all cases. It was not acceptable to say, "sorry, in this case I was unable to find a solution"!
By thinking about the problem I finally had a good idea that let me solve the equation reliably, with the guarantee that it converges to the real physical solution. In addition, the amount of computation involved is quite reasonable, which is also important.
The idea is quite simple. Let us consider the \(\epsilon\) that satisfies the first equation but with a set of fractions that depend on a parameter we will call \(t\):\[\sum_i f_i(t) \frac{\epsilon_i - \epsilon}{\epsilon_i + 2 \epsilon}=0\]dropping the constraint that the fractions add up to one.
The fractions are defined in terms of \(t\) as follows:\[\begin{array}{ll}f_1(t) = f_1 & \\ f_i(t) = t \, f_i & \textrm{for} \, i = 2 \dots n \end{array}\]In practice this scheme can be seen as if the fractions \(f_2, \dots f_n\) were gradually turned on as \(t\) grows from 0 to 1. One interesting thing is that we know the solution for \(t = 0\): it is just \(\epsilon = \epsilon_1\), since all the other fractions are zero. On the other hand, for \(t = 1\), \(\epsilon(t)\) will be exactly the solution to the BEMA equation we are searching for.
The point now is: we know \(\epsilon(t)\) for \(t=0\), but we need \(\epsilon(t)\) for \(t=1\), so we need some way to bring our solution from \(t = 0\) to \(t = 1\). How can this be done? Well, it happens that a simple solution exists: one just needs to find the differential equation that \(\epsilon(t)\) obeys as the parameter \(t\) varies, and solve it numerically using \(\epsilon(0)\) as the starting point.
In the language of numerical computation, the differential equation we want to solve is an Ordinary Differential Equation (ODE) in the complex variable \(\epsilon\). In most cases a solver can be found for arrays of real variables, say \(y_i(t)\), so in our case we can just pretend that the real and imaginary parts of \(\epsilon\) are the two real-valued components of an array of size two.
The differential equation for \(\epsilon(t)\) can be found just by differentiating the BEMA equation with respect to \(t\). One easily finds:\[\epsilon'(t) = \frac{1}{3} \, \frac{\sum_{i=2}^n f_i \frac{\epsilon_i - \epsilon}{\epsilon_i + 2 \epsilon}}{\sum_{i=1}^n f_i \, \alpha_i(t) \, \frac{\epsilon_i}{(\epsilon_i + 2 \epsilon)^2}}\]where the coefficients \(\alpha_i(t)\) are defined as:\[\begin{array}{ll}\alpha_1(t) = 1 & \\ \alpha_i(t) = t & \textrm{for} \, i = 2 \dots n \end{array}\]
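To illustrate the continuation, here is a self-contained Python sketch. It uses a plain fixed-step classical Runge-Kutta 4 integrator rather than the adaptive Runge-Kutta-Fehlberg scheme of the real implementation; all names are mine and the step count is arbitrary:

```python
def bema_continuation(fractions, eps_list, n_steps=200):
    """Integrate eps'(t) from t=0 (where eps = eps_1) to t=1.

    fractions[0] is assumed to be non-zero (ideally the largest fraction).
    Uses classical fixed-step RK4 as a stand-in for an adaptive integrator.
    """
    f1, e1 = fractions[0], eps_list[0]
    rest = list(zip(fractions[1:], eps_list[1:]))

    def deriv(t, eps):
        # Numerator: sum over i >= 2 of f_i (e_i - eps) / (e_i + 2 eps)
        num = sum(f * (e - eps) / (e + 2 * eps) for f, e in rest)
        # Denominator: sum of f_i alpha_i(t) e_i / (e_i + 2 eps)^2,
        # with alpha_1 = 1 and alpha_i = t for i >= 2
        den = f1 * e1 / (e1 + 2 * eps) ** 2 + t * sum(
            f * e / (e + 2 * eps) ** 2 for f, e in rest)
        return num / (3 * den)

    eps, t, h = complex(e1), 0.0, 1.0 / n_steps
    for _ in range(n_steps):
        k1 = deriv(t, eps)
        k2 = deriv(t + h / 2, eps + h * k1 / 2)
        k3 = deriv(t + h / 2, eps + h * k2 / 2)
        k4 = deriv(t + h, eps + h * k3)
        eps += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return eps
```

Even when one component is strongly absorbing, the path followed by the integration should land on the physically meaningful root, the one with a non-negative imaginary part.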
I have opted to solve the differential equation working directly on the complex \(\epsilon\), using the Runge-Kutta-Fehlberg (4, 5) method by adapting the code found in the GNU Scientific Library. My implementation can be found in Regress Pro's github repository, in the disp-bruggeman.c file.
I have also added a final step, after the ODE integration, to "refine" the solution using the Newton-Raphson method. In this way we can use a more relaxed accuracy requirement for the ODE integrator and refine the solution with the quickly converging Newton-Raphson iterations. In this approach the ODE integration only brings \(\epsilon\) close to the solution, and the final determination of \(\epsilon\) is left to the Newton-Raphson iterations.
The formula for the Newton-Raphson iterations is:\[\epsilon^{\textrm{(next)}} = \epsilon + \frac{1}{3} \, \frac{\sum_i f_i \frac{\epsilon_i - \epsilon}{\epsilon_i + 2 \epsilon}}{\sum_i f_i \frac{\epsilon_i}{(\epsilon_i + 2 \epsilon)^2}}\]
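As an illustration, the refinement step might look like this in Python (my own sketch, not the C code from the repository):

```python
def bema_newton_refine(eps, fractions, eps_list, n_iter=10, tol=1e-14):
    """Polish an approximate solution of the Bruggeman equation with
    Newton-Raphson iterations: eps <- eps + (1/3) num/den."""
    for _ in range(n_iter):
        num = sum(f * (e - eps) / (e + 2 * eps)
                  for f, e in zip(fractions, eps_list))
        den = sum(f * e / (e + 2 * eps) ** 2
                  for f, e in zip(fractions, eps_list))
        step = num / (3 * den)
        eps += step
        if abs(step) < tol:
            break
    return eps
```

Starting from an estimate that is already close, a few iterations drive the residual down to machine precision.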
Some Final Remarks
The solution of the differential equation exists and is unique under the condition that \(f_1 \neq 0\). To ensure this condition on \(f_1\), and also to obtain a more tractable equation, I have chosen to reorder the fractions so that \(f_1\) is always the largest coefficient.
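For completeness, the reordering can be sketched as follows (a hypothetical helper of mine, not part of Regress Pro):

```python
def reorder_largest_first(fractions, eps_list):
    """Sort the (f_i, eps_i) pairs so the largest fraction comes first,
    ensuring f_1 != 0 as required by the continuation method."""
    pairs = sorted(zip(fractions, eps_list), key=lambda p: p[0], reverse=True)
    return [f for f, _ in pairs], [e for _, e in pairs]
```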
It should be noted that \(\epsilon(t)\) has a physical meaning for each value of \(t\), even if the fractions are not normalized. One just needs to observe that the equation for \(\epsilon\) can be multiplied by a normalizing constant without changing its solution.
Final Thoughts
The solution I present here for the Bruggeman EMA equation may be of real interest to people implementing the Bruggeman model, but it is otherwise a very specific application. What I found more interesting, and what motivated me to write this post, was the elegance and effectiveness of the approach. It leverages the knowledge we have about the equation and reaches the solution by starting from a configuration where a solution is known.