
Advanced fitting techniques

1. Choosing the right minimizer

[Screenshot: the Minimizer tab of the fitting window, where you select the optimization algorithm (e.g. Nelder–Mead) and configure fit variables, duplicate variables, and fit rules.]

The Minimizer ribbon is where you choose and configure the algorithm that drives the fitting process. Selecting the right minimizer can make the difference between a fast, robust convergence and a slow or unstable fit. OghmaNano provides several well-established algorithms, each with different strengths, which can be selected from the Fitting method drop-down menu.

Broadly speaking, fitting algorithms fall into two categories. The first comprises downhill or gradient-based methods, which try to “roll a ball down a hill” until it reaches the minimum error. Some of these require explicit gradient calculations, which can be costly and fragile for complex models; others are derivative-free and usually more robust. The second comprises statistical methods, which not only seek the best fit but also provide a probability distribution indicating the confidence and uniqueness of the solution. These methods are more computationally demanding but can give deeper insight into parameter uncertainty.

Method             | Downhill | Gradient | Statistical | Comment
Nelder–Mead        |    ✓     |          |             | Robust, slow, reliable
Newton             |    ✓     |    ✓     |             | Fragile, sometimes fast
Thermal annealing  |          |          |      ✓      | Surprisingly good
MCMC               |          |          |      ✓      | ?
HMC                |          |    ✓     |      ✓      | ?

Nelder–Mead (Simplex Downhill)

The Nelder–Mead simplex algorithm is the most widely used fitting method in OghmaNano; in fact, all published papers up to 2024 relied on it. A general introduction to the method can be found at https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method. In practice, this minimizer is robust and does not require gradients, making it well suited to complex problems.

The main advantage of Nelder–Mead is its simplicity and robustness. Conceptually, it “rolls a ball downhill” across the error surface toward the minimum without requiring gradient calculations, which is particularly valuable for noisy or discontinuous models.
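
As a minimal sketch of the idea (using SciPy's general-purpose minimizer, not OghmaNano's internal implementation), the example below runs Nelder–Mead on a hypothetical two-parameter fit error; fit_error and its log-scaled parameters are invented stand-ins for whatever the simulation actually computes.

    from scipy.optimize import minimize

    # Hypothetical stand-in for a fit error: the mismatch between simulated
    # and measured curves as a function of two model parameters. Working in
    # log10 space keeps both parameters on comparable scales.
    def fit_error(logp):
        log_mobility, log_trap_density = logp
        return (log_mobility + 7.0) ** 2 + (log_trap_density - 22.0) ** 2

    result = minimize(
        fit_error,
        x0=[-9.0, 21.0],                 # initial guess in log10 units
        method="Nelder-Mead",            # derivative-free simplex search
        options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 2000},
    )
    print(result.x, result.fun)          # best-fit parameters and residual error

Because no gradients are evaluated, the same call keeps working even when fit_error is noisy or has small discontinuities.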


Thermal Annealing

Thermal annealing is a stochastic optimization method inspired by the physical process of cooling a material from high temperature to low temperature. In OghmaNano, this algorithm explores the parameter space defined by the variable limits you specify. Correctly setting those bounds is essential — the minimizer will not search beyond them.

In practice, thermal annealing often performs surprisingly well and can be faster than Nelder–Mead at finding a reasonable solution. However, the final fits are sometimes less precise or “polished,” and additional refinement with Nelder–Mead may still be needed. Thermal annealing is particularly useful for escaping local minima and performing global exploration of the parameter space.
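
The sketch below illustrates the same workflow with SciPy's dual_annealing routine (a generalized simulated annealing, standing in for OghmaNano's own annealer); note how the search is confined to the bounds you supply, just as described above.

    from scipy.optimize import dual_annealing

    # Hypothetical fit error in log10 parameter space (same toy model as above).
    def fit_error(logp):
        return (logp[0] + 7.0) ** 2 + (logp[1] - 22.0) ** 2

    # These bounds play the role of the variable limits set in the fit window:
    # the annealer explores inside them and never steps outside.
    bounds = [(-12.0, -4.0), (18.0, 26.0)]

    result = dual_annealing(fit_error, bounds=bounds, maxiter=200)
    print(result.x, result.fun)

A common pattern, echoing the advice above, is to pass result.x to a Nelder–Mead run for final polishing.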


Newton

Newton’s method is included in OghmaNano for completeness, but it is rarely the best choice for most fitting problems. As a gradient-based minimizer, it requires derivatives to be calculated at each step. While this can occasionally make it faster than Nelder–Mead for certain smooth, well-behaved problems, it also makes the algorithm fragile: small numerical errors in derivative evaluation can cause the fit to diverge or stall.

In practice, Newton’s method is highly sensitive to the initial guess and the scaling of variables. Unless the problem is very simple and well-conditioned, convergence is often poor. For these reasons, it is generally not recommended as a primary method but can be useful as a diagnostic tool or for experimentation in controlled cases.
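
The sketch below (again SciPy rather than OghmaNano internals) shows the defining trait of Newton-type methods: the gradient must be supplied explicitly, and any error in it feeds straight into the step.

    import numpy as np
    from scipy.optimize import minimize

    # A smooth, well-conditioned toy error surface where Newton-type steps excel.
    def error(p):
        return (p[0] - 1.0) ** 2 + 10.0 * (p[1] + 2.0) ** 2

    # The gradient must be provided by hand; an inaccurate or noisy gradient is
    # exactly what makes these methods fragile on realistic device models.
    def grad(p):
        return np.array([2.0 * (p[0] - 1.0), 20.0 * (p[1] + 2.0)])

    result = minimize(error, x0=[0.0, 0.0], method="Newton-CG", jac=grad)
    print(result.x, result.nit)          # converges in very few iterations here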


Markov chain Monte Carlo (MCMC)

Markov chain Monte Carlo (MCMC) is a statistical fitting method that samples parameter space randomly but in a way that builds up the correct probability distribution over time. Unlike Nelder–Mead or Newton, which return a single “best-fit” set of parameters, MCMC produces a distribution of solutions that shows how probable different parameter values are. This makes it particularly powerful for quantifying uncertainty and identifying correlations between variables. In OghmaNano, support is implemented but has not been robustly tested.
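
To make the idea concrete, here is a from-scratch random-walk Metropolis sampler, one classic MCMC variant; the Gaussian log_prob at the bottom is a hypothetical target, not anything produced by OghmaNano.

    import numpy as np

    def metropolis(log_prob, x0, n_samples=5000, step=0.5, seed=0):
        """Minimal random-walk Metropolis sampler (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_prob(x)
        samples = []
        for _ in range(n_samples):
            x_prop = x + step * rng.standard_normal(x.shape)  # random proposal
            lp_prop = log_prob(x_prop)
            # Accept with probability min(1, p(x')/p(x)); otherwise stay put.
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = x_prop, lp_prop
            samples.append(x.copy())
        return np.array(samples)

    # Hypothetical log-likelihood: a 2-D Gaussian posterior over two parameters.
    chain = metropolis(lambda x: -0.5 * np.sum(x**2), x0=[3.0, -3.0])
    print(chain.mean(axis=0), chain.std(axis=0))

The histogram of chain, rather than any single best point, is what expresses the parameter uncertainty and correlations described above.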

Hamiltonian Monte Carlo (HMC)

Hamiltonian Monte Carlo (HMC) extends the MCMC idea by using gradient information to propose more efficient jumps through parameter space. Instead of moving randomly, HMC simulates the “motion” of a particle through the likelihood landscape, guided by gradients, which can dramatically improve sampling efficiency in high-dimensional problems. Like MCMC, HMC generates a probability distribution over fitted parameters rather than a single solution. In OghmaNano, support is implemented but has not been robustly tested.
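
The bare-bones sampler below shows how gradient-guided leapfrog steps replace the random walk; log_prob and grad_log_prob are hypothetical stand-ins, and production implementations add step-size adaptation and a mass matrix.

    import numpy as np

    def hmc(log_prob, grad_log_prob, x0, n_samples=1000,
            step=0.1, n_leapfrog=20, seed=0):
        """Minimal Hamiltonian Monte Carlo sampler (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        samples = []
        for _ in range(n_samples):
            p = rng.standard_normal(x.shape)      # fresh random momentum
            x_new, p_new = x.copy(), p.copy()
            # Leapfrog integration: the "motion of a particle" guided by gradients.
            p_new += 0.5 * step * grad_log_prob(x_new)
            for _ in range(n_leapfrog - 1):
                x_new += step * p_new
                p_new += step * grad_log_prob(x_new)
            x_new += step * p_new
            p_new += 0.5 * step * grad_log_prob(x_new)
            # Metropolis correction on the total energy (Hamiltonian).
            h_old = -log_prob(x) + 0.5 * p @ p
            h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
            if np.log(rng.uniform()) < h_old - h_new:
                x = x_new
            samples.append(x.copy())
        return np.array(samples)

    # Hypothetical 2-D Gaussian target and its gradient.
    chain = hmc(lambda x: -0.5 * np.sum(x**2), lambda x: -x, x0=[3.0, -3.0])
    print(chain.mean(axis=0))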

No-U-Turn Sampler (NUTS)

The No-U-Turn Sampler (NUTS) is an adaptive variant of HMC that automatically decides when to stop a trajectory through parameter space to avoid wasted computation or retracing paths. This makes NUTS more user-friendly, since it reduces the need to manually tune algorithm parameters. NUTS is widely regarded as one of the most efficient and robust methods for Bayesian parameter estimation. In OghmaNano, support is implemented but has not been robustly tested.
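
Since OghmaNano's implementation is untested, one way to experiment with NUTS today is through an established Bayesian library; PyMC, for example, selects NUTS automatically for continuous parameters. The toy model below is purely illustrative.

    import numpy as np
    import pymc as pm

    # Hypothetical measured data with an unknown mean; NUTS infers the posterior.
    data = np.random.default_rng(0).normal(1.5, 0.3, size=50)

    with pm.Model():
        mu = pm.Normal("mu", mu=0.0, sigma=10.0)          # prior on the parameter
        pm.Normal("obs", mu=mu, sigma=0.3, observed=data)
        trace = pm.sample(1000)   # NUTS chosen automatically; no hand tuning
    print(trace.posterior["mu"].mean())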

👉 Next step: continue to Part C for more advanced fitting methods.