The Particle Swarm Optimization (PSO) method is a global optimization algorithm inspired by the collective behavior of swarms in nature (e.g., flocks of birds, schools of fish). Unlike local refinement algorithms, PSO explores the search space broadly, making it well suited for discovering high-quality solutions in complex optimization landscapes.
In PSO, each particle represents a candidate thin-film design. Particles “fly” through the search space by updating their positions based on two factors: their own best-known solution and the best-known solution of the swarm. Over time, the swarm converges towards promising regions of the search space.
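For reference, a minimal sketch of the textbook PSO update is shown below; it uses the inertia weight w and the cognitive and social coefficients c1 and c2 described in the parameter table further down. The function and array names are illustrative assumptions and do not reflect FilmOptima's internal API.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, bounds=(10.0, 500.0)):
    """One textbook PSO update for a swarm of candidate layer-thickness vectors.

    x, v      : arrays of shape (n_particles, n_layers) - positions (thicknesses) and velocities
    p_best    : each particle's best-known position so far
    g_best    : the swarm's best-known position
    w, c1, c2 : inertia, cognitive, and social coefficients (see the parameter table)
    bounds    : (lower, upper) thickness bounds
    """
    r1 = np.random.rand(*x.shape)  # random factors keep the search stochastic
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = np.clip(x + v, bounds[0], bounds[1])  # keep thicknesses inside the allowed range
    return x, v
```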
Because the candidate designs produced by PSO have arbitrary layer thicknesses and are rarely locally optimal, FilmOptima combines PSO with nested local refinement using the Adam optimizer.
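Conceptually, the nesting can be pictured as in the following sketch: a candidate produced by the swarm is handed to a local Adam refinement before its merit is recorded. Both helper names are hypothetical placeholders, not FilmOptima functions.

```python
def evaluate_candidate(thicknesses, refine_with_adam, merit_function):
    """Hypothetical nesting: fine-tune a PSO candidate locally with Adam,
    then return the refined design and its merit to the swarm."""
    refined = refine_with_adam(thicknesses)   # local, gradient-based fine-tuning
    return refined, merit_function(refined)   # the swarm ranks particles by refined merit
```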
Advantages
- Global Exploration: Escapes local minima via swarm-based search.
- Nonlinearity Ready: Handles highly non-linear or discontinuous merit landscapes effectively.
- Highly Parallelizable: Particles can be evaluated independently.
- Refinement Synergy: Integrates naturally with local refinement for fine-tuning.
Limitations
- Compute Intensive: More expensive than local methods (many particles × many simulations).
- Parameter Sensitivity: Requires careful tuning of swarm parameters to balance exploration and convergence.
- Premature Convergence Risk: Can collapse early if swarm diversity diminishes.
- No Global Guarantees: Offers no mathematical guarantee of finding the global optimum.
In FilmOptima
PSO belongs to the Global Optimization category of algorithms. The following parameters configure the swarm search and its nested Adam refinement:
| Parameter | Description |
|---|---|
| ↓ Thickness | The lower bound for candidate layer thicknesses. |
| ↑ Thickness | The upper bound for candidate layer thicknesses. |
| c1 | Cognitive coefficient – weight given to a particle’s own best-known position. |
| c2 | Social coefficient – weight given to the swarm’s best-known position. |
| w | Inertia weight – controls the balance between exploration (high w) and exploitation (low w). |
| # Particles | Number of particles (candidate designs) in the swarm. |
| # Iterations | Number of swarm update steps (iterations) performed during the optimization. |
| LearningRate | Controls the step size in the Adam optimizer during refinement. Higher values make updates faster but risk overshooting, while lower values are more stable but slower to converge. |
| Patience | Defines how many iterations the Adam optimizer will continue without improvement in the merit function before halving the learning rate. |
| MaxEpoch | Sets the maximum number of training cycles for the Adam optimizer in each refinement step. Acts as a hard limit to keep optimization runs bounded. |
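As a rough illustration of how LearningRate, Patience, and MaxEpoch interact during the nested refinement, the PyTorch sketch below runs Adam with the configured learning rate, halves the learning rate once the merit has not improved for Patience epochs, and stops after MaxEpoch epochs. The merit function and tensor layout are assumptions for illustration, not FilmOptima's actual implementation.

```python
import torch

def refine_with_adam(thicknesses, merit_fn, learning_rate=0.01, patience=20, max_epoch=500):
    """Hypothetical sketch of the nested Adam refinement step.

    thicknesses : initial layer thicknesses from a PSO candidate (1-D tensor)
    merit_fn    : differentiable merit function to minimize (assumed)
    """
    x = thicknesses.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=learning_rate)
    # Halve the learning rate when the merit stops improving for `patience` epochs.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=patience
    )
    for epoch in range(max_epoch):  # MaxEpoch keeps the refinement run bounded
        optimizer.zero_grad()
        merit = merit_fn(x)
        merit.backward()
        optimizer.step()
        scheduler.step(merit.item())
    return x.detach()
```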