The ClusterRefinement method is a refinement algorithm proprietary to FilmOptima. It improves an existing stack by systematically exploring joint variations of multiple layers (“clusters”) and polishing the best candidates with a local optimizer.
The key idea is a two-stage enumeration:
- First, the algorithm generates all combinations of up to # LayersToCluster layer indices from the current stack.
- For each index combination, it then enumerates every assignment of the discretized candidate thickness values to the selected layers (sketched below).
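As a concrete illustration, the two-stage enumeration can be written with standard itertools primitives. This is a minimal sketch, not FilmOptima's implementation; the names `enumerate_candidates`, `stack`, and `thickness_values` are assumptions, and a stack is modelled simply as a list of layer thicknesses.

```python
from itertools import combinations, product

def enumerate_candidates(stack, n_layers_to_cluster, thickness_values):
    """Two-stage enumeration: layer-index combinations, then thickness assignments."""
    n = len(stack)
    for cluster_size in range(1, n_layers_to_cluster + 1):
        # Stage 1: every combination of `cluster_size` layer indices.
        for indices in combinations(range(n), cluster_size):
            # Stage 2: every assignment of discretized thicknesses to those layers.
            for thicknesses in product(thickness_values, repeat=cluster_size):
                candidate = list(stack)          # copy; the layer count stays fixed
                for idx, t in zip(indices, thicknesses):
                    candidate[idx] = t
                yield candidate
```

Here `thickness_values` would hold the # Thicknesses equally spaced values between ↓ Thickness and ↑ Thickness (see the parameter table below).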
Once the “clustered” candidate stacks are generated, they are refined using Adam.
By jointly perturbing clusters of layers instead of adjusting them individually, ClusterRefinement can escape shallow local minima while still keeping the total number of layers fixed.
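Each enumerated candidate is then handed to a local Adam refinement. The following is a minimal, hypothetical sketch (not FilmOptima's code), assuming a differentiable merit function `merit_fn` that maps a tensor of layer thicknesses to a scalar merit value (lower is better), and using PyTorch's `torch.optim.Adam`:

```python
import torch

def refine_candidate(thicknesses, merit_fn, lr=0.01, max_epoch=200):
    """Locally refine one candidate stack's layer thicknesses with Adam."""
    t = torch.tensor(thicknesses, dtype=torch.float64, requires_grad=True)
    optimizer = torch.optim.Adam([t], lr=lr)
    for _ in range(max_epoch):
        optimizer.zero_grad()
        merit = merit_fn(t)      # evaluate the stack's merit function
        merit.backward()         # gradients with respect to layer thicknesses
        optimizer.step()
    with torch.no_grad():
        final_merit = float(merit_fn(t))
    return t.detach().tolist(), final_merit
```

Thickness bounds (↓ Thickness / ↑ Thickness) could additionally be enforced by clamping `t` after each step; that detail is omitted here for brevity.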
## Advantages
- Joint layer optimization: Captures multi-layer interactions by modifying clusters instead of single layers.
- Systematic enumeration: Guarantees coverage of all index combinations and discretized thickness assignments.
- Precise local improvement: The Adam refinement nested inside the enumeration polishes each candidate toward a nearby local optimum.
- No topology change: Layer count stays fixed, ideal when design structure is constrained.
- Highly parallelizable: Candidate stacks can be refined and evaluated independently (see the sketch below).
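Because every candidate can be refined and scored independently, the refinement stage maps naturally onto a worker pool. A minimal sketch, assuming the hypothetical `refine_candidate` function from the sketch above and a picklable `merit_fn`:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def refine_all(candidates, merit_fn, workers=8):
    """Refine candidate stacks in parallel and keep the one with the lowest merit."""
    # refine_candidate and merit_fn must be module-level (picklable) so they
    # can be shipped to the worker processes.
    task = partial(refine_candidate, merit_fn=merit_fn)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(task, candidates))
    return min(results, key=lambda r: r[1])  # each result is (thicknesses, merit)
```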
## Limitations
- Combinatorial growth: The number of candidates rises quickly with the number of thickness values and the number of layers to cluster (quantified after this list).
- Runtime cost: Full enumeration can be computationally expensive.
- Local scope: Broader than single-layer refinement but still not a global optimization method.
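To make the combinatorial growth concrete: for a stack of $N$ layers, clusters of exactly $k$ layers, and $T$ discretized thickness values per layer, the enumeration produces

$$\binom{N}{k}\,T^{k}$$

candidates. For example, $N = 20$, $k = 3$, $T = 10$ already gives $\binom{20}{3}\cdot 10^3 = 1140 \cdot 1000 = 1{,}140{,}000$ candidates to refine; allowing all cluster sizes up to $k$ adds the corresponding terms for smaller clusters.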
## In FilmOptima
In FilmOptima, ClusterRefinement belongs to the Refinement category of algorithms.
| Parameter | Description |
|---|---|
| ↓ Thickness | The lower bound for candidate layer thicknesses. |
| ↑ Thickness | The upper bound for candidate layer thicknesses. |
| # Thicknesses | Number of equally spaced candidate thickness values between the lower and upper bounds. |
| # LayersToCluster | Specifies the number of layers to group together for joint optimization. |
| LearningRate | Controls the step size in the Adam optimizer during refinement. Higher values make updates faster but risk overshooting, while lower values are more stable but slower to converge. |
| Patience | Defines how many iterations the Adam optimizer will continue without improvement in the merit function before halving the learning rate. |
| MaxEpoch | Sets the maximum number of training cycles for the Adam optimizer in each refinement step. Acts as a hard limit to keep optimization runs bounded. |
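As an illustration of how Patience interacts with the learning rate, here is a hypothetical sketch (the class name `PatienceLRSchedule` is an assumption, not part of FilmOptima):

```python
class PatienceLRSchedule:
    """Halve the learning rate after `patience` consecutive epochs
    without an improvement in the merit value (lower is better)."""

    def __init__(self, learning_rate, patience):
        self.lr = learning_rate
        self.patience = patience
        self.best = float("inf")
        self.stale = 0

    def step(self, merit):
        if merit < self.best:
            self.best, self.stale = merit, 0
        else:
            self.stale += 1
            if self.stale >= self.patience:
                self.lr *= 0.5   # halve the learning rate
                self.stale = 0
        return self.lr
```

In a refinement run, the learning rate returned by `step` would be applied to the Adam optimizer after each epoch, and the loop stops once MaxEpoch epochs have elapsed regardless of progress.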