The AnyNeedle method is a synthesis algorithm proprietary to FilmOptima that extends the widely known Needle method. While the classical Needle method only inserts layers of zero thickness into the stack, the AnyNeedle approach also evaluates candidate layers of finite thickness, making it more flexible and often more efficient.
For each candidate insertion, defined by a depth in the stack and a layer thickness, the resulting stack is refined with the Adam optimizer, and the candidate that improves the merit function the most is accepted.
This iterative process continues until no further improvements can be achieved.
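The sketch below illustrates this loop under stated assumptions; it is not FilmOptima's implementation. The helpers `evaluate_merit`, `insert_layer`, and `refine_with_adam`, and the convention that a lower merit value is better, are hypothetical placeholders.

```python
# Illustrative sketch of the AnyNeedle loop described above.
# `evaluate_merit`, `insert_layer`, and `refine_with_adam` are hypothetical
# helpers, not FilmOptima APIs; lower merit is assumed to be better.

def anyneedle(stack, depths, thicknesses,
              evaluate_merit, insert_layer, refine_with_adam):
    best_merit = evaluate_merit(stack)
    while True:
        best_candidate = None
        # Try every (depth, thickness) insertion and refine each candidate.
        for depth in depths:
            for thickness in thicknesses:
                candidate = insert_layer(stack, depth, thickness)
                candidate = refine_with_adam(candidate)
                merit = evaluate_merit(candidate)
                if merit < best_merit:
                    best_merit = merit
                    best_candidate = candidate
        if best_candidate is None:
            break                      # no insertion improves the design
        stack = best_candidate         # accept the best-improving candidate
    return stack
```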
Advantages
- Systematic Construction: Does not require a strong initial guess.
- Layer Efficiency: Often achieves high performance with fewer layers than the Needle method.
- Practical Thicknesses: Less prone to generating extremely thin (impractical) layers than the Needle method.
- Robust to Initial Design: Performance is less sensitive to the initial design than the Needle method.
- Highly Parallelizable: Candidate stacks can be evaluated independently (see the sketch after this list).
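Because each (depth, thickness) candidate is independent, its refinement can be farmed out to separate worker processes. The sketch below is only one way this could be done; `refine_candidate`, which is assumed to insert the layer, run the Adam refinement, and return a `(merit, refined_stack)` pair, is a hypothetical helper.

```python
# Sketch of parallel candidate evaluation; each (depth, thickness) pair
# is independent, so refinement can run in separate worker processes.
# `refine_candidate(stack, depth, thickness)` is a hypothetical helper
# returning (merit, refined_stack).
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def evaluate_candidates_parallel(stack, depths, thicknesses, refine_candidate):
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(refine_candidate, stack, d, t)
                   for d, t in product(depths, thicknesses)]
        results = [f.result() for f in futures]
    # Return the candidate with the best (lowest) merit value.
    return min(results, key=lambda r: r[0])
```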
Limitations
- Combinatorial Cost: Computationally expensive when many insertion points and thickness values are evaluated.
- Thin-Layer Risk: May still produce very thin layers that require filtering via minimum-thickness constraints.
In FilmOptima
The AnyNeedle method belongs to the Synthesis category of algorithms.
| Parameter | Description |
|---|---|
| ↓ Thickness | The lower bound for candidate layer thicknesses. |
| ↑ Thickness | The upper bound for candidate layer thicknesses. |
| # Thicknesses | Number of equally spaced candidate thickness values between the lower and upper bounds. |
| # Insertions | Specifies how many equally spaced needle insertions are attempted across the stack. The exact insertion positions are determined using interpolation. |
| LearningRate | Controls the step size in the Adam optimizer during refinement. Higher values make updates faster but risk overshooting, while lower values are more stable but slower to converge. |
| Patience | Defines how many iterations the Adam optimizer will continue without improvement in the merit function before halving the learning rate. |
| MaxEpoch | Sets the maximum number of training cycles for the Adam optimizer in each refinement step. Acts as a hard limit to keep optimization runs bounded. |
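The sketch below, a rough illustration rather than FilmOptima's implementation, shows how these parameters could map onto the candidate grid and the Adam refinement step. The function names, `merit_fn`, `merit_gradient`, and the exact interpolation of insertion positions are assumptions.

```python
# Illustrative mapping of the table parameters onto the search, assuming NumPy.
# `merit_fn` and `merit_gradient` are hypothetical callables supplied by the user.
import numpy as np

def candidate_grid(total_thickness, t_min, t_max, n_thicknesses, n_insertions):
    """Build the (depth, thickness) candidate grid from the table parameters."""
    # "# Thicknesses" equally spaced values between the lower and upper bounds.
    thicknesses = np.linspace(t_min, t_max, n_thicknesses)
    # "# Insertions" equally spaced positions across the stack, obtained here
    # by interpolating between the stack boundaries (an assumed scheme).
    fractions = np.arange(1, n_insertions + 1) / (n_insertions + 1)
    depths = np.interp(fractions, [0.0, 1.0], [0.0, total_thickness])
    return [(d, t) for d in depths for t in thicknesses]

def refine_with_adam(x, merit_fn, merit_gradient,
                     learning_rate=0.01, patience=20, max_epoch=500):
    """Adam refinement with LR halving after `patience` epochs without improvement."""
    m, v = np.zeros_like(x), np.zeros_like(x)
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    best, stall = merit_fn(x), 0
    for epoch in range(1, max_epoch + 1):          # MaxEpoch bounds the run
        g = merit_gradient(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** epoch)
        v_hat = v / (1 - beta2 ** epoch)
        x = x - learning_rate * m_hat / (np.sqrt(v_hat) + eps)
        current = merit_fn(x)
        if current < best:
            best, stall = current, 0
        else:
            stall += 1
            if stall >= patience:                  # Patience exhausted:
                learning_rate *= 0.5               # halve the learning rate
                stall = 0
    return x, best
```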