July 2023
Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization?
Romain Egele, Isabelle Guyon, Yixuan Sun, Prasanna Balaprakash
TL;DR
The study finds that MF-HPO benchmarks should include more diverse and complex cases, recommends that researchers always include the proposed simple baseline when benchmarking MF-HPO methods, and notes that such benchmarking requires extended compute budgets.
Abstract
Hyperparameter optimization (HPO) is crucial for fine-tuning machine learning models but can be computationally expensive. To reduce costs, multi-fidelity HPO (MF-HPO) leverages intermediate accuracy levels in the learning process and discards low-performing models early on.
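To make the idea concrete, below is a minimal Python sketch of a one-epoch screening baseline of the kind the title alludes to: every candidate configuration is trained for a single epoch, only the top-k survivors are trained fully, and the best fully trained one wins. The function names (one_epoch_baseline, train_one_epoch, train_fully) and the toy scoring objectives are hypothetical illustrations, not the paper's actual implementation.

import random

def one_epoch_baseline(configs, train_one_epoch, train_fully, k=5):
    # Low-fidelity screening: train each configuration for one epoch
    # and record its validation score.
    scored = [(cfg, train_one_epoch(cfg)) for cfg in configs]
    # Keep only the k most promising configurations.
    survivors = sorted(scored, key=lambda t: t[1], reverse=True)[:k]
    # High-fidelity pass: fully train the survivors, return the best.
    results = [(cfg, train_fully(cfg)) for cfg, _ in survivors]
    return max(results, key=lambda t: t[1])

# Toy demonstration with stand-in objectives (purely illustrative):
# the one-epoch proxy is a noisy version of the full-training score.
random.seed(0)
configs = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(20)]
noisy_proxy = lambda cfg: -abs(cfg["lr"] - 0.01) + random.gauss(0, 0.002)
full_training = lambda cfg: -abs(cfg["lr"] - 0.01)
best_cfg, best_score = one_epoch_baseline(configs, noisy_proxy, full_training)
print(best_cfg, best_score)

The design point the sketch captures is that the low-fidelity proxy only needs to rank configurations well enough for the top-k cut, not predict final accuracy exactly.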