Feb 2024

Downstream Task Guided Masking Learning in Masked Autoencoders Using Multi-Level Optimization

TL;DR: Multi-level Optimized Mask Autoencoder (MLO-MAE) is a novel framework for visual representation learning that leverages end-to-end feedback from downstream tasks to learn an optimal masking strategy during pretraining, showing notable improvements in adaptability and efficiency compared to existing methods.
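
To make the multi-level idea concrete, below is a minimal first-order sketch in PyTorch of how a masking strategy could be trained from downstream feedback: one level pretrains the masked autoencoder under the current mask, one level fits a downstream head, and a third level updates the mask scorer using the downstream loss. All module names, shapes, the soft mask relaxation, and the alternating first-order updates are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Hedged sketch: three alternating optimization levels for downstream-guided masking.
# Shapes, modules, and the first-order surrogate updates are assumptions, not MLO-MAE's exact algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PATCHES, DIM, NUM_CLASSES = 16, 32, 10

mask_scorer = nn.Linear(DIM, 1)            # level 3: learns which patches to mask
encoder     = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True), num_layers=1)
decoder     = nn.Linear(DIM, DIM)          # level 1: MAE reconstruction head
classifier  = nn.Linear(DIM, NUM_CLASSES)  # level 2: downstream classification head

opt_pre  = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_down = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_mask = torch.optim.Adam(mask_scorer.parameters(), lr=1e-3)

def soft_keep_mask(patches):
    # Soft (noisy sigmoid) relaxation of patch masking so the mask scorer
    # stays differentiable; 1 = keep the patch, values near 0 = masked out.
    scores = mask_scorer(patches).squeeze(-1)
    return torch.sigmoid(scores + torch.randn_like(scores))

for step in range(3):  # toy loop; real training alternates for many steps
    patches = torch.randn(8, NUM_PATCHES, DIM)   # unlabeled pretraining batch (toy data)
    x_down  = torch.randn(8, NUM_PATCHES, DIM)   # labeled downstream batch (toy data)
    y_down  = torch.randint(0, NUM_CLASSES, (8,))

    # Level 1: pretrain encoder/decoder to reconstruct masked patches
    # under the current (detached) masking strategy.
    keep = soft_keep_mask(patches).detach().unsqueeze(-1)
    recon = decoder(encoder(patches * keep))
    loss_pre = F.mse_loss(recon * (1 - keep), patches * (1 - keep))
    opt_pre.zero_grad(); loss_pre.backward(); opt_pre.step()

    # Level 2: fit the downstream head on top of frozen encoder features.
    feats = encoder(x_down).mean(dim=1).detach()
    loss_down = F.cross_entropy(classifier(feats), y_down)
    opt_down.zero_grad(); loss_down.backward(); opt_down.step()

    # Level 3: update the mask scorer so that masking helps the downstream task
    # (first-order surrogate: backpropagate the downstream loss through the soft mask).
    keep = soft_keep_mask(x_down).unsqueeze(-1)
    feats = encoder(x_down * keep).mean(dim=1)
    loss_mask = F.cross_entropy(classifier(feats), y_down)
    opt_mask.zero_grad(); loss_mask.backward(); opt_mask.step()

    print(f"step {step}: pretrain={loss_pre.item():.3f}  downstream={loss_down.item():.3f}")
```

The key design point this sketch illustrates is the feedback path: the mask scorer is never trained on the reconstruction loss alone; its update signal comes from how well the resulting representations serve the downstream task.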