Recent work has observed an intriguing "neural collapse" phenomenon in
well-trained neural networks, where the last-layer representations of training
samples with the same label collapse into each other. This appears to suggest
that the last-layer representations are completely deter