Generative models that learn to associate variations in the output along isolated attributes with coordinate directions in latent space are said to possess disentangled representations. This has been a topic of considerable recent interest, but it remains poorly understood. We show that even for GANs
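One common way to make this notion concrete, sketched below as an illustrative formalization rather than a definition taken from this abstract (the symbols $G$, $z$, $e_i$, $\alpha$, and the attribute readouts $a_j$ are all assumptions introduced here): a latent direction is disentangled with respect to attribute $i$ if moving along it changes that attribute alone.

```latex
% Illustrative formalization (G, z, e_i, alpha, a_j are assumed symbols,
% not notation from the abstract): a generator G maps a latent code z to an
% output, and a_j reads off attribute j of that output. Disentanglement
% along latent coordinate i means a step of size \alpha in the basis
% direction e_i leaves every other attribute (approximately) unchanged,
\[
  a_j\bigl(G(z + \alpha e_i)\bigr) \approx a_j\bigl(G(z)\bigr)
  \quad \text{for all } j \neq i,
\]
% while attribute i itself varies with the step size \alpha:
\[
  a_i\bigl(G(z + \alpha e_i)\bigr) \text{ varies with } \alpha .
\]
```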