We introduce a new approach for audio-visual speech separation. Given a
video, the goal is to extract the speech associated with a face despite
simultaneous background sounds and/or other human speakers. Whereas existing
methods focus on learning the alignment between the speaker's