Intuitively, obedience -- following the order that a human gives -- seems
like a good property for a robot to have. But we humans are not perfect, and we
may give orders that are not well aligned with our preferences. We show that when
a human is not perfectly rational, then a robot that infers and acts
according to the human's underlying preferences can perform better than a
robot that simply follows the human's literal orders.