Meta-Reinforcement Learning (Meta-RL) aims to acquire meta-knowledge that enables quick adaptation to a distribution of tasks from only a few examples. We propose Constrained Model-Agnostic Meta-Learning (C-MAML), a novel approach that combines meta-learning with constrained optimization to enable rapid and efficient task adaptation. We demonstrate its effectiveness on simulated wheeled-robot locomotion tasks of varying complexity.
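The abstract does not spell out the algorithm, but the general shape of a MAML-style method with a constrained inner loop can be sketched on a toy problem. The sketch below is purely illustrative: the scalar tasks, the hinge penalty, and all hyperparameters are assumptions for demonstration, not details of C-MAML itself, and the meta-update is first-order for simplicity.

```python
import numpy as np

# Toy first-order MAML sketch: each task is to minimize L_t(w) = (w - t)^2
# for a task-specific target t. An illustrative constraint penalty on |w|
# (an assumption, not taken from the paper) is added to the adaptation
# objective, loosely mirroring a constrained inner loop.

def loss_grad(w, t):
    return 2.0 * (w - t)  # dL/dw for L = (w - t)^2

def penalty_grad(w, bound=3.0, coef=0.1):
    # Gradient of coef * max(0, |w| - bound), a hinge penalty keeping |w| bounded.
    return coef * np.sign(w) if abs(w) > bound else 0.0

def meta_train(tasks, meta_w=0.0, inner_lr=0.1, outer_lr=0.05, steps=200):
    for _ in range(steps):
        meta_grad = 0.0
        for t in tasks:
            # Inner loop: one penalized adaptation step from the meta-parameters.
            w_adapted = meta_w - inner_lr * (loss_grad(meta_w, t) + penalty_grad(meta_w))
            # First-order MAML: accumulate the post-adaptation gradient.
            meta_grad += loss_grad(w_adapted, t) + penalty_grad(w_adapted)
        # Outer loop: meta-update averaged over the task batch.
        meta_w -= outer_lr * meta_grad / len(tasks)
    return meta_w

meta_w = meta_train(tasks=[-1.0, 0.0, 1.0])
```

For tasks placed symmetrically around zero, the meta-parameters settle near the task average, from which a single inner-loop gradient step moves markedly closer to any individual task's optimum.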