Multimodal Large Language Models (mLLMs) trained on caption-like and interleaved text-image data, such as mOSCAR, show improved in-context learning capabilities and a boost in few-shot performance across various multilingual image-text tasks and benchmarks, addressing the limitations of current multilingual and multimodal datasets.