Generative Multimodal Models are In-Context Learners

1Beijing Academy of Artificial Intelligence, 2Tsinghua University, 3Peking University
*Equal contribution. †Project lead.

Abstract

The human ability to easily solve multimodal tasks in context (i.e., with only a few demonstrations or simple instructions) is one that current multimodal systems have largely struggled to imitate. In this work, we demonstrate that the task-agnostic in-context learning capabilities of large multimodal models can be significantly enhanced by effective scaling-up. We introduce Emu2, a generative multimodal model with 37 billion parameters, trained on large-scale multimodal sequences with a unified autoregressive objective. Emu2 exhibits strong multimodal in-context learning abilities, including emergent abilities to solve tasks that require on-the-fly reasoning, such as visual prompting and object-grounded generation. The model sets a new record on multiple multimodal understanding tasks in few-shot settings. When instruction-tuned to follow specific instructions, Emu2 further achieves new state-of-the-art results on challenging tasks such as question-answering benchmarks for large multimodal models and open-ended subject-driven generation. These achievements demonstrate that Emu2 can serve as a base model and general-purpose interface for a wide range of multimodal tasks. Code and models are publicly available to facilitate future research.
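To make the in-context setup concrete, below is a minimal sketch, in plain Python, of how a few-shot multimodal prompt could be assembled: a handful of image-text demonstrations interleaved with a final query image that the model is asked to complete. The ImageItem type, the file names, and the [question]/[answer] template are illustrative assumptions, not the released Emu2 interface.

# A minimal sketch (not the released Emu2 API) of assembling a multimodal
# in-context prompt: a few (image, answer) demonstrations interleaved with
# text, followed by a query image left open for the model to complete.
# Images are represented by file paths here; a real pipeline would load
# and encode them with the model's visual encoder.
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class ImageItem:
    path: str  # placeholder for pixel data / visual embeddings

PromptItem = Union[str, ImageItem]

def few_shot_prompt(demos: List[Tuple[str, str]],
                    query_image: str,
                    question: str) -> List[PromptItem]:
    """Interleave (image, answer) demonstrations, then append the query."""
    prompt: List[PromptItem] = []
    for image_path, answer in demos:
        prompt.append(ImageItem(image_path))
        prompt.append(f"[question] {question} [answer] {answer}")
    prompt.append(ImageItem(query_image))
    prompt.append(f"[question] {question} [answer]")  # model completes this
    return prompt

if __name__ == "__main__":
    demos = [("demo_cat.jpg", "a cat"), ("demo_dog.jpg", "a dog")]
    for item in few_shot_prompt(demos, "query.jpg", "What is in the image?"):
        print(item)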


A strong multimodal few-shot learner

[Figure: few-shot performance comparison]

An impressive multimodal generalist

[Figure: radar chart of results across multimodal benchmarks]

A skilled painter

[Figure: image generation metrics]

Zero-shot subject-driven generation

Multimodal in-context learning

[Figure: multimodal in-context learning examples]

Strong multimodal understanding

[Figures: multimodal understanding examples (hexagon, guide robot, damaged car, sample A and B)]

Generate image from any prompt sequence

[Figures: images generated from arbitrary prompt sequences]

Generate video from any prompt sequence

[Figure: video generation examples]

Method

Emu2 is trained with a predict-the-next-element objective on multimodal sequences. Each image in a sequence is tokenized into embeddings by a visual encoder and then interleaved with text tokens for autoregressive modeling. The regressed visual embeddings are decoded into an image or a video by a visual decoder. Compared to Emu1, Emu2 adopts a simpler framework and a better visual decoder, and scales up to 37 billion parameters.

[Figure: overview of the Emu2 framework]
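As a reading aid, the following is a minimal PyTorch sketch of the pipeline described above, assuming toy dimensions and stand-in modules: images are encoded into embeddings, interleaved with text embeddings, modeled with a causal transformer, and the regressed visual embedding is supervised by regression against the target image's embedding and can be rendered back to pixels by a decoder. None of the module names, shapes, or losses correspond to the actual Emu2 implementation.

# A minimal sketch of predict-the-next-element modeling over an interleaved
# image-text sequence. All modules are toy stand-ins, not the Emu2 code:
# the visual encoder is a linear layer instead of a ViT, the backbone is a
# tiny transformer instead of the LLM, and the decoder is linear instead of
# a diffusion-based visual decoder. Each image is one token here for brevity.
import torch
import torch.nn as nn

D = 256                                       # shared embedding width (toy)
visual_encoder = nn.Linear(3 * 32 * 32, D)    # stand-in visual encoder
text_embed     = nn.Embedding(1000, D)        # stand-in text embedding
backbone = nn.TransformerEncoder(             # stand-in autoregressive LLM
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=2,
)
visual_decoder = nn.Linear(D, 3 * 32 * 32)    # stand-in visual decoder

# One interleaved sequence: [image_a][8 text tokens][image_b]
image_a = torch.randn(1, 1, 3 * 32 * 32)      # flattened toy image
text    = text_embed(torch.randint(0, 1000, (1, 8)))
image_b = torch.randn(1, 1, 3 * 32 * 32)
seq = torch.cat([visual_encoder(image_a), text, visual_encoder(image_b)], dim=1)

# Causal mask: each position attends only to earlier elements.
L = seq.size(1)
causal_mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
hidden = backbone(seq, mask=causal_mask)

# The hidden state preceding image_b regresses image_b's embedding;
# at inference time that regressed embedding is decoded back to pixels.
pred_visual = hidden[:, -2:-1, :]
target = visual_encoder(image_b).detach()
regression_loss = nn.functional.mse_loss(pred_visual, target)
recon = visual_decoder(pred_visual)
print(regression_loss.item(), recon.shape)

The design choice sketched here is that vision and language share a single autoregressive stream: text positions are supervised as usual with next-token classification, while image positions are supervised by regression in embedding space and mapped back to pixels by the visual decoder.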

BibTeX

@article{Emu2,
  title={Generative Multimodal Models are In-Context Learners},
  author={Quan Sun and Yufeng Cui and Xiaosong Zhang and Fan Zhang and Qiying Yu and Zhengxiong Luo and Yueze Wang and Yongming Rao and Jingjing Liu and Tiejun Huang and Xinlong Wang},
  journal={arXiv preprint arXiv:2312.13286},
  year={2023}
}