Friday, February 9, 2024

MLLM-Guided Image Editing (MGIE)

Emilia David:

Apple researchers released a new model that lets users describe in plain language what they want to change in a photo without ever touching photo editing software.

The MGIE model, which Apple worked on with the University of California, Santa Barbara, can crop, resize, flip, and add filters to images all through text prompts.

MGIE, which stands for MLLM-Guided Image Editing, handles both simple and more complex editing tasks, like modifying specific objects in a photo to change their shape or make them brighter. The model blends two uses of multimodal large language models: first it interprets the user's prompt, then it “imagines” what the edit would look like (a request for a bluer sky becomes an instruction to bump up the brightness of the sky region of the image, for example).
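
To make that two-stage flow concrete, here is a minimal Python sketch of a prompt-interpretation-then-guided-edit pipeline. Every name in it (the Edit dataclass, derive_expressive_instruction, apply_edit) is a hypothetical illustration of the design the quote describes, not Apple's actual API; the real system uses an MLLM to derive the expressive instruction and a diffusion model to perform the edit.

```python
# Hypothetical sketch of an MGIE-style two-stage pipeline.
# Stage 1: interpret a terse user prompt into a concrete edit.
# Stage 2: apply that derived edit to the image.

from dataclasses import dataclass


@dataclass
class Edit:
    region: str      # e.g. "sky"
    operation: str   # e.g. "increase_brightness"
    amount: float    # normalized strength, 0.0-1.0


def derive_expressive_instruction(prompt: str) -> Edit:
    """Stage 1 (hypothetical): stand-in for the MLLM, which turns a vague
    prompt into the concrete edit it has "imagined"."""
    # A real MLLM would ground this in the image itself; here we hard-code
    # the article's example: "make the sky bluer" -> brighten the sky.
    if "sky" in prompt and "bluer" in prompt:
        return Edit(region="sky", operation="increase_brightness", amount=0.3)
    raise ValueError(f"no derivation rule for prompt: {prompt!r}")


def apply_edit(image: dict, edit: Edit) -> dict:
    """Stage 2 (hypothetical): stand-in for the guided editor. MGIE
    conditions a diffusion model on the MLLM's output; this stub just
    adjusts a per-region brightness value to show the data flow."""
    edited = dict(image)
    if edit.operation == "increase_brightness":
        edited[edit.region] = min(1.0, image.get(edit.region, 0.5) + edit.amount)
    return edited


if __name__ == "__main__":
    photo = {"sky": 0.5, "grass": 0.4}        # toy stand-in for pixel data
    edit = derive_expressive_instruction("make the sky bluer")
    print(edit)                                # Edit(region='sky', ...)
    print(apply_edit(photo, edit))             # {'sky': 0.8, 'grass': 0.4}
```

The point of the split is that the language model never edits pixels directly: it only produces a richer, explicit instruction, which a separate editing model then executes.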

Amber Neely:

MGIE is open-source and available on GitHub for anyone to try. The GitHub page allows users to snag the code, data, and pre-trained models.
