Speaker

Javier Rando

Abstract

Augmenting language models with image inputs may enable more effective jailbreak attacks through continuous optimization, unlike text inputs, which require discrete optimization. However, new multimodal fusion models tokenize all input modalities using non-differentiable functions, which hinders straightforward attacks. In this work, we introduce the notion of a tokenizer shortcut that approximates tokenization with a continuous function and enables continuous optimization. We use tokenizer shortcuts to create the first end-to-end gradient image attacks against multimodal fusion models. We evaluate our attacks on Chameleon models and obtain jailbreak images that elicit harmful information for 72.5% of prompts. Jailbreak images outperform text jailbreaks optimized with the same objective, and require a 3x lower compute budget to optimize 50x more input tokens. Finally, we find that representation engineering defenses, like Circuit Breakers, trained only on text attacks can effectively transfer to adversarial image inputs.
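The tokenizer-shortcut idea can be illustrated with a minimal sketch (not the paper's implementation): a VQ-style image tokenizer maps each feature vector to its nearest codebook entry via argmax, which blocks gradients; a shortcut replaces that hard lookup with a temperature-controlled softmax over (negative) distances, producing a differentiable convex combination of codebook embeddings. The function names, the distance-based quantizer, and the temperature parameter below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hard_tokenize(features, codebook):
    # Non-differentiable: pick the nearest codebook embedding (argmin
    # over squared distances), as a VQ tokenizer would.
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[d.argmin(axis=1)]

def shortcut_tokenize(features, codebook, temperature=0.1):
    # Differentiable shortcut (illustrative): softmax over negative
    # distances yields soft weights, so the output is a convex
    # combination of codebook embeddings and gradients can flow
    # from the model's loss back to the input image.
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    weights = softmax(-d / temperature, axis=1)
    return weights @ codebook

# A feature close to codebook entry [1, 0]: as temperature shrinks,
# the soft shortcut approaches the hard tokenization.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
features = np.array([[0.9, 0.05]])
hard = hard_tokenize(features, codebook)
soft = shortcut_tokenize(features, codebook, temperature=0.01)
```

With the soft relaxation in place of the argmax, standard gradient-based attack loops (as used for continuous image inputs) can optimize pixels end to end through the tokenizer.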

Bio

Javier Rando is a doctoral student at ETH Zurich, advised by Florian Tramèr and Mrinmaya Sachan. His research focuses on identifying potential failures in deploying advanced AI models in real-world applications, particularly through red-teaming large language models. His PhD is supported by the ETH AI Center Doctoral Fellowship. In the summer of 2024, he interned with Meta's GenAI Safety & Trust team. Javier holds an MSc in Computer Science from ETH Zurich and a BSc in Data Science from Pompeu Fabra University. He has also been a visiting researcher at NYU, supervised by He He, and founded EXPAI, an explainable AI startup in Spain. One of his recent papers received a spotlight award at ICLR 2025.

Host

UMass AI Security