AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error
Authors: Jonas Ricker, Denis Lukovnikov, Asja Fischer
What
This paper introduces AEROBLADE, a novel method for detecting images generated by Latent Diffusion Models (LDMs) by exploiting the reconstruction error of the autoencoder (AE) used in the LDM pipeline.
Why
The paper addresses the growing threat of visual disinformation fueled by the increasing realism and accessibility of AI-generated images. Reliable detection of such images is crucial for combating misinformation, and AEROBLADE offers a simple, training-free, and effective way to achieve it.
How
The authors leverage the observation that an LDM's AE reconstructs generated images more accurately than real ones. AEROBLADE encodes an input image into the latent space, decodes it back, and measures the distance between the image and its reconstruction. Because generated images yield lower reconstruction errors, comparing this error against a threshold classifies the image as real or generated (a sketch of this step follows below). The authors evaluate AEROBLADE on images produced by various LDMs and compare its performance against existing detection methods.
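A minimal sketch of this detection step, assuming the Hugging Face diffusers AutoencoderKL and the lpips package as stand-ins for the paper's exact pipeline; the checkpoint name and threshold are illustrative placeholders, not the authors' settings:

```python
# Sketch: flag an image as LDM-generated if its AE reconstruction error is low.
import torch
import lpips
from diffusers import AutoencoderKL
from torchvision import transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load an LDM autoencoder (hypothetical choice; the paper evaluates several AEs).
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()

# One possible perceptual distance for measuring the reconstruction error.
dist_fn = lpips.LPIPS(net="vgg").to(device)

to_tensor = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),                       # [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # [-1, 1], as expected by the AE and LPIPS
])

@torch.no_grad()
def reconstruction_error(image_path: str) -> float:
    """Encode the image to latent space, decode it back, and return the perceptual distance."""
    x = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    latent = vae.encode(x).latent_dist.mode()    # deterministic latent
    x_rec = vae.decode(latent).sample.clamp(-1, 1)
    return dist_fn(x, x_rec).item()

# Lower error suggests the image was generated by an LDM using a similar AE.
THRESHOLD = 0.02  # illustrative value; in practice chosen on held-out data
err = reconstruction_error("example.png")
print(f"reconstruction error = {err:.4f} -> {'generated' if err < THRESHOLD else 'real'}")
```

In practice the error can be computed against a set of available AEs and the minimum taken, so that no single generator's AE has to be known in advance.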
Result
AEROBLADE achieves high detection accuracy (average precision of 0.992) on a dataset of images generated by state-of-the-art LDMs, including Stable Diffusion and Midjourney, even without access to the generator’s specific AE. The method’s performance is comparable to deep learning-based detectors that require extensive training. Additionally, the authors demonstrate that AEROBLADE can be used for qualitative image analysis, such as identifying inpainted regions in real images.
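The qualitative analysis mentioned above can be illustrated with a per-pixel reconstruction-error map; the snippet below is a sketch under the assumption that LPIPS with spatial=True serves as the spatial metric, again using an illustrative AE checkpoint and file names:

```python
# Sketch: a spatial reconstruction-error map highlights regions the AE reconstructs
# unusually well, e.g. candidates for inpainted content.
import torch
import lpips
import matplotlib.pyplot as plt
from diffusers import AutoencoderKL
from torchvision import transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()
spatial_lpips = lpips.LPIPS(net="vgg", spatial=True).to(device)
to_tensor = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

@torch.no_grad()
def error_map(image_path: str) -> torch.Tensor:
    """Return an [H, W] map of perceptual distance between an image and its AE reconstruction."""
    x = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    x_rec = vae.decode(vae.encode(x).latent_dist.mode()).sample.clamp(-1, 1)
    return spatial_lpips(x, x_rec)[0, 0].cpu()

# Low-error regions (dark areas in the plot) are candidates for inpainted content.
plt.imshow(error_map("possibly_inpainted.png"), cmap="viridis")
plt.colorbar(label="reconstruction error")
plt.savefig("error_map.png", bbox_inches="tight")
```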
Limitations & Future Work
The authors acknowledge that AEROBLADE’s performance is best when the specific AE of the LDM used for generation is known. Future work includes exploring the use of more robust distance metrics and training a classifier on top of the reconstruction errors to enhance robustness against image perturbations. Additionally, they aim to investigate the potential of using reconstruction errors for precise localization of inpainted regions.
Abstract
With recent text-to-image models, anyone can generate deceptively realistic images with arbitrary contents, fueling the growing threat of visual disinformation. A key enabler for generating high-resolution images with low computational cost has been the development of latent diffusion models (LDMs). In contrast to conventional diffusion models, LDMs perform the denoising process in the low-dimensional latent space of a pre-trained autoencoder (AE) instead of the high-dimensional image space. Despite their relevance, the forensic analysis of LDMs is still in its infancy. In this work we propose AEROBLADE, a novel detection method which exploits an inherent component of LDMs: the AE used to transform images between image and latent space. We find that generated images can be more accurately reconstructed by the AE than real images, allowing for a simple detection approach based on the reconstruction error. Most importantly, our method is easy to implement and does not require any training, yet nearly matches the performance of detectors that rely on extensive training. We empirically demonstrate that AEROBLADE is effective against state-of-the-art LDMs, including Stable Diffusion and Midjourney. Beyond detection, our approach allows for the qualitative analysis of images, which can be leveraged for identifying inpainted regions. We release our code and data at https://github.com/jonasricker/aeroblade.