Watermark-based detectors of AI-generated images are vulnerable to transfer evasion attacks, even without access to the detection API.
Robust defense strategies are crucial for countering adversarial patch attacks on object detection models.
A unified watermarking framework in diffusion models protects AI-generated content.
GUARDT2I introduces a generative moderation framework to enhance T2I models' safety against adversarial prompts.