Publications


A Realistic Threat Model for Large Language Model Jailbreaks
Boreiko, V., Panfilov, A., Hein, M., Geiping, J.
Technical Report, Safety- and Efficiency-aligned Learning, October 2024 (Submitted)

A plethora of jailbreaking attacks have been proposed to obtain harmful responses from safety-tuned LLMs. In their original settings, these methods all largely succeed in coercing the target output, but their attacks vary substantially in fluency and computational effort. In this work, we propose a unified threat model for the principled comparison of these methods. Our threat model combines constraints on perplexity, measuring how far a jailbreak deviates from natural text, and on computational budget, in total FLOPs. For the former, we build an N-gram model on 1T tokens, which, in contrast to model-based perplexity, allows for an LLM-agnostic and inherently interpretable evaluation. We adapt popular attacks to this new, realistic threat model, with which we, for the first time, benchmark these attacks on equal footing. After a rigorous comparison, we not only find attack success rates against safety-tuned modern models to be lower than previously presented but also find that attacks based on discrete optimization significantly outperform recent LLM-based attacks. Being inherently interpretable, our threat model allows for a comprehensive analysis and comparison of jailbreak attacks. We find that effective attacks exploit and abuse infrequent N-grams, either selecting N-grams absent from real-world text or rare ones, e.g., N-grams specific to code datasets.
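The abstract above hinges on two measurable constraints: a jailbreak must stay under an N-gram perplexity bound (so it resembles natural text) and under a total-FLOPs budget. As a rough illustration only, not code from the paper, the sketch below shows what such an admissibility check could look like; the bigram model, the add-one smoothing, and the names and thresholds (NGramModel, within_threat_model, ppl_budget, flops_budget) are invented stand-ins for the paper's 1T-token N-gram model.

```python
import math
from collections import defaultdict

class NGramModel:
    """Toy bigram model with add-one smoothing. The paper trains a far
    larger N-gram model on 1T tokens; this class is only a stand-in."""

    def __init__(self, corpus_tokens):
        self.unigrams = defaultdict(int)   # count of each context token
        self.bigrams = defaultdict(int)    # count of each (prev, cur) pair
        for prev, cur in zip(corpus_tokens, corpus_tokens[1:]):
            self.unigrams[prev] += 1
            self.bigrams[(prev, cur)] += 1
        self.vocab_size = len(set(corpus_tokens)) or 1

    def perplexity(self, tokens):
        """Perplexity of a token sequence under the smoothed bigram model."""
        if len(tokens) < 2:
            return float("inf")
        log_prob = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            # Add-one smoothing keeps unseen bigrams at finite probability.
            p = (self.bigrams[(prev, cur)] + 1) / (self.unigrams[prev] + self.vocab_size)
            log_prob += math.log(p)
        return math.exp(-log_prob / (len(tokens) - 1))

def within_threat_model(candidate_tokens, ngram_model, flops_spent,
                        ppl_budget=1000.0, flops_budget=1e15):
    """A candidate jailbreak is admissible only if it satisfies both
    constraints; the two budget values here are invented, not the paper's."""
    return (ngram_model.perplexity(candidate_tokens) <= ppl_budget
            and flops_spent <= flops_budget)

# Toy usage: fluent text seen in the corpus passes the perplexity check.
model = NGramModel("the cat sat on the mat the cat ran".split())
print(within_threat_model("the cat sat".split(), model, flops_spent=1e12))
```

Under this framing, any attack, fluent or gibberish, is scored with the same two numbers, which is what lets otherwise very different attacks be benchmarked on equal footing.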

AI Risk Management Should Incorporate Both Safety and Security
Qi, X., Huang, Y., Zeng, Y., Debenedetti, E., Geiping, J., He, L., Huang, K., Madhushani, U., Sehwag, V., Shi, W., Wei, B., Xie, T., Chen, D., Chen, P., Ding, J., Jia, R., Ma, J., Narayanan, A., Su, W. J., Wang, M., et al.
Technical Report, Safety- and Efficiency-aligned Learning, 2024

Coercing LLMs to do and reveal (almost) anything
Geiping, J., Stein, A., Shu, M., Saifullah, K., Wen, Y., Goldstein, T.
Technical Report, Safety- and Efficiency-aligned Learning, 2024 (Submitted)

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Souri, H., Bansal, A., Kazemi, H., Fowl, L., Saha, A., Geiping, J., Wilson, A. G., Chellappa, R., Goldstein, T., Goldblum, M.
Technical Report, Safety- and Efficiency-aligned Learning, 2024 (Submitted)