Selected Publications
A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring
Usman Anwar*, Julianna Piskorz*, David D. Baek, David Africa, Jim Weatherall, Max Tegmark, Christian Schroeder de Witt, Mihaela van der Schaar, David Krueger
Under review at ICML, 2026
Current approaches to detecting hidden communication in AI systems rely on ad-hoc methods such as inspecting messages for anomalies. We introduce a new framework that instead detects steganography through its behavioral effects, measuring whether a signal helps intended recipients more than outside monitors on real tasks.
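A minimal sketch of that behavioral test, assuming hypothetical recipient and monitor agents with a solve(task, message=...) interface; none of these names come from the paper:

```python
# Hypothetical sketch of the behavioral test described above: a signal is
# treated as steganographic when it helps the intended recipient more than
# it helps an outside monitor on the same tasks. All names are illustrative.

def accuracy(agent, tasks, messages=None):
    """Fraction of tasks an agent solves, optionally given the messages."""
    correct = 0
    for i, task in enumerate(tasks):
        msg = messages[i] if messages is not None else None
        if agent.solve(task, message=msg) == task.answer:
            correct += 1
    return correct / len(tasks)

def steganography_score(recipient, monitor, tasks, messages):
    """Advantage the messages give the recipient over the monitor.

    A large positive gap suggests the messages carry information the
    recipient can decode but the monitor cannot.
    """
    recipient_gain = accuracy(recipient, tasks, messages) - accuracy(recipient, tasks)
    monitor_gain = accuracy(monitor, tasks, messages) - accuracy(monitor, tasks)
    return recipient_gain - monitor_gain

# flagged = steganography_score(recipient, monitor, tasks, messages) > threshold
```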
Analyzing and Improving Chain-of-Thought Monitorability Through Information Theory
Usman Anwar*, Tim Baker*, Dana Kianfar, Cristina Pinneri, Christos Louizos
Under review at ICML, 2026
We propose a simple training objective based on mutual information that prevents CoT obfuscation and maintains CoT monitorability when models are optimized against monitors. Through our theoretical analysis, we also characterize two possible failure modes for practical monitors: information gap, where the monitor cannot interpret the model’s reasoning, and elicitation error, where the monitor fails to correctly evaluate outputs for the target attribute.
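One plausible way to realize such an objective is the standard variational lower bound I(CoT; Y) >= E[log q(Y | CoT)] + H(Y), with a monitor playing the role of q. The PyTorch sketch below is illustrative, not the paper's implementation; the names monitor_logits, policy_loss, and lam are assumptions:

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a mutual-information regularizer for CoT monitorability,
# using the variational bound I(CoT; Y) >= E[log q(Y | CoT)] + H(Y), where
# q is a monitor that predicts the target attribute Y from the chain of
# thought. All names here are illustrative.

def monitorability_loss(monitor_logits: torch.Tensor,
                        target_attribute: torch.Tensor) -> torch.Tensor:
    """Negative variational lower bound on I(CoT; Y).

    monitor_logits: (batch, num_classes) monitor predictions from the CoT.
    target_attribute: (batch,) ground-truth attribute labels.
    Minimizing this term keeps the attribute readable off the CoT,
    discouraging obfuscation.
    """
    return F.cross_entropy(monitor_logits, target_attribute)

def total_loss(policy_loss, monitor_logits, target_attribute, lam=0.1):
    # Task objective plus the MI-based monitorability penalty.
    return policy_loss + lam * monitorability_loss(monitor_logits, target_attribute)
```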
Foundational Challenges in Assuring Alignment and Safety of Large Language Models
Usman Anwar and 41 other authors
Transactions on Machine Learning Research (Survey Certification), 2024
arxiv /
tweetprint /
This 150+ page agenda identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three categories: scientific understanding of LLMs, development and deployment methods, and sociotechnical challenges. Based on the identified challenges, we pose 200+ concrete research questions.
Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk, David Krueger
International Conference on Learning Representations, 2024
arxiv /
code /
We show that using an ensemble of reward models is effective in mitigating overoptimization.
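As a rough illustration, pessimistic aggregation over a reward-model ensemble might look like the following; the aggregation modes and names are illustrative rather than the paper's exact objectives:

```python
import torch

# Hedged sketch of conservative reward aggregation over an ensemble: score
# each response with several independently trained reward models and combine
# the scores pessimistically, so that overoptimizing any single model's
# errors is penalized. Names and exact rules are illustrative.

def ensemble_reward(reward_models, prompt, response,
                    mode: str = "mean_minus_std", beta: float = 1.0):
    # Stack per-model scalar rewards: shape (num_models,).
    rewards = torch.stack([rm(prompt, response) for rm in reward_models])
    if mode == "mean":
        return rewards.mean()
    if mode == "min":                    # worst case over the ensemble
        return rewards.min()
    if mode == "mean_minus_std":         # penalize ensemble disagreement
        return rewards.mean() - beta * rewards.std()
    raise ValueError(f"unknown mode: {mode}")
```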
Bayesian Methods for Constraint Inference in Reinforcement Learning
Dimitris Papadimitriou, Usman Anwar, Daniel Brown
Transactions on Machine Learning Research, 2022
paper /
poster /
We develop a Bayesian approach to constraint learning that offers several advantages: it works with partial trajectories, applies in both stochastic and deterministic environments, and, because it yields a posterior distribution over constraints, enables active learning for more accurate constraint inference.
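A toy sketch of the Bayesian idea, with an invented violation-penalty likelihood standing in for the paper's model; candidate constraints, demos, and beta are all assumptions for illustration:

```python
import numpy as np

# Hedged sketch: maintain a posterior over candidate constraints by weighting
# each candidate by how well the (possibly partial) demonstrations comply
# with it. The exponential violation penalty below is illustrative, not the
# paper's exact likelihood.

def constraint_posterior(candidates, demos, prior=None, beta=5.0):
    """candidates: list of functions state -> bool (True = violates).
    demos: list of trajectories, each a list of states (may be partial).
    Returns normalized posterior weights over the candidates."""
    prior = np.ones(len(candidates)) if prior is None else np.asarray(prior)
    log_post = np.log(prior)
    for i, violates in enumerate(candidates):
        n_violations = sum(violates(s) for traj in demos for s in traj)
        # Demonstrations from a constraint-respecting expert should rarely
        # violate the true constraint; penalize violations exponentially.
        log_post[i] += -beta * n_violations
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Active learning (sketch): query regions where high-posterior candidate
# constraints disagree the most.
```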
Inverse Constrained Reinforcement Learning
Usman Anwar*, Shehryar Malik*, Alireza Aghasi, Ali Ahmed
International Conference on Machine Learning, 2021
arxiv /
video /
code /
poster /
slides /
We propose a framework for learning Markovian constraints from user demonstrations in high-dimensional, continuous settings. We empirically show that the learned constraints are general and transfer well to agents with different dynamics and morphologies.
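A high-level outline of the kind of alternating scheme such a framework suggests, with hypothetical policy and constraint-network interfaces; this is a sketch, not the paper's code:

```python
# Hedged outline: repeat (1) optimize a policy that maximizes reward under
# the current constraint estimate, then (2) update the constraint network to
# separate expert demonstrations (feasible) from the policy's own rollouts
# (potentially infeasible). All method names are illustrative.

def train_icrl(env, expert_demos, policy, constraint_net, n_iterations=100):
    for _ in range(n_iterations):
        # (1) Forward step: constrained policy optimization under the
        # current constraint estimate (e.g. via a Lagrangian penalty).
        policy.optimize(env, cost_fn=constraint_net)

        # (2) Backward step: state-actions the policy visits but the expert
        # never does are candidate constraint violations; push the
        # constraint network to mark them as violating while keeping
        # expert state-actions feasible.
        rollouts = policy.collect_rollouts(env)
        constraint_net.update(positive=expert_demos, negative=rollouts)
    return constraint_net
```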