Usman Anwar

I am a final-year PhD student in the Computational and Biological Learning Lab at the University of Cambridge, UK. I am broadly interested in AI Safety and Alignment, with a recent focus on chain-of-thought monitorability. I am supervised by David Krueger and funded by the Open Phil AI Fellowship and the Vitalik Buterin Fellowship on AI Safety.

Email  /  GitHub  /  Google Scholar  /  LinkedIn  /  CV

If you want to chat or collaborate with me, or pitch me on what I should do post-PhD, please drop me an email.


Selected Publications


A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring


Usman Anwar*, Julianna Piskorz*, David D. Baek, David Africa, Jim Weatherall, Max Tegmark, Christian Schroeder de Witt, Mihaela van der Schaar, David Krueger
Under review at ICML, 2026

Current approaches to detecting hidden communication in AI systems rely on ad-hoc methods, such as inspecting messages for anomalies. We introduce a new framework that instead detects steganography through its behavioral effects, measuring whether a signal helps intended recipients more than outside monitors on real tasks.


Analyzing and Improving Chain-of-Thought Monitorability Through Information Theory


Usman Anwar*, Tim Baker*, Dana Kianfar, Cristina Pinneri, Christos Louizos
Under review at ICML, 2026

We propose a simple training objective based on mutual information that prevents CoT obfuscation and maintains CoT monitorability when models are optimized against monitors. Through our theoretical analysis, we also characterize two possible failure modes for practical monitors: information gap, where the monitor cannot interpret the model’s reasoning, and elicitation error, where the monitor fails to correctly evaluate outputs for the target attribute.


Foundational Challenges in Assuring Alignment and Safety of Large Language Models


Usman Anwar and 41 other authors
Transactions on Machine Learning Research (Survey Certification), 2024
arxiv / tweetprint /

This 150+ page agenda identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three categories: scientific understanding of LLMs, development and deployment methods, and sociotechnical challenges. Based on the identified challenges, we pose 200+ concrete research questions.


Reward Model Ensembles Help Mitigate Overoptimization


Thomas Coste, Usman Anwar, Robert Kirk, David Krueger
International Conference on Learning Representations, 2024
arxiv / code /

We show that using an ensemble of reward models is effective in mitigating reward overoptimization.


Bayesian Methods for Constraint Inference in Reinforcement Learning


Dimitris Papadimitriou, Usman Anwar, Daniel Brown
Transactions on Machine Learning Research, 2022
paper / poster /

We develop a Bayesian approach to constraint learning that offers several advantages: it works with partial trajectories, applies to both stochastic and deterministic environments, and, by yielding a posterior distribution over constraints, enables active learning for more accurate constraint inference.


Inverse Constrained Reinforcement Learning


Usman Anwar*, Shehryar Malik*, Alireza Aghasi, Ali Ahmed
International Conference on Machine Learning, 2021
arxiv / video / code / poster / slides /

We propose a framework for learning Markovian constraints from user demonstrations in high-dimensional, continuous settings. We empirically show that the learned constraints are general and transfer well to agents with different dynamics and morphologies.


Design and source code from Leonid Keselman's website