RLC 2026

PAC Apprenticeship Learning with Bayesian Active Inverse Reinforcement Learning

Ondrej Bajgar, Dewi Sid William Gould, Jonathon Liu, Alessandro Abate, Konstantinos Gatsis, Michael A Osborne

RL from Human Feedback, Imitation Learning
Wednesday, August 6 · Poster #19
Accepted at RLC 2025

Abstract

As AI systems become increasingly autonomous, reliably aligning their decision-making with human preferences is essential. Inverse reinforcement learning (IRL) offers a promising approach to infer preferences from demonstrations. These preferences can then be used to produce an apprentice policy that performs well on the demonstrated task. However, in domains like autonomous driving or robotics, where errors can have serious consequences, we need not just good average performance but reliable policies with formal guarantees. But obtaining sufficient human demonstrations for reliability guarantees can be costly. *Active* IRL addresses this challenge by strategically selecting the most informative scenarios for human demonstration. We introduce PAC-EIG, an information-theoretic acquisition function that directly targets probably-approximately-correct (PAC) guarantees for the learned policy -- providing the first such theoretical guarantee for active IRL with imperfect expert demonstrations. Our method maximizes information gain about immediate regret, efficiently identifying which states require further demonstration to ensure reliable apprentice behaviour. We also present an alternative method for scenarios where learning the reward itself is the primary objective. We prove convergence bounds, illustrate failure modes of prior heuristic methods, and demonstrate our approach experimentally.
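The abstract's core idea, maximizing information gain about the apprentice's immediate regret, can be illustrated with a toy sketch. The code below is not the paper's implementation; it is a hypothetical minimal version that scores each candidate state by the mutual information between a Boltzmann-noisy expert's action there and the binary event "the apprentice's current action is suboptimal", under a discrete posterior over reward hypotheses. All names (`regret_info_gain`, the softmax expert over immediate rewards rather than Q-values, the two-hypothesis setup) are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's PAC-EIG implementation): score a
# candidate query state by I(regret event; expert action) under a discrete
# Bayesian posterior over reward hypotheses.
import math

def entropy(ps):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in ps if p > 0.0)

def softmax(xs, beta=5.0):
    """Boltzmann action distribution -- a simple noisy-expert model."""
    m = max(xs)
    es = [math.exp(beta * (x - m)) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def regret_info_gain(posterior, rewards, apprentice, state):
    """posterior[h]: weight of reward hypothesis h;
    rewards[h][state][a]: immediate reward (stands in for a Q-value);
    apprentice[state]: the apprentice's current action.
    Returns the mutual information between the binary regret event and
    the expert's (noisy) action at `state`."""
    n_hyp = len(posterior)
    n_actions = len(rewards[0][state])
    # Per-hypothesis expert action distribution.
    p_a_given_h = [softmax(rewards[h][state]) for h in range(n_hyp)]
    # Regret indicator: apprentice action is suboptimal under hypothesis h.
    regret = [1.0 if rewards[h][state][apprentice[state]] < max(rewards[h][state])
              else 0.0 for h in range(n_hyp)]
    p_regret = sum(w * r for w, r in zip(posterior, regret))
    prior_h = entropy([p_regret, 1.0 - p_regret])
    # Expected entropy of the regret event after observing the expert's action.
    post_h = 0.0
    for a in range(n_actions):
        p_a = sum(w * p_a_given_h[h][a] for h, w in enumerate(posterior))
        if p_a == 0.0:
            continue
        p_r_a = sum(w * p_a_given_h[h][a] * regret[h]
                    for h, w in enumerate(posterior)) / p_a
        post_h += p_a * entropy([p_r_a, 1.0 - p_r_a])
    return prior_h - post_h

# Toy example: two reward hypotheses, two states, two actions.
rewards = [
    [[1.0, 0.0], [1.0, 0.0]],  # hypothesis 0: action 0 best everywhere
    [[0.0, 1.0], [1.0, 0.0]],  # hypothesis 1: disagrees only in state 0
]
posterior = [0.5, 0.5]
apprentice = [0, 0]  # apprentice currently takes action 0 in both states

# State 0 (hypotheses disagree about the apprentice's optimality) is worth
# querying; state 1 (all hypotheses agree the apprentice is fine) is not.
g0 = regret_info_gain(posterior, rewards, apprentice, 0)
g1 = regret_info_gain(posterior, rewards, apprentice, 1)
```

In this toy run `g0` is close to ln 2 (a demonstration in state 0 nearly resolves whether the apprentice is wrong there), while `g1` is exactly zero, matching the intuition that an acquisition targeting regret directs demonstrations only to states where apprentice reliability is actually in doubt.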