Autonomous vehicles are, by most statistical measures, safer than human drivers. Yet trust remains elusive. The gap lies in unclear intent, not capability. When agents make consequential decisions, how do we make their reasoning legible without sacrificing the adaptability that makes them useful?