Specialized agents, each with its own goals and architecture, interact. One acts; another reverses it. That complexity produces unpredictable behavior nobody explicitly coded. Can we govern behavior that emerges from interaction rather than instruction? Should we try?
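The act-and-reverse dynamic can be made concrete with a toy sketch (a hypothetical illustration, not any system from the source): two rule-based agents share one state variable and pursue opposing goals. Neither agent's code mentions oscillation, yet the pair oscillates forever.

```python
# Toy illustration: emergent oscillation from two competing agents.
# Neither agent encodes "oscillate" -- the loop emerges from interaction.

def raiser(state: float) -> float:
    """Agent A: nudge the shared value up toward its goal of 1.0."""
    return min(1.0, state + 0.5)

def lowerer(state: float) -> float:
    """Agent B: nudge the shared value down toward its goal of 0.0."""
    return max(0.0, state - 0.5)

def run(rounds: int = 4) -> list[float]:
    state, history = 0.0, []
    for _ in range(rounds):
        state = raiser(state)   # one acts...
        history.append(state)
        state = lowerer(state)  # ...another reverses it
        history.append(state)
    return history

print(run(2))  # → [0.5, 0.0, 0.5, 0.0]: a cycle no single agent specified
```

The point of the sketch: each agent is simple, deterministic, and locally sensible, but the system-level behavior (a permanent standoff) exists only in their interaction, which is exactly what makes it hard to govern by editing either agent alone.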