Barry Diller backed Sam Altman against recent criticism, but delivered a stark warning about artificial general intelligence that undercuts any comfort his support might offer.

Speaking at a conference, the media mogul said he trusts Altman personally. Yet Diller argued that individual trust becomes "irrelevant" as AGI approaches because the stakes transcend any single person's judgment or intentions. He emphasized that AGI represents a fundamentally unpredictable force that requires robust guardrails regardless of who leads its development.

Diller's comments reflect a widening gap between confidence in current AI leadership and anxiety about where the technology is headed. OpenAI has faced internal turmoil and external scrutiny over its governance structure, safety protocols, and the concentration of power around Altman. Diller's defense shields Altman from personal attacks while simultaneously warning that personal virtue means little when confronting existential risks.

The distinction matters. Diller isn't endorsing the status quo. He's saying the problem runs deeper than any individual's trustworthiness. As systems grow more capable and autonomous, traditional accountability mechanisms break down. A CEO's good faith becomes a footnote next to technical questions about alignment, control, and unforeseen behaviors in superintelligent systems.

This frames AGI development as a governance problem that transcends corporate leadership. Diller called for guardrails, though he offered limited specifics on what those should entail. The implication is clear: the industry needs external oversight, regulatory frameworks, and technical safeguards that don't depend on trusting any single company or executive.

Diller's position sits uneasily between optimism and caution. He trusts Altman enough to back him publicly. But he trusts AGI so little that he treats personal trust as functionally meaningless. That tension captures the central paradox of current AI governance.