OpenAI is making GPT-5.5 Instant the default model across ChatGPT, replacing the previous default. Internal testing shows the update cuts hallucinations by 52.5 percent in high-risk domains like medicine and law, where false information carries real consequences.
A new "memory sources" feature gives users transparency into which stored context shaped each response. Users can now see exactly what prior conversations, uploaded files, or integrated Gmail data influenced ChatGPT's answer. This addresses a persistent complaint about AI systems: black-box reasoning that leaves users guessing how answers were constructed.
The new default model is live immediately for all ChatGPT users, but personalization features arrive in phases. Plus and Pro subscribers on the web are first to get personalization based on past chats, uploaded files, and Gmail integration. Free users and mobile users get the new default model now but must wait for the personalized features.
The hallucination reduction matters most in domains where errors cause harm. In medicine and law, a 52.5 percent drop in false claims significantly reduces the risk of users acting on incorrect medical advice or legal guidance. GPT-5.5 Instant doesn't eliminate hallucinations entirely, but the improvement is substantial enough to meaningfully lower that risk.
The memory sources feature changes how users interact with AI. Rather than trusting an answer without understanding its origins, users see the actual context behind it. That shifts power back to users: they can verify whether ChatGPT pulled from relevant sources or misapplied stale information.
The tiered rollout strategy lets OpenAI prioritize paying customers while testing personalization at scale. The Gmail integration on paid tiers matters in particular: letting ChatGPT pull from email history enables powerful personalization but also expands the privacy surface. Limiting it to paying subscribers initially manages both the opportunity and the risk.
THE BOTTOM LINE: OpenAI cuts hallucinations in half