OpenAI Release Notes

Last updated: Jan 20, 2026

OpenAI Products

All OpenAI Release Notes (421)

  • Jan 20, 2026
    • Parsed from source:
      Jan 20, 2026
    • Detected by Releasebot:
      Jan 20, 2026

    OpenAI

    ServiceNow powers actionable enterprise AI with OpenAI

    ServiceNow teams up with OpenAI to embed GPT-5.2 frontier models into the AI Platform, enabling end‑to‑end automation across IT, finance, HR and more. Enterprises gain voice, multimodal, and governance‑driven AI that understands, reasons, and acts in real workflows.

    Key takeaways

    • Multi-year agreement expands ServiceNow customer access to OpenAI frontier models.
    • OpenAI models will be a preferred intelligence capability for enterprises that run more than 80 billion workflows each year in ServiceNow.
    • OpenAI will support direct speech-to-speech and native voice technology in ServiceNow.

    ServiceNow, the AI control tower for business reinvention, today announced OpenAI will be a preferred intelligence capability for enterprises that run more than 80 billion workflows each year on its platform.
    Enterprises around the world use ServiceNow to orchestrate workflows that keep their systems and operations running smoothly. In complex environments where technology is spread across many systems, teams, and vendors, ServiceNow ties everything together—helping organizations spot issues early, route work to the right people, manage approvals, and resolve challenges quickly so the business keeps moving.
    ServiceNow’s AI Platform brings OpenAI models like GPT‑5.2 directly into these enterprise workflows, so AI can understand what’s happening, help decide what to do next, and take action within a customer’s secure infrastructure. With OpenAI, ServiceNow will unlock a new level of automation for the world’s largest companies, enabling enterprise intelligence at scale for any function or department, including IT, finance, sales, human resources, and more.
    “ServiceNow leads the market in AI-powered workflows, setting the enterprise standard for real-world AI outcomes,” said Amit Zavery, president, chief operating officer, and chief product officer at ServiceNow. “Together, ServiceNow and OpenAI are building the future of AI experiences: deploying AI that takes end-to-end action in complex enterprise environments—not sandboxes. As companies shift from experimenting with AI to deploying it at scale, they need the power of multiple AI leaders working together to deliver faster, better outcomes. Bringing together our engineering teams and our respective technologies will drive faster value for customers and more intuitive ways of working with AI.”
    “ServiceNow is helping enterprises bring agentic AI into workflows that are secure, scalable, and designed to deliver measurable outcomes,” said Brad Lightcap, chief operating officer at OpenAI. “With OpenAI frontier models and multimodal capabilities in ServiceNow, enterprises across every industry will benefit from intelligence that handles work end to end in even the most complex environments.”
    With OpenAI, the ServiceNow AI Platform will leverage frontier intelligence like GPT‑5.2 so customers can understand more about what’s happening and take action inside enterprise workflows.

    Powering actionable AI workflows for enterprise customers

    ServiceNow and OpenAI will support enterprises in adopting AI systems that can reason across tasks and carry out work with little human intervention. Customers can leverage OpenAI models alongside ServiceNow workflows:

    • AI assistance
      that lets employees ask questions in natural language and get clear, actionable answers based on real enterprise data.
    • AI-powered summarization and content generation
      for incidents, cases, knowledge articles, and service interactions, helping teams resolve issues faster with less manual effort.
    • Developer and admin tools
      that turn intent into workflows, logic, and automation, dramatically speeding how business processes are built and updated.
    • Intelligent search and discovery
      that pulls the right information from across enterprise systems exactly when it’s needed.

    For example, employees use ServiceNow, an intuitive experience where data, models, AI modalities, and workflows converge, to ask for what they need in plain language, like “I need to view my benefits” or “this customer issue needs to be escalated.”
    With GPT‑5.2 built directly into the ServiceNow AI Platform, those requests aren’t just answered—they’re acted on. The model pairs with the ServiceNow workflow engine, where it can access enterprise data, respect governance and permissions, and provide insights to trigger real actions. GPT‑5.2 helps add more context, decide what should happen next, and, via the ServiceNow platform, move work through approvals and updates until it’s done. To employees it feels like chatting with a smart coworker; behind the scenes, it’s AI running real enterprise workflows end to end.
    Looking ahead, ServiceNow and OpenAI will build toward more natural, multimodal experiences, where users can talk, type, or use visuals to interact with AI agents seamlessly.
    ServiceNow extends OpenAI’s work with the world’s largest and most established enterprises, including Accenture, Walmart, PayPal, Intuit, Target, Thermo Fisher, BNY, Morgan Stanley, BBVA, and many more.
    More than 1 million business customers around the world are directly using OpenAI—the fastest-growing business platform in history.

  • Jan 16, 2026
    • Parsed from source:
      Jan 16, 2026
    • Detected by Releasebot:
      Jan 16, 2026

    Codex by OpenAI

    0.86.0

    New features let SKILL.toml define skill metadata surfaced in the app server and TUI, and add an explicit header to disable web search for server-side rollout control. Bug fixes tidy prompts, empty payloads, clipboard reads, and background cleanup. Changelog highlights routing and log improvements.

    New Features

    • Skill metadata can now be defined in SKILL.toml (names, descriptions, icons, brand color, default prompt) and surfaced in the app server and TUI (#9125)
    • Clients can explicitly disable web search and signal eligibility via a header to align with server-side rollout controls (#9249)
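    The release notes list the metadata fields SKILL.toml can carry but not the file's actual layout. A minimal sketch might look like the fragment below; every key name here is an assumption inferred from the fields named above (names, descriptions, icons, brand color, default prompt), not confirmed syntax.

```toml
# Hypothetical SKILL.toml — key names are illustrative, not confirmed by the release notes.
name = "release-triage"
description = "Summarize and triage incoming release notes"
icon = "icons/triage.svg"
brand_color = "#10a37f"
default_prompt = "Triage the latest release notes and flag breaking changes."
```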

    Bug Fixes

    • Accepting an MCP elicitation now sends an empty JSON payload instead of null to satisfy servers expecting content (#9196)
    • Input prompt placeholder styling is back to non-italic to avoid terminal rendering issues (#9307)
    • Empty paste events no longer trigger clipboard image reads (#9318)
    • Unified exec cleans up background processes to prevent late End events after listeners stop (#9304)

    Chores

    • Refresh the orchestrator prompt to improve internal routing behavior (#9301)
    • Reduce noisy needs_follow_up error logging (#9272)

    Changelog

    Full Changelog: rust-v0.85.0...rust-v0.86.0

    • chore: better orchestrator prompt (#9301) @jif-oai
    • nit: clean unified exec background processes (#9304) @jif-oai
    • Revert recent styling change for input prompt placeholder text (#9307) @etraut-openai
    • Support SKILL.toml file. (#9125) @xl-openai
    • [search] allow explicitly disabling web search (#9249) @sayan-oai
    • remove needs_follow_up error log (#9272) @pap-openai
    • Revert empty paste image handling (#9318) @aibrahim-oai
    • fix: send non-null content on elicitation Accept (#9196) @yuvrajangadsingh
  • Jan 16, 2026
    • Parsed from source:
      Jan 16, 2026
    • Detected by Releasebot:
      Jan 17, 2026

    OpenAI

    Introducing ChatGPT Go, now available worldwide

    ChatGPT Go goes global with an $8/month US price, delivering 10x more messages, uploads and image creation plus longer memory on GPT‑5.2 Instant. The plan sits alongside Plus and Pro as a worldwide rollout, expanding access and capabilities for everyday users.

    In August 2025, we introduced ChatGPT Go in India as a low-cost subscription designed to expand access to ChatGPT’s most popular features and help more people use advanced AI in their daily lives. Since then, ChatGPT Go has rolled out to 170 additional countries, making it our fastest-growing plan and among the most affordable AI subscriptions globally.

    In markets where Go has been available, we’ve seen strong adoption and regular everyday use for tasks like writing, learning, image creation, and problem-solving. This early momentum helped inform our decision to make ChatGPT Go available globally.

    Starting today, ChatGPT Go is rolling out everywhere ChatGPT is available. In the US, Go is available for $8 per month.

    With this launch, ChatGPT now offers three subscription tiers globally:

    • ChatGPT Go at $8 USD/month*
    • ChatGPT Plus at $20 USD/month
    • ChatGPT Pro at $200 USD/month

    *US price displayed. Go pricing is localized in some markets.

    What you get with ChatGPT Go

    ChatGPT Go is designed for people who want expanded access to our latest model, GPT‑5.2 Instant, at a lower price point—more messages, more uploads, and more image creation. With ChatGPT Go, you get:

    • 10x more messages, file uploads, and image creation than the free tier, so you can keep chatting on GPT‑5.2 Instant without hitting limits as quickly.
    • Longer memory and context window, so ChatGPT can remember more helpful details about you over time.

    This now sits alongside our two existing consumer subscription plans: ChatGPT Plus and ChatGPT Pro.

    ChatGPT Plus is designed for work that requires deeper reasoning—like writing and editing documents, learning and research, or data analysis. It offers expanded access to our most advanced models, including GPT‑5.2 Thinking, along with the flexibility to choose legacy models and use our coding agent, Codex. Compared to Go, Plus includes higher limits for messages, file uploads, memory, and context, so ChatGPT can remember more detail from past conversations and support longer, more continuous workflows.

    ChatGPT Pro is built for AI power users pushing the limits of advanced intelligence. It offers full access to our most powerful model, GPT‑5.2 Pro, along with maximum memory and context, and early previews of our newest features.

    Supporting accessibility with ads

    We plan to begin testing ads in the free tier and ChatGPT Go in the US soon. Ads support our commitment to making AI accessible to everyone by helping us keep ChatGPT available at free and affordable price points.

    ChatGPT Plus, Pro, Business and Enterprise will remain ad-free.

    Read more about how we plan to introduce ads here⁠.

    Plan details

    To compare plans and see what’s included, visit chatgpt.com/pricing.

  • Jan 16, 2026
    • Parsed from source:
      Jan 16, 2026
    • Detected by Releasebot:
      Jan 17, 2026

    OpenAI

    Our approach to advertising and expanding access to ChatGPT

    ChatGPT expands access with Go, bringing the free tier and Go subscription to the U.S. and offering messaging, image creation, file uploads, and memory for $8/month. Ads testing is planned with strong privacy controls and ad-free options, signaling broader monetization while preserving trust.

    AI ads and Go launch

    AI is reaching a point where everyone can have a personal super-assistant that helps them learn and do almost anything. Who gets access to that level of intelligence will shape whether AI expands opportunity or reinforces the same divides.

    We’ve been working to make powerful AI accessible to everyone through our free product and low-cost subscription tier, ChatGPT Go, which has launched in 171 countries since August. Today we’re bringing Go to the U.S. and everywhere ChatGPT is available⁠, giving people expanded access to messaging, image creation, file uploads and memory for $8 USD/month. In the coming weeks, we’re also planning to start testing ads in the U.S. for the free and Go tiers, so more people can benefit from our tools with fewer usage limits or without having to pay. Plus, Pro, Business, and Enterprise subscriptions will not include ads.

    People trust ChatGPT for many important and personal tasks, so as we introduce ads, it’s crucial we preserve what makes ChatGPT valuable in the first place. That means you need to trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising. You need to know that your data and conversations are protected and never sold to advertisers. And we need to keep a high bar and give you control over your experience so you see truly relevant, high-quality ads—and can turn off personalization if you want.

    Given that, we want to be clear about the principles that guide our approach to advertising:

    Our ads principles

    • Mission alignment: Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission and making AI more accessible.
    • Answer independence: Ads do not influence the answers ChatGPT gives you. Answers are optimized based on what's most helpful to you. Ads are always separate and clearly labeled.
    • Conversation privacy: We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.
    • Choice and control: You control how your data is used. You can turn off personalization, and you can clear the data used for ads at any time. We’ll always offer a way to not see ads in ChatGPT, including a paid tier that’s ad-free.
    • Long-term value: We do not optimize for time spent in ChatGPT. We prioritize user trust and user experience over revenue.

    We’re not launching ads yet, but we do plan to start testing in the coming weeks for logged-in adults in the U.S. on the free and Go tiers. To start, we plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation. Ads will be clearly labeled and separated from the organic answer. You’ll be able to learn more about why you’re seeing that ad, or dismiss any ad and tell us why. During our test, we will not show ads in accounts where the user tells us or we predict that they are under 18, and ads are not eligible to appear near sensitive or regulated topics like health, mental health, or politics.

    Here’s an example of what the first ad formats we plan to test could look like:

    The best ads are useful, entertaining, and help people discover new products and services. Given what AI can do, we're excited to develop new experiences over time that people find more helpful and relevant than any other ads. Conversational interfaces create possibilities for people to go beyond static messages and links. For example, soon you might see an ad and be able to directly ask the questions you need to make a purchase decision.

    Ads also can be transformative for small businesses and emerging brands trying to compete. AI tools level the playing field even further, allowing anyone to create high-quality experiences that help people discover options they might never have found otherwise.

    We’ll learn from feedback and refine how ads show up over time, but our commitment to putting users first and maintaining trust won’t change. By starting our ad platform from the ground up with these principles in place, we can align our incentives with what people want from ChatGPT. Our long-term focus remains on building products that millions of people and businesses find valuable enough to pay for. Our enterprise and subscription businesses are already strong, and we believe in having a diverse revenue model where ads can play a part in making intelligence more accessible to everyone.

    Once we begin testing our first ad formats in the coming weeks and months, we look forward to getting people's feedback and ensuring that ads can support broad access to AI and keep the trust that makes ChatGPT valuable.

  • Jan 15, 2026
    • Parsed from source:
      Jan 15, 2026
    • Detected by Releasebot:
      Jan 16, 2026
    • Modified by Releasebot:
      Jan 20, 2026

    Codex by OpenAI

    0.85.0

    App-server v2 adds real-time collaboration events, richer agent control with spawn_agent roles and interruptible send_input, and upgraded models metadata with upgrade guidance. Bug fixes improve sandbox fallback, resume behavior, and prompt decoding for wider compatibility.

    New Features

    • App-server v2 now emits collaboration tool calls as item events in the turn stream, so clients can render agent coordination in real time. (#9213)
    • Collaboration tools gained richer agent control: spawn_agent accepts an agent role preset, and send_input can optionally interrupt a running agent before delivering the message. (#9275, #9276)
    • /models metadata now includes upgrade migration markdown so clients can display richer guidance when suggesting model upgrades. (#9219)

    Bug Fixes

    • [revert] Linux sandboxing now falls back to Landlock-only restrictions when user namespaces are unavailable, and sets no_new_privs before applying sandbox rules. (#9250)
    • codex resume --last now respects the current working directory, with --all as an explicit override. (#9245)
    • Stdin prompt decoding now handles BOMs/UTF-16 and provides clearer errors for invalid encodings. (#9151)
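    The stdin decoding fix (#9151) describes behavior, not implementation. A rough sketch of what BOM-aware decoding with clear errors can look like, written here in Python purely for illustration (Codex itself is Rust, and its actual logic is not shown in these notes):

```python
import codecs

def decode_prompt(raw: bytes) -> str:
    """Decode prompt bytes from stdin, honoring common byte-order marks.

    Illustrative sketch of the behavior described in #9151, not Codex's code.
    """
    # Check UTF-8's 3-byte BOM first, then the 2-byte UTF-16 BOMs.
    if raw.startswith(codecs.BOM_UTF8):
        return raw.decode("utf-8-sig")
    if raw.startswith(codecs.BOM_UTF16_LE) or raw.startswith(codecs.BOM_UTF16_BE):
        # Python's "utf-16" codec reads the BOM to pick the byte order.
        return raw.decode("utf-16")
    # No BOM: assume UTF-8 and fail with a clear message on invalid bytes.
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError as err:
        raise ValueError(f"stdin prompt is not valid UTF-8: {err}") from err
```

The key design point mirrored from the changelog entry: invalid input produces an explicit error rather than silently mangled text.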

    Changelog

    Full Changelog: rust-v0.84.0...rust-v0.85.0

    • Add migration_markdown in model_info (#9219) @aibrahim-oai
    • fix: fallback to Landlock-only when user namespaces unavailable and set PR_SET_NO_NEW_PRIVS early (#9250) @viyatb-oai
    • feat: collab tools app-server event mapping (#9213) @jif-oai
    • feat: add agent roles to collab tools (#9275) @jif-oai
    • feat: add interrupt capabilities to send_input (#9276) @jif-oai
    • feat: basic tui for event emission (#9209) @jif-oai
    • Changed codex resume --last to honor the current cwd (#9245) @etraut-openai
    • feat: propagate approval request of unsubscribed threads (#9232) @jif-oai
    • fix(tui): only show 'Worked for' separator when actual work was performed (#8958) @ThanhNguyxn
    • fix(exec): improve stdin prompt decoding (#9151) @liqiongyu
    • revert: remove pre-Landlock bind mounts apply (#9300) @viyatb-oai
  • Jan 15, 2026
    • Parsed from source:
      Jan 15, 2026
    • Detected by Releasebot:
      Jan 16, 2026

    ChatGPT by OpenAI

    January 15, 2026

    Improved memory for finding details from past chats (Plus & Pro)

    When reference chat history is enabled, ChatGPT can now more reliably find specific details from your past chats when you ask. Any past chat used to answer your question now appears as a source so you can open and review the original context.

    This memory improvement is now available for Plus and Pro users globally.

  • Jan 13, 2026
    • Parsed from source:
      Jan 13, 2026
    • Detected by Releasebot:
      Jan 20, 2026

    Codex by OpenAI

    0.81.0

    OpenAI releases new features including default API model gpt-5.2-codex, headless device login, and Linux read-only mounts, plus codex tool response improvements and UI reliability fixes. The changelog covers auth, metrics, and read-only sandbox enhancements across the stack.

    New Features

    • Default API model moved to gpt-5.2-codex. (#9188)
    • The codex tool in codex mcp-server now includes the threadId in the response so it can be used with the codex-reply tool, fixing #3712. The documentation has been updated at https://developers.openai.com/codex/guides/agents-sdk/ (#9192)
    • Headless runs now switch to device-code login automatically so sign-in works without a browser. (#8756)
    • Linux sandbox can mount paths read-only to better protect files from writes. (#9112)
    • Support partial tool call rendering in the TUI.

    Bug Fixes

    • Alternate-screen handling now avoids breaking Zellij scrollback and adds a config/flag to control it. (#8555)
    • Windows correctly prompts before unsafe commands when running with a read-only sandbox policy. (#9117)
    • Config.toml and rules parsing errors are reported to app-server clients/TUI instead of failing silently. (#9182, #9011)
    • Worked around a macOS system-configuration crash in proxy discovery. (#8954)
    • Invalid user image uploads now surface an error instead of being silently replaced. (#9146)

    Documentation

    • Published a generated JSON Schema for config.toml in docs/ to validate configs. (#8956)
    • Documented the TUI paste-burst state machine for terminals without reliable bracketed paste. (#9020)

    Chores

    • Added Bazel build support plus a just bazel-codex helper for contributors. (#8875, #9177)

    Changelog

    Full Changelog: rust-v0.80.0...rust-v0.81.0

    • [device-auth] When headless environment is detected, show device login flow instead. (#8756) @mzeng-openai
    • feat: first pass on clb tool (#8930) @jif-oai
    • nit: rename session metric (#8966) @jif-oai
    • chore: non mutable btree when building specs (#8969) @jif-oai
    • chore: move otel provider outside of trace module (#8968) @jif-oai
    • chore: add mcp call metric (#8973) @jif-oai
    • chore: add approval metric (#8970) @jif-oai
    • chore: metrics tool call (#8975) @jif-oai
    • chore: update metrics temporality (#8901) @jif-oai
    • Work around crash in system-configuration library (#8954) @etraut-openai
    • fix(app-server): set originator header from initialize JSON-RPC request (#8873) @owenlin0
    • Add config to disable /feedback (#8909) @gt-oai
    • chore: nuke telemetry file (#8985) @jif-oai
    • Revert "fix(app-server): set originator header from initialize JSON-RPC request" (#8986) @jif-oai
    • nit: rename to analytics_enabled (#8978) @jif-oai
    • renaming: task to turn (#8963) @jif-oai
    • fix: add sourcing of rc files to shell snapshot (#9150) @jif-oai
    • fix: shell snapshot clean-up (#9155) @jif-oai
    • feat: return an error if the image sent by the user is a bad image (#9146) @jif-oai
    • feat: only source shell snapshot if the file exists (#9197) @jif-oai
    • fix: drop double waiting header in TUI (#9145) @jif-oai
    • Render exec output deltas inline (#9194) @aibrahim-oai
    • chore: clamp min yield time for empty write_stdin (#9156) @jif-oai
    • feat: add auto refresh on thread listeners (#9105) @jif-oai
    • feat: add support for read-only bind mounts in the linux sandbox (#9112) @viyatb-oai
    • Use current model for review (#9179) @pakrym-oai
  • Jan 12, 2026
    • Parsed from source:
      Jan 12, 2026
    • Detected by Releasebot:
      Jan 13, 2026
    • Modified by Releasebot:
      Jan 16, 2026

    ChatGPT by OpenAI

    January 12, 2026

    Dictation Updates

    We’re improving our dictation capabilities in ChatGPT for all logged-in users, significantly reducing empty transcriptions and improving accuracy.

  • Jan 9, 2026
    • Parsed from source:
      Jan 9, 2026
    • Detected by Releasebot:
      Jan 10, 2026

    Codex by OpenAI

    Codex CLI 0.80.0

  • Jan 8, 2026
    • Parsed from source:
      Jan 8, 2026
    • Detected by Releasebot:
      Jan 9, 2026

    OpenAI

    Introducing OpenAI for Healthcare

    OpenAI launches OpenAI for Healthcare with ChatGPT for Healthcare and a HIPAA‑compliant API, delivering enterprise AI for clinical, research, and admin workflows. Built for safety with evidence-backed reasoning and policy alignment, with early hospital deployments starting now.

    OpenAI for Healthcare

    Secure AI products to help healthcare organizations scale high-quality care, reduce admin work for teams, and power custom clinical solutions—while protecting health data.

    We’re introducing OpenAI for Healthcare, a set of products designed to help healthcare organizations deliver more consistent, high-quality care for patients—while supporting their HIPAA compliance requirements.

    This includes ChatGPT for Healthcare, available starting today and already rolling out to leading institutions like AdventHealth, Baylor Scott & White Health, Boston Children’s Hospital, Cedars-Sinai Medical Center, HCA Healthcare, Memorial Sloan Kettering Cancer Center, Stanford Medicine Children’s Health, and University of California, San Francisco (UCSF).

    It also includes the OpenAI API, which powers much of today’s healthcare ecosystem. Thousands of organizations have configured it to support HIPAA-compliant use—such as Abridge, Ambience, and EliseAI.

    Healthcare is under unprecedented strain. Demand is rising, clinicians are overwhelmed by administrative work, and critical medical knowledge is fragmented across countless sources. At the same time, AI adoption in healthcare is gaining momentum, driven by its potential to help address these challenges. Advances in models have significantly improved AI’s ability to support real-world clinical and administrative work, like helping clinicians personalize care using the latest evidence. According to the American Medical Association, physicians’ use of AI nearly doubled in a year. Yet many clinicians still have to rely on their own tools because their organizations aren’t adopting AI fast enough, often due to the constraints of regulated environments.

    OpenAI for Healthcare helps close that gap by giving organizations a secure, enterprise-grade foundation for AI—so teams can use the same tools to deliver better, more reliable care, while supporting HIPAA compliance.

    ChatGPT for Healthcare

    ChatGPT for Healthcare is built to support the careful, evidence-based reasoning required in real patient care, while reducing administrative burden so teams can spend more time with patients. Organizations can bring clinicians, administrators, and researchers into a secure workspace with the controls they need to deploy AI securely and at scale.

    Here’s what it includes:

    • Models built for healthcare workflows: High-quality responses for clinical, research, and operational work—powered by GPT‑5 models built for healthcare and evaluated through physician-led testing across benchmarks and real workflows, including HealthBench⁠ and GDPval⁠.
    • Evidence retrieval with transparent citations: Answers grounded in relevant medical sources—drawing from millions of peer-reviewed research studies, public health guidance, and clinical guidelines—with clear citations including titles, journals, and publication dates to support quick source-checking. This helps clinicians reason through cases with greater confidence, so patients get to the right diagnosis and treatment sooner.
    • Institutional policy and care pathway alignment: Integrations with enterprise tools such as Microsoft SharePoint and other systems, so responses can incorporate an institution’s approved policies, pathway documents, and operational guidance to support consistent execution across teams and help ensure patients receive high-quality care.
    • Reusable templates to automate workflows: Shared templates for common tasks like drafting discharge summaries, patient instructions, clinical letters, and prior authorization support. Clinical teams spend less time rewriting and searching, and patients have clearer next steps and smoother transitions of care.
    • Access management and governance: A centralized workspace with role-based access controls and organization-wide user management through SAML SSO and SCIM. This gives healthcare organizations the governance and visibility they need to deploy AI across clinical, administrative, and research teams.
    • Data control and support for HIPAA compliance: Patient data and PHI remain under an organization’s control, with options for data residency, audit logs, customer-managed encryption keys, and a Business Associate Agreement (BAA) with OpenAI to support HIPAA-compliant use. Content shared with ChatGPT for Healthcare is not used to train models.

    Learn more⁠ about our enterprise-grade security, privacy, and compliance programs.

    Supporting clinical and operational workflows: In practice, teams use ChatGPT for Healthcare to synthesize medical evidence alongside institutional guidance and apply it to a patient’s specific context, draft clinical and administrative documentation, and adapt patient-facing education materials for readability and translation. This reduces time spent on admin, helps teams follow shared standards of care, and supports a better patient experience—while clinicians stay in charge.

    Reach out to our team to learn more and get started, or visit the OpenAI Academy for examples of how clinicians, researchers, and administrators can use ChatGPT for Healthcare in their daily work.

    Early hospital partners

    Healthcare is among the fastest-growing enterprise markets⁠ adopting AI, and hospitals and academic medical centers are already rolling out ChatGPT for Healthcare across their teams.

    “Our early work with a custom OpenAI-powered solution allowed us to move quickly, prove value in a secure environment, and establish strong governance foundations. ChatGPT for Healthcare offers a path toward operational scale, providing an enterprise-grade platform that can support broad, responsible adoption across clinical, research, and administrative teams.”

    John Brownstein, SVP and Chief Innovation Officer, Boston Children’s Hospital

    OpenAI API for Healthcare

    With the OpenAI API platform, developers can power tools and products with our latest models—including GPT‑5.2—and embed AI directly into healthcare systems and workflows. Eligible customers can apply for a Business Associate Agreement (BAA) with OpenAI to support HIPAA compliance requirements.

    In practice, teams are using our APIs to build healthcare applications including patient chart summarization, care team coordination, and discharge workflows. Companies like Abridge, Ambience, and EliseAI are building capabilities like ambient listening, automated clinical documentation, and appointment scheduling for clinicians and patients.

    To get started, explore our API platform. If you need a BAA for our API services, learn how to apply. Enterprise API customers can contact their account team to request access.

    AI models optimized for healthcare

    All OpenAI for Healthcare products are powered by GPT‑5.2 models, which outperform earlier OpenAI models and were developed through ongoing research and real-world evaluation that reflect how clinicians actually use AI.

    Over the past two years, we’ve partnered with a global network of more than 260 licensed physicians across 60 countries of practice to evaluate model performance using real clinical scenarios. To date, this group has reviewed more than 600,000 model outputs spanning 30 areas of focus. Their continuous feedback has directly informed model training, safety mitigations, and product iteration. ChatGPT for Healthcare went through multiple rounds of physician-led red teaming to tune model behavior and trustworthy information retrieval, among other evaluations.

    We also look to evidence from live deployments. A study with Penda Health found that an OpenAI-powered clinical copilot⁠ used in routine primary care reduced both diagnostic and treatment errors—early evidence that AI, when deployed with appropriate safeguards and clinician oversight, can improve care quality.

    Benchmarks like HealthBench⁠, an open, clinician-designed evaluation, also reinforce this progress. HealthBench measures model behavior across realistic medical scenarios using rubrics written by physicians. It goes beyond factual recall to assess clinical reasoning, safety, uncertainty handling, and communication quality—dimensions that better reflect how clinicians use AI in practice. Across these evaluations, GPT‑5.2 models consistently outperform prior generations and comparator models on real clinical workflows.

    GPT‑5.2 models score higher on a subset of challenging health professional workflows from HealthBench Consensus compared to other models. Scores reflect performance across clinical tasks and should not be interpreted as percentage accuracy.

    In real-world healthcare tasks, GPT‑5.2 also performs better than human baselines across every role measured in GDPval⁠, surpassing earlier OpenAI models.

    What’s next

    This announcement builds on OpenAI’s longstanding work across health, biopharma, and life sciences. That includes products like ChatGPT Health, which helps people better understand and more confidently navigate their health, ongoing research into how AI can accelerate scientific discovery with companies like Retro Biosciences, and work with leading life sciences organizations like Amgen, Thermo Fisher, Moderna, and others. We also collaborate with leading professional services and consulting firms including Boston Consulting Group (BCG), Bain, McKinsey & Company, and Accenture to help healthcare organizations move faster with AI.

    OpenAI's mission is to ensure AI benefits all of humanity, and we believe improving health will be one of the defining impacts of AI. We’ll continue working closely with healthcare organizations using OpenAI for Healthcare to learn from real-world use and further improve our products for healthcare.

    To learn more about OpenAI for Healthcare, contact our team⁠.

