OpenAI Products
All OpenAI Release Notes (600)
- Apr 21, 2026
- Date parsed from source: Apr 21, 2026
- First seen by Releasebot: Apr 21, 2026
April 21, 2026
ChatGPT introduces ImageGen 2.0, plus ImageGen 2.0 Thinking with reasoning, multi-output generation, and web search tools.
We’re introducing ImageGen 2.0, our new image generation model in ChatGPT. ImageGen 2.0 is available to all ChatGPT plans.
We’re also introducing ImageGen 2.0 Thinking, which adds reasoning, multi-output generation, and access to tools like web search. ImageGen 2.0 Thinking is available to all paid ChatGPT plans and is accessible only when the Thinking or Pro models are selected.
- Apr 20, 2026
- Date parsed from source: Apr 20, 2026
- First seen by Releasebot: Apr 21, 2026
0.122.0
Codex ships a broad set of new features: more self-contained standalone installs, better Windows and Intel Mac setup, richer TUI side conversations, stronger Plan Mode workflows, expanded plugin browsing, and tighter filesystem and sandbox controls.
New Features
- Standalone installs are more self-contained, and codex app now opens or installs Desktop correctly on Windows and Intel Macs (#17022, #18500).
- The TUI can open /side conversations for quick side questions, and queued input now supports slash commands and ! shell prompts while work is running (#18190, #18542).
- Plan Mode can start implementation in a fresh context, with context-usage shown before deciding whether to carry the planning thread forward (#17499, #18573).
- Plugin workflows now include tabbed browsing, inline enable/disable toggles, marketplace removal, and remote, cross-repo, or local marketplace sources (#18222, #18395, #17752, #17751, #17277, #18017, #18246).
- Filesystem permissions now support deny-read glob policies, managed deny-read requirements, platform sandbox enforcement, and isolated codex exec runs that ignore user config or rules (#15979, #17740, #18096, #18646).
- Tool discovery and image generation are now enabled by default, with higher-detail image handling and original-detail metadata support for MCP and js_repl image outputs (#17854, #17153, #17714, #18386).
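As an illustration of the deny-read glob policies mentioned above, here is a minimal sketch of how such a check might work; the policy syntax and example globs are our own assumptions, not Codex's actual configuration format.

```python
from fnmatch import fnmatch

# Illustrative deny-read policy: a path is readable unless it matches
# any deny glob. These globs are examples, not Codex defaults.
DENY_READ_GLOBS = ["*/.ssh/*", "*/secrets/*.env"]

def can_read(path: str) -> bool:
    return not any(fnmatch(path, pattern) for pattern in DENY_READ_GLOBS)
```

A policy like this lets an agent traverse a repository freely while keeping credential files and secret material out of its context window.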
Bug Fixes
- App-server approvals, user-input prompts, and MCP elicitations now disappear from the TUI when another client resolves them, instead of leaving stale prompts behind (#15134).
- Remote-control startup now tolerates missing ChatGPT auth, and MCP startup cancellation works again through app-server sessions (#18117, #18078).
- Resumed and forked app-server threads now replay token usage immediately so context/status UI starts with the restored state (#18023).
- Security-sensitive flows were tightened: logout revokes managed ChatGPT tokens, project hooks and exec policies require trusted workspaces, and Windows sandbox setup avoids broad user-profile and SSH-root grants (#17825, #14718, #18443, #18493).
- Sandboxed apply_patch writes work correctly with split filesystem policies, and file watchers now notice files created after watching begins (#18296, #18492).
- Several TUI rough edges were fixed, including fatal skills-list failures, invalid resume hints, duplicate context statusline entries, /model menu loops, redundant memory notices, and terminal title quoting in iTerm2 (#18061, #18059, #18054, #18154, #18580, #18261).
Documentation
- Added a security-boundaries reference to SECURITY.md for sandboxing, approvals, and network controls (#17848, #18004).
- Documented custom MCP server approval defaults and exec-server stdin behavior (#17843, #18086).
- Updated app-server docs for plugin API changes, marketplace removal, resume/fork token-usage replay, and warning notifications (#17277, #17751, #18023, #18298).
- Added a short guide for the responses API proxy (#18604).
Chores
- Split plugin and marketplace code into codex-core-plugins, moved more connector code into connectors, and continued breaking up the large core session/turn modules (#18070, #18158, #18200, #18206, #18244, #18249).
- Refactored config loading and AGENTS.md discovery behind narrower filesystem and manager abstractions (#18209, #18035).
- Stabilized Bazel and CI with flake fixes, native Rust test sharding, scoped repository caches, stronger Windows clippy coverage, and updated rules_rs/LLVM pins (#17791, #18082, #18366, #18350, #18397).
- Added core CODEOWNERS and a smaller development build profile (#18362, #18612).
- Removed the stale core models.json and updated release preparation to refresh the active model catalog (#18585).
Changelog
Full Changelog: rust-v0.121.0...rust-v0.122.0
- Apr 20, 2026
- Date parsed from source: Apr 20, 2026
- First seen by Releasebot: Apr 21, 2026
April 20, 2026
ChatGPT Business adds simplified app action controls in Workspace settings, giving owners and admins an easier way to manage app permissions with options for all actions, read-only actions, or custom settings, plus control over how new actions are handled.
Simplified app action controls
Workspace owners and admins can now manage app actions with a simplified model in Workspace settings > Apps. In Action control, admins can choose whether an app allows all actions, allows only read actions, or uses a custom configuration for current actions.
Admins can also choose how actions added later are handled by enabling all new actions, enabling only new read actions, or disabling new actions until they are reviewed. This makes it easier to manage app permissions while keeping granular control when needed.
Owners and admins may want to review currently enabled apps, RBAC access, and app action settings to confirm that each app's policy matches their workspace needs. For more information, see:
Admin Controls, Security, and Compliance in apps (Enterprise, Edu, and Business)
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 16, 2026
- Modified by Releasebot: Apr 21, 2026
0.121.0
Codex adds marketplace and app-server support, richer TUI history and memory controls, expanded MCP and plugin capabilities, new realtime and filesystem APIs, and a secure devcontainer profile, while also fixing macOS, Windows, rate-limit, Guardian, and app-server edge cases.
New Features
- Added codex marketplace add and app-server support for installing plugin marketplaces from GitHub, git URLs, local directories, and direct marketplace.json URLs (#17087, #17717, #17756).
- Added TUI prompt history improvements, including Ctrl+R reverse search and local recall for accepted slash commands (#17550, #17336).
- Added TUI and app-server controls for memory mode, memory reset/deletion, and memory-extension cleanup (#17632, #17626, #17913, #17937, #17844).
- Expanded MCP/plugin support with MCP Apps tool calls, namespaced MCP registration, parallel-call opt-in, and sandbox-state metadata for MCP servers (#17364, #17404, #17667, #17763).
- Added realtime and app-server APIs for output modality, transcript completion events, raw turn item injection, and symlink-aware filesystem metadata (#17701, #17703, #17719).
- Added a secure devcontainer profile with bubblewrap support, plus macOS sandbox allowlists for Unix sockets (#10431, #17547, #17654).
Bug Fixes
- Fixed macOS sandbox/proxy handling for private DNS and removed the danger-full-access denylist-only network mode (#17370, #17732).
- Fixed Windows cwd/session matching so resume --last and thread/list work when paths use verbatim prefixes (#17414).
- Fixed rate-limit/account handling for prolite plans and made unknown WHAM plan values decodable (#17419).
- Made Guardian timeouts distinct from policy denials, with timeout-specific guidance and visible TUI history entries (#17381, #17486, #17521, #17557).
- Stabilized app-server behavior by avoiding premature thread unloads, tolerating failed trust persistence on startup, and skipping broken symlinks in fs/readDirectory (#17398, #17595, #17907).
- Fixed MCP/tool-call edge cases including flattened deferred tool names, elicitation timeout accounting, and empty namespace descriptions (#17556, #17566, #17946).
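The Windows cwd/session fix above stems from "verbatim" paths: Windows sometimes reports a path with a `\\?\` prefix, so a naive string comparison treats the same directory as two different ones. A sketch of the normalization involved, illustrative rather than Codex's implementation:

```python
# Windows verbatim paths carry a \\?\ prefix; strip it so two spellings
# of the same cwd compare equal. Illustrative helper, not Codex code.
VERBATIM_PREFIX = "\\\\?\\"

def normalize_win_path(path: str) -> str:
    if path.startswith(VERBATIM_PREFIX):
        path = path[len(VERBATIM_PREFIX):]
    # Ignore trailing separators and case, which NTFS treats as equivalent.
    return path.rstrip("\\").lower()

def same_cwd(a: str, b: str) -> bool:
    return normalize_win_path(a) == normalize_win_path(b)
```

Without normalization like this, `resume --last` sees `\\?\C:\repo` and `C:\repo` as distinct working directories and fails to match the session.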
Documentation
- Documented the secure devcontainer profile and its bubblewrap requirements (#10431, #17547).
- Added TUI composer documentation for history search behavior (#17550).
- Updated app-server docs for new MCP, marketplace, turn injection, memory reset, filesystem metadata, external-agent migration, and websocket token-hash APIs (#17364, #17717, #17703, #17913, #17719, #17855, #17871).
- Documented WSL1 bubblewrap limitations and WSL2 behavior (#17559).
- Added memory pipeline documentation for extension cleanup (#17844).
Chores
- Hardened supply-chain and CI inputs by pinning GitHub Actions, cargo installs, git dependencies, V8 checksums, and cargo-deny source allowlists (#17471).
- Added Bazel release-build verification so release-only Rust code is compiled in PR CI (#17704, #17705).
- Introduced the codex-thread-store crate/interface and moved local thread listing behind it (#17659, #17824).
- Required reviewed pnpm dependency build scripts for workspace installs (#17558).
- Reduced Rust maintenance surface by removing unused helper APIs and broader absolute-path types (#17407, #17792, #17146).
Changelog
Full Changelog: rust-v0.120.0...rust-v0.121.0
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 19, 2026
April 16, 2026
ChatGPT Business adds Workspace analytics, a refreshed workspace-level dashboard that replaces User analytics and helps admins track adoption and Codex usage with summary metrics, member-level usage, flexible date ranges, search and filtering, and direct Codex analytics access.
Workspace analytics for Business
A refreshed analytics experience, called Workspace analytics, is now available for ChatGPT Business. Workspace analytics replaces User analytics with a simpler, workspace-level view that helps admins understand overall adoption and Codex usage across their organization.
Key highlights:
- Refreshed analytics experience: Updated visuals and navigation with a streamlined dashboard built for quick review of workspace activity.
- Workspace summary metrics: View headline metrics such as active users, total messages sent, and total credits spent across the selected time range.
- Member-level usage table: See usage by workspace member, including seat type, credits spent, and messages sent.
- Flexible date ranges: Analyze activity across preset windows including 7 days, 1 month, 6 months, 12 months, or a custom range.
- Codex visibility: Jump directly to Codex analytics for lightweight insights into credits usage and developer activity.
- Search and filtering: Quickly find specific members and review individual usage patterns.
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 17, 2026
Codex for (almost) everything
OpenAI releases a major Codex update that expands beyond coding into computer use, web workflows, image generation, memory, automations, and deeper developer tools for reviews, terminals, SSH devboxes, and in-app browsing.
We’re releasing a major update to Codex, making it a more powerful partner for the more than 3 million developers who use it every week to accelerate work across the full software development lifecycle.
Codex can now operate your computer alongside you, work with more of the tools and apps you use every day, generate images, remember your preferences, learn from previous actions, and take on ongoing and repeatable work. The Codex app also now includes deeper support for developer workflows, like reviewing PRs, viewing multiple files & terminals, connecting to remote devboxes via SSH, and an in-app browser to make it faster to iterate on frontend designs, apps, and games.
Extending Codex beyond coding
With background computer use, Codex can now use all of the apps on your computer by seeing, clicking, and typing with its own cursor. Multiple agents can work on your Mac in parallel, without interfering with your own work in other apps. For developers, this is helpful for iterating on frontend changes, testing apps, or working in apps that don’t expose an API.
Codex is also beginning to work natively with the web. The app now includes an in-app browser, where you can comment directly on pages to provide precise instructions to the agent. This is useful for frontend and game development today, and over time we plan to expand it so Codex can fully command the browser beyond web applications on localhost.
Codex can now use gpt-image-1.5 to generate and iterate on images. Combined with screenshots and code, it is helpful for creating visuals for product concepts, frontend designs, mockups, and games inside the same workflow.
We’re also releasing more than 90 additional plugins, which combine skills, app integrations, and MCP servers to give Codex more ways to gather context and take action across your tools. Some of the new plugins developers will find most useful include Atlassian Rovo to help manage JIRA, CircleCI, CodeRabbit, GitLab Issues, Microsoft Suite, Neon by Databricks, Remotion, Render, and Superpowers.
Working across the software development lifecycle
The app now includes support for addressing GitHub review comments, running multiple terminal tabs, and connecting to remote devboxes over SSH in alpha. It also lets you open files directly in the sidebar with rich previews for PDFs, spreadsheets, slides, and docs, and use a new summary pane to track agent plans, sources, and artifacts.
Together, these improvements make it faster to move through the stages of the software development lifecycle: writing code, checking outputs, reviewing changes, and collaborating with the agent in one workspace.
Carry work forward over time
We’ve expanded automations to allow reusing existing conversation threads, preserving the context already built up. Codex can now schedule future work for itself and wake up automatically to continue a long-term task, potentially across days or weeks.
Teams use automations for everything from landing open pull requests to following up on tasks and staying on top of fast-moving conversations across tools like Slack, Gmail, and Notion.
We’re also releasing a preview of memory, which allows Codex to remember useful context from previous experience, including personal preferences, corrections and information that took time to gather. This helps future tasks complete faster and to a level of quality previously only possible through extensive custom instructions.
Codex now also proactively proposes useful work to continue where you left off. Using context from projects, connected plugins, and memory, Codex can suggest how to start your work day or where to pick up a previous project. For example, Codex can identify open comments in Google Docs that require your attention, pull relevant context from Slack, Notion, and your codebase, then provide you with a prioritized list of actions.
Availability
Starting today, these updates are rolling out to Codex desktop app users who are signed in with ChatGPT.
Personalization features including context-aware suggestions and memory will roll out to Enterprise, Edu, and EU and UK users soon. Computer use is initially available on macOS, and will roll out to EU and UK users soon.
If you’ve been using Codex in the terminal or editor, try it across the rest of your workflow. If you haven’t tried Codex yet, download the app and get started.
What’s next
In just the year since Codex launched, the ways developers use Codex have expanded. Developers start with Codex to write code, then increasingly use it to understand systems, gather context, review work, debug issues, coordinate with teammates, and keep longer-running work moving.
Our mission is to ensure that AGI benefits all of humanity. That includes narrowing the gap between what people can imagine and what they can build. This release brings Codex closer to the tools, workflows, and decisions involved in building software, with much more to come soon.
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 17, 2026
April 16, 2026
ChatGPT rolls out ads for Free and Go users in Australia, New Zealand, and Canada.
We're beginning to roll out ads for users on Free and Go plans in Australia, New Zealand, and Canada. The Plus, Pro, Business, Enterprise, and Education plans do not have ads.
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 17, 2026
Introducing GPT‑Rosalind for life sciences research
OpenAI introduces GPT-Rosalind, a research preview life sciences reasoning model for biology, drug discovery, and translational medicine, plus a new Codex research plugin that connects scientists to 50+ tools and data sources for faster research workflows.
Today, we’re introducing GPT‑Rosalind, our frontier reasoning model built to support research across biology, drug discovery, and translational medicine. The life sciences model series is optimized for scientific workflows, combining improved tool use with deeper understanding across chemistry, protein engineering, and genomics.
On average, it takes roughly 10 to 15 years to go from target discovery to regulatory approval for a new drug in the United States. Gains made at the earliest stages of discovery compound downstream in better target selection, stronger biological hypotheses and higher-quality experiments. Progress in the life sciences is constrained not only by the difficulty of the underlying science, but by the complexity of the research workflows themselves. Scientists must work across large volumes of literature, specialized databases, experimental data, and evolving hypotheses in order to generate and evaluate new ideas. These workflows are often time-intensive, fragmented, and difficult to scale.
We believe advanced AI systems can help researchers move through these workflows faster—not just by making existing work more efficient, but by helping scientists explore more possibilities, surface connections that might otherwise be missed, and arrive at better hypotheses sooner. By supporting evidence synthesis, hypothesis generation, experimental planning, and other multi-step research tasks, this model is designed to help researchers accelerate the early stages of discovery. Over time, these systems could help life sciences organizations discover breakthroughs that wouldn’t otherwise be possible, with a much higher rate of success.
GPT‑Rosalind is now available as a research preview in ChatGPT, Codex, and the API for qualified customers through our trusted access program. We’re also introducing a freely accessible Life Sciences research plugin for Codex, helping scientists connect models to over 50 scientific tools and data sources. We are working with customers like Amgen, Moderna, the Allen Institute, Thermo Fisher Scientific, and others to apply GPT‑Rosalind across workflows that accelerate research and discovery.
The model is named after Rosalind Franklin, whose rigorous research helped reveal the structure of DNA and laid foundations for modern molecular biology.
From raw data to grounded discovery decisions, see how our purpose-built model accelerates research workflows.
Built for scientific workflows
The GPT‑Rosalind life sciences model series is built for modern scientific work across published evidence, data, tools, and experiments. In our evaluations, it delivers the best performance on tasks that require reasoning over molecules, proteins, genes, pathways, and disease-relevant biology, and it is more effective at using scientific tools and databases in multi-step workflows such as literature review, sequence-to-function interpretation, experimental planning, and data analysis.
This is the first release in our GPT‑Rosalind life sciences model series, and we will continue to expand the frontiers of the model’s biochemical reasoning capabilities across long-horizon, tool-heavy scientific workflows. OpenAI’s compute infrastructure gives us the ability to continue training, evaluating, and improving increasingly capable domain models against real scientific tasks—helping these systems become more useful as the workflows themselves become more complex.
From evidence-based discovery insights to high-impact experiments, see how our suite of solutions translates into measurable improvements in your research workflows.
Customers and ecosystem
We are working with leading pharmaceutical, biotechnology, and research customers, as well as life sciences technology organizations, to apply GPT‑Rosalind across workflows that drive discovery.
“The life sciences field demands precision at every step. The questions are highly complex, the data are highly unique, and the stakes are incredibly high. Our unique collaboration with OpenAI enables us to apply their most advanced capabilities and tools in new and innovative ways with the potential to accelerate how we deliver medicines to patients.”
—Sean Bruich, Senior Vice President of Artificial Intelligence and Data, Amgen
Performance and evaluation
We evaluated GPT‑Rosalind across a range of capabilities fundamental to scientific discovery and industry research. These evaluations measure core reasoning across scientific subdomains, including chemical reaction mechanisms; protein structure, mutation effects, and interactions; and phylogenetic interpretation of DNA sequences. They also assess whether models can support real research workflows by interpreting experimental outputs, identifying expert-relevant patterns, and synthesizing external information to design follow-up experiments. Finally, they test whether models can select and use the right computational tools, databases, and domain-specific capabilities to augment their reasoning. Taken together, these evaluations show progress across the end-to-end process of scientific research and suggest a stronger ability to help researchers work through challenging discovery tasks.
Prompt
I am planning a base-promoted SNAr coupling of 1-(pyridin-3-yl)ethanol with 1-fluoro-2-nitrobenzene with the goal of synthesizing 1-(pyridin-3-yl)ethyl 2-nitrophenyl ether. I found several patents that describe room-temperature O-arylation of alcohols in DMF/Cs2CO3, but the reaction is taking longer than I would like. How can I improve this reaction? Help me find any relevant literature or patents as well.
Industry evaluations
We evaluated GPT‑Rosalind on a series of public benchmarks. On BixBench, a benchmark designed around real-world bioinformatics and data analysis, GPT‑Rosalind achieved leading performance among models with published scores.
On LABBench2, a benchmark measuring performance on a range of research tasks such as literature retrieval, database access, sequence manipulation and protocol design, GPT‑Rosalind outperforms GPT‑5.4 on 6 out of 11 tasks. The most notable improvement comes from CloningQA, which requires end-to-end design of DNA and enzyme reagents for molecular cloning protocols.
We also partnered with Dyno Therapeutics, a company pioneering AI-designed gene therapies, to evaluate the model on an RNA sequence-to-function prediction and generation task using unpublished, uncontaminated sequences. Performance was compared against 57 historical scores from human experts in the AI-bio field. When evaluated directly in the Codex app, best-of-ten model submissions ranked above the 95th percentile of human experts on the prediction task and around the 84th percentile of human experts on the sequence generation task.
These evaluations provide a meaningful signal of performance on the kinds of workflows scientists rely on every day to generate evidence, analyze complex data, and move toward defensible biological conclusions.
Connecting to the tools scientists use
Scientists can use our new Life Sciences research plugin for Codex, available today on GitHub. This package includes a broad set of modular skills for the most common research workflows, designed to help users work across human genetics, functional genomics, protein structure, biochemistry, clinical evidence, and public study discovery.
These skills act as an orchestration layer that helps scientists work through broad, ambiguous, and multi-step questions more effectively. They provide access to more than 50 public multi-omics databases, literature sources, and biology tools, and offer a flexible starting point for common repeatable workflows such as protein structure lookup, sequence search, literature review, and public dataset discovery.
Eligible Enterprise users can leverage this plugin in research workflows with GPT‑Rosalind for deeper biological reasoning, while all users can use the plugin package with our mainline models.
Trusted access
We want to make these capabilities available to the scientists and research organizations best positioned to advance human health, while maintaining strong safeguards against biological misuse. The Life Sciences model is launching through a trusted-access deployment structure for qualified Enterprise customers in the U.S. to start, with controls around eligibility, access management, and organizational governance. At the same time, we are making a set of connectors and the Life Sciences Research Plugin available more broadly, so researchers can use our mainline models more effectively for life sciences research tasks.
The Life Sciences model was developed with heightened enterprise-grade security controls and strengthened access management, enabling professional scientific use in governed research environments. We evaluate access based on three core principles: beneficial use, strong governance and safety oversight, and controlled access with enterprise-grade security. In practice, this means participating organizations must be conducting legitimate scientific research with clear public benefit; maintain appropriate governance, compliance, and misuse-prevention controls; and restrict access to approved users within secure, well-managed environments. Organizations must also agree to the life sciences research preview terms and comply with OpenAI’s usage policies, and we may request additional information as part of onboarding or continued participation.
Getting started
Organizations can request access through our qualification and safety review process.
During the research preview, use of this model will not consume existing credits or tokens—subject to abuse guardrails. We’ll share more details on pricing and availability as the program expands.
The Life Sciences model is built to help scientific organizations do higher-quality work, faster, in environments that require both technical capability and operational control. Our dedicated Life Sciences team—as well as advisory partners including McKinsey & Company, Boston Consulting Group (BCG), and Bain & Company—help organizations identify high-impact use cases, integrate the model into enterprise environments, and drive measurable outcomes. If you’d like to explore ways OpenAI Life Sciences can support your work, you can contact our Life Sciences team.
What’s next
This is the first release in our Life Sciences model series, and we view it as the beginning of a long-term commitment to building AI that can accelerate scientific discovery in areas that matter deeply to society, from human health to broader biological research. We will continue improving the model’s biological reasoning, expanding support for tool-heavy and long-horizon research workflows, and working closely with leading scientific institutions to evaluate real-world impact. That includes ongoing partnerships with national laboratories such as Los Alamos National Laboratory, where we are exploring AI-guided protein and catalyst design, including the ability of AI systems to modify biological structures while preserving or improving key functional properties.
Over time, we expect these systems to become increasingly capable partners in discovery—helping scientists move faster from question to evidence, from evidence to insight, and from insight to new treatments for patients.
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 16, 2026
Codex can now help with more of your work
Codex broadens its AI workspace with an in-app browser, macOS computer use, threaded chats, scheduled follow-ups, richer pull request review, and sidebar previews for generated files, plus remote connections, multiple terminals, multi-window support, and improved rendering.
Codex is becoming a broader workspace for getting work done with AI. This update makes it easier to start work with less setup, verify what Codex is building, create richer outputs, and keep momentum across longer-running tasks.
Verify more of your work
The Codex app now includes an early in-app browser. You can open local or public pages that don’t require sign-in, comment directly on the rendered page, and ask Codex to address page-level feedback.
Computer use lets Codex operate macOS apps by seeing, clicking, and typing, which helps with native app testing, simulator flows, low-risk app settings, and GUI-only bugs.
The feature isn’t available in the European Economic Area, the United Kingdom, or Switzerland at launch.
Start, follow, and steer work
Chats are threads you can start without choosing a project folder first. They’re useful for research, writing, planning, analysis, source gathering, and tool-driven work that doesn’t begin in a codebase.
For work that needs a later check-in, thread automations can wake up the same thread on a schedule while preserving the conversation context. Use them to check a long-running process, watch for updates, or continue a follow-up loop without starting from scratch.
The task sidebar makes plans, sources, generated artifacts, and summaries easier to follow while Codex works. Context-aware suggestions can also help you pick up relevant follow-ups when you start or return to Codex.
Stronger for software development
Codex now brings more of the pull request workflow into the app. You can inspect GitHub pull requests in the sidebar, review comments in the diff, review changed files, then ask Codex to explain feedback, make changes, check them, and keep the review moving.
Review richer outputs
The artifact viewer can preview generated files such as PDF files, spreadsheets, documents, and presentations in the sidebar before you commit or share them. Memories, where available, can also carry useful context from past tasks into future threads, including stable preferences, project conventions, and recurring work patterns.
Other features
- Remote connections: we are gradually rolling out SSH remote connections in alpha
- Support for multiple terminals
- macOS menu bar and Windows system tray support
- Multi-window support
- Intel Mac support
- New plugins
- Improved thread and tool rendering
- Apr 15, 2026
- Date parsed from source: Apr 15, 2026
- First seen by Releasebot: Apr 15, 2026
The next evolution of the Agents SDK
OpenAI introduces new Agents SDK capabilities with a more capable model-native harness and native sandbox execution for safer file, tool, and code workflows. The update adds configurable memory, standardized integrations, portable workspace support, and built-in snapshotting for durable agent runs.
We’re introducing new capabilities to the Agents SDK that give developers standardized infrastructure that is easy to get started with and is built correctly for OpenAI models: a model-native harness that lets agents work across files and tools on a computer, plus native sandbox execution for running that work safely.
For example, developers can give an agent a controlled workspace, explicit instructions, and the tools it needs to inspect evidence:
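A minimal sketch of that pattern in plain Python, with illustrative names (`Workspace`, `read`) rather than the Agents SDK's actual API:

```python
from pathlib import Path

# Illustrative workspace confinement: a file tool resolves every path
# against an explicit root and refuses anything that escapes it. The
# class and method names are assumptions for this sketch.
class Workspace:
    def __init__(self, root: str):
        self.root = Path(root).resolve()

    def read(self, relative: str) -> str:
        target = (self.root / relative).resolve()
        # Reject paths that escape the root (e.g. via "..").
        if target != self.root and self.root not in target.parents:
            raise PermissionError(f"{relative} escapes the workspace")
        return target.read_text(encoding="utf-8")
```

Confining tools this way means a stray or injected `../` path raises an error instead of leaking files outside the agent's assigned root.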
Developers need more than the best models to build useful agents—they need systems that support how agents inspect files, run commands, write code, and keep working across many steps.
The systems that exist today come with tradeoffs as teams move from prototype to production. Model-agnostic frameworks are flexible but do not fully utilize frontier models' capabilities; model-provider SDKs can be closer to the model but often lack enough visibility into the harness; and managed agent APIs can simplify deployment but constrain where agents run and how they access sensitive data.
Here’s what some of the customers who tested the new SDK with us had to say:
“The updated Agents SDK made it production-viable for us to automate a critical clinical records workflow that previous approaches couldn’t handle reliably enough. For us, the difference was not just extracting the right metadata, but correctly understanding the boundaries of each encounter in long, complex records. As a result, we can more quickly understand what's happening for each patient in a given visit, helping members with their care needs and improving their experience with us.”
— Rachael Burns, Staff Engineer & AI Tech Lead, Oscar Health
A more capable harness for the agent loop
With today’s release, the Agents SDK harness becomes more capable for agents that work with documents, files, and systems. It now has configurable memory, sandbox-aware orchestration, Codex-like filesystem tools, and standardized integrations with primitives that are becoming common in frontier agent systems.
These primitives include tool use via MCP, progressive disclosure via skills, custom instructions via AGENTS.md, code execution using the shell tool, file edits using the apply_patch tool, and more. The harness will continue to incorporate new agentic patterns and primitives over time, so developers can spend less time on core infrastructure updates and more time on the domain-specific logic that makes their agents useful.
The harness also helps developers unlock more of a frontier model’s capability by aligning execution with the way those models perform best. That keeps agents closer to the model’s natural operating pattern, improving reliability and performance on complex tasks—particularly when work is long-running or coordinated across a diverse set of tools and systems.
In addition, we recognize that each product is unique and rarely fits neatly into a mold. We designed the Agents SDK to support this diversity. Developers get a harness that is turnkey yet flexible, making it easy to adapt tool use, memory, and the sandbox environment to their own stack.
Native sandbox execution
The updated Agents SDK supports sandbox execution natively, so agents can run in controlled computer environments with the files, tools, and dependencies they need for a task.
Many useful agents need a workspace where they can read and write files, install dependencies, run code, and use tools safely. Native sandbox support gives developers that execution layer out of the box, instead of forcing them to piece it together themselves.
Developers can bring their own sandbox or use built-in support for Blaxel, Cloudflare, Daytona, E2B, Modal, Runloop, and Vercel.
To make those environments portable across providers, the SDK also introduces a Manifest abstraction for describing the agent’s workspace. Developers can mount local files, define output directories, and bring in data from storage providers including AWS S3, Google Cloud Storage, Azure Blob Storage, and Cloudflare R2.
This gives developers a consistent way to shape the agent’s environment from local prototype to production deployment. It also gives the model a predictable workspace: where to find inputs, where to write outputs, and how to keep work organized across a long-running task.
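A manifest like the one the text describes might be sketched as plain data: mounts that bring inputs into the workspace and declared output directories. This is a hypothetical rendering, not the SDK's actual Manifest schema; the field names and the `writable_paths` helper are illustrative assumptions.

```python
# Hypothetical sketch of a workspace manifest; the SDK's actual Manifest
# abstraction may use different field names and structure.
manifest = {
    "mounts": [
        # Local project files, mounted read-only into the sandbox.
        {"source": "./project", "target": "/workspace/project", "mode": "ro"},
        # Data pulled from a storage provider (bucket name is made up).
        {"source": "s3://example-bucket/inputs", "target": "/workspace/inputs", "mode": "ro"},
    ],
    # Where the agent is expected to write its results.
    "outputs": ["/workspace/out"],
}

def writable_paths(m):
    """Paths the agent may write to: declared outputs plus any rw mounts."""
    rw = [entry["target"] for entry in m["mounts"] if entry["mode"] == "rw"]
    return m["outputs"] + rw

print(writable_paths(manifest))  # -> ['/workspace/out']
```

Keeping the manifest as declarative data is what makes the workspace portable: the same description can be realized on any of the supported sandbox providers.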
Separating harness from compute for security, durability, and scale
Agent systems should be designed assuming prompt-injection and exfiltration attempts. Separating harness and compute helps keep credentials out of environments where model-generated code executes.
It also enables durable execution. When the agent’s state is externalized, losing a sandbox container does not mean losing the run. With built-in snapshotting and rehydration, the Agents SDK can restore the agent’s state in a fresh container and continue from the last checkpoint if the original environment fails or expires.
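The checkpoint-and-restore pattern described above can be sketched in a few lines. This is a minimal illustration of externalized state, assuming JSON-serializable agent state and a durable store outside the sandbox; it is not the SDK's actual snapshot API.

```python
import json
import os
import tempfile

# Minimal sketch of externalized-state checkpointing (not the SDK's real
# snapshot API): state lives outside the sandbox, so a lost container can
# be replaced and the run resumed from the last checkpoint.
def snapshot(state, store_dir):
    """Persist agent state to a durable store outside the sandbox."""
    path = os.path.join(store_dir, "ckpt_{}.json".format(state["step"]))
    with open(path, "w") as f:
        json.dump(state, f)
    return path

def rehydrate(path):
    """Restore agent state into a fresh container from a checkpoint."""
    with open(path) as f:
        return json.load(f)

store = tempfile.mkdtemp()
state = {"step": 3, "files_written": ["report.md"], "todo": ["verify totals"]}
ckpt = snapshot(state, store)
# ...the sandbox container dies here; a fresh one picks up the run:
resumed = rehydrate(ckpt)
print(resumed["step"])  # -> 3
```

Because the checkpoint is the source of truth rather than the container, losing the container costs only the work since the last snapshot.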
Finally, it makes agents more scalable. Agent runs can use one sandbox or many, invoke sandboxes only when needed, route subagents to isolated environments, and parallelize work across containers for faster execution.
Pricing and availability
These new Agents SDK capabilities are generally available to all customers via the API and use standard API pricing, based on tokens and tool use.
What’s next
As we continue to develop the Agents SDK, we’ll keep expanding what developers can build with it, making it easier to bring more capable agents into production with less custom infrastructure, while preserving the flexibility and control developers need to fit agents into their own environments.
The new harness and sandbox capabilities are launching first in Python, with TypeScript support planned for a future release. We’re also working to bring additional agent capabilities, including code mode and subagents, to both Python and TypeScript.
In addition, we want to help bring the broader agent ecosystem together over time, with support for more sandbox providers, more integrations, and more ways for developers to plug the SDK into the tools and systems they already use.
Original source - Apr 11, 2026
- Date parsed from source:Apr 11, 2026
- First seen by Releasebot:Apr 11, 2026
- Modified by Releasebot:Apr 16, 2026
0.120.0
Codex adds realtime agent progress streaming, cleaner TUI hook and status updates, richer typed tool declarations, and clearer SessionStart handling. It also fixes Windows sandbox, remote websocket, ordering, and MCP cleanup issues.
New Features
- Realtime V2 can now stream background agent progress while work is still running and queue follow-up responses until the active response completes (#17264, #17306)
- Hook activity in the TUI is easier to scan, with live running hooks shown separately and completed hook output kept only when useful (#17266)
- Custom TUI status lines can include the renamed thread title (#17187)
- Code-mode tool declarations now include MCP outputSchema details so structured tool results are typed more precisely (#17210)
- SessionStart hooks can distinguish sessions created by /clear from fresh startup or resume sessions (#17073)
Bug Fixes
- Fixed Windows elevated sandbox handling for split filesystem policies, including read-only carveouts under writable roots (#14568)
- Fixed sandbox permission handling for symlinked writable roots and carveouts, preventing failures in shell and apply_patch workflows (#15981)
- Fixed codex --remote wss://... panics by installing the Rustls crypto provider before TLS websocket connections (#17288)
- Preserved tool search result ordering instead of alphabetically reordering results (#17263)
- Fixed live Stop-hook prompts so they appear immediately instead of only after thread history reloads (#17189)
- Fixed app-server MCP cleanup on disconnect so unsubscribed threads and resources are torn down correctly (#17223)
Documentation
- Documented the elevated vs restricted-token Windows sandbox support split in the core README (#14568)
- Updated app-server protocol documentation for the new /clear SessionStart source (#17073)
Chores
- Made rollout recording more reliable by retrying failed flushes and surfacing durability failures instead of dropping buffered items (#17214)
- Added analytics schemas and metadata wiring for compaction and Guardian review events (#17155, #17055)
- Improved Guardian follow-up efficiency by sending transcript deltas instead of repeatedly resending full history (#17269)
- Added stable Guardian review IDs across app-server events and internal approval state (#17298)
- Apr 10, 2026
- Date parsed from source:Apr 10, 2026
- First seen by Releasebot:Apr 11, 2026
- Modified by Releasebot:Apr 16, 2026
0.119.0
Codex adds realtime voice sessions on the v2 WebRTC path, expands MCP Apps and custom servers, and improves remote workflows, TUI controls, notifications, and resume stability. It also includes bug fixes, docs updates, and core performance and build cleanup.
New Features
- Realtime voice sessions now default to the v2 WebRTC path, with configurable transport, voice selection, native TUI media support, and app-server coverage for the new flow (#16960, #17057, #17058, #17093, #17097, #17145, #17165, #17176, #17183, #17188).
- MCP Apps and custom MCP servers gained richer support, including resource reads, tool-call metadata, custom-server tool search, server-driven elicitations, file-parameter uploads, and more reliable plugin cache refreshes (#16082, #16465, #16944, #17043, #15197, #16191, #16947).
- Remote/app-server workflows now support egress websocket transport, remote --cd forwarding, runtime remote-control enablement, sandbox-aware filesystem APIs, and an experimental codex exec-server subcommand (#15951, #16700, #16973, #16751, #17059, #17142, #17162).
- The TUI can copy the latest agent response with Ctrl+O, including better clipboard behavior over SSH and across platforms (#16966).
- /resume can now jump directly to a session by ID or name from the TUI (#17222).
- TUI notifications are more configurable, including Warp OSC 9 support and an opt-in mode for notifications even while the terminal is focused (#17174, #17175).
Bug Fixes
- The TUI starts faster by fetching rate limits asynchronously, and /status now refreshes stale limits instead of showing frozen or misleading quota information (#16201, #17039).
- Resume flows are more stable: the picker no longer flashes false empty states, uses fresher thread names, stabilizes timestamp labels, preserves resume hints on zero-token exits, and avoids crashing when resuming the current thread (#16591, #16601, #16822, #16987, #17086).
- Composer and chat behavior are smoother, including fixed paste teardown, CJK word navigation, stale /copy output, percent-decoded local file links, and clearer truncated exec-output hints (#16202, #16829, #16648, #16810, #17076).
- Fast Mode no longer stays stuck on after /fast off in app-server-backed TUI sessions (#16833).
- MCP status and startup are less noisy and faster: hyphenated server names list tools correctly, /mcp avoids slow full inventory probes, disabled servers skip auth probing, and residency headers are honored by codex mcp-server (#16674, #16831, #17098, #16952).
- Sandbox, network, and platform edge cases were tightened, including clearer read-only apply_patch errors, refreshed network proxy policy after sandbox changes, suppressed irrelevant bubblewrap warnings, a macOS HTTP-client sandbox panic fix, and Windows firewall address handling (#16885, #17040, #16667, #16670, #17053).
Documentation
- The README now uses the current ChatGPT Business plan name (#16348).
- Developer guidance for argument_comment_lint was updated to favor getting CI started instead of blocking on slow local lint runs (#16375).
- Obsolete codex-cli README content was removed to avoid stale setup guidance (#17096).
- codex exec --help now shows clearer usage and approval-mode wording (#16881, #16888).
Chores
- codex-core was slimmed down through major crate extractions for MCP, tools, config, model management, auth, feedback, protocol, and related ownership boundaries (#15919, #16379, #16508, #16523, #16962).
- Rust CI and workspace guardrails were simplified by blocking new crate features and dropping routine --all-features runs (#16455, #16473).
- Core compile times were reduced by removing expensive async-trait expansion from hot tool/task abstractions (#16630, #16631).
- Bazel diagnostics and dependency wiring improved with compact execution logs, repository-cache persistence, remote downloader support, and several platform-specific build fixes (#16577, #16926, #16928, #16634, #16744).
- Apr 9, 2026
- Date parsed from source:Apr 9, 2026
- First seen by Releasebot:Apr 9, 2026
- Modified by Releasebot:Apr 20, 2026
ChatGPT Enterprise/EDU by OpenAI
April 9, 2026
ChatGPT Enterprise/EDU releases GPT-5.3 Instant Mini as a smarter fallback model with more natural chat, stronger writing, and better context awareness.
Today, we’re releasing GPT-5.3 Instant Mini in ChatGPT. It replaces GPT-5 Instant Mini as the fallback model users reach after hitting their rate limits for GPT-5.3 Instant. Because it serves as a fallback, it won’t appear in the model picker.
Compared with GPT-5 Instant Mini, GPT-5.3 Instant Mini feels more natural in conversation, with stronger writing and contextual awareness throughout chats. It outperforms GPT-5 Instant Mini across a range of use cases.
Original source - Apr 9, 2026
- Date parsed from source:Apr 9, 2026
- First seen by Releasebot:Apr 9, 2026
April 9, 2026
ChatGPT releases GPT-5.3 Instant Mini in ChatGPT as a new fallback model with more natural conversation, stronger writing, and better contextual awareness. It also updates Pro and Plus plans with new Codex usage options and a $100/month Pro tier.
GPT-5.3 Instant mini in ChatGPT
Today, we’re releasing GPT-5.3 Instant Mini in ChatGPT. It replaces GPT-5 Instant Mini as the fallback model users reach after hitting their rate limits for GPT-5.3 Instant. Because it serves as a fallback, it won’t appear in the model picker.
Compared with GPT-5 Instant Mini, GPT-5.3 Instant Mini feels more natural in conversation, with stronger writing and contextual awareness throughout chats. It outperforms GPT-5 Instant Mini across a range of use cases.
New Pro plan options
We’re introducing a new $100/month Pro plan and updating how Codex usage works across Plus and Pro.
- Pro ($100/month): Built for longer, high-intensity Codex sessions. It includes unlimited access to GPT-5.4, access to GPT-5.4 Pro, and, for a limited time, up to 10x more Codex usage than Plus (up from the standard 5x usage allowance).
- Pro ($200/month): Our highest-usage option remains available and continues on its current Codex promotion through May 31.
- Plus ($20/month): Plus remains best for steady, day-to-day use. As the temporary Codex promotion on Plus ends, we’re rebalancing Plus usage to support more sessions across the week, rather than longer, high-intensity sessions on a single day.
You can upgrade, switch, or cancel your plan from Settings > My Plan or from the Pricing page.
Original source - Apr 8, 2026
- Date parsed from source:Apr 8, 2026
- First seen by Releasebot:Apr 20, 2026
ChatGPT Enterprise/EDU by OpenAI
April 8, 2026
ChatGPT Enterprise/EDU expands Outlook Email and Calendar support with delegated team workflows for shared mailboxes and shared calendars, including reading messages, moving mail, sending plain-text email, and managing calendar events with added admin controls.
The Outlook Email and Calendar apps for ChatGPT now support more delegated Outlook workflows for teams. With the right Microsoft permissions, users can ask ChatGPT to list and read shared mailbox messages, browse shared mailbox folders, mark shared mail read or unread, move shared messages, and send plain-text email from or on behalf of a shared mailbox. ChatGPT can also create, update, respond to, cancel, delete, and attach small files to events on shared Outlook calendars.
Workspace owners and admins should review Microsoft Entra permissions, RBAC access, and the Outlook app's action controls before enabling newly added actions. Users who previously connected Outlook may need to reconnect after the workspace enables the new actions; Microsoft Entra approval may also be required.
Learn more.
Original source