Codex Release Notes

Last updated: Jan 14, 2026

  • Jan 9, 2026
    • Parsed from source:
      Jan 9, 2026
    • Detected by Releasebot:
      Jan 14, 2026

    Codex by OpenAI

    0.80.0

This release adds conversation/thread fork endpoints, a requirement/list endpoint for reading requirements.toml, metrics capabilities, elevated sandbox onboarding, and explicit skill invocation through the V2 API. Bug fixes improve environment-variable inheritance, the review flow, patch approvals, model prompts, and Windows pasting. Documentation and changelog updates are included.

    New Features

    • Add conversation/thread fork endpoints in the protocol and app server so clients can branch a session into a new thread. (#8866)
    • Expose requirements via requirement/list so clients can read requirements.toml and adjust agent-mode UX. (#8800)
    • Introduce metrics capabilities with additional counters for observability. (#8318, #8910)
    • Add elevated sandbox onboarding with prompts for upgrade/degraded mode plus the /elevate-sandbox command. (#8789)
    • Allow explicit skill invocations through v2 API user input. (#8864)

    Bug Fixes

• Codex CLI subprocesses again inherit env vars like LD_LIBRARY_PATH / DYLD_LIBRARY_PATH to avoid runtime issues. As explained in #8945, failing to pass these environment variables along to subprocesses that expect them (notably GPU-related ones) was causing 10x+ performance regressions. Special thanks to @johnzfitch for the detailed investigation and write-up in #8945. (#8951)
    • /review in TUI/TUI2 now launches the review flow instead of sending plain text. (#8823)
    • Patch approval “allow this session” now sticks for previously approved files. (#8451)
    • Model upgrade prompt now appears even if the current model is hidden from the picker. (#8802)
• Windows paste handling now supports non-ASCII multiline input reliably. Special thanks to @occurrent for laying the groundwork for this fix in #8021! (#8774)
    • Git apply path parsing now handles quoted/escaped paths and /dev/null correctly to avoid misclassified changes. (#8824)
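The environment-inheritance fix above can be illustrated with a generic sketch (not Codex's actual code): a child process only sees loader variables such as LD_LIBRARY_PATH if the parent passes its environment along, and spawning with a stripped environment silently drops them.

```python
import os
import subprocess
import sys

# Parent environment with a loader variable set (path is a placeholder).
env = dict(os.environ)
env["LD_LIBRARY_PATH"] = "/opt/cuda/lib64"

# Passing the environment through: the child sees LD_LIBRARY_PATH.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('LD_LIBRARY_PATH', '<missing>'))"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())  # /opt/cuda/lib64

# Spawning with a stripped environment drops it instead, which is what
# broke GPU-dependent subprocesses.
stripped = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('LD_LIBRARY_PATH', '<missing>'))"],
    env={"PATH": os.environ.get("PATH", "")},
    capture_output=True, text=True,
)
print(stripped.stdout.strip())  # <missing>
```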

    Documentation

    • App-server README now documents skills support and usage. (#8853)
    • Skill-creator docs clarify YAML frontmatter formatting and quoting rules. (#8610)

    Changelog

    Full Changelog:
    rust-v0.79.0...rust-v0.80.0

    • fix: do not propose to add multiline commands to execpolicy (#8734) @tibo-openai
    • Enable model upgrade popup even when selected model is no longer in picker (#8802) @charley-oai
    • chore: stabilize core tool parallelism test (#8805) @tibo-openai
    • chore: silent just fmt (#8820) @jif-oai
    • fix: parse git apply paths correctly (#8824) @tibo-openai
    • fix: handle /review arguments in TUI (#8823) @tibo-openai
    • chore: rename unified exec sessions (#8822) @jif-oai
    • fix: handle early codex exec exit (#8825) @tibo-openai
    • chore: unify conversation with thread name (#8830) @jif-oai
    • Move tests below auth manager (#8840) @pakrym-oai
    • fix: upgrade lru crate to 0.16.3 (#8845) @bolinfest
    • Merge Modelfamily into modelinfo (#8763) @aibrahim-oai
    • remove unnecessary todos (#8842) @aibrahim-oai
    • Stop using AuthManager as the source of codex_home (#8846) @pakrym-oai
    • Fix app-server write_models_cache to treat models with less priority number as higher priority. (#8844) @aibrahim-oai
    • chore: drop useless feature flags (#8850) @jif-oai
    • chore: drop some deprecated (#8848) @jif-oai
    • [chore] update app server doc with skills (#8853) @celia-oai
    • fix: implement 'Allow this session' for apply_patch approvals (#8451) @owenlin0
    • Override truncation policy at model info level (#8856) @aibrahim-oai
    • Simplify error managment in run_turn (#8849) @aibrahim-oai
    • Add feature for optional request compression (#8767) @cconger
    • Clarify YAML frontmatter formatting in skill-creator (#8610) @darlingm
    • Warn in /model if BASE_URL set (#8847) @gt-oai
    • Support symlink for skills discovery. (#8801) @xl-openai
    • Feat: appServer.requirementList for requirement.toml (#8800) @shijie-oai
    • fix: update resource path resolution logic so it works with Bazel (#8861) @bolinfest
    • fix: use tokio for I/O in an async function (#8868) @bolinfest
    • add footer note to TUI (#8867) @iceweasel-oai
    • feat: introduce find_resource! macro that works with Cargo or Bazel (#8879) @bolinfest
    • Support UserInput::Skill in V2 API. (#8864) @xl-openai
    • add ability to disable input temporarily in the TUI. (#8876) @iceweasel-oai
    • fix: make the find_resource! macro responsible for the absolutize() call (#8884) @bolinfest
    • fix: windows can now paste non-ascii multiline text (#8774, #8855) @dylan-hurd-oai, @occurrent
    • chore: add list thread ids on manager (#8855) @jif-oai
    • feat: metrics capabilities (#8318) @jif-oai
    • fix: stabilize list_dir pagination order (#8826) @tibo-openai
    • chore: drop metrics exporter config (#8892) @jif-oai
    • chore: align error limit comment (#8896) @tibo-openai
    • fix: include project instructions in /review subagent (#8899) @tibo-openai
    • chore: add small debug client (#8894) @jif-oai
    • fix: leverage find_resource! macro in load_sse_fixture_with_id (#8888) @bolinfest
    • Avoid setpgid for inherited stdio on macOS (#8691) @seeekr
    • fix: leverage codex_utils_cargo_bin() in codex-rs/core/tests/suite (#8887) @bolinfest
    • chore: drop useless interaction_input (#8907) @jif-oai
    • nit: drop unused function call error (#8903) @jif-oai
    • feat: add a few metrics (#8910) @jif-oai
    • gitignore bazel-* (#8911) @zbarsky-openai
    • config requirements: improve requirement error messages (#8843) @gt-oai
    • fix: reduce duplicate include_str!() calls (#8914) @bolinfest
    • feat: add list loaded threads to app server (#8902) @jif-oai
    • [fix] app server flaky thread/resume tests (#8870) @celia-oai
    • clean: all history cloning (#8916) @jif-oai
    • otel test: retry WouldBlock errors (#8915) @gt-oai
    • Update models.json (#8792) @github-actions
    • fix: preserve core env vars on Windows (#8897) @tibo-openai
    • Add read-only when backfilling requirements from managed_config (#8913) @gt-oai
    • add tooltip hint for shell commands (!) (#8926) @fps7806
    • Immutable CodexAuth (#8857) @pakrym-oai
    • nit: parse_arguments (#8927) @jif-oai
    • fix: increase timeout for tests that have been flaking with timeout issues (#8932) @bolinfest
    • fix: correct login shell mismatch in the accept_elicitation_for_prompt_rule() test (#8931) @bolinfest
    • [fix] app server flaky send_messages test (#8874) @celia-oai
    • feat: fork conversation/thread (#8866) @apanasenko-oai
    • remove get_responses_requests and get_responses_request_bodies to use in-place matcher (#8858) @aibrahim-oai
    • [chore] move app server tests from chat completion to responses (#8939) @celia-oai
    • Attempt to reload auth as a step in 401 recovery (#8880) @pakrym-oai
    • fix: increase timeout for wait_for_event() for Bazel (#8946) @bolinfest
    • Elevated sandbox NUX (#8789) @iceweasel-oai
    • fix: treat null MCP resource args as empty (#8917) @tibo-openai
    • Add 5s timeout to models list call + integration test (#8942) @aibrahim-oai
    • fix: remove existing process hardening from Codex CLI (#8951) @bolinfest

  • Jan 7, 2026
    • Parsed from source:
      Jan 7, 2026
    • Detected by Releasebot:
      Dec 12, 2025
    • Modified by Releasebot:
      Jan 12, 2026

    Codex by OpenAI

    Codex CLI 0.79.0

  • Dec 19, 2025
    • Parsed from source:
      Dec 19, 2025
    • Detected by Releasebot:
      Dec 22, 2025
    • Modified by Releasebot:
      Jan 12, 2026

    Codex by OpenAI

    Agent skills in Codex

Codex adds agent skills: reusable bundles of instructions that help automate tasks, now available in the CLI and IDE. Invoke skills with $skill-name or let Codex choose one automatically; the experimental create-plan skill requires installation. Install skills per user or per repo, with built-in and curated skills shipped by OpenAI.

    Codex now supports agent skills

    Codex now supports agent skills: reusable bundles of instructions (plus optional scripts and resources) that help Codex reliably complete specific tasks.

    Skills are available in both the Codex CLI and IDE extensions.

    You can invoke a skill explicitly by typing $skill-name (for example, $skill-installer or the experimental $create-plan skill after installing it), or let Codex select a skill automatically based on your prompt.

    Learn more in the skills documentation.

    Folder-based standard (agentskills.io)

    Following the open agent skills specification, a skill is a folder with a required SKILL.md and optional supporting files:

    • my-skill/
      • SKILL.md # Required: instructions + metadata
      • scripts/ # Optional: executable code
      • references/ # Optional: documentation
      • assets/ # Optional: templates, resources
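Following the layout above, a minimal SKILL.md pairs YAML frontmatter with the instructions themselves. This is an illustrative sketch, not a spec excerpt: the `name` and `description` fields shown are the commonly required frontmatter fields in the open agent skills specification, and the body text is placeholder content.

```markdown
---
name: my-skill
description: One-line summary Codex uses to decide when to apply this skill.
---

# My skill

Step-by-step instructions the agent follows when this skill is invoked
(for example, with $my-skill). Reference supporting files in scripts/,
references/, or assets/ as needed.
```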

    Install skills per-user or per-repo

    You can install skills for just yourself in ~/.codex/skills, or for everyone on a project by checking them into .codex/skills in the repository.
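As a sketch of the per-user path above (the folder name `my-skill` and its contents are placeholders), installing a skill for yourself amounts to placing its folder under ~/.codex/skills:

```shell
#!/bin/sh
# Sketch: a per-user skill install is a folder under ~/.codex/skills.
# "my-skill" is a placeholder; a real skill needs a proper SKILL.md inside.
set -e
mkdir -p my-skill
echo '# placeholder SKILL.md' > my-skill/SKILL.md

mkdir -p "$HOME/.codex/skills"
cp -r my-skill "$HOME/.codex/skills/"
ls "$HOME/.codex/skills/my-skill"
```

A per-repo install is the same layout, checked into .codex/skills at the repository root instead.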

    Codex also ships with a few built-in system skills to get started, including $skill-creator and $skill-installer. The $create-plan skill is experimental and needs to be installed (for example: $skill-installer install https://github.com/openai/skills/tree/main/skills/.experimental/create-plan).

    Curated skills directory

    Codex ships with a small curated set of skills inspired by popular workflows at OpenAI. Install them with $skill-installer, and expect more over time.

  • Dec 18, 2025
    • Parsed from source:
      Dec 18, 2025
    • Detected by Releasebot:
      Dec 19, 2025
    • Modified by Releasebot:
      Jan 12, 2026

    Codex by OpenAI

    Introducing GPT-5.2-Codex

GPT-5.2-Codex is released as the new agentic coding model for complex software engineering. It improves long-horizon work through context compaction, large code changes such as refactors and migrations, performance in Windows environments, and cybersecurity capabilities; the CLI and IDE now default to gpt-5.2-codex, with API access coming soon.

    GPT-5.2-Codex Release Notes

Today we are releasing GPT-5.2-Codex, the most advanced agentic coding model yet for complex, real-world software engineering.

    GPT-5.2-Codex is a version of GPT-5.2 further optimized for agentic coding in Codex, including improvements on long-horizon work through context compaction, stronger performance on large code changes like refactors and migrations, improved performance in Windows environments, and significantly stronger cybersecurity capabilities.

    Starting today, the CLI and IDE Extension will default to gpt-5.2-codex for users who are signed in with ChatGPT. API access for the model will come soon.

If you have a model specified in your config.toml configuration file, you can instead try out gpt-5.2-codex for a new Codex CLI session using:

    codex --model gpt-5.2-codex
    

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5.2-Codex from the dropdown menu.

If you want to switch for all sessions, you can change your default model to gpt-5.2-codex by updating your config.toml configuration file:

model = "gpt-5.2-codex"
    
  • Dec 15, 2025
    • Parsed from source:
      Dec 15, 2025
    • Detected by Releasebot:
      Nov 25, 2025
    • Modified by Releasebot:
      Dec 18, 2025

    Codex by OpenAI

    Codex CLI 0.73.0

$ npm install -g @openai/[email protected]
    


  • Dec 10, 2025
    • Parsed from source:
      Dec 10, 2025
    • Detected by Releasebot:
      Dec 4, 2025
    • Modified by Releasebot:
      Jan 12, 2026

    Codex by OpenAI

    Codex CLI 0.69.0

  • Dec 4, 2025
    • Parsed from source:
      Dec 4, 2025
    • Detected by Releasebot:
      Dec 12, 2025
    • Modified by Releasebot:
      Dec 29, 2025

    Codex by OpenAI

    Introducing Codex for Linear

Assign or mention @Codex in an issue to kick off a Codex cloud task. As Codex works, it posts updates back to Linear and links to the completed task so you can review, open a PR, or keep working.

    Screenshot of a successful Codex task started in Linear

    Codex for Linear documentation

To learn more about how to connect Codex to Linear, both locally through MCP and through the new integration, check out the Codex for Linear documentation.

  • Nov 21, 2025
    • Parsed from source:
      Nov 21, 2025
    • Detected by Releasebot:
      Nov 25, 2025
    • Modified by Releasebot:
      Dec 4, 2025

    Codex by OpenAI

    Codex CLI 0.63.0

  • Nov 21, 2025
    • Parsed from source:
      Nov 21, 2025
    • Detected by Releasebot:
      Oct 7, 2025
    • Modified by Releasebot:
      Nov 22, 2025

    Codex by OpenAI

    Codex CLI Release: 0.63.0


    To install this version of Codex CLI, run:

    $ npm install -g @openai/[email protected]
    

    View full release on GitHub

