Google Release Notes

Last updated: Mar 6, 2026

All Google Release Notes (1069)

  • March 2026
    • No date parsed from source.
    • First seen by Releasebot:
      Mar 6, 2026

    Go by Google

    [security] Go 1.26.1 and Go 1.25.8 are released

    Go releases 1.26.1 and 1.25.8 bring five security fixes across crypto/x509, html/template, net/url and os. Highlights address certificate verification constraints, panics on malformed certs, meta tag URL escaping, IPv6 literal validation, and directory FileInfo root escapes. Includes download links.

    Go releases: Go 1.26.1 and 1.25.8

    Hello gophers,

    We have just released Go versions 1.26.1 and 1.25.8, minor point releases.

    These releases include 5 security fixes following the security policy:

    • crypto/x509: incorrect enforcement of email constraints
      When verifying a certificate chain that contains a certificate with
      multiple email address constraints (composed of the full email address)
      which share a common local portion (the portion of the address before
      the '@' character) but have different domain portions (the portion of
      the address after the '@' character), these constraints will not be
      properly applied, and only the last constraint will be considered.

      This can allow certificates in the chain containing email addresses which are
      either not permitted or excluded by the relevant constraints to be returned by
      calls to Certificate.Verify. Since the name constraint checks happen after chain
      building is complete, this only applies to certificate chains which chain to
      trusted roots (root certificates either in VerifyOptions.Roots or in the system
      root certificate pool), requiring a trusted CA to issue certificates containing
      either not permitted or excluded email addresses.

      This issue only affects Go 1.26.

      Thanks to Jakub Ciolek for reporting this issue.

      This is CVE-2026-27137 and Go issue https://go.dev/issue/77952.
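
      As context for where this check runs, here is a minimal sketch of the
      affected call path: building and verifying a chain against a trusted
      root pool, the only case in which name constraints are evaluated. The
      rootPEM and leafDER inputs are hypothetical placeholders.

        package main

        import (
            "crypto/x509"
            "fmt"
        )

        // verify chains a parsed leaf up to the supplied roots. Verify builds
        // candidate chains first and only then applies name constraints
        // (including the email constraints fixed in this release).
        func verify(rootPEM, leafDER []byte) error {
            roots := x509.NewCertPool()
            if !roots.AppendCertsFromPEM(rootPEM) {
                return fmt.Errorf("no roots parsed")
            }
            leaf, err := x509.ParseCertificate(leafDER)
            if err != nil {
                return err
            }
            _, err = leaf.Verify(x509.VerifyOptions{Roots: roots})
            return err
        }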

    • crypto/x509: panic in name constraint checking for malformed certificates
      Certificate verification can panic when a certificate in the chain has an empty
      DNS name and another certificate in the chain has excluded name constraints.
      This can crash programs that are either directly verifying X.509 certificate
      chains, or those that use TLS.

      Since the name constraint checks happen after chain building is complete, this
      only applies to certificate chains which chain to trusted roots (root
      certificates either in VerifyOptions.Roots or in the system root certificate
      pool), requiring a trusted CA to issue certificates containing malformed DNS
      names.

      This issue only affects Go 1.26.

      Thanks to Jakub Ciolek for reporting this issue.

      This is CVE-2026-27138 and Go issue https://go.dev/issue/77953.

    • html/template: URLs in meta content attribute actions are not escaped
      Actions which insert URLs into the content attribute of HTML meta tags are not
      escaped. This can allow XSS if the meta tag also has an http-equiv attribute
      with the value "refresh".

      A new GODEBUG setting has been added, htmlmetacontenturlescape, which can be
      used to disable escaping URLs in actions in the meta content attribute which
      follow "url=" by setting htmlmetacontenturlescape=0.

      This is CVE-2026-27142 and Go issue https://go.dev/issue/77954.
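
      As a minimal sketch of the template shape this fix targets (the values
      here are hypothetical, not the reporter's proof of concept): a value
      interpolated after "url=" in a refresh meta tag is now escaped as a
      URL, and GODEBUG=htmlmetacontenturlescape=0 restores the old behavior.

        package main

        import (
            "html/template"
            "os"
        )

        func main() {
            // A refresh meta tag whose content attribute embeds a
            // template-provided URL after "url=".
            t := template.Must(template.New("meta").Parse(
                `<meta http-equiv="refresh" content="0; url={{.}}">`))
            // With the fix, {{.}} is URL-escaped in this context.
            _ = t.Execute(os.Stdout, "https://example.com/?next=a&b=c")
        }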

    • net/url: reject IPv6 literal not at start of host
      The Go standard library function net/url.Parse insufficiently
      validated the host/authority component and accepted some invalid URLs
      by effectively treating garbage before an IP-literal as ignorable.
      The function should have rejected this as invalid.

      To prevent this behavior, net/url.Parse now rejects IPv6 literals
      that do not appear at the start of the host subcomponent of a URL.

      Thanks to Masaki Hara (https://github.com/qnighy) of Wantedly for
      reporting this issue.

      This is CVE-2026-25679 and Go issue https://go.dev/issue/77578.
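
      As a quick illustration (a sketch, assuming a patched toolchain): the
      second URL below places text before the IPv6 literal, so it should now
      fail to parse rather than be silently accepted.

        package main

        import (
            "fmt"
            "net/url"
        )

        func main() {
            for _, raw := range []string{
                "http://[::1]:8080/",    // valid: the literal starts the host
                "http://evil.com[::1]/", // invalid: text precedes the literal
            } {
                _, err := url.Parse(raw)
                fmt.Printf("%-24s err=%v\n", raw, err)
            }
        }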

    • os: FileInfo can escape from a Root
      On Unix platforms, when listing the contents of a directory using
      File.ReadDir or File.Readdir the returned FileInfo could reference
      a file outside of the Root in which the File was opened.

      The contents of the FileInfo were populated using the lstat system
      call, which takes the path to the file as a parameter. If a component
      of the full path of the file described by the FileInfo is replaced with
      a symbolic link, the target of the lstat can be directed to another
      location on the filesystem.

      The impact of this escape is limited to reading metadata provided by
      lstat from arbitrary locations on the filesystem. This could be used
      to probe for the presence or absence of files as well as gleaning
      metadata like file sizes, but does not permit reading or writing files
      outside the root.

      The FileInfo is now populated using fstatat.

      Thank you to Miloslav Trmač of Red Hat for reporting this issue.

      This is CVE-2026-27139 and Go issue https://go.dev/issue/77827.
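
      For context, here is a minimal sketch of the API involved, using a
      hypothetical /srv/jail directory: listing a directory through an
      os.Root, whose FileInfo results are now populated via fstatat relative
      to the root rather than lstat on a rebuilt path.

        package main

        import (
            "fmt"
            "log"
            "os"
        )

        func main() {
            root, err := os.OpenRoot("/srv/jail") // hypothetical root
            if err != nil {
                log.Fatal(err)
            }
            defer root.Close()

            dir, err := root.Open(".") // *os.File confined to the root
            if err != nil {
                log.Fatal(err)
            }
            defer dir.Close()

            entries, err := dir.ReadDir(-1) // FileInfo gathered via fstatat
            if err != nil {
                log.Fatal(err)
            }
            for _, e := range entries {
                if info, err := e.Info(); err == nil {
                    fmt.Println(info.Name(), info.Size())
                }
            }
        }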

    View the release notes for more information:
    https://go.dev/doc/devel/release#go1.26.1

    You can download binary and source distributions from the Go website:
    https://go.dev/dl/

    To compile from source using a Git clone, update to the release with
    git checkout go1.26.1
    and build as usual.

    Thanks to everyone who contributed to the releases.

    Cheers,
    Cherry and David for the Go team

  • Mar 6, 2026
    • Date parsed from source:
      Mar 6, 2026
    • First seen by Releasebot:
      Mar 7, 2026

    Google Workspace by Google

    Google Workspace Updates Weekly Recap - March 6, 2026

    Google highlights a week of Workspace updates. It covers improved join permission logging for Meet audit events, Gemini app conversation sharing via public links, tighter coupling of Calendar events and Meet links to protect artifacts, and new dynamic data sources for dropdowns in Google Chat apps.

    • Improved join permission logging for Google Meet Audit events

      Google Meet Audit event logging for endpoints will now also include the permission type used to grant access to join a meeting. | Learn more about improved join permission logging for Google Meet Audit events.

    • Workspace admins can allow Gemini app conversation sharing for their organizations

      Google Workspace admins can now enable users in their organization to share their Gemini chat conversations by creating public links to share and publish. | Learn more about how Workspace admins can allow Gemini app conversation sharing for their organizations.

    • Improving the connection between Google Calendar events and Google Meet calls

      Google is updating how Google Meet links to Calendar events to ensure meeting artifacts (recordings, notes, and chats) are shared with the correct people. This update resolves the ambiguity of reused codes, preventing sensitive meeting records from being shared with the wrong participants or lost entirely. | Learn more about how the connection between Google Calendar events and Google Meet calls is improved.

    • New dynamic data source support for dropdowns in Google Chat apps

      Google Chat developers can now use dynamic data sources for dropdown menus, allowing apps to query and filter external databases in real time as a user types. | Learn more about new dynamic data source support for dropdowns in Google Chat apps.

    The announcements above were published on the Workspace Updates blog over the last week. Please refer to the original blog posts for complete details.

  • Mar 6, 2026
    • Date parsed from source:
      Mar 6, 2026
    • First seen by Releasebot:
      Mar 6, 2026

    Gemini by Google

    The latest AI news we announced in February

    Google highlights February AI updates with Gemini 3.1 Pro and Nano Banana 2 launches, Flow enhancements, and Lyria 3 music generation. The roundup covers product releases and availability for developers and consumers, plus a Deep Think upgrade and new AI partnerships shaping user-facing tools.

    Here’s a recap of our biggest AI updates from February, including highlights from the AI Impact Summit in India and the releases of Gemini 3.1 Pro and Nano Banana 2.

    For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. Teams across Google are working on ways to unlock AI’s benefits in fields as wide-ranging as healthcare, crisis response and education. To keep you posted on our progress, we're doing a regular roundup of Google's most recent AI news.

    Here’s a look back at some of our AI announcements from February.

    For us, February was about global impact. At the AI Impact Summit in India, we demonstrated how our ongoing breakthroughs in AI are now solving real-world challenges for people everywhere — and we launched new partnerships and investments to make sure everyone benefits. We see AI as an enabling technology that can help people achieve their goals — whether you're a researcher, entrepreneur or Olympic athlete. On the slopes, in a research lab, or right in the palm of your hand, Google's latest AI announcements are here to help you.

    AI to help everyone dream bigger

    We announced new partnerships and investments at the AI Impact Summit. As world leaders gathered in New Delhi, India, we shared how we’re partnering to make AI work for everyone. That includes new Impact Challenges to help advance science and spark innovation for governments, as well as new national partnerships in India for AI and collaborations to accelerate scalable AI solutions in science and education.

    CEO Sundar Pichai delivered opening remarks at the AI Impact Summit. Sundar explained why “no technology has [him] dreaming bigger than AI” and called on leaders to pursue AI boldly, approach it responsibly, and work through this moment in AI’s development together. He shared ways that Google is ensuring everyone benefits with major infrastructure investments and new AI skills training.

    AI to help express your creativity

    We released Nano Banana 2, combining Pro image capabilities with Flash image speed. That means you can now access high-quality image generation with faster results across products like the Gemini app and Google Search. We’re also continuing to improve tools like SynthID to help you identify AI-generated content. Developers can now build with Nano Banana 2 and deploy sophisticated visual creation at scale with an amazing price-performance ratio.

    We released our most advanced music generation tools. Lyria 3 allows you to create custom music in the Gemini app. That means you can describe an idea or upload a photo or video, and Gemini will generate a 30-second track with custom cover art. On top of sharing that news, we also shared six tips to get you started prompting Lyria 3. And as an added creative tool, we also announced that ProducerAI is joining Google Labs. Whether you’re refining lyrics or a melody, ProducerAI is a music creation partner that can help turn your imagination into dynamic, comprehensive songs.

    We shared new ways to create images and videos in Flow. To help you generate, edit and animate images and videos in a single workspace, we’re bringing our top AI capabilities into Flow. You can create high-fidelity images and instantly use them as building blocks for video generation, all in one place. With an updated interface, it’s now even easier to search, filter and manage your assets.

    AI to help clarify and manage complex challenges

    We released Gemini 3.1 Pro to help tackle your most complex tasks. Gemini 3.1 Pro is a smarter, more capable baseline model for complex problem-solving, demonstrating more than double the reasoning performance of 3 Pro. It’s designed to help you when a simple answer isn’t enough, whether you’re looking for a clear, visual explanation of a topic, synthesizing data into a single view or pulling together a creative project. Gemini 3.1 Pro is available to developers, enterprises and consumers via various platforms.

    We released a major upgrade to Gemini 3 Deep Think. We collaborated with world-class scientists and researchers to improve Gemini 3 Deep Think. Designed specifically for the complexities of science and engineering, the updated Gemini 3 Deep Think excels where data is messy and solutions aren't black-and-white. It moves beyond abstract theory to deliver practical, actionable results for technical challenges. The new Deep Think is now available in the Gemini app for Google AI Ultra subscribers. Researchers, engineers and enterprises can express interest in early access to test Deep Think via the Gemini API.

    We shared our view on what’s required to achieve digital resilience in the AI era at MSC. New technologies mean new frontiers for strategic competition. We’re already seeing how threats are evolving, and how old ways of responding are failing to meet the moment. That’s why at the 62nd Munich Security Conference, Google President of Global Affairs Kent Walker called for a collaborative approach to security and outlined how partners could work together to build resilience without sacrificing control over their data.

    AI to help athletes (and their fans) elevate their game

    We shared how Google Cloud helped Team USA find their edge with AI. Ahead of the Olympic Winter Games, Google Cloud and Google DeepMind built an AI video analysis tool to help Team USA and U.S. Ski & Snowboard elite athletes analyze their tricks. Using Google DeepMind’s research into spatial intelligence, the platform maps an athlete’s motion directly from 2D video images — even through bulky winter gear. The tool, which runs on Google Cloud, processes this data in minutes, providing near real-time feedback that athletes and coaches could use to make adjustments and help elevate performance.

    We shared our new Gemini ad for football’s biggest weekend. In our national in-game spot, "New Home," a mother and son use Gemini to bring their new house to life, imagining how different spaces will look and feel. The spot, named by the Kellogg School as the best in-game ad in its annual ranking, played during the big game and highlighted just a few of the amazing things people can do — and are doing — with Gemini.

  • Mar 5, 2026
    • Date parsed from source:
      Mar 5, 2026
    • First seen by Releasebot:
      Mar 6, 2026

    Google Ads by Google

    Ask a Techspert: How does AI understand my visual searches?

    Google highlights a major leap in visual search with AI Mode and Circle to Search, enabling multi-object searches in images and simultaneous results. It explains the fan-out technique powering faster, cohesive image queries for uses from fashion to home decor.

    Visual search progress

    Visual search has improved by leaps and bounds — look no further than recent updates to Google Search. Here, a Google expert explains this progress and the technique we’ve used to make it happen.

    We’ve all been there: You see a photo of a perfectly styled living room or a well-curated street-style outfit, and you want to know where everything came from. Until recently, visual search was a one-item-at-a-time process. But a major update to Circle to Search and Lens now allows Google to break down and search for multiple objects within a single image simultaneously. This means if you use Circle to Search on Android to search for an entire outfit, you’ll see results for every component of a look, not just one piece at a time. In recent months, we’ve also launched several updates that enhance both visual search and image results in AI Mode, so you can better find inspiration as you search.

    To better understand these breakthroughs, we talked to Search Senior Engineering Director Dounia Berrada.

    What part of Search do you work on?

    I focus on multimodal search, aka Google Lens — essentially, enabling Google to help with your most complex questions about images, PDFs and anything you see. Visual search is redefining how we interact with information; Lens should be intelligent enough to understand the "why" behind your search, making it effortless to get help with what you see on your screen, or in the world around you. That means building a tool that can just as easily explain a complex math problem as it can identify a rare succulent or help you track down a pair of shoes you love.

    How does it do that?

    Imagine you’re redesigning a room, so you upload a photo of a mid-century modern space for inspiration. You probably aren’t just looking for the side table; you want to recreate the entire vibe. Previously, you’d have to search for the lamp, then the rug, then the chair individually. Now, AI Mode can break down that complex image, identify each individual piece and issue multiple visual searches simultaneously. You can see this in action right now using Circle to Search.

    What powers these types of visual search responses?

    Our advanced Gemini models make AI Mode possible, and its multimodal capabilities benefit from the visual expertise we've built into Lens over the years. When you search with an image, Gemini analyzes the image alongside your question to decide which tools to use. Let's say you're scrolling on your phone and see an outfit on social media that you love. When you search it, the model knows to use Lens to retrieve image results for the hat, shoes and jacket of the outfit simultaneously. It then weaves those individual results into one easy-to-read response.

    Think of it this way: The AI model acts as the "brain" that can “see” the image, while the visual search backend acts as the "library" containing billions of web results. The AI performs multi-object reasoning to understand what you’re looking at. Then it uses a "fan-out" technique which triggers multiple searches at once, reads through the results and presents a single, cohesive response with helpful links — all in seconds.

    Can you explain the fan-out technique?

    AI Mode is basically doing a dozen searches for you in the time it takes to do one. If you upload a photo of a garden you admire, you might have several questions: Will these plants survive in the shade? Are they right for my climate? How much maintenance do they need?

    Before, you’d ask those one by one. Now, AI Mode identifies all those necessary "fan-out" searches. This way, it gathers care requirements for every plant in the photo using helpful web results, breaks down the info and even suggests next steps you might want to take. Since AI Mode is uncovering more visual results from a single search, it's easier than ever to find just what you're looking for, and stumble upon something new that sparks your interest.
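
    To make the idea concrete, here is an illustrative sketch in Go (not
    Google's implementation) of a fan-out: several derived queries run
    concurrently, and the results are gathered before composing a single
    response. The search function and query list are hypothetical stand-ins.

      package main

      import (
          "fmt"
          "sync"
      )

      // search stands in for a real visual-search backend call.
      func search(q string) string {
          return "results for " + q
      }

      func main() {
          // Derived queries the model might fan out from one garden photo.
          queries := []string{
              "shade tolerance for hostas",
              "fern hardiness zones",
              "hydrangea watering needs",
          }
          results := make([]string, len(queries))
          var wg sync.WaitGroup
          for i, q := range queries {
              wg.Add(1)
              go func(i int, q string) { // one goroutine per fanned-out query
                  defer wg.Done()
                  results[i] = search(q)
              }(i, q)
          }
          wg.Wait() // gather every result before composing one response
          for _, r := range results {
              fmt.Println(r)
          }
      }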

    Do you have to start with an image to get this kind of help in AI Mode?

    Not at all! You can start with a simple text search in AI Mode, like "visual inspo for work outfits." When you see a result you like, you can just say, "Show me more options like the second skirt." The system immediately takes that specific image and begins the fan-out process from there.

    It definitely seems great for shopping — what else could you use it for?

    You could take a photo of a wall at a museum and ask for explanations of each painting. Or take a photo of a bakery window and ask what all the different pastries are. It’s about moving from "What is this one thing?" to "Explain this entire scene to me."

    Sounds like I’ve got some photos to take and a lot more to discover. I'm off to put these tools to the test!

  • Mar 5, 2026
    • Date parsed from source:
      Mar 5, 2026
    • First seen by Releasebot:
      Mar 6, 2026

    Antigravity by Google

    1.20.3

    Google releases stability and UI improvements with multiple enhancements, fixes, and patches.

    Stability and UI improvements.

    Improvements (3), Fixes (3), Patches (1)

  • Mar 4, 2026
    • Date parsed from source:
      Mar 4, 2026
    • First seen by Releasebot:
      Mar 5, 2026

    Google Meet by Google

    Improving the connection between Google Calendar events and Google Meet calls

    Google Meet now ties each video call to its initial Calendar event, fixing code reuse ambiguity. Artifacts like notes and chat stay with the original event’s guests; reusing a code on a new calendar won’t copy artifacts to new guests. Apple Calendar auto‑updates codes. Gradual rollout begins March 2026.

    For each video call, Meet attempts to connect the right Calendar event to determine:

    • Who receives meeting records (e.g., Gemini notes, recordings)
    • Who is included in the continuous meeting chat in Google Chat
    • Who can join the meeting without having to be manually admitted by the host

    Reusing the same meeting code across multiple events can sometimes lead to ambiguity and unexpected behavior, such as meeting artifacts being shared with the wrong guests (or no guests at all). We recently announced a change to reduce this ambiguity by no longer automatically copying Meet codes when duplicating Calendar events.

    We are now fixing this ambiguity by having each Meet video call be tied to the initial Calendar event where it was created. This gives predictability and transparency about which guests receive notes, messages in Google Chat, recordings and other details from the meeting.

    When users manually paste an old meeting code into a new Calendar event, they’ll see a dialog highlighting that the Meet code is still tied to the initial event. Codes created outside of Calendar (like instant meetings from meet.google.com) will remain unlinked.

    For example:

    • If you reuse the meeting code from an old Calendar event (Event A) on a new Calendar event (Event B), meeting artifacts will only be shared with the host, co-hosts, and guests of the old event (Event A), and not with guests of the new event (Event B).
    • If you reuse a meeting code created from meet.google.com on a new Calendar event, meeting artifacts will only be shared with the meeting’s host and co-hosts, and not with guests of the new Calendar event.

    Warnings shown when reusing a Meet code

    Additional details

    If you use Apple Calendar to create Google Calendar events with a Google Meet meeting code, the code will be updated automatically. This change ensures that each event uses a unique meeting code. Users receive an email informing them about the update.

    Getting started

    • Admins: There is no admin control for this feature.
    • End users: There is no end user setting for this feature. Visit the Help Center to learn more.

    Rollout pace

    Changes to behavior when creating a Google Calendar event with a meeting code in Apple Calendar

    • Rapid Release domains and Scheduled Release domains: Gradual rollout (up to 15 days for feature visibility) starting on March 9, 2026

    Changes to behavior when reusing meeting code in Google Calendar

    • Rapid Release domains and Scheduled Release domains: Gradual rollout (up to 15 days for feature visibility) starting on March 23, 2026

    Availability

    • Available to all Google Workspace customers and users with personal Google accounts

    Resources

    • Google Help: Learn about meeting codes in Calendar events
    • Google Help: Take notes for me in Google Meet
    • Google Help: Start or schedule a Google Meet video meeting
    • Google Workspace Updates Blog: Enhancing meeting privacy for copied Calendar events
    • Google Workspace Updates Blog: New to Google Meet: Continue your conversations in Google Chat
  • Mar 3, 2026
    • Date parsed from source:
      Mar 3, 2026
    • First seen by Releasebot:
      Mar 4, 2026
    • Modified by Releasebot:
      Mar 5, 2026

    Firebase by Google

    March 03, 2026

    Firebase AI Logic adds Gemini 3.1 Lite support with direct access via Gemini Developer API or Vertex AI Gemini API, plus a global location note. Firebase C++ SDK v13.5.0 lands with App Check enhancements and a Cloud Firestore bug fix and other changes.

    Firebase AI Logic

    Firebase AI Logic supports the latest Gemini model: gemini-3.1-flash-lite-preview. You can directly access this model in your mobile or web app via either the Gemini Developer API or the Vertex AI Gemini API.

    If you're using the Gemini Developer API, accessing this model does not require the pay-as-you-go Blaze pricing plan. If you access the model via the Vertex AI Gemini API, make sure to set the location in your request to global.
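
    Firebase AI Logic itself ships as mobile and web SDKs, but as a hedged
    server-side sketch of the same Vertex AI note, a Go client built on
    google.golang.org/genai would set the location to global in its client
    config. The project ID below is a placeholder.

      package main

      import (
          "context"
          "fmt"
          "log"

          "google.golang.org/genai"
      )

      func main() {
          ctx := context.Background()
          client, err := genai.NewClient(ctx, &genai.ClientConfig{
              Backend:  genai.BackendVertexAI,
              Project:  "my-project", // placeholder project ID
              Location: "global",     // required for this model per the note
          })
          if err != nil {
              log.Fatal(err)
          }
          resp, err := client.Models.GenerateContent(ctx,
              "gemini-3.1-flash-lite-preview", genai.Text("Hello"), nil)
          if err != nil {
              log.Fatal(err)
          }
          fmt.Println(resp.Text())
      }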

    SDK Releases

    The Firebase SDK for C++ (v13.5.0) is now available. This release includes new features in App Check, a bug fix in Cloud Firestore, and other changes.

  • Mar 3, 2026
    • Date parsed from source:
      Mar 3, 2026
    • First seen by Releasebot:
      Mar 4, 2026

    Gemini CLI by Google

    Release v0.32.1

    What's Changed

    • fix(patch): cherry-pick 0659ad1 to release/v0.32.0-pr-21042 to patch version v0.32.0 and create version 0.32.1 by @gemini-cli-robot in #21048

    Full Changelog: v0.32.0...v0.32.1

  • Mar 3, 2026
    • Date parsed from source:
      Mar 3, 2026
    • First seen by Releasebot:
      Mar 4, 2026

    Gemini CLI by Google

    Release v0.32.0

    Major release with extensive feature and stability updates across core, plan, and CLI. Highlights include improved A2A content extraction, parallel extension loading, enhanced plan editing, new workspace policy handling, and expanded telemetry and error handling.

    What's Changed

    • feat(plan): add integration tests for plan mode by @Adib234 in #20214
    • fix(acp): update auth handshake to spec by @skeshive in #19725
    • feat(core): implement robust A2A streaming reassembly and fix task continuity by @adamfweidman in #20091
    • feat(cli): load extensions in parallel by @scidomino in #20229
    • Plumb the maxAttempts setting through Config args by @kevinjwang1 in #20239
    • fix(cli): skip 404 errors in setup-github file downloads by @h30s in #20287
    • fix(cli): expose model.name setting in settings dialog for persistence by @achaljhawar in #19605
    • docs: remove legacy cmd examples in favor of powershell by @scidomino in #20323
    • feat(core): Enable model steering in workspace. by @joshualitt in #20343
    • fix: remove trailing comma in issue triage workflow settings json by @Nixxx19 in #20265
    • feat(core): implement task tracker foundation and service by @anj-s in #19464
    • test: support tests that include color information by @jacob314 in #20220
    • feat(core): introduce Kind.Agent for sub-agent classification by @abhipatel12 in #20369
    • Changelog for v0.30.0 by @gemini-cli-robot in #20252
    • Update changelog workflow to reject nightly builds by @g-samroberts in #20248
    • Changelog for v0.31.0-preview.0 by @gemini-cli-robot in #20249
    • feat(cli): hide workspace policy update dialog and auto-accept by default by @Abhijit-2592 in #20351
    • feat(core): rename grep_search include parameter to include_pattern by @SandyTao520 in #20328
    • feat(plan): support opening and modifying plan in external editor by @Adib234 in #20348
    • feat(cli): implement interactive shell autocompletion by @mrpmohiburrahman in #20082
    • fix(core): allow /memory add to work in plan mode by @Jefftree in #20353
    • feat(core): add HTTP 499 to retryable errors and map to RetryableQuotaError by @bdmorgan in #20432
    • feat(core): Enable generalist agent by @joshualitt in #19665
    • Updated tests in TableRenderer.test.tsx to use SVG snapshots by @devr0306 in #20450
    • Refactor Github Action per b/485167538 by @google-admin in #19443
    • fix(github): resolve actionlint and yamllint regressions from #19443 by @jerop in #20467
    • fix: action var usage by @galz10 in #20492
    • feat(core): improve A2A content extraction by @adamfweidman in #20487
    • fix(cli): support quota error fallbacks for all authentication types by @sehoon38 in #20475
    • fix(core): flush transcript for pure tool-call responses to ensure BeforeTool hooks see complete state by @krishdef7 in #20419
    • feat(plan): adapt planning workflow based on complexity of task by @jerop in #20465
    • fix: prevent orphaned processes from consuming 100% CPU when terminal closes by @yuvrajangadsingh in #16965
    • feat(core): increase fetch timeout and fix [object Object] error stringification by @bdmorgan in #20441
    • [Gemma x Gemini CLI] Add an Experimental Gemma Router that uses a LiteRT-LM shim into the Composite Model Classifier Strategy by @sidwan02 in #17231
    • docs(plan): update documentation regarding supporting editing of plan files during plan approval by @Adib234 in #20452
    • test(cli): fix flaky ToolResultDisplay overflow test by @jwhelangoog in #20518
    • ui(cli): reduce length of Ctrl+O hint by @jwhelangoog in #20490
    • fix(ui): correct styled table width calculations by @devr0306 in #20042
    • Avoid overaggressive unescaping by @scidomino in #20520
    • feat(telemetry) Instrument traces with more attributes and make them available to OTEL users by @heaventourist in #20237
    • Add support for policy engine in extensions by @chrstnb in #20049
    • Docs: Update to Terms of Service & FAQ by @jkcinouye in #20488
    • Fix bottom border rendering for search and add a regression test. by @jacob314 in #20517
    • fix(core): apply retry logic to CodeAssistServer for all users by @bdmorgan in #20507
    • Fix extension MCP server env var loading by @chrstnb in #20374
    • feat(ui): add 'ctrl+o' hint to truncated content message by @jerop in #20529
    • Fix flicker showing message to press ctrl-O again to collapse. by @jacob314 in #20414
    • fix(cli): hide shortcuts hint while model is thinking or the user has typed a prompt + add debounce to avoid flicker by @jacob314 in #19389
    • feat(plan): update planning workflow to encourage multi-select with descriptions of options by @Adib234 in #20491
    • refactor(core,cli): useAlternateBuffer read from config by @psinha40898 in #20346
    • fix(cli): ensure dialogs stay scrolled to bottom in alternate buffer mode by @jacob314 in #20527
    • fix(core): revert auto-save of policies to user space by @Abhijit-2592 in #20531
    • Demote unreliable test. by @gundermanc in #20571
    • fix(core): handle optional response fields from code assist API by @sehoon38 in #20345
    • fix(cli): keep thought summary when loading phrases are off by @LyalinDotCom in #20497
    • feat(cli): add temporary flag to disable workspace policies by @Abhijit-2592 in #20523
    • Disable expensive and scheduled workflows on personal forks by @dewitt in #20449
    • Moved markdown parsing logic to a separate util file by @devr0306 in #20526
    • fix(plan): prevent agent from using ask_user for shell command confirmation by @Adib234 in #20504
    • fix(core): disable retries for code assist streaming requests by @sehoon38 in #20561
    • feat(billing): implement G1 AI credits overage flow with billing telemetry by @gsquared94 in #18590
    • feat: better error messages by @gsquared94 in #20577
    • fix(ui): persist expansion in AskUser dialog when navigating options by @jerop in #20559
    • fix(cli): prevent sub-agent tool calls from leaking into UI by @abhipatel12 in #20580
    • fix(cli): Shell autocomplete polish by @jacob314 in #20411
    • Changelog for v0.31.0-preview.1 by @gemini-cli-robot in #20590
    • Add slash command for promoting behavioral evals to CI blocking by @gundermanc in #20575
    • Changelog for v0.30.1 by @gemini-cli-robot in #20589
    • Add low/full CLI error verbosity mode for cleaner UI by @LyalinDotCom in #20399
    • Disable Gemini PR reviews on draft PRs. by @gundermanc in #20362
    • Docs: FAQ update by @jkcinouye in #20585
    • fix(core): reduce intrusive MCP errors and deduplicate diagnostics by @spencer426 in #20232
    • docs: fix spelling typos in installation guide by @campox747 in #20579
    • Promote stable tests to CI blocking. by @gundermanc in #20581
    • feat(core): enable contiguous parallel admission for Kind.Agent tools by @abhipatel12 in #20583
    • Enforce import/no-duplicates as error by @Nixxx19 in https://github.com/google-g ...

    Full Changelog: v0.30.0...v0.30.1

  • Mar 3, 2026
    • Date parsed from source:
      Mar 3, 2026
    • First seen by Releasebot:
      Mar 4, 2026

    Gemini by Google

    March Pixel Drop: New personalization and AI tools

    March Pixel Drop unleashes Circle to Search, Gemini tasks and Magic Cue restaurant picks across Pixel phones and Watch, with new AI icon styles and At a Glance enhancements. Standalone Now Playing, Find Hub on Pixel Watch, one handed gestures, Express Pay and Earthquake/SOS alerts boost safety and convenience.

    Explore the latest Pixel features, including new ways to shop with Circle to Search, safety features on Pixel Watch and restaurant recommendations from Magic Cue.

    Our March Pixel Drop is here, and your Pixel devices just became even more intuitive, personal and helpful. Updates start rolling out today and will continue over the next several weeks. Check out what’s new below and learn about additional updates coming to Pixel devices in the latest Android launches.

    Get the full picture with Circle to Search

    Circle to Search is great at helping you learn more about something you see on your screen. And with multi-object image recognition, you can get inspired by everything you see. Whether you want to identify every plant in a botanical garden, find out more about characters in a movie trailer or learn about the different dishes in a bento box you see online, Circle to Search can now help you explore every detail of an image. See an outfit that you love while scrolling? You can use Circle to Search on your Pixel 10 device to find every piece of the look. Simply circle the entire outfit, and Google will help you find all of the items you see — from the warm wool maxi coat to the oxford shoes — all in one search.

    To take the guesswork out of shopping, once you find an item you like, you can easily try it on right in Circle to Search. Just select an eligible product and tap the new "Try It On" button right within the image result. You can either upload your own photo or select a model to see how the item looks.

    Offload your to-do list to Gemini

    Let Gemini handle your busy work. From ordering groceries to booking a rideshare service to reordering your usual coffee, Gemini works with your apps in the background to complete everyday tasks. And you have the option to view or end tasks, too, so you’re in control. This is available as a beta feature in the Gemini app.

    Find restaurant recommendations with Magic Cue and Gemini

    Finding the right place to eat just got a lot easier: When you’re texting with friends about restaurant ideas, Magic Cue prompts you to use Gemini to find the perfect spot. With just a tap, Gemini opens a new window within the chat with restaurant options based on your conversation, without you having to switch apps. You never have to leave the chat, so you can stay in the moment with your friends.

    Keep track of your tunes in the Now Playing app

    Your Pixel’s built-in music recognition tool is getting its own home. Now Playing is now a standalone app, so you can keep track of all your music — from new discoveries to old favorites. You can also use the history tab to see everything in one place and then play tracks in your preferred music app.

    See more at a glance

    Need quick updates on transit delays affecting your commute? Or real-time game scores? How about a look at your financial portfolio? At a Glance can now help you find the best route home, follow your favorite sports team, stay updated with Google Finance and so much more.

    Generate custom icons that suit your home screen

    Make your home screen even more you. Choose from five AI-generated styles that will give all the icons across your display a consistent aesthetic that matches your unique personality. You can also try them in the new SpongeBob SquarePants Theme Pack!

    Access AI features in more regions

    Scam Detection in Phone by Google is now available on Pixel phones in France, Italy, Spain, Mexico, Germany and Japan. Designed for your peace of mind, this feature recognizes speech patterns commonly used by fraudsters and alerts you when a conversation turns suspicious.

    And in India, Call Notes allows you to easily review your phone conversations. Your Pixel phone will record and transcribe your calls, making it easy for you to look back on calls whenever you need to.

    Protect your phone with your Pixel Watch

    Pixel Watch now works proactively to secure your phone: It sends instant alerts if you accidentally leave it behind and automatically locks your phone the moment you move out of range. Plus, a connected watch and phone now offer faster identity checks for smooth, secure access.

    Get the peace of mind that finding your belongings is always just a tap away. Now you can also find your misplaced devices and essentials in seconds, right from your wrist with Find Hub on Pixel Watch. Directly ring any device that’s within Bluetooth range, or if it’s out of range, you’ll get the device’s location and directions on a map.

    Use more AI-powered magic and one-handed gestures on Pixel Watches

    The intuitive, one-handed gestures we brought to Pixel Watch 4 are now expanding to Pixel Watch 3. Whether you’re carrying groceries or grabbing a coffee, double pinch or turn your wrist to answer calls, snap a photo or pause your workout music — all without touching the screen.

    Plus, pay even more conveniently without compromising security. Once you toggle on the new express pay feature, just turn and tap your Pixel Watch to the reader to buy — no need to open the Google Wallet app first!

    Stay safe and alert

    We’re expanding our suite of safety tools to make Pixel Watch even more essential. The new standalone earthquake alerts notify you of nearby earthquakes in real time and can provide seconds of warning so you can take protective action.

    Satellite SOS, first launched on Pixel Watch 4 in the contiguous U.S., is now available in Canada, Europe, Alaska and Hawaii.

    You can visit our community forum post for the full list of new features arriving on your Pixel devices this month.

