Google Ads Release Notes

Last updated: Mar 6, 2026

  • Mar 5, 2026
    • Date parsed from source:
      Mar 5, 2026
    • First seen by Releasebot:
      Mar 6, 2026

    Google Ads by Google

    Ask a Techspert: How does AI understand my visual searches?

    Google highlights a major leap in visual search with AI Mode and Circle to Search, enabling multi-object searches within a single image and simultaneous results for each item. It explains the fan-out technique powering faster, more cohesive image queries for uses from fashion to home decor.

    Visual search progress

    Visual search has improved by leaps and bounds — look no further than recent updates to Google Search. Here, a Google expert explains this progress and the technique we’ve used to make it happen.

    We’ve all been there: You see a photo of a perfectly styled living room or a well-curated street-style outfit, and you want to know where everything came from. Until recently, visual search was a one-item-at-a-time process. But a major update to Circle to Search and Lens now allows Google to break down and search for multiple objects within a single image simultaneously. This means if you use Circle to Search on Android to search for an entire outfit, you’ll see results for every component of a look, not just one piece at a time. In recent months, we’ve also launched several updates that enhance both visual search and image results in AI Mode, so you can better find inspiration as you search.

    To better understand these breakthroughs, we talked to Search Senior Engineering Director Dounia Berrada.

    What part of Search do you work on?

    I focus on multimodal search, aka Google Lens — essentially, enabling Google to help with your most complex questions about images, PDFs and anything you see. Visual search is redefining how we interact with information; Lens should be intelligent enough to understand the "why" behind your search, making it effortless to get help with what you see on your screen, or in the world around you. That means building a tool that can just as easily explain a complex math problem as it can identify a rare succulent or help you track down a pair of shoes you love.

    How does it do that?

    Imagine you’re redesigning a room so you upload a photo of a mid-century modern space for inspiration. You probably aren’t just looking for the side table; you want to recreate the entire vibe. Previously, you’d have to search for the lamp, then the rug, then the chair individually. Now, AI Mode can break down that complex image, identify each individual piece and issue multiple visual searches simultaneously. You can see this in action right now using Circle to Search.

    What powers these types of visual search responses?

    Our advanced Gemini models make AI Mode possible, and its multimodal capabilities benefit from the visual expertise we've built into Lens over the years. When you search with an image, Gemini analyzes the image alongside your question to decide which tools to use. Let's say you're scrolling on your phone and see an outfit on social media that you love. When you search it, the model knows to use Lens to retrieve image results for the hat, shoes and jacket of the outfit simultaneously. It then weaves those individual results into one easy-to-read response.

    Think of it this way: The AI model acts as the "brain" that can “see” the image, while the visual search backend acts as the "library" containing billions of web results. The AI performs multi-object reasoning to understand what you’re looking at. Then it uses a "fan-out" technique which triggers multiple searches at once, reads through the results and presents a single, cohesive response with helpful links — all in seconds.

    Can you explain the fan-out technique?

    AI Mode is basically doing a dozen searches for you in the time it takes to do one. If you upload a photo of a garden you admire, you might have several questions: Will these plants survive in the shade? Are they right for my climate? How much maintenance do they need?

    Before, you’d ask those one by one. Now, AI Mode identifies all those necessary "fan-out" searches. This way, it gathers care requirements for every plant in the photo using helpful web results, breaks down the info and even suggests next steps you might want to take. Since AI Mode is uncovering more visual results from a single search, it's easier than ever to find just what you're looking for, and stumble upon something new that sparks your interest.
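In code, the fan-out pattern amounts to decomposing one request into several sub-queries and running them in parallel before merging the results. The sketch below is purely illustrative: the `search` stub, the `fan_out` helper and the result strings are hypothetical stand-ins, not Google's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def search(query: str) -> str:
    # Stand-in for a real visual/web search backend (hypothetical).
    return f"results for {query!r}"

def fan_out(queries: list[str]) -> dict[str, str]:
    """Issue every sub-query at once and collect the results."""
    with ThreadPoolExecutor(max_workers=max(len(queries), 1)) as pool:
        results = pool.map(search, queries)
    return dict(zip(queries, results))

# One photo of a garden might decompose into several sub-questions:
sub_queries = [
    "shade tolerance of hostas",
    "hardiness zone for ferns",
    "maintenance needs of hydrangeas",
]
answers = fan_out(sub_queries)
for query, result in answers.items():
    print(f"{query} -> {result}")
```

The point of the pattern is latency: because the sub-queries are independent, a dozen of them can finish in roughly the time one takes.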

    Do you have to start with an image to get this kind of help in AI Mode?

    Not at all! You can start with a simple text search in AI Mode, like "visual inspo for work outfits." When you see a result you like, you can just say, "Show me more options like the second skirt." The system immediately takes that specific image and begins the fan-out process from there.

    It definitely seems great for shopping — what else could you use it for?

    You could take a photo of a wall at a museum and ask for explanations of each painting. Or take a photo of a bakery window and ask what all the different pastries are. It’s about moving from "What is this one thing?" to "Explain this entire scene to me."

    Sounds like I’ve got some photos to take and a lot more to discover. I'm off to put these tools to the test!

  • Mar 2, 2026
    • Date parsed from source:
      Mar 2, 2026
    • First seen by Releasebot:
      Mar 3, 2026

    Google Ads by Google

    VRC Non-Skip ads are now generally available, allowing brands to reach TV audiences with Google AI.

    Reaching YouTube viewers on TV gets easier: VRC Non-Skips are now generally available globally in Google Ads and Display & Video 360. AI-powered optimization tailors 6-, 15- and 30-second formats for the big screen, boosting reach and efficiency for CTV campaigns.

    We’re making it even easier to reach the millions of viewers enjoying YouTube in the living room — including the viewers that have made YouTube the #1 streamer in the U.S. for three years running. VRC Non-Skips are now generally available globally in Google Ads and Display & Video 360.

    Why this matters for your media mix:

    • Built for the big screen: Non-skips are optimized for CTV delivery and ensure your message is delivered in its entirety.
    • AI-powered optimization: Google AI dynamically optimizes between 6-second Bumpers, 15-second standard and 30-second CTV-only non-skippable ad formats, ensuring your campaign reaches the right audience at the right time.
    • Drive better performance: AI-powered precision helps drive greater efficiency across multiple non-skip ad formats, delivering more unique reach and impact compared with manual mixes of single-format campaigns.
  • Feb 26, 2026
    • Date parsed from source:
      Feb 26, 2026
    • First seen by Releasebot:
      Feb 26, 2026

    Google Ads by Google

    We’re expanding beta access to text guidelines for all advertisers globally in AI Max.

    AI Max expands beta access to text guidelines globally, with full language and vertical support. Advertisers can steer Google AI by defining terms to avoid and phrases to exclude, in their own words, to stay on brand. BYD increased leads by 24% at a 26% lower cost while keeping its creatives on-brand.

    AI-powered creatives and text guidelines expansion

    AI-powered creatives are essential for staying relevant in today’s complex search landscape, but above all they must meet your brand standards. That’s why we’re expanding beta access for text guidelines to all advertisers globally across AI Max for Search and Performance Max campaigns starting today, now with full language and vertical support.

    As text customization matches your creatives with intent, text guidelines ensure they remain precisely on-brand. You can now steer Google AI by defining specific terms to exclude or concepts to avoid, in your own words, with rules like “don’t imply our products are cheap” or “don’t use language like ‘only for’.” We’re exploring more ways for you to guide AI using everyday language.

    Brands like BYD are already scaling creatives with these controls in AI Max. They increased leads by 24% at a 26% lower cost, and text guidelines safeguarded their brand standards.

    High-quality creatives drive performance, and by pairing your unique insights with Google AI, your ads can stay meaningful across every new Search experience. Get started in AI Max today.

  • Feb 25, 2026
    • Date parsed from source:
      Feb 25, 2026
    • First seen by Releasebot:
      Feb 26, 2026

    Google Ads by Google

    See the whole picture and find the look with Circle to Search

    Circle to Search adds multi-object image search, letting you circle multiple items in a photo to identify each one at once and surface related products, outfits and deeper insights. Available now on Galaxy S26 and Pixel 10, with virtual try-on.

    New multi-object image search helps you find more items from one picture at the same time in Circle to Search.

    Harsh Kharbanda
    Director, Product Management, Search

    Since we launched Circle to Search, you have circled, scribbled and highlighted your way through billions of queries per month. It’s been a game changer for questions like “What are those shoes?” or “Where is this hiking trail?” — and it’s already a powerful tool for finding more information about anything on your Android’s screen.

    But we know that sometimes you aren't just looking for a single thing on your screen — you're looking for the whole thing. Like when you're redesigning a room, you don't want a single lamp, you’re trying to build an entire mid-century modern vibe in your living room. Today’s update levels up Circle to Search so you can now explore multiple objects in an image, all at once. Whether you’re curating a mood board, building an entire outfit or just satisfying your multi-layered curiosity, here’s how Circle to Search is getting a whole lot more helpful.

    Get inspired by everything you see

    Let’s say you're scrolling on your phone, and you see a breathtaking photo of a variety of vibrant, colorful fish. You want to explore more. Instead of wondering what's what, just circle all the fish on your screen and ask "what are all these fish, and how do they coexist?" Circle to Search will identify each unique species you've selected, from the Honeycomb Filefish to the Moon Jellyfish. Beyond just naming them and surfacing related images, it will explain the science behind their underwater community, and give you links out to the web to dive deeper.

    With this update, you'll see more visual results from a single search, which creates new opportunities for merchants and businesses to be discovered.

    Fashion is another popular use: shopping-related searches are among the top uses of Circle to Search. Say you see an outfit you love on social media and you want to replicate the vibe. Now, you can search for every piece — accessories, clothing and shoes — all at once.

    On your Samsung Galaxy S26 series or Pixel 10, just tap, scribble or circle an entire outfit to deconstruct the look. Circle to Search instantly identifies every component, finding similar items to jumpstart your shopping or style inspiration.

    Try things on virtually, however you search

    It’s also now easier to virtually try on items when inspiration strikes. In the countries where shoppers can already try on clothes from product listings across Google, now they can enter their virtual dressing room right from Circle to Search on the Samsung Galaxy S26 series or Pixel 10 devices. See an outfit on your social feed that you want to replicate? Just circle it, find the look, and select "Try On" to see it on you.

    Go under the hood: How this works

    This next-generation Circle to Search experience is made possible by Gemini 3's agentic planning, reasoning and tool capabilities, which also enhance our visual query fan-out technique. Instead of simply looking for a single match, the model now thinks through a multi-step plan to get you the best results for everything you search on your screen. It automatically identifies the most important parts of an image to crop, runs several searches at once, and cross-references what it finds to compile a final response — including images from across the web — for each item you’ve searched.
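As a rough mental model of that crop-then-search plan, here is a hypothetical sketch. `Region`, `detect_regions` and `visual_search` are invented stand-ins for the model's multi-object reasoning step and the search backend, not real Google APIs; only the shape of the pipeline (detect, fan out, group by item) reflects the description above.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Region:
    label: str                      # e.g. "lamp", "rug"
    box: tuple[int, int, int, int]  # crop rectangle: (left, top, right, bottom)

def detect_regions(image: bytes) -> list[Region]:
    # Stand-in for the model's multi-object reasoning step (hypothetical).
    return [Region("lamp", (0, 0, 50, 80)), Region("rug", (10, 60, 200, 120))]

def visual_search(region: Region) -> list[str]:
    # Stand-in for one backend image search over a cropped region.
    return [f"{region.label} match {i}" for i in range(2)]

def compile_response(image: bytes) -> dict[str, list[str]]:
    """Detect objects, fan out one search per crop, group results by item."""
    regions = detect_regions(image)
    with ThreadPoolExecutor() as pool:
        hits = pool.map(visual_search, regions)
    return {region.label: hit for region, hit in zip(regions, hits)}

print(compile_response(b"fake-image-bytes"))
```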

    Check out the latest improvements to Circle to Search, starting today on the new Samsung Galaxy S26 series and the latest Pixel 10 devices, and coming to more Android devices soon.

    POSTED IN:

    • Search
    • Shopping
    • AI
  • Feb 19, 2026
    • Date parsed from source:
      Feb 19, 2026
    • First seen by Releasebot:
      Feb 20, 2026

    Google Ads by Google

    New Meridian tool puts MMM insights directly in marketers' hands.

    Meridian adds Scenario Planner, a no‑code interface that lets marketers and data scientists test budget scenarios and see real‑time ROI from MMM insights. It turns analytics into actionable plans, making Meridian more transparent and widely accessible.

    Scenario Planner

    Nearly 40% of marketers surveyed say their organizations struggle to connect Marketing Mix Model (MMM) outputs to real-world business decisions, according to a recent Harvard Business Review Analytic Services report. Since introducing Meridian, our open-source MMM, we’ve been focused on addressing this long-standing challenge by making its insights accessible.

    Today we're introducing Scenario Planner to help decision makers and data scientists alike bridge the gap between analytics and planning.

    Scenario Planner is a user-friendly interface that allows marketers to experiment with different budget scenarios and see real-time ROI estimates — no coding required. It transforms the conversation from a look back at what happened to a collaborative plan for what’s next, regardless of technical expertise.

    By connecting marketing teams with the Scenario Planner, we’re making it easier than ever to use measurement insights to inform business decisions. Meridian has always been transparent; now, it’s truly accessible.

  • Jan 22, 2026
    • Date parsed from source:
      Jan 22, 2026
    • First seen by Releasebot:
      Feb 2, 2026

    Google Ads by Google

    See the newest product features in January’s Demand Gen Drop.

    Demand Gen expands to general availability with Shoppable CTV, Attributed Branded Searches, and Travel Feeds for hotels, enabling dynamic video ads and measurable impact. The updates aim to boost conversions while lowering CPA.

    Demand Gen improvements

    • Demand Gen powers Shoppable CTV, enabling viewers to seamlessly browse and purchase products while watching YouTube ads on the big screen. Demand Gen campaigns that include TV screens drive an average of 7% additional conversions at the same ROI.
    • Attributed Branded Searches is now available for Demand Gen, showing the volume of your campaign’s branded searches on Google/YouTube, to help quantify your impact. Reach out to your Google representative to activate.
    • You can now turn browsing into booking faster with Travel Feeds in Demand Gen. Simply connect your Hotel Center feed to build dynamic video ads featuring hotel pricing, ratings and availability.

    Demand Gen has been integral in helping advertisers like LG Electronics drive performance, achieving a 24% higher conversion rate than its paid social campaigns while reaching high-value customers at a 91% lower CPA.

    To learn more about Demand Gen improvements, visit Accelerate with Google.

  • Jan 15, 2026
    • Date parsed from source:
      Jan 15, 2026
    • First seen by Releasebot:
      Feb 2, 2026

    Google Ads by Google

    Campaign total budgets expands to more campaign types

    Campaign total budgets are in open beta across Search, Performance Max and Shopping, enabling you to set a target spend over a defined period and auto-optimize without daily tweaks. It helps promos and flights stay on budget while maximizing opportunity.

    Campaign total budgets

    Campaign total budgets are now available in open beta for Search, Performance Max and Shopping campaigns.

    Managing budgets for specific campaign flights — like product launches, sales events or promotional bursts — shouldn't require constant manual adjustments. Campaign total budgets let you set the budget you want over a specific period of time, from a few days to a few weeks.

    Instead of making daily manual tweaks to keep pace, campaign total budgets optimize your spend and aim to fully and effectively utilize your budget by your end date. Whether you’re running a 72-hour test or a month-long activation, you can launch with confidence knowing you won’t overspend or miss out on opportunities. Your campaign will stay on track for your budget goals without the need for daily maintenance.
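To see why this removes the need for daily maintenance, consider the simplest possible pacing rule: spread whatever budget remains evenly over the days left in the flight. This is an illustrative sketch only; Google Ads' actual pacing is more sophisticated than even pacing, and `daily_cap` is an invented helper.

```python
from datetime import date

def daily_cap(total_budget: float, end: date, spent: float, today: date) -> float:
    """Naive even pacing: spend the remaining budget evenly over the
    remaining days of the flight, recomputed each day."""
    remaining_days = (end - today).days + 1  # include today
    remaining_budget = max(total_budget - spent, 0.0)
    return remaining_budget / remaining_days

# A flight ending Jan 24 with a $500 total budget and $120 already
# spent by the morning of Jan 17 (8 days left, $380 remaining):
cap = daily_cap(500.0, date(2026, 1, 24), 120.0, date(2026, 1, 17))
print(cap)  # prints 47.5
```

Because the cap is recomputed from actual spend each day, underspending early automatically raises later daily caps, which is how the flight still lands on its total by the end date.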

    Case study: Escentual.com

    Escentual.com, a UK-based online beauty retailer, noticed their ads weren’t serving as often as they wanted during promotions, leaving budget underutilized. “Our goal was to increase traffic to our website during a promotion. The campaign total budget feature helped us to achieve a 16% increase in traffic without exceeding our budget or underperforming our target ROAS,” said Tom Jenkins, an Insights Manager at Escentual.com.

    We’re excited to see this help you hit your goals while giving you time back to focus on strategy.

  • January 2026
    • No date parsed from source.
    • First seen by Releasebot:
      Jan 12, 2026

    Google Ads by Google

    Let AI do the hard parts of your holiday shopping

    Learn more about new AI Google Shopping tools, including agentic checkout.

  • Dec 29, 2025
    • Date parsed from source:
      Dec 29, 2025
    • First seen by Releasebot:
      Dec 29, 2025

    Google Ads by Google

    The latest AI news we announced in December

    Google announces Gemini 3 Flash with frontier speed across apps and Search. New AI video verification in Gemini and live translation in Google Translate enhance trust and global reach. The update also unlocks Gemini Deep Research, Pro models in Search, and Nano Banana virtual try-on.

    December AI updates roundup

    Here’s a recap of our biggest AI updates from December, including the launch of Gemini 3 Flash, the release of new AI verification tools in the Gemini app and the arrival of Gemini’s powerful translation capabilities in Google Translate.

    For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. Teams across Google are working on ways to unlock AI’s benefits in fields as wide-ranging as healthcare, crisis response and education. To keep you posted on our progress, we're doing a regular roundup of Google's most recent AI news.

    Here’s a look back at some of our AI announcements from December.

    December is usually a time for reflection, and looking ahead. That’s why this month we’ve been focused on taking frontier intelligence out of the lab and putting it into your hands in ways that actually matter for your day-to-day. Whether it’s the lightning speed of Gemini 3 Flash helping you tackle tasks in seconds, the new video verification tools in the Gemini app or the simple relief of having GenTabs tame your open tabs, these updates share a single goal: making technology adapt to you, not the other way around. And as we push these boundaries, we’re staying grounded in responsibility — launching new tools to help you verify AI content so you can explore this new frontier with confidence.

    We released Gemini 3 Flash, featuring frontier intelligence built for speed. Gemini 3 Flash brings frontier intelligence to virtually every corner of the Google ecosystem, combining the speed of our most advanced models with improved reasoning capabilities to help with everyday tasks, all while keeping costs significantly lower. It's rolling out as the default model in the Gemini app and AI Mode in Search so people everywhere can now experience the incredible reasoning of our frontier model, right in our consumer products. And we’ve scaled this rollout to a global community, including developers building in the API, Antigravity, our new agentic development platform, and enterprise customers on Vertex AI.

    We added new AI verification tools for videos in the Gemini app. We’re bringing video verification capabilities directly to the Gemini app. People can now upload videos — up to 100 MB or 90 seconds — and simply ask if the content was generated or edited using Google AI. Gemini uses imperceptible SynthID watermarks to analyze both audio and visual tracks, pinpointing exactly which segments contain AI-generated elements.

    We announced a new experiment to improve browsing and manage complex online tasks. We’ve all felt the friction of juggling dozens of tabs to research a topic or plan a trip. Enter Disco, a new browsing experience from Google Labs designed to tame that complexity. Disco features GenTabs, an experiment that proactively synthesizes your open tabs and chat history to build custom, interactive web applications — transforming a scattered browser session into a streamlined tool for getting things done.

    We upgraded Gemini audio models for powerful voice interactions. The updated Gemini 2.5 Flash Native Audio is built to handle complex workflows and natural dialogue — meaning smoother conversations, higher accuracy and better responsiveness to instructions. It’s available now in AI Studio, Vertex AI, Gemini Live and, for the first time, Search Live. Plus, a new live speech translation beta in the Google Translate app brings live translation in 70+ languages directly to your headphones, preserving original intonation and pacing to unlock truly global communication.

    We released a new Gemini Deep Research agent. We brought a more powerful Gemini Deep Research to developers through the Interactions API. Developers can now embed advanced research capabilities — like navigating complex topics and synthesizing findings — directly into their own applications using a Gemini API key from Google AI Studio. We’ve also open-sourced our new DeepSearchQA benchmark, offering a transparent way to test just how comprehensive and effective research agents can be on web tasks. Plus, we shared how developers are already building mobile-first solutions to address real-world problems, from AI assistants for the visually impaired to tools fostering autonomy for people with cognitive disabilities.

    We released a new way for shoppers in the U.S. to use our virtual try-on tool. U.S. shoppers now have a more personalized way to find their next favorite outfit with our updated virtual try-on tool. Instead of needing a full-body photo, you can now upload a simple selfie and Nano Banana will generate a realistic, full-body digital version of you. Once you’ve selected your preferred studio-like image and clothing size, you can instantly see how you’d look in billions of products from our Shopping Graph.

    We expanded Gemini 3 Pro and Nano Banana Pro in Search. We brought our most intelligent model, Gemini 3, to AI Mode in Google Search in nearly 120 countries and territories in English. Google AI Pro and Ultra subscribers can visualize complex topics with Gemini 3 Pro by tapping “Thinking with 3 Pro” in the model drop-down in AI Mode. We also brought our generative imagery model, Nano Banana Pro, to AI Mode in more countries in English, starting with Google AI Pro and Ultra subscribers. For those in the U.S., we also expanded access to these Pro models (no subscription required), with higher usage limits for Google AI Pro and Ultra subscribers.

    We released the top YouTube trends of 2025 and first-ever personal Recap. YouTube celebrated its 20th birthday by looking back at 2025. MrBeast was the top creator for the sixth year running, while Rosé and Bruno Mars’ track "APT." became the fastest KPop video to hit one billion views. To mark the occasion, YouTube is launching its first-ever Recap so you can see a personalized summary of your year.

    We added new ways for you to personalize, create and share your Google Photos Recap. Google Photos Recap has returned to help you celebrate your favorite moments from 2025, now with more features to make the experience truly yours. We’ve added new controls that let you hide specific people or photos, ensuring your trip down memory lane is exactly how you want it. Plus, you can now get creative with exclusive templates in CapCut and easily share your finished masterpiece directly to WhatsApp or your favorite social feeds.

    We released Year in Search 2025. 2025 delivered history-making headlines — from the first American Pope to the global obsession with "KPop Demon Hunters" — but the quietest revolution happened right at our fingertips. Thanks to AI, this was the year we saw a massive shift toward natural, conversational questions with a surge in queries like “How do I…” and “What’s the deal with…” as AI helped technology finally catch up to the way we think.

  • Dec 11, 2025
    • Date parsed from source:
      Dec 11, 2025
    • First seen by Releasebot:
      Dec 12, 2025

    Google Ads by Google

    December Demand Gen Drop: Five things to know for the new year.

    Demand Gen gets AI-powered personalization and new acquisition goals to reach audiences as they stream, scroll and shop. Enhancements include auto-generated videos, local offers, checkout links and Web to App connect, plus channel controls across YouTube, Display, Discover and Gmail, and improved cross-platform conversions.

    Demand Gen enhancements for 2026 campaigns

    Demand Gen can help you reach new audiences while they're streaming, scrolling or shopping. On average, 68% of Demand Gen conversions came from users who did not see the brand's ads on Google Search in the 30 days prior to converting.

    We’ve made enhancements to Demand Gen this year, and today we’re sharing the top takeaways for your 2026 campaigns:

    • Make the most of AI-powered ad personalization by using optimized targeting or new customer acquisition goals to engage new or high-value customers.
    • Maximize campaign reach and effectiveness across Demand Gen inventory by using a variety of creative assets, including auto-generated videos.
    • Drive sales across your business with tools like local offers for store sales, checkout links for web sales, and Web to App connect for in-app purchases.
    • Choose where your ads appear with channel controls to tailor your campaigns across YouTube, the Google Display Network, Discover and Gmail.
    • Measure for growth using the platform comparable conversion columns, empowering you to make better cross-platform comparisons.

    To learn about Demand Gen updates, visit Accelerate with Google.


Related products