Social and Ad Platforms Release Notes
Release notes for social media advertising and marketing APIs
Latest Social and Ad Platforms Updates
- Mar 5, 2026
- Date parsed from source: Mar 5, 2026
- First seen by Releasebot: Mar 6, 2026
Google Ads by Google
Ask a Techspert: How does AI understand my visual searches?
Google highlights a major leap in visual search with AI Mode and Circle to Search, enabling multi-object searches in images and simultaneous results. It explains the fan-out technique powering faster, cohesive image queries for uses from fashion to home decor.
Visual search progress
Visual search has improved by leaps and bounds — look no further than recent updates to Google Search. Here, a Google expert explains this progress and the technique we’ve used to make it happen.
We’ve all been there: You see a photo of a perfectly styled living room or a well-curated street-style outfit, and you want to know where everything came from. Until recently, visual search was a one-item-at-a-time process. But a major update to Circle to Search and Lens now allows Google to break down and search for multiple objects within a single image simultaneously. This means if you use Circle to Search on Android to search for an entire outfit, you’ll see results for every component of a look, not just one piece at a time. In recent months, we’ve also launched several updates that enhance both visual search and image results in AI Mode, so you can better find inspiration as you search.
To better understand these breakthroughs, we talked to Search Senior Engineering Director Dounia Berrada.
What part of Search do you work on?
I focus on multimodal search, aka Google Lens — essentially, enabling Google to help with your most complex questions about images, PDFs and anything you see. Visual search is redefining how we interact with information; Lens should be intelligent enough to understand the "why" behind your search, making it effortless to get help with what you see on your screen, or in the world around you. That means building a tool that can just as easily explain a complex math problem as it can identify a rare succulent or help you track down a pair of shoes you love.
How does it do that?
Imagine you’re redesigning a room, so you upload a photo of a mid-century modern space for inspiration. You probably aren’t just looking for the side table; you want to recreate the entire vibe. Previously, you’d have to search for the lamp, then the rug, then the chair individually. Now, AI Mode can break down that complex image, identify each individual piece and issue multiple visual searches simultaneously. You can see this in action right now using Circle to Search.
What powers these types of visual search responses?
Our advanced Gemini models make AI Mode possible, and its multimodal capabilities benefit from the visual expertise we've built into Lens over the years. When you search with an image, Gemini analyzes the image alongside your question to decide which tools to use. Let's say you're scrolling on your phone and see an outfit on social media that you love. When you search it, the model knows to use Lens to retrieve image results for the hat, shoes and jacket of the outfit simultaneously. It then weaves those individual results into one easy-to-read response.
Think of it this way: The AI model acts as the "brain" that can “see” the image, while the visual search backend acts as the "library" containing billions of web results. The AI performs multi-object reasoning to understand what you’re looking at. Then it uses a "fan-out" technique which triggers multiple searches at once, reads through the results and presents a single, cohesive response with helpful links — all in seconds.
Can you explain the fan-out technique?
AI Mode is basically doing a dozen searches for you in the time it takes to do one. If you upload a photo of a garden you admire, you might have several questions: Will these plants survive in the shade? Are they right for my climate? How much maintenance do they need?
Before, you’d ask those one by one. Now, AI Mode identifies all those necessary "fan-out" searches. This way, it gathers care requirements for every plant in the photo using helpful web results, breaks down the info and even suggests next steps you might want to take. Since AI Mode is uncovering more visual results from a single search, it's easier than ever to find just what you're looking for, and stumble upon something new that sparks your interest.
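For the curious, here is what a fan-out looks like in miniature. This is purely illustrative, not Google's implementation: a stubbed `search` coroutine stands in for a real search backend, and the sub-queries are hypothetical ones a model might derive from a garden photo.

```python
import asyncio

async def search(query: str) -> list[str]:
    # Hypothetical stand-in for one visual/web search call;
    # a real system would hit a search backend here.
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"result for {query!r}"]

async def fan_out(sub_queries: list[str]) -> dict[str, list[str]]:
    # Issue every derived sub-query concurrently instead of one by one,
    # then zip the answers back together for a single combined response.
    results = await asyncio.gather(*(search(q) for q in sub_queries))
    return dict(zip(sub_queries, results))

# Hypothetical sub-queries derived from a single garden photo:
queries = ["hosta shade tolerance", "fern hardiness zones", "hydrangea watering needs"]
print(asyncio.run(fan_out(queries)))
```

The gain is latency: a dozen sub-searches complete in roughly the time of the slowest one, rather than their sum.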
Do you have to start with an image to get this kind of help in AI Mode?
Not at all! You can start with a simple text search in AI Mode, like "visual inspo for work outfits." When you see a result you like, you can just say, "Show me more options like the second skirt." The system immediately takes that specific image and begins the fan-out process from there.
It definitely seems great for shopping — what else could you use it for?
You could take a photo of a wall at a museum and ask for explanations of each painting. Or take a photo of a bakery window and ask what all the different pastries are. It’s about moving from "What is this one thing?" to "Explain this entire scene to me."
Sounds like I’ve got some photos to take and a lot more to discover. I'm off to put these tools to the test!
- Mar 2, 2026
- Date parsed from source: Mar 2, 2026
- First seen by Releasebot: Mar 3, 2026
Google Ads by Google
VRC Non-Skip ads are now generally available, allowing brands to reach TV audiences with Google AI.
Reaching YouTube viewers on TV gets easier with VRC Non-Skips, now generally available globally in Google Ads and Display & Video 360. AI-powered optimization tailors 6s, 15s, and 30s formats for big screens, boosting reach and efficiency for CTV campaigns.
We’re making it even easier to reach the millions of viewers enjoying YouTube in the living room — including the viewers who have made YouTube the #1 streamer in the U.S. for three years running. VRC Non-Skips are now generally available globally in Google Ads and Display & Video 360.
Why this matters for your media mix:
- Built for the big screen: Non-skips are optimized for CTV delivery and ensure your message is delivered in its entirety.
- AI-powered optimization: Google AI dynamically optimizes between 6-second Bumpers, 15-second standard and 30-second CTV-only non-skippable ad formats, ensuring your campaign reaches the right audience at the right time.
- Drive better performance: AI-powered precision helps drive greater efficiency across multiple non-skip ad formats, delivering more unique reach and impact compared with manual mixes of single-format campaigns.
- Feb 26, 2026
- Date parsed from source: Feb 26, 2026
- First seen by Releasebot: Feb 26, 2026
Google Ads by Google
We’re expanding beta access to text guidelines for all advertisers globally in AI Max.
AI Max expands beta access to text guidelines globally, with full language and vertical support. Advertisers can steer Google AI by defining, in their own words, terms to exclude and concepts to avoid so creatives stay on brand. BYD saw 24% more leads at a 26% lower cost, showing that brand-safe creatives can also drive performance.
AI-powered creatives and text guidelines expansion
AI-powered creatives are essential for staying relevant in today’s complex search landscape, but above all they must meet your brand standards. That’s why we’re expanding beta access for text guidelines to all advertisers globally across AI Max for Search and Performance Max campaigns starting today, now with full language and vertical support.
While text customization matches your creatives to search intent, text guidelines ensure they remain precisely on-brand. You can now steer Google AI by defining specific terms to exclude or concepts to avoid, in your own words, with rules like “don’t imply our products are cheap” or “don’t use language like ‘only for’.” We’re exploring more ways for you to guide AI using everyday language.
Brands like BYD are already scaling creatives with these controls in AI Max. They increased leads by 24% at a 26% lower cost, and text guidelines safeguarded their brand standards.
High-quality creatives drive performance, and by pairing your unique insights with Google AI, your ads can stay meaningful across every new Search experience. Get started in AI Max today.
- Feb 25, 2026
- Date parsed from source: Feb 25, 2026
- First seen by Releasebot: Feb 26, 2026
v23.1 (2026-02-25)
Google Ads v23.1 introduces new controls for partner domains and EU political ad declarations, plus AI text guidelines for Performance Max and Search. It expands campaign statuses, video settings, conversions tracking, benchmark metrics, and adds YouTubeVideoUpload API.
Account management
- Added advertising_partner_properties.allowed_domain to ProductLinkInvitation and ProductLink resources. The advertising partner will only be able to advertise on this domain.
- Added the contains_eu_political_advertising field to the Customer resource. This field retrieves the account-level declaration status of whether it contains political advertising targeted towards the EU, and returns an EuPoliticalAdvertisingStatusEnum.
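A minimal sketch of reading the new account-level field with the Python client library. The field name comes from these notes; the GAQL query shape, version string, and customer ID are assumptions.

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage(version="v23")
ga_service = client.get_service("GoogleAdsService")

# contains_eu_political_advertising is named in these notes; selecting it
# like any other Customer attribute is an assumption.
query = """
    SELECT
      customer.id,
      customer.contains_eu_political_advertising
    FROM customer
"""

for row in ga_service.search(customer_id="1234567890", query=query):
    print(row.customer.id, row.customer.contains_eu_political_advertising)
```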
Campaigns
- Added support for text guidelines, which can be used with Performance Max and Search campaigns to programmatically control AI-generated text assets.
- Added the Campaign.text_guidelines field to the Campaign resource.
- Within text_guidelines, you can define term_exclusions and messaging_restrictions (see the sketch after this list).
- Added CampaignPrimaryStatusReason.CAMPAIGN_NOT_BOOKED, CampaignPrimaryStatusReason.BOOKING_HOLD_EXPIRING, CampaignPrimaryStatusReason.BOOKING_HOLD_EXPIRED, and CampaignPrimaryStatusReason.BOOKING_CANCELLED, to provide primary status reasons for campaigns with the FIXED_CPM bidding strategy.
- Added Campaign.VideoCampaignSettings.reservation_ad_category_self_disclosure and Campaign.VideoCampaignSettings.booking_details (read-only).
- Added Campaign.missing_eu_political_advertising_declaration to support querying and filtering campaigns that are missing declarations about whether they contain political advertising targeted towards the EU.
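As referenced above, a hedged sketch of attaching text guidelines to an existing campaign via the Python client library. These notes name text_guidelines, term_exclusions, and messaging_restrictions but not their exact types, so the repeated-string shape, version string, and all IDs below are assumptions.

```python
from google.ads.googleads.client import GoogleAdsClient
from google.api_core import protobuf_helpers

client = GoogleAdsClient.load_from_storage(version="v23")
campaign_service = client.get_service("CampaignService")

operation = client.get_type("CampaignOperation")
campaign = operation.update
# Placeholder customer and campaign IDs.
campaign.resource_name = campaign_service.campaign_path("1234567890", "9876543210")

# Assumed shapes: the notes don't specify whether these are repeated
# strings or nested messages.
campaign.text_guidelines.term_exclusions.append("cheap")
campaign.text_guidelines.messaging_restrictions.append(
    "Don't imply our products are discounted."
)

# Only send the field we changed.
client.copy_from(
    operation.update_mask,
    protobuf_helpers.field_mask(None, campaign._pb),
)

response = campaign_service.mutate_campaigns(
    customer_id="1234567890", operations=[operation]
)
print(response.results[0].resource_name)
```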
Conversions
- Added ConversionActionCategory.YOUTUBE_FOLLOW_ON_VIEWS to support tracking users who watch an ad and later watch a video from the same channel.
General
- Added CANNOT_TARGET_ONLY_UNDETERMINED to CriterionErrorEnum. This error is returned when attempting to target only the undetermined category in demographics dimensions.
Incentives
- Added two new error codes to IncentiveErrorEnum: MAX_INCENTIVES_REDEEMED and ACCOUNT_TOO_OLD. These errors can be returned for requests made on or after March 11, 2026.
Planning
- Added support for date breakdowns in GenerateBenchmarksMetrics using a BreakdownDefinition.
- Added GOOGLE_DISPLAY_NETWORK as a targetable surface for Demand Gen Max Conversions in ReachPlanService.GenerateReachForecast.
- Added historical trend line information in TrendInsightDataPoint to TrendInsights in GenerateTrendingInsights when searching by topic.
Reports
- Added new metrics that report how many users saw your ad at least two, three, four, five or ten times: unique_users_two_plus, unique_users_three_plus, unique_users_four_plus, unique_users_five_plus, and unique_users_ten_plus (see the query sketch after this list).
- Added VERTICAL_ADS_DATA_FEED to SearchTermMatchSourceEnum to support vertical ad data feeds (e.g., Travel Ads entity targeting).
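As referenced above, a minimal GAQL sketch of pulling the new frequency metrics. The metric names come from these notes; querying them at the campaign level, the version string, and the customer ID are assumptions.

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage(version="v23")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      campaign.name,
      metrics.unique_users_two_plus,
      metrics.unique_users_five_plus,
      metrics.unique_users_ten_plus
    FROM campaign
    WHERE segments.date DURING LAST_30_DAYS
"""

for row in ga_service.search(customer_id="1234567890", query=query):
    print(row.campaign.name, row.metrics.unique_users_two_plus)
```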
YouTubeVideoUpload
- Added the YouTubeVideoUpload service to support uploading and managing videos on YouTube, and the YouTubeVideoUpload resource to support fetching upload status and metadata. This feature is only supported for REST and the Python client library.
- Feb 25, 2026
- Date parsed from source: Feb 25, 2026
- First seen by Releasebot: Feb 26, 2026
Google Ads by Google
See the whole picture and find the look with Circle to Search
Circle to Search adds multi-object image search, letting you circle multiple items in a photo to identify each item at once and surface related products, outfits, and deeper insights. Available now on Galaxy S26 and Pixel 10, with virtual Try On.
New multi-object image search helps you find more items from one picture at the same time in Circle to Search.
Harsh Kharbanda
Director, Product Management, Search

Since we launched Circle to Search, you have circled, scribbled and highlighted your way through billions of queries per month. It’s been a game changer for questions like “What are those shoes?” or “Where is this hiking trail?” — and it’s already a powerful tool for finding more information about anything on your Android’s screen.
But we know that sometimes you aren't just looking for a single thing on your screen — you're looking for the whole thing. Like when you're redesigning a room: you don't want a single lamp, you’re trying to build an entire mid-century modern vibe. Today’s update levels up Circle to Search so you can now explore multiple objects in an image, all at once. Whether you’re curating a mood board, building an entire outfit or just satisfying your multi-layered curiosity, here’s how Circle to Search is getting a whole lot more helpful.
Get inspired by everything you see
Let’s say you're scrolling on your phone, and you see a breathtaking photo of a variety of vibrant, colorful fish. You want to explore more. Instead of wondering what's what, just circle all the fish on your screen and ask "what are all these fish, and how do they coexist?" Circle to Search will identify each unique species you've selected, from the Honeycomb Filefish to the Moon Jellyfish. Beyond just naming them and surfacing related images, it will explain the science behind their underwater community, and give you links out to the web to dive deeper.
With this update, you'll see more visual results from a single search, which creates new opportunities for merchants and businesses to be discovered.
Fashion is another popular use; shopping-related searches are among the top uses of Circle to Search. Say you see an outfit you love on social media and you want to replicate the vibe. Now, you can search for every piece — accessories, clothing and shoes — all at once.
On your Samsung Galaxy S26 series or Pixel 10, just tap, scribble or circle an entire outfit to deconstruct the look. Circle to Search instantly identifies every component, finding similar items to jumpstart your shopping or style inspiration.
Try things on virtually, however you search
It’s also now easier to virtually try on items when inspiration strikes. In the countries where shoppers can already try on clothes from product listings across Google, now they can enter their virtual dressing room right from Circle to Search on the Samsung Galaxy S26 series or Pixel 10 devices. See an outfit on your social feed that you want to replicate? Just circle it, find the look, and select "Try On" to see it on you.
Go under the hood: How this works
This next-generation Circle to Search experience is made possible by Gemini 3's agentic planning, reasoning and tool capabilities, which also enhance our visual query fan-out technique. Instead of simply looking for a single match, the model now thinks through a multi-step plan to get you the best results for everything you search on your screen. It automatically identifies the most important parts of an image to crop, runs several searches at once, and cross-references what it finds to compile a final response — including images from across the web — for each item you’ve searched.
Check out the latest improvements to Circle to Search, starting today on the new Samsung Galaxy S26 series and the latest Pixel 10 devices, and coming to more Android devices soon.
- Feb 19, 2026
- Date parsed from source: Feb 19, 2026
- First seen by Releasebot: Feb 20, 2026
Google Ads by Google
New Meridian tool puts MMM insights directly in marketers' hands.
Meridian adds Scenario Planner, a no‑code interface that lets marketers and data scientists test budget scenarios and see real‑time ROI from MMM insights. It turns analytics into actionable plans, making Meridian more transparent and widely accessible.
Scenario Planner
Nearly 40% of marketers surveyed say their organizations struggle to connect Marketing Mix Model (MMM) outputs to real-world business decisions, according to a recent Harvard Business Review Analytic Services report. Since introducing Meridian, our open-source MMM, we’ve been focused on addressing this long-standing challenge by making its insights accessible.
Today we're introducing Scenario Planner to help decision makers and data scientists alike bridge the gap between analytics and planning.
Scenario Planner is a user-friendly interface that allows marketers to experiment with different budget scenarios and see real-time ROI estimates — no coding required. It transforms the conversation from a look back at what happened to a collaborative plan for what’s next, regardless of technical expertise.
By connecting marketing teams with the Scenario Planner, we’re making it easier than ever to use measurement insights to inform business decisions. Meridian has always been transparent; now, it’s truly accessible.
- Feb 18, 2026
- Date parsed from source: Feb 18, 2026
- First seen by Releasebot: Feb 19, 2026
Facebook Marketing API by Meta
Version 25.0
Marketing API v25.0 deprecates Advantage+ shopping campaigns and Advantage+ app campaigns starting May 19, 2026, guiding migration to the Advantage+ campaign structure. New asynchronous ad report error fields and an error_code type change enhance diagnostics; affected endpoints include campaign creation and copies, plus ad report retrieval.
Marketing API
February 18, 2026 | Available until TBD | Blog post
Advantage+ Campaigns
Advantage+ Shopping Campaigns and Advantage+ App Campaigns deprecation
Applies to v25.0+. Will apply to all versions May 19, 2026.
Creation, duplication, and updates to Advantage+ shopping campaigns and Advantage+ app campaigns are no longer allowed.
Refer to the Advantage+ Campaigns documentation to learn how to migrate your campaigns to Advantage+ campaigns or continue to create new campaigns using the Advantage+ structure.
The following endpoints are affected:
- POST /{ad-account-id}/campaigns
- POST /{campaign-id}/copies
Insights
Asynchronous Jobs
Applies to v25.0+.
The following new default fields will be returned when an asynchronous ad report fails:
- error_code: The error code
- error_message: A message corresponding to the error_code
- error_subcode: The specific subcode for the error
- error_user_title: A user-friendly title for the error subcode
- error_user_msg: A user-friendly message detailing the error subcode
For any developers with access to the error_code field, the type will be changed from uint to int.
See Insights API Asynchronous Jobs and Ads Insights API Error Codes for more information.
The following endpoints are affected:
- GET /{ad-report-run-id}
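A minimal sketch of polling a report run and reading the new error fields. The endpoint and field names come from these notes; the run ID and access token are placeholders.

```python
import requests

GRAPH = "https://graph.facebook.com/v25.0"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
report_run_id = "123456789012345"   # placeholder async report run ID

run = requests.get(
    f"{GRAPH}/{report_run_id}",
    params={"access_token": ACCESS_TOKEN},
).json()

# v25.0+: these fields are returned by default when the report fails.
if run.get("error_code") is not None:
    print("code:", run["error_code"])  # typed int as of v25.0 (was uint)
    print("subcode:", run.get("error_subcode"))
    print("message:", run.get("error_message"))
    # User-friendly strings, suitable for surfacing in a UI:
    print(run.get("error_user_title"), "-", run.get("error_user_msg"))
```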
- Feb 6, 2026
- Date parsed from source: Feb 6, 2026
- First seen by Releasebot: Feb 7, 2026
February 6, 2026
Added support for the enable_fb_login parameter in Instagram OAuth authorization requests. This allows developers to control whether the Facebook Login option is shown on the Instagram login page prior to authorization. The default value is true. See Business Login for Instagram for more information.
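A hedged sketch of building the authorization URL with the new parameter. The enable_fb_login name and its default come from this note; the app ID, redirect URI, and scope are placeholders, and the authorize endpoint is the one used by Business Login for Instagram.

```python
from urllib.parse import urlencode

params = {
    "client_id": "YOUR_INSTAGRAM_APP_ID",                 # placeholder
    "redirect_uri": "https://example.com/auth/callback",  # placeholder
    "response_type": "code",
    "scope": "instagram_business_basic",                  # placeholder scope
    # New: hide the Facebook Login option on the Instagram login page.
    # Omitting the parameter keeps the default of true.
    "enable_fb_login": "false",
}

auth_url = "https://www.instagram.com/oauth/authorize?" + urlencode(params)
print(auth_url)
```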
- Feb 1, 2026
- Date parsed from source: Feb 1, 2026
- First seen by Releasebot: Feb 16, 2026
- Modified by Releasebot: Mar 3, 2026
LinkedIn Marketing API by LinkedIn
February 2026 - Version 202602 (Latest)
LinkedIn introduces new attribution fields in the Videos API to track third-party templates and tools. Starting with 202602, the optional templateName and linkbackContext fields enable attribution and deep linking back to the original source. The Ad Campaigns API also adds MAX_QUALIFIED_LEAD for LEAD_GENERATION to target high-quality leads.
Product & Platform Announcements
Videos API: New Media Attribution Fields for Third-Party Integration
Starting with the 202602 version, we have introduced two optional fields to the /videos endpoint's initializeUploadRequest: templateName and linkbackContext. Developers can now track and attribute videos created using third-party tools, companies, or templates. You can provide attribution metadata using the following optional fields:
- templateName: A string identifying the name of the template used to create the video (Example: "Sunshine birthday wishes").
- linkbackContext: A string that provides context for linking back to the original source, such as a URL pointing to the specific template or tool.
These fields allow downstream systems to identify the source template or tool used during video creation and enable deep linking back to the original resource when needed. When a video is created with these attribution fields, LinkedIn can display the template name and provide a direct link to the source, enabling attribution and allowing advertisers and content creators to reference the tools or templates used during video creation.
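A minimal sketch of an initializeUpload call carrying the new fields. templateName and linkbackContext come from these notes; the access token, owner URN, and file size are placeholders, and owner/fileSizeBytes follow the standard Videos API request shape.

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

resp = requests.post(
    "https://api.linkedin.com/rest/videos?action=initializeUpload",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "LinkedIn-Version": "202602",  # the version that adds these fields
        "X-Restli-Protocol-Version": "2.0.0",
        "Content-Type": "application/json",
    },
    json={
        "initializeUploadRequest": {
            "owner": "urn:li:organization:123456",  # placeholder URN
            "fileSizeBytes": 1055736,               # placeholder size
            # New optional attribution fields in 202602:
            "templateName": "Sunshine birthday wishes",
            "linkbackContext": "https://example.com/templates/sunshine",
        }
    },
)
print(resp.status_code, resp.json())
```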
Ad Campaigns API & Ad Budget Pricing API: Qualified leads available as optimization target for Lead Generation campaign objective
Starting with the 202602 version, we have introduced MAX_QUALIFIED_LEAD as a supported value for the optimizationTargetType field used in the /adCampaigns and /adBudgetPricing endpoints when using LEAD_GENERATION as the campaign objectiveType. This optimizes lead generation campaigns to target high-quality leads using qualified-lead data sent from the Conversions API (where the conversion rule type must be QUALIFIED_LEAD) or CRM data shared from Business Manager.
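A hedged sketch of requesting the new optimization target when creating a Lead Generation campaign. Only the fields relevant here are shown; a real create call needs the remaining required campaign fields (type, budget, schedule, targeting), and the account URN and token are placeholders.

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

campaign = {
    "account": "urn:li:sponsoredAccount:123456789",  # placeholder URN
    "name": "Qualified-lead campaign",
    "objectiveType": "LEAD_GENERATION",
    # New in 202602: optimize delivery toward qualified leads. Requires
    # QUALIFIED_LEAD conversion-rule data via the Conversions API or CRM
    # data shared from Business Manager.
    "optimizationTargetType": "MAX_QUALIFIED_LEAD",
    # ...remaining required campaign fields omitted for brevity.
}

resp = requests.post(
    "https://api.linkedin.com/rest/adCampaigns",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "LinkedIn-Version": "202602",
        "X-Restli-Protocol-Version": "2.0.0",
        "Content-Type": "application/json",
    },
    json=campaign,
)
print(resp.status_code, resp.json())
```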
- Jan 30, 2026
- Date parsed from source: Jan 30, 2026
- First seen by Releasebot: Jan 31, 2026
January 30, 2026
[New] Page Integrity API
You can now get real-time integrity information for a page via the Page Integrity Webhook and API. This includes the integrity status, violations, restrictions, recommended actions (e.g. file an appeal) and appeal status.
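The note doesn't spell out the payload schema, so the following is only a skeleton: a standard Meta webhook receiver (verification handshake plus change handler) with illustrative field handling. The route path and verify token are placeholders; consult the Page Integrity documentation for the actual entry/changes contents.

```python
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = "my-verify-token"  # placeholder, configured in the App Dashboard

@app.route("/webhooks/page-integrity", methods=["GET"])
def verify():
    # Standard Meta webhook verification handshake.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "forbidden", 403

@app.route("/webhooks/page-integrity", methods=["POST"])
def receive():
    payload = request.get_json(silent=True) or {}
    # The entry/changes envelope is the standard webhook shape; the
    # integrity-specific fields inside each change are not documented here.
    for entry in payload.get("entry", []):
        for change in entry.get("changes", []):
            print("page:", entry.get("id"), "change:", change)
    return "ok", 200

if __name__ == "__main__":
    app.run(port=8080)
```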