Google Release Notes

Last updated: Dec 6, 2025

All Google Release Notes

  • Dec 5, 2025
    • Parsed from source:
      Dec 5, 2025
    • Detected by Releasebot:
      Dec 6, 2025

    Google Chrome by Google

    What's new in DevTools, Chrome 143

    DevTools MCP server launches v0.11.0 with new prompts that link the Elements and Network panels, improved console messages, and a press_key tool for keyboard events. Accessibility snapshots can be saved to disk, pages can be reloaded with cache control, and a --user-data-dir option supports existing profiles. The release also covers improved trace export, @starting-style support, a masonry editor widget, Lighthouse 13, and the preview channels.

    What's new in DevTools

    We landed various improvements for the DevTools MCP server and released v0.11.0.

    • When prompting in your MCP client (Gemini CLI, Cursor, ...), you can now reference elements and network requests selected in the Elements and Network panels
    • The list_console_messages tool now also includes issues surfaced in the Issues panel
    • The new press_key tool lets you debug keyboard events unrelated to form elements
    • Accessibility tree snapshots can now be saved to disk
    • Pages can now be reloaded, with the cache optionally ignored
    • Configure the --user-data-dir flag to use an existing Chrome profile (see the launch sketch below)

    See the public changelog on GitHub for the full list of changes and bug fixes, and learn more about the DevTools MCP server in the announcement blog post.
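
    For example, the profile flag is passed when the server is launched. A minimal sketch, assuming the server is distributed as the chrome-devtools-mcp npm package and started via npx; check the GitHub changelog for the exact flag set in v0.11.0:

      # Launch the DevTools MCP server against an existing Chrome profile
      npx chrome-devtools-mcp@latest --user-data-dir=/path/to/chrome-profile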

    Improved trace sharing

    When exporting a performance trace you can now include additional data in the exported file to ease further debugging for your future self or a colleague. You can now choose to include the following:

    • Resource content: A copy of all HTML, CSS, and JavaScript files (excluding extension scripts).
    • Script source maps: Mappings to authored code, allowing you to see original function names and source files.

    Learn what to share and what to keep private in our updated documentation.
    We'd like to thank our colleagues at Microsoft, who led the early work on this feature and whose collaboration made it possible.

    Support for @starting-style

    The Elements panel now has support for debugging the new CSS @starting-style rule, which is essential for creating entry animations.
    You can now see a starting-style adorner in the Elements tree next to relevant elements, toggle the element's starting-style state by clicking the pill, and inspect and debug the @starting-style block in the Styles tab.
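
    For reference, here is what such a rule looks like in practice: a minimal sketch of an entry animation, where the @starting-style block supplies the values the transition runs from on first render.

      /* Fade the toast in when it is first rendered: the transition runs
         from the @starting-style values to the element's normal styles. */
      .toast {
        opacity: 1;
        transition: opacity 0.5s;

        @starting-style {
          opacity: 0;
        }
      }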

    Editor widget for display: masonry

    If you experiment with CSS masonry layout, you can now use the same editor widget familiar from display: flex and display: grid layouts to quickly toggle through the various alignment options in masonry layouts.

    Lighthouse 13

    The Lighthouse panel now runs Lighthouse 13. With this milestone, the work of unifying performance insights across DevTools and Lighthouse concludes.
    Learn more in the announcement blog post. To learn what Lighthouse is useful for, and how it connects to the Performance panel in DevTools, see Lighthouse: Optimize your website.

    Download the preview channels

    Consider using Chrome Canary, Dev, or Beta as your default development browser. These preview channels give you access to the latest DevTools features, let you test cutting-edge web platform APIs, and help you find issues on your site before your users do!

    Get in touch with the Chrome DevTools team

    Use the following options to discuss the new features, updates, or anything else related to DevTools.

    • Submit feedback and feature requests to us at crbug.com.
    • Report a DevTools issue using the More options > Help > Report a DevTools issue in DevTools.
    • Tweet at @ChromiumDev.
    • Leave comments on What's new in DevTools YouTube videos or DevTools Tips YouTube videos.

  • Dec 5, 2025
    • Parsed from source:
      Dec 5, 2025
    • Detected by Releasebot:
      Dec 6, 2025

    Gemini by Google

    Google announces higher Antigravity rate limits for Google AI Pro, Ultra subscribers

    Google AI Pro and Ultra subscribers get higher rate limits for Google Antigravity, with priority access and quotas that refresh every five hours. Free-plan users get a larger weekly limit so they hit rate caps less often, while Gemini 3 Pro and all features remain available.

    Google Antigravity rate limits update

    Google AI Pro and Ultra subscribers now have higher rate limits for Google Antigravity.

    The response to Google Antigravity — our new agentic development platform — has been incredible and we’re working to meet this demand. One way we’re doing this is by offering enhanced support for Google AI subscribers.

    Google AI Pro and Ultra subscribers now receive priority access, featuring our highest, most generous rate limits with quotas that refresh every five hours.

    For users on the free plan, we’ve shifted to a larger, weekly rate limit to make it less likely you hit rate limits in the middle of a project. Remember, usage is correlated with the "work done" by the agent; straightforward tasks consume less quota than complex reasoning.

    Regardless of your tier, all users will continue to enjoy Gemini 3 Pro, unlimited tab code completions and access to all product features, such as the Agent Manager and Browser integration.

  • Dec 5, 2025
    • Parsed from source:
      Dec 5, 2025
    • Detected by Releasebot:
      Dec 6, 2025

    Gemini by Google

    15 examples of Gemini 3’s reasoning, coding and agentic capabilities

    Google unveils Gemini 3, a multimodal AI that helps you learn, build and plan with real-time tool use across apps and Search. With the Gemini 3 Pro preview and new interactive interfaces, it promises powerful productivity and hands-on capabilities.

    Learn anything

    • Break down technical scientific topics with coded visualizations
      Gemini 3 is state-of-the-art on multimodal understanding and has a 1 million-token context window. So it can take any kind of input you give it — from text to video to code and beyond — and help you learn in ways that make sense for you, like with an interactive guide based on a dense research paper.

    • Get presentation coaching tailored to you
      With just a recording of your practice presentation and your slides, Gemini 3 can act as your presentation coach. The model uses advanced reasoning to not only understand and evaluate your performance, but also applies its in-depth knowledge to offer constructive and actionable advice.

    • Dive deep into a scientific concept
      AI Mode in Search now uses Gemini 3’s reasoning power, multimodal understanding and generative UI capabilities to dynamically create the ideal layout for your questions. When the model detects that an interactive tool will help you better understand the topic, it codes a custom simulation or tool in real-time and adds it into your response. That way it can show you what RNA does instead of just telling you, plus you’ll see links to keep exploring content from across the web.

    • Learn with detailed infographics
      Nano Banana Pro (Gemini 3 Pro Image) uses Gemini 3’s state-of-the-art reasoning and real-world knowledge to visualize information better than ever before and help you learn about new subjects. You can ask it to create content like infographics for anything from the weather in a specific city, to how to make Elaichi Chai, to the care and keeping of your latest house plant.

    • Make strides in your hobby
      With Gemini 3’s massive long-context window, state-of-the-art reasoning capabilities, and vision and spatial understanding, you can upload a video of yourself playing a sport for up to an hour and receive coach-level advice. Gemini 3 will identify that you’re the player, filter out noise and offer a detailed visual analysis, complete with information like form evaluation and suggested drills.

    • Generate custom interfaces to explore different concepts
      Gemini 3’s reasoning and multimodal capabilities have enabled generative interfaces like dynamic view, a new experiment in the Gemini app. Dynamic view uses the model’s agentic coding capabilities to design and code a custom user interface in real-time, perfectly suited to your prompt. For example, ask Gemini to “explain the Van Gogh Gallery with life context for each piece,” and you'll receive a stunning, interactive response that lets you tap, scroll and learn in ways static text can’t.

    • Evoke complex scientific topics with art
      It can be tough to wrap your mind around something like, say, nuclear fusion. But Gemini 3 can deeply comprehend this kind of nuanced scientific topic, then express that understanding creatively through both code and poetry simultaneously.

    Build anything

    • Vibe code rich, interactive web UI
      Gemini 3 is exceptional at zero-shot generation. It handles the heavy lifting of multi-step planning and coding details, allowing you to focus on the creative vision. With natural language, describe what you have in mind, like a website to promote a retro dance night. The model’s significantly improved complex instruction following and deep tool use can translate your high-level idea into an interactive landing page with a single prompt.

    • Make a static image interactive
      Thanks to Gemini 3, a static image can become a board game, a napkin sketch can turn into a full website and a diagram can take on new life through an interactive lesson. The model’s deep multimodal understanding can interpret when the content of an image might be most compelling in an interactive format, then translate it to be functional.

    • Illustrate massive differences in scale
      You know a sub-atomic particle is tiny, and the galaxy is huge. But with Gemini 3, you can build a way to see those differences in size. The model excels at coding because of its ability to synthesize disparate pieces of information and follow complex, creative instructions. It also understands the intent behind your idea, allowing you to go from a rough concept to a functional starting point in a single step.

    • Generate code that works IRL
      With our new agentic development platform Google Antigravity, you can build faster and manage intelligent agents that operate across the editor, terminal and browser. Antigravity uses Gemini 3’s advanced reasoning, tool use and agentic coding capabilities to act as your partner, generating code complex enough to work in the real world — like in the case of the classic cartpole problem.

    Plan anything

    • Create a custom trip itinerary
      Visual layout is another experimental generative interface experience in the Gemini app. It moves beyond text with an immersive, magazine-style view featuring photos and modules that you can interact with to further customize your response. For instance, ask it to “plan a 3-day trip to Rome,” and you get an explorable itinerary tailored to your preferences.

    • Get help with what’s important to you
      Gemini 3’s generative UI capabilities in Search can also help build tools for your more practical questions. Now, AI Mode can build you a custom interactive loan calculator to compare two different options and see which offers the most long-term savings, and it lets you change and customize the inputs to get even more insights.

    • Achieve inbox zero
      Gemini Agent is an experimental feature that handles multi-step tasks directly inside the Gemini app, and it can help with things like triaging your emails. Built on insights from Project Mariner and using Gemini 3’s advanced reasoning, Gemini breaks down complex requests using tools like Deep Research; Canvas; your Google Workspace connected apps like Gmail and Calendar; and live web browsing. When using it, you remain in control: Gemini is designed to seek confirmation before critical actions, and you can take over anytime.

    • Make the most of your weekends
      With Gemini 3 in Search, you can ask even more complicated questions and get richer, more visual and helpful responses in AI Mode. Let’s say you have very specific conditions in mind for catching some waves — put those guidelines into AI Mode. Gemini 3 once again uses its multimodal understanding and agentic coding to generate the most helpful layout on the fly, even building visual elements like images, tables and grids to help you choose. Surf’s up!

  • Dec 5, 2025
    • Parsed from source:
      Dec 5, 2025
    • Detected by Releasebot:
      Dec 6, 2025

    Gemini API by Google

    December 5, 2025

    • Gemini 3 billing for Grounding with Google Search will begin on January 5, 2026.
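
    For reference, grounding is enabled per request via the Google Search tool. A minimal sketch using the google-genai Python SDK; the model id is an assumption for the example, so substitute the Gemini 3 variant you actually target:

      from google import genai
      from google.genai import types

      client = genai.Client()  # reads GEMINI_API_KEY from the environment

      # Attach the Google Search tool so the response is grounded in live results.
      response = client.models.generate_content(
          model="gemini-3-pro-preview",  # assumed model id for this sketch
          contents="What did Google announce about Gemini 3 this week?",
          config=types.GenerateContentConfig(
              tools=[types.Tool(google_search=types.GoogleSearch())],
          ),
      )
      print(response.text)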
  • Dec 5, 2025
    • Parsed from source:
      Dec 5, 2025
    • Detected by Releasebot:
      Dec 6, 2025

    Gemini by Google

    The latest AI news we announced in November

    Google’s November AI roundup announces the Gemini 3 launch, Nano Banana Pro, the Antigravity platform, smarter Maps navigation, SIMA 2 progress, WeatherNext 2 and new AI shopping features, plus major AI investments.

    November AI updates recap

    Here’s a recap of some of our biggest AI updates from November, including the launch of Gemini 3, debut of Nano Banana Pro and major new investments in AI infrastructure.

    For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. Teams across Google are working on ways to unlock AI’s benefits in fields as wide-ranging as healthcare, crisis response and education. To keep you posted on our progress, we're doing a regular roundup of Google's most recent AI news.

    Here’s a look back at some of our AI announcements from November.

    November ushered in a new era of intelligence: with the launch of Gemini 3, we're improving everyone's ability to learn, plan or just get things done.

    From "vibe coding" with our powerful models, to generating professional grade visuals with Nano Banana Pro, to the autonomous workflows in our new Google Antigravity platform, the distance between a spark of imagination and reality has never been shorter. So, whether you’re a developer building complex agents or a traveler planning for the upcoming holidays with Canvas in AI Mode, these updates turn AI into a proactive partner ready to help you get things done, all season long.

    Gemini 3

    We released Gemini 3, AI for a new era of intelligence. Gemini 3 is built to bring any idea to life. It represents the next step in our work to push the frontiers of intelligence, agentic experiences and personalization — so that AI is truly helpful for everyone. Gemini 3 is the best model in the world for multimodal understanding and it's our most powerful agentic and vibe coding model to date. For developers, Gemini 3 Pro outperforms previous versions across major AI benchmarks. Gemini 3’s upgraded smarts and new capabilities are now available in the Gemini app, and you can review our hub with all the Gemini 3 announcements.

    We made Gemini 3 available in Google Search for our most intelligent search yet. Gemini 3’s state-of-the-art reasoning is now available in Google Search, starting with AI Mode — marking the first time we brought a Gemini model to Search on day one. Gemini 3 grasps depth and nuance, and unlocks new experiences in Search with dynamic visual layouts, interactive tools and simulations tailored specifically for your query. Google AI Pro and Ultra subscribers in nearly 120 countries and territories can use Gemini 3 Pro in English by selecting “Thinking with 3 Pro” from the model drop-down menu in AI Mode.

    Nano Banana Pro

    We unveiled Nano Banana Pro, built on Gemini 3. Nano Banana Pro, our newest image generation and editing model, is built on Gemini 3 and moves beyond spontaneous art into an era of high-fidelity, studio-quality visuals. You now have the choice between the original Nano Banana for fun, fast editing, or Pro for an even more powerful creative partner capable of handling complex tasks demanding the highest quality. To help get you started, we shared seven tips to get the most out of Nano Banana Pro.

    Google Antigravity

    We introduced Google Antigravity, a new agentic development platform. Antigravity is a platform designed to give developers an AI-powered coding experience that goes beyond simple editing. It delivers a new agent-first interface for deploying agents that autonomously plan, execute and verify complex tasks. Our vision for Antigravity is to enable anyone with an idea to experience liftoff and build that idea into reality. You can try Antigravity for yourself today in public preview.

    Google Maps with Gemini

    We announced that Google Maps is getting smarter with Gemini. With the help of Gemini, you will soon have the first, hands-free, conversational driving experience in Google Maps that allows you to find places, report traffic, ask for suggestions along your route and more using just your voice. Plus, new landmark-based navigation will give you clear directions, so in addition to hearing “turn right in 500 feet,” you’ll also get directions based on helpful landmarks like “turn right after the Thai Siam Restaurant.” Landmark-based navigation is rolling out now on Android and iOS in the U.S., and Gemini in navigation on Google Maps is rolling out everywhere Gemini is available.

    Gemini in Android Auto

    We announced that Gemini has started rolling out in Android Auto. Android Auto is already available in over 250 million cars on the road, and Gemini is now coming along for the ride to make life on the road even better. You’ll be able to use natural language to add stops, send messages, access emails, create playlists and even brainstorm ideas while driving. Just make sure you have the Gemini app on your phone and look for the tooltip on your car display.

    SIMA 2

    We introduced SIMA 2, a significant step toward Artificial General Intelligence (AGI). SIMA 2 is a major milestone in our work to create general and helpful AI agents. By integrating the advanced capabilities of Gemini into SIMA 2, the model is evolving from an instruction-follower into an interactive gaming companion. Now, SIMA 2 can follow human-language instructions in virtual worlds and also think about its goals, converse with users and improve itself over time, marking an important step in the direction of robotics and AI-embodiment in general.

    WeatherNext 2

    We released WeatherNext 2: our most advanced weather forecasting model. WeatherNext 2 can generate forecasts 8x faster and with temporal resolution down to one hour, and we’re already using this breakthrough technology to support weather agencies in making decisions.

    AlphaFold anniversary

    We marked the 5-year anniversary of AlphaFold cracking the protein folding problem, and spotlighted its ongoing impact. Five years ago AlphaFold 2 solved the protein structure prediction problem. The profound scientific and societal value of this work was recognized in 2024 with the Nobel Prize in Chemistry. In November, we looked back at how AlphaFold has unlocked new avenues of biological research and provided our first major proof point that AI can be a powerful tool to advance science.

    AI in Search planning

    We announced new ways to plan travel with AI in Search. Our new AI features in Search help you build the perfect itinerary, from snagging a great deal to turning your plans into actual bookings. To get started with planning, try the Canvas tool in AI Mode: Describe the kind of trip you want and what recommendations you need. Then hit "Create Canvas," and watch your custom travel plan come together.

    AI shopping in Search and Gemini

    We added new AI shopping features in Search and Gemini to help with the holidays. Our biggest upgrade to shopping means you can now use conversational AI and agentic AI to take the hard work out of your holiday shopping. Using AI Mode in Search to shop, you can now describe what you’re looking for and get an intelligently organized response that brings together rich visuals, price, reviews, inventory info, and more. Plus, new AI agents can now call stores to check stock and use agentic checkout to buy items automatically from eligible merchants when the price is right.

    AI learning and education commitments

    We announced new commitments to AI learning and education. At our AI for Learning Forum in London, we announced $30 million in new funding for learning. The event brought together experts in education and technology and continued our recent work to develop AI in a way that improves learning outcomes. We also released popular new features to help with daily studying, including the ability to create flashcards and quizzes right in the NotebookLM app.

    Gemini Live tips

    We shared essential tips for using Gemini Live. Gemini Live’s latest updates allow you to have more natural, two-way conversations with AI. Our tips show how to make the most of these updates by tailoring your learning, especially for complex subjects. You can now adjust Gemini's speech speed to learn at your own pace and improve accessibility. You can also practice a new language, rehearse for a job interview or even liven your conversation up with a fun accent.

    $40 billion investment in Texas for AI and cloud infrastructure

    We announced a new $40 billion investment in Texas for AI and cloud infrastructure. CEO of Google and Alphabet Sundar Pichai and Texas Governor Greg Abbott made the announcement at an event in Midlothian, TX. This latest announcement represented the capstone of our 2025 push to make AI investments that unlock economic opportunity, advance scientific breakthroughs and create opportunities that benefit everyone. It includes major investments across America, as well as Europe, Africa and the Asia-Pacific region — alongside a critical US workforce initiative to train 100,000 electrical workers and create 30,000 new apprentices.

  • Dec 5, 2025
    • Parsed from source:
      Dec 5, 2025
    • Detected by Releasebot:
      Dec 6, 2025

    Gemini by Google

    Gemini 3 Pro: the frontier of vision AI

    Gemini 3 Pro delivers the most capable multimodal AI yet, uniting document, spatial, screen and video understanding with true visual reasoning. New features include precise pointing, open vocabulary references and media_resolution controls for smarter assistants and automated workflows.

    1. Document understanding

    Real-world documents are messy, unstructured, and difficult to parse — often filled with interleaved images, illegible handwritten text, nested tables, complex mathematical notation and non-linear layouts. Gemini 3 Pro represents a major leap forward in this domain, excelling across the entire document processing pipeline — from highly accurate Optical Character Recognition (OCR) to complex visual reasoning.

    Intelligent perception

    To truly understand a document, a model must accurately detect and recognize text, tables, math formulas, figures and charts regardless of noise or format.

    A fundamental capability is "derendering" — the ability to reverse-engineer a visual document back into structured code (HTML, LaTeX, Markdown) that would recreate it. Gemini 3 demonstrates accurate perception across diverse modalities, from converting an 18th-century merchant log into a complex table to transforming a raw image with mathematical annotations into precise LaTeX code.
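
    As an illustration of how a derendering request is typically phrased, here is a minimal sketch using the google-genai Python SDK; the file name and model id are assumptions for the example:

      from google import genai
      from google.genai import types

      client = genai.Client()

      # Load a scanned page and ask the model to reverse-engineer its structure.
      with open("merchant_log.jpg", "rb") as f:
          page = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

      response = client.models.generate_content(
          model="gemini-3-pro-preview",  # assumed model id for this sketch
          contents=[page, "Derender this page into Markdown, preserving the table structure."],
      )
      print(response.text)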

    Sophisticated reasoning

    Users can rely on Gemini 3 to perform complex, multi-step reasoning across tables and charts — even in long reports. In fact, the model notably outperforms the human baseline on the CharXiv Reasoning benchmark (80.5%).

    To illustrate this, imagine a user analyzing the 62-page U.S. Census Bureau "Income in the United States: 2022" report with the following prompt: “Compare the 2021–2022 percent change in the Gini index for "Money Income" versus "Post-Tax Income", and what caused the divergence in the post-tax measure, and in terms of "Money Income", does it show the lowest quintile's share rising or falling?”


    2. Spatial understanding

    Gemini 3 Pro is our strongest spatial understanding model so far. Combined with its strong reasoning, this enables the model to make sense of the physical world.

    • Pointing capability: Gemini 3 has the ability to point at specific locations in images by outputting pixel-precise coordinates. Sequences of 2D points can be strung together to perform complex tasks, such as estimating human poses or tracing trajectories over time (see the request sketch after this list).
    • Open vocabulary references: Gemini 3 identifies objects and their intent using an open vocabulary. The most direct application is robotics: the user can ask a robot to generate spatially grounded plans like, “Given this messy table, come up with a plan on how to sort the trash.” This also extends to AR/XR devices, where the user can request an AI assistant to “Point to the screw according to the user manual.”
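
    A minimal sketch of a pointing request using the google-genai Python SDK. The JSON answer format and the 0-1000 normalized coordinate convention follow Google's published spatial-understanding examples; the file name and model id are assumptions:

      from google import genai
      from google.genai import types

      client = genai.Client()

      with open("workbench.jpg", "rb") as f:
          image = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

      prompt = (
          "Point to the screw described in the user manual. Answer as JSON: "
          '[{"point": [y, x], "label": "<name>"}], with coordinates normalized '
          "to 0-1000."
      )
      response = client.models.generate_content(
          model="gemini-3-pro-preview",  # assumed model id for this sketch
          contents=[image, prompt],
      )
      print(response.text)  # e.g. [{"point": [412, 588], "label": "screw"}]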

    3. Screen understanding

    Gemini 3 Pro’s spatial understanding really shines when applied to desktop and mobile OS screens. This reliability helps make computer use agents robust enough to automate repetitive tasks. UI understanding capabilities can also enable tasks like QA testing, user onboarding and UX analytics. The following computer use demo shows the model perceiving and clicking with high precision.

    Task: Summarize the total revenue for each promotion type in a new sheet (Sheet2) with the promotion names as the column headers using the Pivot Table feature.

    4. Video understanding

    Gemini 3 Pro takes a massive leap forward in how AI understands video, the most complex data format we interact with. It is dense, dynamic, multimodal and rich with context.

    1. High frame rate understanding: We have optimized the model to be much stronger at understanding fast-paced actions when sampling at more than 1 frame per second. Gemini 3 Pro can capture rapid details — vital for tasks like analyzing golf swing mechanics.

    By processing video at 10 FPS, 10x the default sampling rate, Gemini 3 Pro catches every swing and shift in weight, unlocking deep insights into player mechanics (see the request sketch after this list).

    2. Video reasoning with “thinking” mode: We upgraded "thinking" mode to go beyond object recognition toward true video reasoning. The model can now better trace complex cause-and-effect relationships over time. Instead of just identifying what is happening, it understands why it is happening.

    3. Turning long videos into action: Gemini 3 Pro bridges the gap between video and code. It can extract knowledge from long-form content and immediately translate it into functioning apps or structured code.
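
    A minimal sketch of the high-frame-rate request described in item 1, using the google-genai Python SDK; the sampling rate is raised through per-part video metadata, and the file name and model id are assumptions:

      from google import genai
      from google.genai import types

      client = genai.Client()

      # Upload the clip, then sample it at 10 FPS instead of the 1 FPS default.
      # (For longer uploads, poll client.files.get(name=video.name) until ACTIVE.)
      video = client.files.upload(file="golf_swing.mp4")
      response = client.models.generate_content(
          model="gemini-3-pro-preview",  # assumed model id for this sketch
          contents=types.Content(parts=[
              types.Part(
                  file_data=types.FileData(file_uri=video.uri, mime_type=video.mime_type),
                  video_metadata=types.VideoMetadata(fps=10),
              ),
              types.Part(text="Evaluate my swing mechanics and suggest drills."),
          ]),
      )
      print(response.text)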

    5. Real-world applications

    Here are a few ways we think various fields will benefit from Gemini 3’s capabilities.

    Education

    Gemini 3 Pro’s enhanced vision capabilities drive significant gains in the education field, particularly for the diagram-heavy questions central to math and science. It successfully tackles the full spectrum of multimodal reasoning problems found from middle school through post-secondary curricula. This includes visual reasoning puzzles (like Math Kangaroo) and complex chemistry and physics diagrams.

    Gemini 3’s visual intelligence also powers the generative capabilities of Nano Banana Pro. By combining advanced reasoning with precise generation, the model, for example, can help users identify exactly where they went wrong in a homework problem.

    Medical and biomedical imaging

    Gemini 3 Pro stands as our most capable general model for medical and biomedical imagery understanding, achieving state-of-the-art performance across major public benchmarks in MedXpertQA-MM (a difficult expert-level medical reasoning exam), VQA-RAD (radiology imagery Q&A) and MicroVQA (a multimodal reasoning benchmark for microscopy-based biological research).

    Law and finance

    Gemini 3 Pro’s enhanced document understanding helps professionals in finance and law tackle highly complex workflows. Finance platforms can seamlessly analyze dense reports filled with charts and tables, while legal platforms benefit from the model's sophisticated document reasoning.

    “We’re impressed by Gemini 3's improvements in advanced legal reasoning, especially its ability to understand and edit contracts with complex redlines. This has been particularly valuable for our in-house customers due to the high volume and variability of the legal contracts they handle.”

    Harvey.ai

    6. Media resolution control

    Gemini 3 Pro improves the way it processes visual inputs by preserving the native aspect ratio of images. This drives significant quality improvements across the board.

    Additionally, developers gain granular control over performance and cost via the new media_resolution parameter. This allows you to tune visual token usage to balance fidelity against consumption:

    • High resolution: Maximizes fidelity for tasks requiring fine detail, such as dense OCR or complex document understanding.
    • Low resolution: Optimizes for cost and latency on simpler tasks, such as general scene recognition or long-context tasks.

    For specific recommendations, refer to our Gemini 3.0 Documentation Guide.
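
    A minimal sketch of the parameter in the google-genai Python SDK, set request-wide through the generation config; the enum name follows the SDK's MediaResolution type, while the file name and model id are assumptions:

      from google import genai
      from google.genai import types

      client = genai.Client()

      with open("dense_invoice.png", "rb") as f:
          page = types.Part.from_bytes(data=f.read(), mime_type="image/png")

      # High resolution spends more visual tokens for fine detail such as dense OCR;
      # MEDIA_RESOLUTION_LOW trades fidelity for lower cost and latency.
      response = client.models.generate_content(
          model="gemini-3-pro-preview",  # assumed model id for this sketch
          contents=[page, "Transcribe every line item and total on this invoice."],
          config=types.GenerateContentConfig(
              media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
          ),
      )
      print(response.text)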

    Build with Gemini 3 Pro

    We are excited to see what you build with these new capabilities. To get started, check out our developer documentation or play with the model in Google AI Studio today.

  • Dec 5, 2025
    • Parsed from source:
      Dec 5, 2025
    • Detected by Releasebot:
      Dec 6, 2025

    Google Workspace by Google

    Google Workspace Updates Weekly Recap - December 5, 2025

    Google Workspace updates roll out new access controls, AI tools and collaboration boosts across Gmail, Meet, Drive and Classroom. Highlights include turning off join requests, public notebooks in Classroom, Gemini folder insights and Studio for AI agents, with several features rolling out in coming weeks.

    A summary of announcements from the last week:

    The announcements below were published on the Workspace Updates blog over the last week. Please refer to the original blog posts for complete details.

    Control whether users can request to join a space in Google Chat

    Space owners and managers can now disable the "request to join" feature in Google Chat. Previously, users with a link to a restricted space could ask for permission to enter; with this new setting, managers can block these requests entirely, preventing users from asking to join via a link. | Learn more about controlling whether users can request to join a space in Google Chat.

    Educators can now assign public notebooks in Google Classroom

    Educators can now attach public notebooks to assignments, rather than being limited to notebooks they personally create or own. This update allows teachers to easily integrate external shared resources, such as content from the OpenStax partnership, directly into their curriculum. | Learn more about assigning public notebooks in Google Classroom.

    New to Gmail: share emails in Google Chat

    We’re launching a new integration between Gmail and Google Chat designed to improve team collaboration and productivity. With this feature, you can easily share a conversation from your Gmail inbox to a Chat direct message or space. No need to start your chat conversation with, "Did you see the email I forwarded?" or dig through your inbox to find the message being discussed. | Learn more about sharing emails to Google Chat directly from Gmail.

    Choose your preferred caption language for Meet live streams on mobile devices

    Google Meet live stream viewers can select their own preferred language for translated captions on mobile devices. Individual language selection helps overcome language barriers during presentations and events, maximizing each viewer's potential to understand and engage with the content being shared. | Learn more about choosing your preferred caption language for Meet live streams on mobile devices.

    Google Meet translated captions now available in Cantonese

    Google Meet has added Cantonese to its list of supported languages for translated captions. This allows real-time translation of Cantonese speech into other languages, significantly improving accessibility and collaboration for global teams and educational institutions operating in diverse linguistic environments. | Learn more about Cantonese support for Google Meet translated captions.

    A refreshed user interface for Google Meet hardware touch controllers

    In the coming weeks, we’ll roll out a streamlined user interface for the following Meet Hardware devices: Mimo Vue HD, Mimo Mist, Logitech Tap, Logitech Tap IP, and Lenovo Series One Touch controllers (with Android devices coming soon). This new experience will offer users a more efficient and intuitive way to manage their meetings. | Learn more about a refreshed user interface for Google Meet hardware touch controllers.

    Seamlessly join meetings on Google Meet hardware with “Connect room”

    In the coming weeks, we’ll introduce Connect room, a new way to seamlessly begin your meetings on Google Meet hardware directly from your personal device. This will be available in early preview. Connect room streamlines how you start meetings in a conference room. Instead of manually typing a meeting code, this feature uses ultrasound proximity detection to identify a nearby, available Google Meet hardware device. | Learn more about seamlessly joining meetings on Google Meet hardware with “Connect room”.

    Get quick insights on your Google Drive folders with Gemini

    Earlier this year, we introduced Gemini “nudges” at the top of folders in Google Drive, and we’re now making it even easier to get the context of your folders at a glance. Gemini will now proactively provide insights about the files within a folder, right at the top of the folder view. This makes it easier to quickly understand what’s inside without having to open individual files. | Learn more about getting quick insights on your Google Drive folders with Gemini.

    Now available: Create AI agents to automate work with Google Workspace Studio

    Today we’re introducing Google Workspace Studio: the place to create, manage, and share AI agents to automate work in Workspace—no coding required. | Learn more about creating AI agents to automate work with Google Workspace Studio.

    A more modern interface for viewing PDFs, videos, images, and audio files in Google Drive on the web

    Google Drive is making significant improvements to the viewing experience for third-party file formats, such as PDFs, videos, images, and audio files. | Learn more about a more modern interface for viewing PDFs, videos, images, and audio files in Google Drive on the web.

    BYOD on Google Meet on Chrome OS touch controller rooms

    We're launching an integration with Lightware peripheral switchers, so that you and your team can bring your own devices (BYOD) to Google Meet on Chrome OS touch controller rooms. Now, you can plug your laptop into a Meet room with a single USB-C cable and easily use the room's display, speaker, microphone, and camera—along with your laptop—for video conferencing. | Learn more about BYOD on Google Meet for ChromeOS touch controller rooms.

  • Dec 4, 2025
    • Parsed from source:
      Dec 4, 2025
    • Detected by Releasebot:
      Dec 5, 2025

    Gemini by Google

    Gemini 3 Deep Think is now available

    Gemini 3 Deep Think launches for Google AI Ultra subscribers, delivering enhanced parallel reasoning for tough math, science and logic tasks. It showcases strong benchmark results and can be tried today by selecting Deep Think in the prompt bar and Gemini 3 Pro in the model dropdown.

    Today, we’re rolling out Gemini 3 Deep Think mode to Google AI Ultra subscribers in the Gemini app. This new mode delivers a meaningful improvement in reasoning capabilities, designed to tackle complex math, science and logic problems that challenge even the most advanced state-of-the-art models.

    Gemini 3 Deep Think is industry-leading on rigorous benchmarks like Humanity’s Last Exam (41.0% without the use of tools) and ARC-AGI-2 (an unprecedented 45.1% with code execution). This is because it uses advanced parallel reasoning to explore multiple hypotheses simultaneously — building on Gemini 2.5 Deep Think variants that recently achieved a gold-medal standard at the International Mathematical Olympiad and at the International Collegiate Programming Contest World Finals.

    Ultra subscribers can try Gemini 3 Deep Think mode today by selecting “Deep Think” in the prompt bar and Gemini 3 Pro in the model dropdown.

  • Dec 4, 2025
    • Parsed from source:
      Dec 4, 2025
    • Detected by Releasebot:
      Dec 5, 2025

    Gemini API by Google

    December 4, 2025

    Deprecation announcement

    • The gemini-2.5-flash-image-preview model will be shut down January 15, 2026.
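
    Migration is typically a one-line model id change. A hedged sketch in the google-genai Python SDK, assuming gemini-2.5-flash-image as the successor id; confirm the recommended replacement in the Gemini API model documentation:

      from io import BytesIO

      from google import genai
      from PIL import Image

      client = genai.Client()

      response = client.models.generate_content(
          model="gemini-2.5-flash-image",  # assumed successor to the -preview id
          contents="A studio photo of a banana wearing sunglasses",
      )
      # Generated images arrive as inline data parts alongside any text parts.
      for part in response.candidates[0].content.parts:
          if part.inline_data is not None:
              Image.open(BytesIO(part.inline_data.data)).save("banana.png")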
  • Dec 3, 2025
    • Parsed from source:
      Dec 3, 2025
    • Detected by Releasebot:
      Dec 4, 2025

    Gemini API by Google

    December 3, 2025

    • Deprecation announcement: The text-embedding-004 model will be shut down January 14, 2026.
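
    A hedged migration sketch in the google-genai Python SDK, assuming gemini-embedding-001 as the replacement model; note that embedding vectors are not compatible across models, so re-embed your corpus rather than mixing old and new vectors:

      from google import genai

      client = genai.Client()

      # Re-embed content with the newer model before the shutdown date.
      result = client.models.embed_content(
          model="gemini-embedding-001",  # assumed replacement for text-embedding-004
          contents="What is the airspeed velocity of an unladen swallow?",
      )
      print(len(result.embeddings[0].values))  # dimensionality differs from the old model's 768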
