- Apr 20, 2026
- Date parsed from source: Apr 20, 2026
- First seen by Releasebot: Apr 21, 2026
Start vibe coding in AI Studio with your Google AI subscription.
Gemini increases Google AI Studio usage limits for AI Pro and Ultra subscribers and adds Nano Banana Pro access.
Starting today, Google AI Pro and Ultra subscribers get increased usage limits in Google AI Studio. This update also includes access to Nano Banana Pro and Gemini Pro mo…
Original source
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 17, 2026
New ways to create personalized images in the Gemini app
Gemini now uses personal context and Google Photos in Nano Banana 2 to create more personal images.
Nano Banana 2 now uses your personal context and Google Photos to create images that reflect your unique life.
Original source
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 17, 2026
New ways to create personalized images in the Gemini app
Gemini introduces more personal image generation with Personal Intelligence, Nano Banana 2 and Google Photos, letting eligible subscribers create custom images with less prompting, no manual uploads and built-in controls to refine results while keeping privacy and opt-in settings intact.
Use Personal Intelligence to create more relevant, personal images using Nano Banana and your own Google Photos library — no manual uploads or long prompts required.
Personal Intelligence makes the Gemini app feel tailored to you, not just a generic tool that works the same for everyone. Today, we’re introducing new ways for Gemini to use your interests and preferences with Nano Banana 2 and Google Photos to make image generation — one of your favorite ways to use Gemini — feel deeply personal. This lets you create unique images more easily, so you can spend more time creating and less time explaining.
Powering your imagination
One of the biggest hurdles in AI image generation is finding the right prompt. Previously, to get a result that felt truly personal, you had to write long, detailed descriptions and manually upload a reference photo just to give Gemini the right context.
Now, Personal Intelligence gives Gemini an inherent understanding of your preferences from the start. By integrating this context directly with Nano Banana 2, Gemini can automatically fill in the blanks, grounding every creation in the things you care about most. And since this is built into how you normally use the Gemini app, there’s no extra setup. If you’ve already linked your Google apps, that personal context is ready and waiting the moment you start creating images.
This removes the heavy lifting. Instead of writing out the intricate details of your life, you can use simple prompts like "Design my dream house" or "Create a picture of my desert island essentials" and the results will automatically reflect your specific tastes and lifestyle, gleaned from the Google apps you’ve connected to.
Starring you and your loved ones
A lot of your most significant moments live in your Google Photos library. By connecting your Google Photos library to Personal Intelligence, Gemini goes a step further than just understanding your interests. It can use actual images of you and your loved ones to guide the image generation process.
Since you can already organize and label groups of people and pets in your library, those labels provide the context that Gemini needs to make your images feel truly yours. Now your inner circle can become the stars of your images, whether you want a result that feels pulled straight from your life or one that takes your imagination a bit further.
With those labels in place, you can simply ask Gemini to “create a claymation image of me and my family enjoying our favorite activity” and Gemini can generate that specific image for you automatically. You can also experiment with different styles like watercolors, charcoal sketches or oil paintings. You can turn a quick idea into a custom creation, saving you the trouble of searching for, downloading and re-uploading files just to see a concept come to life.
Putting creative control in your hands
Because this is a brand-new experience, Gemini might not always pick the exact photo or detail you had in mind on the first try. To keep you in the driver’s seat, we’ve built in ways to refine your results. If the result isn’t quite right, you can simply tell Gemini what was incorrect and try again. You can also click the ‘+’ icon and select a different reference photo from your Google Photos library to try a new perspective. If you’re ever curious about how your context was applied, click on the Sources button, and it’ll show you which image was auto-selected to guide the creation. You can even ask Gemini directly for information on the attribution and sources used for that specific image.
Bringing personal details into your images shouldn't mean compromising on privacy, which is why our core commitments haven't changed. The Gemini app does not directly train its models on your private Google Photos library. We train on limited info, like specific prompts in Gemini and the model’s responses, to improve functionality over time. And connecting your Google apps to Gemini remains an opt-in experience that you can adjust in your settings at any time.
This new personalized image creation experience in the Gemini app is rolling out over the next few days to eligible Google AI Plus, Pro and Ultra subscribers in the U.S., and we plan to bring this to Gemini in Chrome on desktop and to more users soon.
Give it a try when it hits your app — we’re looking forward to seeing how these tools help you spend less time prompting and more time creating.
Original source
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 16, 2026
The Gemini app is now on Mac
Gemini launches a native macOS app that puts AI help a keyboard shortcut away, with screen sharing for instant context, local file support, and creative tools like Nano Banana and Veo. It is available globally today for macOS 15 and up.
Today, we’re bringing the Gemini app to macOS as a native desktop experience, designed to live right where you work. It’s always just a keyboard shortcut away, so you can quickly get the help you need without losing your focus. Here are a few ways you can use it right now:
Share your window for instant context
With our new native desktop experience, you can share anything on your screen with Gemini to get help with exactly what you’re looking at, including local files. If you’re reviewing a complex chart, you can share your window and ask, “What are the three biggest takeaways here?” to get an instant summary. This brings powerful context to your creative work as well.
Stay in your flow
Switching between windows on your desktop can be clunky and slow. Now, you can bring up Gemini from anywhere on your Mac with a quick shortcut (Option + Space) to get help instantly, without ever switching tabs. Whether you’re drafting a market report and need to verify a date or building a budget in a spreadsheet and need the right formula, you can get an answer and get right back to work. Creatives can also quickly generate images with Nano Banana or videos with Veo to bring an idea to life without breaking their creative stride.
Starting today, the native macOS app is available to all Gemini users on macOS versions 15 and up, globally, at no cost. To get started, you can download the app directly at gemini.google/mac and begin experiencing a faster, more integrated desktop workflow.
We’re starting today with an app that brings AI assistance right where your work happens, but this first release is just the beginning. We're building the foundation for a truly personal, proactive and powerful desktop assistant, with more news to share in the coming months.
Original source
- Apr 15, 2026
- Date parsed from source: Apr 15, 2026
- First seen by Releasebot: Apr 16, 2026
The Gemini app is now on Mac
Gemini brings its app to macOS as a native desktop experience.
Google is bringing the Gemini app to macOS as a native desktop experience.
Original source
- Apr 15, 2026
- Date parsed from source: Apr 15, 2026
- First seen by Releasebot: Apr 16, 2026
Gemini 3.1 Flash TTS: the next generation of expressive AI speech
Gemini introduces Gemini 3.1 Flash TTS, a new text-to-speech model with better controllability, expressivity and speech quality. It rolls out in preview for developers, enterprises and Workspace users, adds audio tags, supports 70+ languages and includes SynthID watermarking.
Today, we’re introducing Gemini 3.1 Flash TTS, the latest text-to-speech model that delivers improved controllability, expressivity and quality — empowering developers, enterprises and everyday users to build the next generation of AI-speech applications.
Starting today, 3.1 Flash TTS is rolling out:
- For developers in preview via the Gemini API and Google AI Studio
- For enterprises in preview on Vertex AI
- For Workspace users via Google Vids
Improved speech quality and controllability
We’ve improved the overall speech quality of Gemini 3.1 Flash TTS, making it our most natural and expressive model to date. On the Artificial Analysis TTS leaderboard, a benchmark that captures thousands of blind human preferences, 3.1 Flash TTS achieved an impressive Elo score of 1,211.
Artificial Analysis has also positioned Gemini 3.1 Flash TTS within its “most attractive quadrant” for its ideal blend of high-quality speech generation and low cost. The model stands out further with native multi-speaker dialogue, support for 70+ languages, and granular creative control via natural language.
New audio tags for more expressive speech generation
3.1 Flash TTS also introduces audio tags — an intuitive way to control vocal style, pace and delivery. By embedding natural language commands directly into the text input, you can steer AI-speech output with improved levels of granularity.
3.1 Flash TTS enables enterprises to utilize audio tags within Vertex AI, empowering the next generation of enterprise applications.
You can start experimenting with these audio tags along with other updates to the developer experience in Google AI Studio with configurable controls that place the developer in the “director’s chair”:
- Scene direction: Set the stage by defining the environment and providing specific dialogue instructions. This world-building context helps characters remain “in-character” and react to one another naturally across multiple turns.
- Speaker-level specificity: Cast characters using unique Audio Profiles, then specify Director’s Notes to toggle pace, tone and accent. Using inline tags, speakers can pivot from these high-level settings to change expression mid-sentence.
- Seamless export: Once the performance is perfected, these exact parameters can be exported as Gemini API code to ensure consistent, recognizable voices across various projects and platforms.
With these new configurations, developers can enhance precision for specific scenarios, creating memorable characters and immersive audio experiences.
Get started with high-fidelity speech generation in the Google AI Studio Playground.
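To make the "director's chair" workflow concrete, here is a minimal sketch of assembling a multi-speaker TTS request with inline audio tags. The bracketed tag syntax, the field names and the model identifier are illustrative assumptions, not confirmed API details; consult the Gemini API reference for the actual request shape.

```python
# Sketch only: the "[tag]" inline syntax, field names and model id below
# are assumptions for illustration, not the confirmed Gemini API surface.

def build_tts_request(turns, model="gemini-3.1-flash-tts"):
    """Assemble a multi-speaker TTS request body.

    turns: list of (speaker, audio_tag, text) tuples; audio_tag may be None
    to keep the speaker's high-level Director's Notes settings unchanged.
    """
    lines = []
    for speaker, tag, text in turns:
        prefix = f"[{tag}] " if tag else ""
        lines.append(f"{speaker}: {prefix}{text}")
    return {
        "model": model,
        "contents": "\n".join(lines),
        "config": {
            "response_modalities": ["AUDIO"],
            "speakers": sorted({s for s, _, _ in turns}),
        },
    }

request = build_tts_request([
    ("Narrator", "calm", "It was a quiet night."),
    ("Guard", "whispering", "Did you hear that?"),
])
```

The point of keeping tags inline with the dialogue is that a speaker can pivot mid-scene (for example, from calm narration to a whisper) without rewriting the speaker-level configuration.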
Built for global scale
Gemini 3.1 Flash TTS delivers high-fidelity speech and more precise control across more than 70 languages. These core optimizations bring advanced style, pacing and accent control to major markets — helping developers create localized, expressive speech experiences for users at global scale.
Early developer and enterprise testers are already seeing the impact of 3.1 Flash TTS, highlighting its impressive controllability and expressivity. They’ve told us how audio tags provide a new level of creative precision, transforming simple text into a high-fidelity vocal performance.
Watermarked with SynthID
All audio generated by Gemini 3.1 Flash TTS is watermarked with SynthID. This imperceptible watermark is interwoven directly into the audio output, allowing the reliable detection of AI-generated content to help prevent misinformation. For more information on our approach to safety and responsibility, you can review the model card.
Original source
- Apr 15, 2026
- Date parsed from source: Apr 15, 2026
- First seen by Releasebot: Apr 16, 2026
2026.04.15
Gemini launches a native Mac desktop app that brings AI help alongside any app with a keyboard shortcut and screen sharing, available globally for free on macOS 15 and up.
Meet your new desktop assistant: Gemini for Mac
- What: We’re introducing the Gemini app for Mac, a native desktop experience built to help you get more done without disrupting your flow. With a simple keyboard shortcut (Option + Space), you can bring up Gemini alongside any application you're using. You can also seamlessly share your window, allowing Gemini to understand your context and assist with exactly what’s on your screen.
- The new desktop app is available to users on macOS versions 15 and up, globally, for free.
- Download the app directly at gemini.google/mac.
- Why: By bringing a native app to Mac, we're cutting down on context switching and giving you a faster, more integrated way to use Gemini alongside the tools you use every day.
- Apr 15, 2026
- Date parsed from source: Apr 15, 2026
- First seen by Releasebot: Apr 15, 2026
Gemini 3.1 Flash TTS: the next generation of expressive AI speech
Gemini introduces 3.1 Flash TTS, a new text-to-speech model with stronger controllability, richer expressivity and higher speech quality. It rolls out in preview across Gemini API, Google AI Studio, Vertex AI and Google Vids, with audio tags, 70+ language support and SynthID watermarking.
Original source
- Apr 15, 2026
- Date parsed from source: Apr 15, 2026
- First seen by Releasebot: Apr 10, 2026
- Modified by Releasebot: Apr 16, 2026
Gemini 3.1 Flash TTS: the next generation of expressive AI speech
Gemini 3.1 Flash TTS is now available across Google products.
Gemini 3.1 Flash TTS is now available across Google products.
Original source
- Apr 14, 2026
- Date parsed from source: Apr 14, 2026
- First seen by Releasebot: Apr 15, 2026
Gemini in Google Classroom is now available in all Classroom-supported languages
Gemini expands in Google Classroom to all supported languages, making AI lesson planning, quiz creation, text generation, translations, and study tools available to more educators and higher education students. The update also adds starter prompts and continued access controls for admins.
Last year, we launched Gemini in Google Classroom in English to support educators with planning and creating engaging lessons, and to help students in higher education to study and learn. Starting this week, we’re beginning to expand availability to all Classroom-supported languages in which Gemini is also available. This expansion makes Gemini tools in Classroom increasingly accessible to educators and higher education students whose preferred language is now supported.
In the Gemini tab in Classroom, educators can get help creating and adapting resources based on learning objectives, starting with the following features:
- Outline a lesson plan
- Generate a quiz
- Write an informational text
- Tackle common misconceptions
- Translate text
More content generation features will become available in these languages in the coming weeks.
Educators can also collaborate with Gemini using starter prompts for the Gemini app that help with common tasks like:
- Brainstorming real world examples
- Gamifying an activity
- Generating differentiation strategies
- Drafting an exemplar and non-exemplar
- Creating Depth of Knowledge (DOK) questions
Higher education students using the Gemini tab can:
- Learn about a topic: Get personalized and step-by-step explanations with Guided Learning.
- Take a quiz: Prepare for upcoming exams by testing your knowledge and getting hints and feedback.
- Make flashcards: Turn class materials into custom flashcards for extra practice.
- Create a study guide: Make a study guide about a certain topic, or upload class materials for more personalized resources.
Getting started
Admins:
- Gemini in Classroom: As an administrator of your organization's Google Accounts, you can control who is allowed to use Gemini in Google Classroom to generate content and resources. Access to Gemini in Google Classroom is ON by default. These capabilities are only available to users who are designated as 18 years of age and over in your institution’s age-based access settings. Visit the Help Center to learn about managing access to Gemini in Classroom and the option to turn the service on or off for users in your Admin console.
- Gemini app: Educators and students can access Gemini starter prompts in the Gemini tab in Classroom, and they will open in the Gemini app. Users’ ability to access these starter prompts and continue chats in the Gemini app is controlled by the Gemini app setting. Visit the Help Center to learn how to Turn the Gemini app on or off for users.
- Note: Ensure roles in Classroom are appropriately assigned to users. Learn more about teacher and student roles here.
End users:
- Visit the Help Center to learn more about Gemini in Classroom and feel free to take this course to learn more about generative AI for Educators.
- When using generated content, always review the outputs, as AI can make mistakes, and refine them so that they fit your context and local policies.
Rollout pace
- Rapid Release and Scheduled Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on April 13, 2026
Availability
- Education: Education Fundamentals, Standard, and Plus
Resources
- Google Workspace Updates Blog: Gemini in Google Classroom is now available to all Google Workspace for Education editions, with added features
- Google Workspace Updates Blog: Gemini in Google Classroom is expanding to students in higher education
- Google Help: Learn about Gemini in Google Classroom
- Google Help: Google Classroom Supported Languages
- Google Help: Verify teachers and set permissions
- Google Workspace Admin Help: Manage access to Gemini in Classroom
- Google Workspace Admin Help: Control access to Google services by age
- Google Workspace Admin Help: Turn the Gemini app on or off
- Google Workspace Admin Help: Select your organization type for Google Workspace for Education
- Apr 14, 2026
- Date parsed from source: Apr 14, 2026
- First seen by Releasebot: Apr 14, 2026
Gemini Robotics-ER 1.6: Powering real-world robotics tasks through enhanced embodied reasoning
Gemini releases Robotics-ER 1.6, a reasoning-first robotics model with sharper spatial and multi-view understanding, improved pointing and success detection, new instrument reading for gauges and sight glasses, stronger safety compliance, and developer access through the Gemini API and Google AI Studio.
For robots to be truly helpful in our daily lives and industries, they must do more than follow instructions: they must reason about the physical world. From navigating a complex facility to interpreting the needle on a pressure gauge, a robot’s “embodied reasoning” is what allows it to bridge the gap between digital intelligence and physical action.
Today, we’re introducing Gemini Robotics-ER 1.6, a significant upgrade to our reasoning-first model that enables robots to understand their environments with unprecedented precision. By enhancing spatial reasoning and multi-view understanding, we are bringing a new level of autonomy to the next generation of physical agents.
This model specializes in reasoning capabilities critical for robotics, including visual and spatial understanding, task planning and success detection. It acts as the high-level reasoning model for a robot, capable of executing tasks by natively calling tools like Google Search to find information, vision-language-action models (VLAs) or any other third-party user-defined functions.
Gemini Robotics-ER 1.6 shows significant improvement over both Gemini Robotics-ER 1.5 and Gemini 3.0 Flash, specifically enhancing spatial and physical reasoning capabilities such as pointing, counting, and success detection. We are also unlocking a new capability: instrument reading, enabling robots to read complex gauges and sight glasses — a use case we discovered through close collaboration with our partner, Boston Dynamics.
Starting today, Gemini Robotics-ER 1.6 is available to developers via the Gemini API and Google AI Studio. To help you get started, we are sharing a developer Colab containing examples of how to configure the model and prompt it for embodied reasoning tasks.
Pointing: The foundation of spatial reasoning
Pointing is a fundamental capability for an embodied reasoning model, evolving with each model generation. Points can be used to express many concepts, including:
- Spatial reasoning: Precision object detection and counting
- Relational logic: Making comparisons, such as identifying the smallest item in a set; defining "from-to" relationships (e.g., move X to location Y)
- Motion reasoning: Mapping trajectories and identifying optimal grasp points
- Constraint compliance: Reasoning through complex prompts like "point to every object small enough to fit inside the blue cup"
Gemini Robotics-ER 1.6 can use points as intermediate steps to reason about more complex tasks. For example, it can use points to count items in an image, or to identify salient points on an image to help the model perform mathematical operations to improve its metric estimations.
The example below shows Gemini Robotics-ER 1.6’s strengths in pointing to multiple elements, and knowing when and when not to point.
Gemini Robotics-ER 1.6 correctly identifies the number of hammers (2), scissors (1), paintbrushes (1), pliers (6), and a collection of garden tools which can be interpreted as a single group or multiple points. It does not point to requested items that are not present in the image (a wheelbarrow and a Ryobi drill). In comparison, Gemini Robotics-ER 1.5 fails to identify the correct number of hammers or paintbrushes, misses the scissors altogether, hallucinates a wheelbarrow and lacks precision when pointing to the pliers. Gemini 3.0 Flash is close to Gemini Robotics-ER 1.6, but does not handle the pliers as well.
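For developers consuming pointing output, earlier Robotics-ER releases returned points as JSON objects with [y, x] coordinates normalized to a 0-1000 range. Assuming 1.6 keeps that convention (check the model docs to confirm), a minimal sketch for mapping points back onto a camera frame looks like this:

```python
import json

# Assumes the published Robotics-ER pointing convention: a JSON list of
# {"point": [y, x], "label": ...} with coordinates normalized to 0-1000.
# Whether 1.6 keeps this exact shape is an assumption here.

def points_to_pixels(response_text, width, height):
    """Convert normalized [y, x] points into (label, x_px, y_px) tuples."""
    points = json.loads(response_text)
    out = []
    for p in points:
        y, x = p["point"]
        out.append((p["label"], round(x / 1000 * width), round(y / 1000 * height)))
    return out

# Hypothetical model reply for a 1920x1080 frame:
reply = '[{"point": [500, 250], "label": "hammer"}, {"point": [100, 900], "label": "pliers"}]'
print(points_to_pixels(reply, width=1920, height=1080))
```

Normalized coordinates keep the model's output independent of camera resolution, so the same reply can be projected onto any feed the robot uses.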
Success Detection: The engine of autonomy
In robotics, knowing when a task is finished is just as important as knowing how to start it. Success detection is a cornerstone of autonomy, serving as a critical decision-making engine that allows an agent to intelligently choose between retrying a failed attempt or progressing to the next stage of a plan.
Achieving visual understanding in robotics is challenging, requiring sophisticated perception and reasoning capabilities combined with broad world knowledge in order to handle complicating factors such as occlusions, poor lighting and ambiguous instructions. Additionally, most modern robotics setups include multiple camera views such as an overhead and wrist-mounted feed. This means a system needs to understand how different viewpoints combine to form a coherent picture at each moment and across time.
Gemini Robotics-ER 1.6 advances multi-view reasoning, enabling the system to better understand multiple camera streams and the relationship between them, even in dynamic or occluded environments, as demonstrated in the typical multi-view scenario below.
Gemini Robotics-ER 1.6 takes cues from multiple camera views to determine when the task "put the blue pen into the black pen holder" is complete.
Instrument reading: Real-world visual reasoning
To understand a key strength of Gemini Robotics-ER 1.6, we must look at how it combines capabilities like spatial reasoning and world knowledge to solve complex, real-world problems. A perfect example is instrument reading.
This task stems from facility inspection needs, a critical focus area for our partners at Boston Dynamics. Industrial facilities contain many instruments — thermometers, pressure gauges, chemical sight glasses and more — that require constant monitoring. Spot, a Boston Dynamics robot product, is able to visit the instruments throughout the facility and capture images of them.
Gemini Robotics-ER 1.6 enables robots to interpret a variety of instruments, including circular pressure gauges, vertical level indicators and modern digital readouts.
Instrument reading requires complex visual reasoning. One must precisely perceive a variety of inputs — including the needles, liquid level, container boundaries, tick marks and more — and understand how they all relate to each other. In the case of sight glasses, this involves estimating how much liquid fills the sight glass, taking into account distortion from the camera perspective. Gauges typically have text describing the unit, which must be read and interpreted, and some have multiple needles referring to different decimal places that need to be combined.
Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously.
Gemini Robotics-ER 1.6 achieves its highly accurate instrument readings by using agentic vision, which combines visual reasoning with code execution. The model takes intermediate steps: first zooming into an image to get a better read of small details in a gauge, then using pointing and code execution to estimate proportions and intervals and get an accurate reading, and ultimately applying its world knowledge to interpret meaning.
This example demonstrates how the model uses pointing and code execution for zooming to derive the reading of the gauge down to sub-tick accuracy.
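The proportional step at the heart of this process is ordinary interpolation: once the model has located the needle and the gauge's end ticks, the reading follows from their relative positions. A toy version of that arithmetic (all numbers hypothetical; the real model derives them via pointing and code execution):

```python
# Toy illustration of the interpolation step in gauge reading. The angles
# and scale values are hypothetical; the model estimates them from the
# image via pointing before running arithmetic like this.

def gauge_reading(needle_deg, min_deg, max_deg, min_val, max_val):
    """Linearly interpolate a gauge value from the needle angle."""
    frac = (needle_deg - min_deg) / (max_deg - min_deg)
    return min_val + frac * (max_val - min_val)

# Needle halfway around a 0-10 bar gauge whose scale sweeps 225 degrees:
print(gauge_reading(112.5, 0.0, 225.0, 0.0, 10.0))  # -> 5.0
```

Sub-tick accuracy comes from the same idea applied between two adjacent tick marks rather than across the whole scale.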
Our safest robotics model yet
Safety is integrated into every level of our embodied reasoning models. Gemini Robotics-ER 1.6 is our safest robotics model to date, demonstrating superior compliance with Gemini safety policies on adversarial spatial reasoning tasks compared to all previous generations.
The model also shows a substantially improved capacity to adhere to physical safety constraints. For example, it makes safer decisions through spatial outputs like pointing regarding which objects can be safely manipulated under gripper or material constraints (e.g., “don’t handle liquids”, “don’t pick up objects heavier than 20kg”).
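The kind of constraint check described above can be sketched as a simple predicate over candidate objects. The attribute names and thresholds below are illustrative assumptions, not output from the model:

```python
# Hedged sketch of a physical-safety constraint check: which candidate
# objects may a gripper handle under stated limits? Attribute names and
# the weight threshold are illustrative, not from the model.

def safe_to_handle(obj, max_kg=20.0, allow_liquids=False):
    """Return True if the object satisfies the manipulation constraints."""
    if obj.get("is_liquid") and not allow_liquids:
        return False
    return obj.get("weight_kg", 0.0) <= max_kg

objects = [
    {"name": "toolbox", "weight_kg": 8.0},
    {"name": "water jug", "weight_kg": 4.0, "is_liquid": True},
    {"name": "engine block", "weight_kg": 90.0},
]
print([o["name"] for o in objects if safe_to_handle(o)])  # -> ['toolbox']
```

In practice the model expresses such decisions through spatial outputs like pointing, rather than an explicit predicate, but the underlying judgment is the same.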
We also tested how well the model identifies safety hazards in text and video scenarios based on real-life injury reports. On these tasks, our Gemini Robotics-ER models improve over baseline Gemini 3.0 Flash performance (+6% in text, +10% in video) in perceiving injury risks accurately.
Gemini Robotics-ER 1.6 improves substantially compared to Gemini Robotics-ER 1.5 on Safety Instruction Following, which tests the ability to adhere to physical safety constraints. It improves compared to Gemini 3.0 Flash on pointing, and both models have very high accuracy for text. Gemini 3.0 Flash does better on bounding boxes.
Collaborate with us to improve embodied reasoning for robotics
We are committed to ensuring Gemini Robotics-ER provides maximum value to the robotics community. If current capabilities are limited for your specialized application, we invite you to submit this form with 10–50 labeled images illustrating specific failure modes to help us build more robust reasoning features. We look forward to collaborating with you to enhance these capabilities in our upcoming releases.
Try Gemini Robotics-ER 1.6 now on Google AI Studio
Original source
- Apr 13, 2026
- Date parsed from source: Apr 13, 2026
- First seen by Releasebot: Apr 14, 2026
Prepare for the NEET UG with practice tests in Gemini
Gemini expands its full-length, no-cost practice tests to the NEET UG, building on SAT and JEE Main support. The feature uses vetted content from education partners to deliver a more test-like prep experience and is available in English for signed-in users.
We recently launched full-length, no-cost practice tests in Gemini, starting with the SAT and JEE Main. Today, we’re expanding practice tests to support the NEET UG.
We have grounded practice tests in rigorously vetted content from leading education companies like Physics Wallah and Careers360 to build a best-in-class experience for learners coming to Gemini. This helps ensure that you’re not just practicing — you’re preparing with material that more closely resembles what you’ll see on test day.
To try it out, just tell Gemini “I want to take a NEET mock exam.”
Note: This feature is currently available in English only.
Getting started
Admins: The Gemini app and related in-app tools are controlled by the Generative AI settings in the Workspace Admin console. Practice tests in Gemini are subject to these existing controls. Visit the Help Center for more information on turning the Gemini app on or off.
End users: End users of all ages who have access to the Gemini app will receive access to practice tests automatically. To get started, tell Gemini which practice test you want to take.
Rollout pace
Rapid Release and Scheduled Release domains: Available now
Availability
Available to all Google Workspace customers, Workspace Individual subscribers, and users with personal Google accounts who are signed in to the Gemini app
Resources
Google Workspace Admin Help: Turn the Gemini app on or off
Original source
Keyword Blog: Prep for the SAT with practice tests in Gemini
Keyword Blog: New AI Tools to Support India’s Next Generation
Workspace Updates: Prepare for the SAT with full-length practice tests in Gemini
Workspace Updates: Prepare for the JEE Main with practice tests in Gemini
- Apr 10, 2026
- Date parsed from source:Apr 10, 2026
- First seen by Releasebot:Apr 11, 2026
6 easy ways to study for finals with Gemini
Gemini adds new study tools for finals, including notebooks, AI-generated study guides, flashcards, Audio Overviews, interactive visualizations, custom quizzes and Guided Learning to help students turn notes into a smarter study partner.
Learn how to use Gemini as your personal study partner — from turning messy lecture notes into podcasts to testing your knowledge with custom quizzes.
Get ahead of finals week and all the studying that goes with it. You can use Gemini to turn your messy pile of notes into a streamlined study plan with this six-step guide to getting through finals.
1. Put all your materials in one place
Stop managing endless tabs for every major project or final. Gemini notebooks turn your handpicked sources into a study command center that remembers your progress and picks up exactly where you left off. You can create a dedicated notebook, then upload it all: lecture PDFs, photos of whiteboards, messy class notes and even past chat history from this semester. We're rolling out notebooks in Gemini this week, starting with Google AI Ultra, Pro and Plus subscribers on the web who are 18+ with personal Google accounts. In the coming weeks we'll expand access to mobile, to more countries across Europe and to free users.
2. Generate study guides
Once you upload your files, let Gemini do the heavy lifting. Gemini can distill hundreds of pages of raw notes into a logical, structured study guide or a set of flashcards. And there’s no need to waste time on what you already know: Tell Gemini to bypass the basics and dive deep into the most complex topics.
Upload your documents, then try this prompt: Create a study guide based on my course materials for my exams.
3. Turn your notes into a podcast
If you don’t retain information best by reading, why not learn while you listen? Let Gemini turn your static notes into an engaging, podcast-style conversation so you can prep for finals while walking to class or doing laundry. With Audio Overviews, two AI hosts hold an engaging back-and-forth conversation deconstructing your uploaded course materials and lecture notes. It can be a wide-ranging conversation across all of your materials, or all about a specific topic where you need to dive deeper. Try Audio Overviews in the Gemini app or NotebookLM.
4. Create custom and interactive visualizations
Gemini can transform your questions and complex topics into interactive simulations and models — directly within your chat. Whether you’re rotating a molecule or simulating a complex physics system, you can explore further with just one prompt. Select the Pro model in the prompt bar, then ask Gemini to “show me” or “help me visualize” a complex concept. This feature is now rolling out globally to all Gemini app users with personal Google accounts.
5. Figure out what you still need to learn
Stress-test your knowledge by asking Gemini to create a custom practice exam focused on a subject’s most complex topics. You can even specify how long you want the exam to be.
Or you can explain a concept out loud to Gemini Live and ask it to spot any gaps in your logic. Gemini can ask follow-up questions to test your knowledge and clarify any confusion, just like a study partner who’s already tackled the coursework.
Upload your document, then try this prompt: Create a quiz based on the course materials for my exam.
6. Master tough topics step by step
When a topic feels impossible, don’t just ask for the answer — ask for the logic. Click on Guided Learning and ask a question about your exam topic. Gemini probes with open-ended questions to help you build a deeper understanding instead of just getting the final answer.
You can even snap a photo of your handwritten math or a diagram you drew and ask Gemini to help explain a concept or check if you made a mistake.
Try this prompt: Help me with this homework problem. A 10.5 kg test rocket is fired vertically from Cape Canaveral. Its fuel gives it a kinetic energy of 1925 J by the time the rocket engine burns all the fuel. What additional height will the rocket rise? Assume that air resistance is negligible.
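If you want to check Gemini's answer to that sample problem yourself, the underlying arithmetic is a one-step energy-conservation calculation: all of the kinetic energy converts to gravitational potential energy, m·g·h. A minimal sketch in Python, assuming g ≈ 9.8 m/s² (a value not stated in the prompt):

```python
# Energy conservation: the rocket's kinetic energy at engine burnout
# converts entirely to gravitational potential energy (air resistance
# is negligible, as the problem states).
m = 10.5     # rocket mass in kg
ke = 1925.0  # kinetic energy at burnout in joules
g = 9.8      # gravitational acceleration in m/s^2 (assumed value)

# KE = m * g * h  =>  h = KE / (m * g)
h = ke / (m * g)
print(f"Additional height: {h:.1f} m")  # about 18.7 m
```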
It’s easier than ever to try these features, even if you need to switch to Gemini. You can bring your memories, preferences and chat history from other AI apps to Gemini and get right back into studying.
Original source - Apr 9, 2026
- Date parsed from source:Apr 9, 2026
- First seen by Releasebot:Apr 10, 2026
The Gemini app can now generate interactive simulations and models.
Gemini adds custom interactive visualizations and functional simulations in chat, turning complex questions into hands-on, explorable explanations. Users can adjust variables, rotate models, and see topics like physics or orbital motion come to life in the Gemini app.
Gemini can transform your questions and complex topics into custom and interactive visualizations — directly within your chat.
Previously, responses were largely just text with static diagrams. Now, we’re delivering functional simulations that can help you better understand the topic you’re asking Gemini about. Whether you’re rotating a molecule or simulating a complex physics system, you can explore further with just one prompt.
When exploring how the moon orbits the Earth, you aren't stuck with a fixed diagram. You can manually adjust sliders or input exact numbers for initial velocity and gravity strength to instantly see how those specific variables create a stable orbit.
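The physics behind that demo is simple enough to sketch yourself. Below is a minimal, illustrative orbit integrator (not Gemini's implementation, which is not public): a body under an inverse-square pull, stepped with semi-implicit Euler. At the circular-orbit speed v = sqrt(GM/r), the distance from the center stays nearly constant, which is exactly the "stable orbit" the sliders let you find; all parameter values here are arbitrary example numbers.

```python
import math

def simulate_orbit(gm, r0, v0, dt=1.0, steps=20000):
    """Semi-implicit Euler integration of a body orbiting a fixed mass.

    gm: gravitational parameter G*M; r0: initial distance from the center;
    v0: initial tangential speed. Returns (min, max) distance over the run,
    so a near-circular orbit gives min ~ max ~ r0.
    """
    x, y = r0, 0.0
    vx, vy = 0.0, v0
    rmin = rmax = r0
    for _ in range(steps):
        r = math.hypot(x, y)
        # Inverse-square acceleration toward the origin.
        ax, ay = -gm * x / r**3, -gm * y / r**3
        vx += ax * dt          # update velocity first (semi-implicit)
        vy += ay * dt
        x += vx * dt           # then position
        y += vy * dt
        r = math.hypot(x, y)
        rmin, rmax = min(rmin, r), max(rmax, r)
    return rmin, rmax

# Example parameters: at the circular-orbit speed the radius barely varies.
gm, r0 = 1.0e6, 1000.0
v_circ = math.sqrt(gm / r0)
rmin, rmax = simulate_orbit(gm, r0, v_circ)
print(rmin, rmax)  # both stay close to 1000
```

Lowering v0 below v_circ in this sketch makes the orbit dip inward (rmin drops), which mirrors what the in-chat sliders show.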
This feature is now rolling out globally to all Gemini app users.¹ Head to gemini.google.com and select the Pro model in the prompt bar, then ask Gemini to “show me” or “help me visualize” a complex concept so you can see for yourself.
Original source