Amazon Release Notes
Last updated: Jan 10, 2026
Amazon Products
All Amazon Release Notes (77)
- Feb 17, 2026
- Date parsed from source: Feb 17, 2026
- First seen by Releasebot: Jan 10, 2026
- Modified by Releasebot: Feb 17, 2026
February 2026 Updates
Amazon Connect expands admin control with Service Quotas visibility and auto‑approval, bigger multi‑line fields, per‑channel auto‑accept and ACW timeouts, and new Audio Enhancement for clearer agent calls. It now supports CSV uploads for dependent field options and in‑app notifications to surface urgent updates.
Amazon Connect Updates
AWS Service Quotas support in Amazon Connect Cases
Amazon Connect Cases now supports AWS Service Quotas, giving administrators a centralized way to view applied limits, monitor utilization, and scale case workloads without hitting unexpected service constraints. You can request quota increases directly from the Service Quotas console, and eligible requests are automatically approved without manual intervention.
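As a rough sketch of how an administrator might work with these quotas programmatically, the snippet below uses the standard Service Quotas APIs in boto3 to list applied quotas and request an increase. The service code "cases" and the quota code shown are assumptions for illustration only; confirm the real identifiers in the Service Quotas console.

```python
import boto3

# Standard Service Quotas client; per this release it now covers Amazon Connect Cases.
quotas = boto3.client("service-quotas")

# ASSUMPTION: "cases" is the service code for Amazon Connect Cases.
# Verify it with list_services() or in the Service Quotas console.
for quota in quotas.list_service_quotas(ServiceCode="cases")["Quotas"]:
    print(quota["QuotaName"], quota["Value"])

# Request an increase; per the release note, eligible requests are auto-approved.
# ASSUMPTION: the quota code below is a placeholder, not a real identifier.
quotas.request_service_quota_increase(
    ServiceCode="cases",
    QuotaCode="L-XXXXXXXX",
    DesiredValue=500.0,
)
```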
Larger text fields in Amazon Connect Cases
Amazon Connect Cases now supports larger, multi-line text fields with up to 4,100 characters. Administrators can use the Admin UI to select the appropriate configuration (single-line or multi-line) on a per-field basis, improving case documentation capabilities.
Per-channel auto-accept and ACW timeouts
Amazon Connect now enables per-channel auto-accept and after-contact work (ACW) timeout settings for chat, tasks, emails, and callbacks to optimize how agents spend their time. Previously, these settings were available only for inbound voice contacts. To learn more, see Configure agent settings.
Please note that if you currently integrate with the UpdateUserPhoneConfig API, we recommend that you migrate to the newly released UpdateUserConfig API instead. Per-channel auto-accept and ACW timeouts can only be updated via the UpdateUserConfig API.
Audio Enhancement for agents
Amazon Connect now offers Audio Enhancement to improve audio quality on the agent's side by reducing background noise and isolating the agent's voice during calls. Administrators can enable noise suppression or voice isolation modes for agents through user management settings. Agents with the appropriate security profile permissions can also adjust their own Audio Enhancement settings during work sessions.
For more information, see Enable Audio Enhancement.
CSV upload for dependent field options (Amazon Connect Cases)
Amazon Connect Cases now enables you to bulk configure cascading dropdown menus for case fields by uploading CSV files containing field option mappings. This capability significantly reduces manual configuration time for complex hierarchical data structures such as geographic hierarchies (Country → State → City) or product categorizations (Category → Subcategory). You can include multiple field pairs in a single CSV file. For more information, see CSV upload for dependent field options.
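The exact CSV schema is described in the linked documentation; as an illustration only, a mapping file for a Country → State hierarchy could be generated like this (the column headers below are hypothetical, not the documented format):

```python
import csv

# Hypothetical parent/child option pairs for a Country -> State dependent field.
rows = [
    ("USA", "California"),
    ("USA", "Texas"),
    ("Canada", "Ontario"),
    ("Canada", "Quebec"),
]

with open("country_state_options.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # ASSUMPTION: header names are illustrative; use the headers required by
    # the Connect Cases admin UI when preparing a real upload.
    writer.writerow(["country", "state"])
    writer.writerows(rows)
```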
In-app notifications keep users informed of urgent updates and actions
Users of the Amazon Connect admin website can now receive notifications in the page header, so urgent updates and follow-on actions can be seen from any page within the Amazon Connect admin website. APIs allow services and customers to publish brief messages (including URLs) to a specified audience, and a new header icon indicates when unread messages are available. On click, the user can read the message, mark it as unread if necessary, and follow links to reports or other UIs if follow-on actions are advised. For more information, see In-app notifications keep users informed of urgent updates and actions.
- Jan 26, 2026
- Date parsed from source: Jan 26, 2026
- First seen by Releasebot: Jan 27, 2026
AWS Weekly Roundup: Amazon EC2 G7e instances, Amazon Corretto updates, and more (January 26, 2026)
AWS headlines a wave of product updates with NVIDIA Blackwell-powered GPU instances, including the general availability of G7e for faster AI inference, plus enhancements across ECR, CloudWatch Database Insights regions, and Connect Step-by-Step Guides. Corretto quarterly security updates were also released.
Hey! It’s my first post for 2026, and I’m writing to you while watching our driveway getting dug out. I hope wherever you are you are safe and warm and your data is still flowing!
This week brings exciting news for customers running GPU-intensive workloads, with the launch of our newest graphics and AI inference instances powered by NVIDIA’s latest Blackwell architecture. Along with several service enhancements and regional expansions, this week’s updates continue to expand the capabilities available to AWS customers.
Last week’s launches
I thought these projects, blog posts, and news items were also interesting:
- Amazon EC2 G7e instances are now generally available — The new G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs deliver up to 2.3 times better inference performance compared to G6e instances. With two times the GPU memory and support for up to 8 GPUs providing 768 GB of total GPU memory, these instances enable running medium-sized models of up to 70B parameters with FP8 precision on a single GPU. G7e instances are ideal for generative AI inference, spatial computing, and scientific computing workloads. Available now in US East (N. Virginia) and US East (Ohio).
- Amazon Corretto January 2026 Quarterly Updates — AWS released quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) versions of OpenJDK. Corretto 25.0.2, 21.0.10, 17.0.18, 11.0.30, and 8u482 are now available, ensuring Java developers have access to the latest security patches and performance improvements.
- Amazon ECR now supports cross-repository layer sharing — Amazon Elastic Container Registry now enables you to share common image layers across repositories through blob mounting. This feature helps you achieve faster image pushes by reusing existing layers and reduce storage costs by storing common layers once and referencing them across repositories.
- Amazon CloudWatch Database Insights expands to four additional regions — CloudWatch Database Insights on-demand analysis is now available in Asia Pacific (New Zealand), Asia Pacific (Taipei), Asia Pacific (Thailand), and Mexico (Central). This feature uses machine learning to help identify performance bottlenecks and provides specific remediation advice.
- Amazon Connect adds conditional logic and real-time updates to Step-by-Step Guides — Amazon Connect Step-by-Step Guides now enables managers to build dynamic guided experiences that adapt based on user interactions. Managers can configure conditional user interfaces with dropdown menus that show or hide fields, change default values, or adjust required fields based on prior inputs. The feature also supports automatic data refresh from Connect resources, ensuring agents always work with current information.
Upcoming AWS events
Keep a lookout and be sure to sign up for these upcoming events:
- Best of AWS re:Invent (January 28-29, Virtual) — Join us for this free virtual event bringing you the most impactful announcements and top sessions from AWS re:Invent. AWS VP and Chief Evangelist Jeff Barr will share highlights during the opening session. Sessions run January 28 at 9:00 AM PT for AMER, and January 29 at 9:00 AM SGT for APJ and 9:00 AM CET for EMEA. Register to access curated technical learning, strategic insights from AWS leaders, and live Q&A with AWS experts.
- AWS Community Day Ahmedabad (February 28, 2026, Ahmedabad, India) — The 11th edition of this community-driven AWS conference brings together cloud professionals, developers, architects, and students for expert-led technical sessions, real-world use cases, tech expo booths with live demos, and networking opportunities. This free event includes breakfast, lunch, and exclusive swag.
Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse for upcoming in-person and virtual developer-focused events in your area.
That’s all for this week. Check back next Monday for another Weekly Roundup!
~ micah
- Jan 20, 2026
- Date parsed from source: Jan 20, 2026
- First seen by Releasebot: Jan 21, 2026
Amazon Quick Suite launches expanded size, faster ingestion, and richer data type support for SPICE datasets
Amazon Quick Suite expands SPICE with bigger scale, faster ingestion, and broader data types for AI workloads. Datasets now support up to 2TB (double the previous 1TB limit), strings up to 64K Unicode characters, and timestamps extending back to year 0001 (previously year 1400). The larger dataset size is available in Amazon Quick Sight Enterprise Editions; see the documentation for details.
Amazon Quick Suite SPICE engine enhancements
The Amazon Quick Suite SPICE engine now supports higher scale, faster ingestion, and broader data types to power advanced analytics and AI-driven workloads. With this launch, customers can load up to 2TB of data per dataset, doubling the previous 1TB limit, when using the new data preparation experience. Despite the increased dataset size, SPICE continues to deliver strong performance, with ingestion further optimized to enable even faster data loading and refresh to reduce time to insight. We’ve also expanded SPICE’s data type support by increasing string length limits from 2K to 64K Unicode characters and extending the supported timestamp range from year 1400 back to year 0001. As Quick Suite customers bring richer, more complex, and increasingly AI-driven workloads into SPICE, these enhancements enable broader data coverage, faster data onboarding, and more powerful analytics, without compromising performance. To learn more, visit our documentation.
The new SPICE dataset size limitation is now available in Amazon Quick Sight Enterprise Editions across all supported Amazon Quick Sight regions.
This is a companion discussion topic for the original entry at https://aws.amazon.com/about-aws/whats-new/2026/01/amazon-quick-suite-launches-expanded-spice
- Jan 14, 2026
- Date parsed from source: Jan 14, 2026
- First seen by Releasebot: Jan 14, 2026
Amazon Quick Suite browser extension now supports Quick Flows
Amazon Quick Suite browser extension now supports Quick Flows, letting you run workflows directly in your browser by passing page content to prebuilt or shared flows. Available in multiple regions with no extra charges; install from Chrome, Firefox, or Edge stores.
Amazon Quick Suite browser extension now supports Quick Flows
Amazon Quick Suite browser extension now supports Amazon Quick Flows, enabling you to run workflows directly within your web browser, eliminating the need to manually extract information from each web page. You can invoke workflows that you’ve created or that have been shared with you, and pass web page content as input—all without leaving your browser.
This capability is great for completing routine tasks such as analyzing contract documents to extract key terms, or generating weekly reports from project dashboards that automatically notify stakeholders.
Quick Flows in browser extension is available now in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). There are no additional charges for using the browser extension beyond standard Quick Flows usage.
To get started, visit your Chrome, Firefox, or Edge store page to install the browser extension and sign in with your Quick Suite account. Once you sign in, look for the Flows icon below the chat box to invoke your flows. To learn more about invoking Quick Flows in the browser extension, please visit our documentation.
- Jan 14, 2026
- Date parsed from source: Jan 14, 2026
- First seen by Releasebot: Jan 14, 2026
Amazon Quick Suite now supports memory for chat agents
Amazon Quick Suite adds memory for chat agents to remember user preferences across chats, enabling personalized, context-aware responses. Users can view, edit, or delete memories and even run in Private Mode to keep conversations private. Currently available in US East (N. Virginia) and US West (Oregon).
Memory for chat agents in Amazon Quick Suite
We are announcing memory for chat agents in Amazon Quick Suite – a feature that allows users to get personalized responses based on their previous conversations. With this feature, Quick Suite remembers the preferences users specify in chat and generates responses that are tailored to them. Users can also view their inferred preferences and remove any memory they don’t want Quick chat agents to use.
Previously, chat users needed to repeat their preferences around response format, acronyms, dashboards, and integrations in every conversation. They also had to clarify ambiguous topics and entities in chat, increasing the tedious back and forth needed to get accurate and insightful responses. Memory addresses this pain point by remembering facts and details about users so that the responses they receive continuously improve. Users also control what Quick Suite remembers about them: all memories are viewable and removable, and users can choose to start a chat in Private Mode, in which conversations are not used to infer memories.
Memory in Quick Suite chat agents is available in US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Quick Suite User Guide.
- January 2026
- No date parsed from source.
- First seen by Releasebot: Jan 7, 2026
Database Migration Service by Amazon
AWS Database Migration Service 3.4.4 release notes
AWS DMS 3.4.4 introduces TLS and Kafka target auth with MSK, plus broad reliability tweaks across Oracle, PostgreSQL, S3, MongoDB, DocumentDB, and more. Expect improved logging, CDC resilience, and richer datatype handling for multiple sources and targets.
Features and enhancements
The following list describes the new features and enhancements introduced in AWS DMS version 3.4.4.
- AWS DMS now supports TLS encryption and TLS or SASL authentication using Amazon MSK and on-premises Kafka cluster as a target. For more information on using encryption and authentication for Kafka endpoints, see Connecting to Kafka using Transport Layer Security (TLS).
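As a minimal sketch of what a Kafka target with TLS and SASL authentication could look like through the DMS API, the boto3 call below uses the KafkaSettings endpoint options; the broker address, credentials, and certificate ARN are placeholders, not values from this release note.

```python
import boto3

dms = boto3.client("dms")

# Sketch: a Kafka target endpoint using SASL authentication over TLS.
# Broker, credentials, and certificate ARN are hypothetical placeholders.
dms.create_endpoint(
    EndpointIdentifier="kafka-target-sasl-ssl",
    EndpointType="target",
    EngineName="kafka",
    KafkaSettings={
        "Broker": "b-1.example-msk.amazonaws.com:9096",
        "Topic": "dms-cdc-events",
        "SecurityProtocol": "sasl-ssl",
        "SaslUsername": "dms_user",
        "SaslPassword": "example-password",
        "SslCaCertificateArn": "arn:aws:dms:us-east-1:123456789012:cert:EXAMPLE",
    },
)
```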
Issues resolved in AWS DMS 3.4.4
The issues resolved in AWS DMS 3.4.4 include the following:
- Improved AWS DMS logging on task failures when using Oracle endpoints.
- Improved AWS DMS task execution so that processing continues when Oracle source endpoints switch roles after an Oracle Data Guard failover.
- Improved error handling to treat ORA-12561 as a recoverable error when using Oracle endpoints.
- Fixed an issue where EMPTY_BLOB() and EMPTY_CLOB() columns are migrated as null when using Oracle as a source.
- Fixed an issue where AWS DMS tasks fail to update records after add column DDL changes when using SQL Server as a source.
- Improved PostgreSQL as a source migration by supporting the TIMESTAMP WITH TIME ZONE data type.
- Fixed an issue where the afterConnectScript setting does not work during a full load when using PostgreSQL as a target.
- Introduced a new mapUnboundedNumericAsString setting to better handle the NUMERIC data type without precision and scale when using PostgreSQL endpoints.
- Fixed an issue where AWS DMS tasks fail with “0 rows affected” after stopping and resuming the task when using PostgreSQL as a source.
- Fixed an issue where AWS DMS fails to migrate the TIMESTAMP data type with the BC suffix when using PostgreSQL as a source.
- Fixed an issue where AWS DMS fails to migrate the TIMESTAMP value “±infinity” when using PostgreSQL as a source.
- Fixed an issue where empty strings are treated as NULL when using S3 as a source with the csvNullValue setting set to other values.
- Improved the timestampColumnName extra connection attribute in a full load with CDC to be sortable during CDC when using S3 as a target.
- Improved the handling of binary data types in hex format such as BYTE, BINARY, and BLOB when using S3 as a source.
- Fixed an issue where deleted records are migrated with special characters when using S3 as a target.
- Fixed an issue to handle empty key values when using Amazon DocumentDB (with MongoDB compatibility) as a target.
- Fixed an issue where AWS DMS fails to replicate NumberDecimal or Decimal128 columns when using MongoDB or Amazon DocumentDB (with MongoDB compatibility) as a source.
- Fixed an issue to allow CDC tasks to retry when there is a fail over on MongoDB or Amazon DocumentDB (with MongoDB compatibility) as a source.
- Added an option to remove the hexadecimal “0x” prefix to RAW data type values when using Kinesis, Kafka, or OpenSearch as a target.
- Fixed an issue where validation fails on fixed length character columns when using Db2 LUW as a source.
- Fixed an issue where validation fails when only the source data type or the target data type is FLOAT or DOUBLE.
- Fixed an issue where validation fails on NULL characters when using Oracle as a source.
- Fixed an issue where validation fails on XML columns when using Oracle as a source.
- Fixed an issue where AWS DMS tasks crash when there are nullable columns in composite keys using MySQL as a source.
- Fixed an issue where AWS DMS fails to validate both UNIQUEIDENTIFIER columns from SQL Server source endpoints and UUID columns from PostgreSQL target endpoints.
- Fixed an issue where a CDC task does not use an updated source table definition after it is modified.
- Improved AWS DMS failover to treat task failures caused by an invalid user name or password as recoverable errors.
- Fixed an issue where AWS DMS tasks fail because of missing LSNs when using RDS for SQL Server as a source.
- Jan 1, 2026
- Date parsed from source: Jan 1, 2026
- First seen by Releasebot: Jan 21, 2026
Amazon Quick Sight expands dashboard customization in tables and pivot tables
Amazon QuickSight now lets readers customize dashboards directly by adding/removing fields, changing aggregations, and tweaking formatting without author updates. This boosts flexibility for sales and finance analyses and is available now in Enterprise Edition across supported regions.
Building on our recent launch of customizable tables and pivot tables, Amazon Quick Sight now enables readers to add or remove fields, change aggregations, and modify formatting directly in dashboards—all without requiring updates from dashboard authors.
These enhanced capabilities empower readers with even greater flexibility to tailor their data views for specific analytical needs. For example, sales managers can add revenue breakdowns by product category to identify growth opportunities, while finance teams can change aggregations from sum to average to better understand spending patterns across departments.
These new customization features are now available in Amazon Quick Sight Enterprise Edition across all supported Amazon Quick Sight regions. To get started with these new customization features, see our blog post.
- Jan 1, 2026
- Date parsed from source: Jan 1, 2026
- First seen by Releasebot: Nov 7, 2025
- Modified by Releasebot: Feb 9, 2026
January 2026 Updates
Amazon Connect rolls out wait time estimates, task file attachments, tag-based access controls, easy case linking with flows, a recurring hours calendar, CloudFormation support, and live agent screen recording status. Rich data, automation, and security upgrades boost coaching and efficiency.
Amazon Connect Launches Wait Time Estimates to Improve Customer Experience
Amazon Connect now delivers improved estimated wait time metrics for queues and enqueued contacts, empowering organizations to enhance customer satisfaction. This allows contact centers to set accurate customer expectations, provide convenient options such as callbacks when hold times are extended, and balance workloads effectively across multiple queues. By leveraging the estimated wait time metric, contact centers can make strategic routing choices across queues while gaining enhanced visibility for better resource planning. For example, a customer calling about billing during peak hours with a 15-minute wait is seamlessly transferred to a cross-trained team with 2-minute availability, getting help faster without repeating their issue. The metric works seamlessly with routing criteria and agent proficiency configurations.
This feature is available in all AWS regions where Amazon Connect is offered. To learn more about estimated wait time see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.
Amazon Connect now supports file attachments for tasks via StartTaskContact API
Amazon Connect now enables you to include file attachments when creating tasks using the StartTaskContact API. You can attach up to 5 files per task in various formats such as .pdf, .docx, .csv, .txt, .png, .jpg, .mp4, and more. This capability allows you to provide agents with relevant documents, images, or other files directly within the task context, streamlining workflows and improving agent efficiency.
Amazon Connect now supports tag-based access controls for cases
Amazon Connect now enables you to use tag-based access controls to define who can access specific cases. You can associate tags with case templates and configure security profiles to determine which users can access cases with those tags. For example, you can restrict access to fraud-related cases so that only agents in the fraud department can view or edit them.
Amazon Connect now simplifies linking related contacts to cases using flows
Amazon Connect now makes it easier to link related contacts such as email replies, call transfers, persistent chats, and queued callbacks to the same case so agents can view the complete customer journey and resolve issues faster. You can use flows to search for a case associated with a prior contact in the chain, making it easier to link follow-up contacts.
In addition, you can now use flows to link a related contact to a case. For example, when you create a case via a Step-by-Step Guide, you can link that case to the main contact (e.g., voice, chat, email, or tasks) directly using flows.
Recurring overrides and visual calendar for hours of operation
Amazon Connect now makes it easier to manage contact center operating hours for recurring events like holidays, maintenance windows, and promotional periods, with a visual calendar that provides at-a-glance visibility by day, month, or year. You can set up recurring overrides that automatically take effect weekly, monthly, or every other Friday, and use them to provide customers with personalized experiences, all without having to manually revisit configurations. For example, every January 1st you can automatically greet customers with "Happy New Year!" and route them to a special holiday message before checking if agents are available, then on January 2nd your contact center automatically returns to normal operations.
For more information, see Set overrides for extended, reduced, and holiday hours.
Cases now supports AWS CloudFormation
Amazon Connect Cases now supports AWS CloudFormation, enabling you to model, provision, and manage case resources as infrastructure as code. With this launch, administrators can create CloudFormation templates to programmatically deploy and update their Cases configuration—such as templates, fields, and layouts—across Amazon Connect instances, reducing manual setup time and minimizing configuration errors.
For more information, see documentation.
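A minimal sketch of what infrastructure-as-code for Cases could look like, assuming the AWS::ConnectCases::Field resource type and its DomainId/Name/Type properties as documented in the CloudFormation reference; the domain ID and field values below are placeholders:

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Minimal template declaring one Amazon Connect Cases field.
# ASSUMPTION: property names follow the documented AWS::ConnectCases::Field
# resource; the DomainId is a placeholder for an existing Cases domain.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "EscalationPriorityField": {
            "Type": "AWS::ConnectCases::Field",
            "Properties": {
                "DomainId": "11111111-2222-3333-4444-555555555555",
                "Name": "Escalation Priority",
                "Type": "SingleSelect",
                "Description": "Priority used for escalation routing",
            },
        }
    },
}

cfn.create_stack(
    StackName="connect-cases-fields",
    TemplateBody=json.dumps(template),
)
```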
Agent screen recording status tracking
Amazon Connect now offers customers the ability to view the status of agent screen recordings in near real time in CloudWatch using Amazon EventBridge. With screen recording, supervisors can identify areas for agent coaching (e.g., non-compliance with business processes) by not only listening to customer calls or reviewing chat transcripts, but also watching agents' actions while handling a contact (i.e., a voice call, chat, or task). Using Amazon EventBridge, customers can see the status of each agent screen recording, including success/failure, failure codes with descriptions, installed client version, agent web browser version, agent operating system, and screen recording start and end times, from CloudWatch.
Customers can start using Amazon Connect screen recording status tracking by subscribing to the Screen Recording Status Changed event type on the Amazon EventBridge event bus.
For more information, see Set up and review agent screen recordings in Amazon Connect Contact Lens.
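As a hedged sketch of the subscription step, the EventBridge rule below matches the Screen Recording Status Changed event type named in this release; the "aws.connect" source value and the SNS topic target are assumptions for illustration.

```python
import json
import boto3

events = boto3.client("events")

# Match screen recording status events on the default event bus.
# ASSUMPTION: "aws.connect" is the event source emitted by Amazon Connect.
pattern = {
    "source": ["aws.connect"],
    "detail-type": ["Screen Recording Status Changed"],
}

events.put_rule(
    Name="connect-screen-recording-status",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Route matched events to a hypothetical SNS topic for supervisor alerting.
events.put_targets(
    Rule="connect-screen-recording-status",
    Targets=[{
        "Id": "sns-target",
        "Arn": "arn:aws:sns:us-east-1:123456789012:screen-recording-alerts",
    }],
)
```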
Store nested JSON object and looping arrays
Amazon Connect now enables you to store and work with complex data structures in your flows, making it easy to build dynamic automated experiences that use rich information returned from your internal business systems. You can save complete data records, including nested JSON objects and lists, and reference specific elements within them, such as a particular order from a list of orders returned in JSON format.
Additionally, you can automatically loop through lists of items in your customer service flows, moving through each entry in sequence while tracking the current position in the loop. This allows you to easily access item-level details and present relevant information to end-customers. For example, a travel agency can retrieve all of a customer's itineraries in a single request and guide the caller through each booking to review or update their reservations. A bank can similarly walk customers through recent transactions one by one using data retrieved securely from its systems. These capabilities reduce the need for repeated calls to your business systems, simplify workflow design, and make it easier to deliver advanced automated experiences that adapt as your business requirements evolve.
For more information, see Flows in Amazon Connect.
- December 2025
- No date parsed from source.
- First seen by Releasebot: Dec 20, 2025
- Modified by Releasebot: Feb 15, 2026
Database Migration Service by Amazon
AWS Database Migration Service 3.5.1 release notes
AWS DMS 3.5.1 updates numeric handling for streaming targets, switching large integers to INT64 and sometimes scientific notation, which may affect downstream formats. The release adds broad target support (PostgreSQL 15, DocumentDB Elastic, Redshift Serverless, Timestream) and numerous stability fixes.
Summary of Change
In AWS DMS version 3.5.1, there is a change in how large integer and high-precision numeric values are handled when streaming data to targets like Kafka and Kinesis. Specifically, AWS DMS changed its internal data type representation, handling these values as INT64 instead of INT8. This shift can result in different data formats on the streaming endpoints, particularly when the values exceed the limits of INT8. Consequently, the representation of these numeric types may differ from their previous formatting when streamed to destinations like Kafka and Kinesis, potentially impacting downstream systems and processes that consume the data from these targets.
- In previous versions (e.g., 3.4.7/3.4.6), large integer values were represented as integers.
- Starting with version 3.5.1, these values may appear in scientific notation (e.g., 7.88129934789981E15), potentially leading to precision and formatting differences.
Affected Data Types
The recent change affects the representation of several numeric types when streamed to endpoints like Kafka and Kinesis. The impacted types are:
- Large integer types (e.g., bigint)
- Floating-point types (FLOAT, DOUBLE)
- High-precision decimal types (DECIMAL, NUMERIC)
Affected Scenarios
- Full load migrations to streaming targets
- Change Data Capture (CDC) to streaming targets
This change specifically impacts streaming endpoints such as Kafka and Kinesis, while non-streaming targets remain unaffected.
Mitigation
To mitigate this change, you can implement a data type transformation that reverts to the previous formatting, representing large numbers as integers. However, it's important to note that this workaround may not be suitable for all scenarios, as it could potentially introduce limitations or compatibility issues.
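A minimal sketch of that workaround, assuming the affected column is named amount: the change-data-type transformation rule below forces the column back to an 8-byte integer in the task's table mappings (schema, table, and column names are placeholders):

```python
import json

# DMS table-mapping snippet forcing a numeric column back to an 8-byte integer
# representation on the target. Schema, table, and column names are placeholders.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "select-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "force-int8",
            "rule-action": "change-data-type",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%",
                "column-name": "amount",
            },
            "data-type": {"type": "int8"},
        },
    ]
}

# Pass this JSON as the TableMappings argument to create_replication_task or
# modify_replication_task (boto3 "dms" client), or paste it into the console.
print(json.dumps(table_mappings, indent=2))
```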
Recommendations
- Test your specific use case in a non-production environment before deploying AWS DMS version 3.5.1 or later to identify and address any impacts of this change.
- Affected customers can implement the change-data-type transformation workaround, if applicable, to revert to the previous formatting for large numbers as integers. However, this approach may not suit all scenarios.
We are reviewing this behavior to ensure consistent data type handling across endpoints in future releases.
New features in AWS DMS 3.5.1
AWS DMS version 3.5.1 supports PostgreSQL version 15.x. For more information, see Using PostgreSQL as a source and Using PostgreSQL as a target.
AWS DMS version 3.5.1 supports Amazon DocumentDB Elastic Clusters with sharded collections. For more information, see Using Amazon DocumentDB as a target for AWS Database Migration Service.
Support for using Amazon Redshift Serverless as a target endpoint. For more information, see Using an Amazon Redshift database as a target for AWS Database Migration Service.
Enhanced PostgreSQL target endpoint settings for providing Babelfish support. For more information, see Using a PostgreSQL database as a target for AWS Database Migration Service.
AWS DMS 3.5.1 improves the methodology of handling open transactions when starting a CDC-Only task from the Start Position for an Oracle source. For more information, see OpenTransactionWindow in the Endpoint settings when using Oracle as a source for AWS DMS section.
Support for using Amazon Timestream as a target endpoint. For more information, see Using Amazon Timestream as a target for AWS Database Migration Service.
AWS DMS version 3.5.1 includes the following resolved issues:
The representation of large numeric values on streaming targets has been updated. Review the 'Handling of Large Numeric Values in Streaming Targets' documentation for details on potential impacts.
Fixed an issue for Oracle source where CDC-only tasks had continuously growing inactive sessions, resulting in the following exception: ORA-00020: maximum number of processes exceeded on the source database.
Fixed an issue for DocumentDB as a target where UPDATE statements were not properly replicated in some scenarios.
Improved error handling for the data validation feature to properly fail the task when data validation is disabled for validation-only tasks.
Fixed an issue for Amazon Redshift target where the DMS task would not retry applying changes on the target when the target has ParallelApplyThreads set greater than zero after connection termination, which would result in data loss.
Fixed an issue for MySQL to MySQL replication of mediumtext data types with full-LOB mode.
Fixed an issue for DMS tasks with BatchApplyEnabled set to true where DMS would stop replicating data after Secrets Manager rotated the password.
Fixed an issue for MongoDB / DocDB source where range segmentation would not work properly when the primary key column contained a large value.
Fixed an issue for Oracle target where DMS would recognize a value of unbound data type NUMERIC as a STRING during data validation.
Fixed an issue for SQL Server endpoints where DMS data validation constructed an invalid SQL statement.
Improved the functionality of automatic partitioning of data when migrating documents in parallel from MongoDB as a source.
Fixed an issue for data validation feature where validation would fail on NUL (0x00) characters.
Fixed an issue for the Babelfish endpoint where table names with mixed case would be suspended.
Fixed an issue for Amazon S3 source where files were not processed due to a file name validation issue.
Fixed an issue for Db2 LUW source where "table-type" option in selection rules was being ignored.
Fixed an issue for Amazon Redshift target where data loss would occur when ParallelLoadThreads was >0 under certain conditions.
Enhanced the data validation feature for Amazon Redshift target to support HandleCollationDiff setting.
Fixed an issue for Amazon S3 target data validation where validation would fail when there were no other columns than the PK in the table.
Fixed an issue for data validation feature where the CloudWatch metrics would be missing for validation which took a short amount of time to complete.
Fixed an issue for data validation feature where the re-validation option was unavailable in certain situations.
Fixed an issue where the maximum number of events per transaction was limited to 201,326,592 under certain conditions.
Fixed an issue where a task with the BatchApplyEnabled task setting set to true would fail after migrating from AWS DMS version 3.4.6 to 3.5.1 in some cases.
Fixed an issue with SQL Server AlwaysOn as a source where a task would fail with case-sensitive collation.
Fixed an issue with MySQL as a source where a task would hang instead of failing when the source was not properly configured.
Fixed an issue with S3 as a source where a task would fail on resume after upgrading from AWS DMS version 3.4.6 or 3.4.7 to version 3.5.1.
Fixed an issue with PostgreSQL as a source where DDLs were not properly handled with the CaptureDDLs endpoint setting set to false.
Fixed an issue with Oracle as a source where a task would crash on resume due to incorrect data in the column name.
Fixed an issue with MySQL as a source where an LOB lookup would fail when the ParallelApplyThreads task setting was set to a value greater than zero.
Fixed an issue with SQL Server as a source where a task would fail with an illogical LSN sequencing state error after upgrading from AWS DMS version 3.4.7 to version 3.5.1.
Fixed an issue with PostgreSQL as a source where a task using the pglogical plugin would fail when the task was stopped, a table was removed from selection rules, the task was resumed, and changes were made to the removed table.
Fixed an issue for Aurora MySQL as a source where an incorrect recovery checkpoint would be saved as a result of an Aurora failover or Aurora source stop and start.
Fixed an issue for SQL Server as a source where a task would crash when SafeguardPolicy was set to RELY_ON_SQL_SERVER_REPLICATION_AGENT.
Fixed an issue for MySQL as a target where CDC replication would fail as a result of incorrect data type casting in the batch-apply phase.
Fixed an issue for PostgreSQL as a source where a task would fail due to a DDL being treated as a DML when the CaptureDDLs endpoint setting was set to false.
Fixed an issue for MongoDB as a source where the task would crash due to an empty collection.
Fixed an issue for Amazon Redshift as a target where a task would crash during the full load phase when the recovery checkpoint control table was enabled.
Fixed an issue for S3 to S3 replication where AWS DMS would not replicate the data if the bucketFolder was not specified.
Fixed an issue for S3 as a target where excessive latency would occur when GlueCatalogGeneration was set to true.
Fixed an issue with Oracle as a target where AWS DMS truncates data in VARCHAR2 columns.
Fixed an issue for PostgreSQL as a source where the behavior of the '_' wildcard in the selection rules was not working as documented.
Fixed an issue for PostgreSQL as a source where the task would fail due to an empty WAL header received from the replication slot.
Fixed an issue for MySQL and MariaDB as sources where a proper error message was not emitted when AWS DMS detected BINLOG compression.
Improved S3 data validation to handle special characters in primary and non-primary key columns.
Fixed an issue for Amazon Redshift as a target where misleading entries were present in the task log reporting batch-apply statement failures on UPDATES and DELETES.
Fixed an issue for SQL Server to S3 migrations where the task would crash while applying cached changes.
Fixed an issue for the batch-apply feature where an error in applying a batch would result in missing data.
Improved logging for the SQL Server source to include the storage unit value.
Several logging enhancements were introduced to provide better visibility and troubleshooting capabilities for the Kafka target.
Enhanced logging for Oracle source with binary reader to properly indicate tables being skipped due to missing primary keys.
Enhanced logging for SQL Server source in AlwaysOn configuration to properly indicate missing permissions.
Enhanced logging for migrations with disabled DDL replication to indicate an unexpected target table structure after it is modified outside of AWS DMS.
Fixed an issue for Db2 target where the task would fail when the AWS DMS status table is enabled.
Fixed an issue for MongoDB / Amazon DocumentDB endpoints where the credentials could not be retrieved from Secrets Manager, which resulted in an error.
Fixed an issue for MongoDB / Amazon DocumentDB where the task would fail with ParallelApply enabled while replicating a certain sequence of events.
Enhanced logging for Amazon Redshift target to include more detailed information in default logging level.
Fixed an issue for Amazon S3 target where AWS DMS task would crash after receiving alter table DDL when GlueCatalogGeneration is enabled.
Fixed an issue where a reload of multiple tables was canceled when at least one of the tables was invalid.
Fixed an issue for MySQL to S3 migration where the first DML executed after an "add column" DDL would be missed, resulting in data loss.
Fixed a memory leak issue for the batch apply feature which would occur under certain conditions.
Fixed an issue where AWS DMS task start would take a very long time and never complete.
Fixed an issue for PostgreSQL source where data loss would occur due to unknown events in the replication slot.
Fixed an issue with MySQL source and target when migrating LOB columns. DMS now uses the column ID from the target table instead of the source table when deciding which column to write the LOB data to.
Fixed an issue with MySQL 5.5 as a source by adding a retry mechanism to prevent task failure when DMS fails to read binary log events during ongoing replication (CDC).
Fixed an issue for PostgreSQL as a source where certain ongoing replication (CDC) events failed to be parsed correctly when using the test_decoding plugin for Postgres.
Fixed an issue with the DocumentDB target parallel apply setting that prevented the use of multiple threads when using this feature.
Fixed an issue with Oracle HCC compression DIRECT INSERT with a parallel DML hint causing missing and duplicate data.
Fixed an issue with Oracle as a source where DMS tasks using Binary Reader were failing due to the Oracle July 2024 CPU.
Fixed an issue where, with TaskRecoveryTableEnabled enabled, DMS attempted to update the target system table awsdms_txn_state after the target connection was terminated.
Fixed an issue with PostgreSQL source where some transactions would be replicated twice when the TaskRecoveryTableEnabled setting was enabled.
Fixed an issue with S3 source to S3 target where the DMS task was not replicating data during full load and ongoing replication.
Fixed an issue for S3 source where the DMS task was segfaulting during ongoing replication in DMS version 3.5.3.
Fixed an issue with Db2 source with CcsidMapping; the CCSID mapping extra connection attribute is now properly applied to the task when the code page is 0, and data is migrated properly.
Fixed an issue where DMS migration from Aurora PostgreSQL to Redshift Serverless encountered issues with Boolean values.
The data validation operation now accurately processes unbound character and TEXT data types, ensuring correct validation results.
PostgreSQL source replication maintains connectivity during Multi-AZ failover events, preventing task failures.
Data validation now correctly compares datetime values when using Babelfish as a target, improving cross-platform compatibility.
MySQL source replication now correctly handles mid-table duplicate column additions, preventing task interruptions.
MySQL source replication maintains column sequence integrity when multiple columns are added during CDC operations.
DynamoDB target replication now correctly processes LOB data during CDC, ensuring complete data transfer.
PostgreSQL source data validation now correctly interprets boolean data type mappings, producing accurate comparison results.
Fixed an issue for Oracle as a source where trailing spaces were being truncated in VARCHAR2(4000) columns when using extended data type support.
SQL Server source replication maintains connectivity during DDL operations on secondary replicas, preventing task interruptions.
AWS Secrets Manager connection strings now support special characters while preserving security protocols.
MongoDB and Amazon DocumentDB replication prevents record duplication that previously triggered key constraint errors.
Oracle source replication accurately processes timestamp values across various session time zone configurations.
Oracle source replication now handles data type conversions more robustly, preventing ORA-01460 errors and associated task failures.
AWS DMS ignores DDL statements for tables not configured in table mappings, preventing unnecessary processing of source database schema changes for unmapped tables.
When migrating data from source SQL Server view to target endpoints, the column length is correctly preserved and no columns are truncated.
Fixed an issue where AWS DMS now correctly identifies the primary replica in Microsoft SQL Server Always On Availability Groups, resolving previous case sensitivity detection errors.
Fixed an issue where AWS DMS tasks failed when migrating unbound numeric data types to PostgreSQL target endpoints.
AWS DMS now correctly validates CHAR and VARCHAR data during migrations, eliminating false positive reports in validation tasks.
AWS DMS prevents data corruption in Large Object (LOB) data during parallel batch operations when using Amazon Redshift as a target endpoint.
Validation failures are prevented and queries are successfully executed on Amazon S3 target when source column filters are applied to VARCHAR, CHAR, DATE, and DATETIME data types.
When using Amazon S3 targets, consistent data validation states are maintained throughout Change Data Capture (CDC) operations.
Fixed an issue where data validation failed due to string formatting errors during record comparison operations.
Resolved an issue where empty VARCHAR values from IBM DB2 LUW source databases were incorrectly captured as NULL values instead of empty strings during Change Data Capture (CDC) operations.
Fixed memory leak in Oracle source endpoints during LOB lookups when full LOB mode is enabled. This prevents continuous memory growth and out-of-memory failures during replication tasks with LOB data.
Redshift target endpoints now correctly map new boolean columns added during CDC when MapBooleanAsBoolean is enabled. This maintains data type consistency with PostgreSQL sources instead of creating varchar columns.
PostgreSQL source endpoints now correctly migrate UUID array data types when inline LOB mode is enabled.
Data validation now correctly reports mismatch details for nullable columns.
Data validation now correctly formats date filter conditions for Oracle target endpoints when using SQL Server sources.
Added warning log message when AWS DMS automatically creates target tables in DO_NOTHING mode. This improves visibility and helps users identify when tables are created despite selecting a mode that implies no automatic table creation.
Fixed SQL Server BIT to PostgreSQL SMALLINT conversion during CDC operations.
Enhanced S3 bucket ownership validation to improve security when using S3 as a migration target or source.
Redshift endpoints now properly reconnect after connection termination and correctly validate credentials during initialization. This prevents "Server name must be supplied" errors and task failures during recovery.
Fixed memory leak in data validation. This prevents out-of-memory failures during long-running validation tasks.
Fixed false positive MISSING_TARGET errors when validating SQL Server to PostgreSQL migrations with CHAR/NCHAR primary keys containing trailing spaces.
Fixed race condition when multiple tasks share S3 endpoints by isolating Athena databases per task.
- Dec 20, 2025
- Date parsed from source: Dec 20, 2025
- First seen by Releasebot: Dec 22, 2025
December 2025 Updates
Agent performance dashboard provides insights into evaluation scores, metrics like handle time, online time breakdown, and evaluations by evaluator across agent hierarchies.