Splunk Cloud Platform Release Notes
Last updated: Feb 20, 2026
- Feb 18, 2026
- Date parsed from source: Feb 18, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
The Ingest Processor solution
Splunk Cloud’s Ingest Processor receives a steady stream of feature releases, from AI-assisted field extraction and batch data aggregation to decryption, XML/JSON tooling, pipeline templates, and expanded cloud region availability.
This page contains information about new features, known issues, and resolved issues for the Ingest Processor solution, grouped by the release date. The Ingest Processor solution is a service within Splunk Cloud Platform designed to help you manage your data processing configurations and monitor your ingest traffic through a centralized Splunk Cloud service. Use the Ingest Processor solution to filter, mask, and transform your data before routing the processed data to external environments. For more information, see About Ingest Processor.
Note: The release date indicates when updates to the Ingest Processor solution were made available to Splunk Cloud Platform customers. For more information, contact your Splunk account representative.
Use the links to navigate to a specific section:
- New features, enhancements, and fixed issues
- Known issues
New features, enhancements, and fixed issues
Splunk Inc. releases frequent updates to the Ingest Processor solution. This list is periodically updated with the latest functionality and changes to the product.
February 18, 2026
The Ingest Processor solution now includes the following new features or enhancements.
- Support for Automated Field Extraction (AFE): The Ingest Processor solution now allows you to use Generative AI to suggest regular expressions (regex) for extracting multiple fields at ingest-processing time. See Extract fields from event data using Ingest Processor for more information.
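An accepted regex suggestion ends up applied in the pipeline much like a manual extraction. A minimal sketch, assuming illustrative dataset names and a hypothetical log format (the exact SPL2 that AFE generates may differ):

```spl2
/* Hypothetical: extract 'user' and 'action' fields from events shaped
   like "user=alice action=login", using a regex that AFE might suggest.
   $source and $destination are placeholders. */
$pipeline = | from $source
    | rex field=_raw /user=(?<user>\w+)\s+action=(?<action>\w+)/
    | into $destination;
```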
January 27, 2026
The Ingest Processor solution now includes the following new features or enhancements.
- Support for the stats function: The Ingest Processor solution now allows you to aggregate your event data in batches and reduce the volume of logs sent to your destination. See Aggregate event data using Ingest Processor for more information.
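Batch aggregation with stats can replace a high-volume event stream with periodic summaries. A hedged sketch (field and dataset names are illustrative; batching behavior is configured as described in the linked topic):

```spl2
/* Illustrative: emit one summary row per host and status code
   instead of forwarding every raw event. */
$pipeline = | from $source
    | stats count() AS event_count, sum(bytes) AS total_bytes BY host, status
    | into $destination;
```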
December 19, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Additional pipeline templates: The Ingest Processor solution has been updated with pipeline templates that process AWS CloudTrail logs, CrowdStrike FDR logs, and Microsoft Office 365 Management Activity events, as well as templates that demonstrate common data processing workflows using generic sample data. You can use these templates as starting points for your own pipelines, or as references to learn how to write SPL2 to fulfill various use cases. See Use templates to create pipelines for Ingest Processor for more information.
October 20, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Decryption: The Ingest Processor solution now allows you to send encrypted data through your pipelines, and decrypt it before it reaches its destination. That way, you do not have to decrypt your data before processing it in Ingest Processor pipelines. To decrypt your data, apply the Decrypt command to your pipelines. See Use the Decrypt command to decrypt data in the Ingest Processor solution for more information.
October 8, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- xml_to_object and object_to_xml functions: The Ingest Processor solution and the Edge Processor solution now support two new functions for modifying pipeline data: xml_to_object and object_to_xml. Use the xml_to_object function to convert an XML string into a JSON object. The JSON format makes your data easier to manipulate and modify, and more efficient to store in and query from Splunk indexes. You can also set the xml_to_object function to infer the data types within your XML string: it can infer booleans, integers, and floats, and recognizes the value null. Use the object_to_xml function to convert a JSON object back into an XML string, for example to return your data to its original format after modifying it as JSON. These functions are not available for searches. See Apply the xml_to_object and object_to_xml functions to pipelines for more information.
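In a pipeline, the two functions are typically used with eval. A minimal sketch (dataset names are placeholders; see the linked topic for the exact function options, such as type inference):

```spl2
$pipeline = | from $source
    | eval parsed = xml_to_object(_raw)      // XML string -> JSON object
    | eval restored = object_to_xml(parsed)  // JSON object -> XML string
    | into $destination;
```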
July 31, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Index partitioning for Ingest Processor pipelines: You can now add index partitioning to your Ingest Processor pipelines. This feature, available in Splunk Cloud Platform versions 10.0.2503 and higher, allows you to create an index predicate for your pipelines, in addition to the host, source, and source type partition options. See Create pipelines for Ingest Processor for more information.
June 18, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Conversion to OCSF format: You can now use the Ingest Processor to convert incoming event data to the Open Cybersecurity Schema Framework (OCSF) format, ensuring that the data can be used effectively in security applications. Conversions are supported for specific source types and event types. See Convert data to OCSF format using Ingest Processor for more information.
June 11, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Configurable timestamp rounding for metrics: By default, when you use the logs_to_metrics SPL2 command in a pipeline to generate metrics, the Ingest Processor rounds the timestamps of these metrics to the nearest 10 seconds. Rounding the timestamps prevents Splunk Observability Cloud from dropping data that arrives out of order. You can now choose to round metric timestamps to a different interval of seconds or turn off timestamp rounding by setting the round_timestamp parameter in the logs_to_metrics SPL2 command. See Generate logs into metrics using Ingest Processor for more information.
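In pipeline terms, the new setting is just an extra argument to logs_to_metrics. A hedged sketch, where every argument except round_timestamp is an illustrative placeholder:

```spl2
/* Round metric timestamps to 30-second intervals instead of the
   default 10 seconds. Metric name and value field are hypothetical. */
$pipeline = | from $source
    | logs_to_metrics name="cpu_usage" value=cpu_pct round_timestamp=30
    | into $metrics_destination;
```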
June 5, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Expanded support for Perl-compatible Regular Expressions 2 (PCRE2): To improve consistency and alignment with the Splunk platform, all existing and new pipelines have been updated to use PCRE2 syntax instead of RE2 syntax for regular expressions. For more information about PCRE2 regular expressions in your existing pipelines, see Convert RE2 regular expressions to PCRE2 regular expressions.
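One practical consequence: PCRE2 supports constructs that RE2 rejects, such as lookarounds and backreferences. A hedged sketch of a filter that only works under PCRE2 (dataset names are placeholders):

```spl2
/* Negative lookahead: keep "error" events unless they read
   "error_handled". RE2 would reject this pattern; PCRE2 accepts it. */
$pipeline = | from $source
    | where match(_raw, /error(?!_handled)/)
    | into $destination;
```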
April 30, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Apply custom command function: To process the incoming data before sending it to a destination, you can now discover, select, and apply custom command functions, which are user-defined SPL2 functions. This is particularly helpful for customers with less experience using SPL2.
March 5, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Support for Perl-compatible Regular Expressions 2 (PCRE2): To improve consistency and alignment with the Splunk platform, starting on March 5, 2025, new pipelines will use PCRE2 syntax instead of RE2 syntax for regular expressions. On June 5, 2025, all existing and new pipelines will be updated to use PCRE2 syntax. For information about converting and validating the regular expressions in your existing pipelines, see Convert RE2 regular expressions to PCRE2 regular expressions.
March 4, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Support for lookups: You can now use lookups in your pipelines to enrich incoming event data with additional information from CSV or KV collection lookup tables. See Enrich data with lookups using Ingest Processor for more information.
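Enrichment follows the familiar lookup pattern. A minimal sketch, assuming a hypothetical lookup dataset with asset_ip, owner, and site columns (see the linked topic for the exact SPL2 lookup syntax):

```spl2
/* Illustrative: add owner and site fields to events whose src_ip
   matches asset_ip in the lookup table. */
$pipeline = | from $source
    | lookup $assets_lookup asset_ip AS src_ip OUTPUT owner, site
    | into $destination;
```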
February 10, 2025
The Ingest Processor solution now includes the following new features or enhancements.
- Support for persistent queues (PQ): Ingest Processor now supports persistent queues alongside in-memory queues to prevent data loss during system congestion. When congestion occurs, Ingest Processor temporarily stores data by writing it to disk. Once the system returns to normal operation, Ingest Processor automatically forwards the stored data from these persistent queues. See the Resiliency and queueing in Ingest Processor topic in the Use Ingest Processors manual for more information.
- Support for previewing up to a chosen action in pipeline statements: You can now preview your pipeline statement at different actions in the SPL2 editor to analyze your data at different points of processing. For more information about how to preview up to an action, see Create pipelines for Ingest Processor.
November 19, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Support for gzip compression on data being sent to Amazon S3: When sending data from the Ingest Processor to Amazon S3, you can now compress that data using gzip. See Send data from Ingest Processor to Amazon S3 in the Use Ingest Processors manual for more information.
October 28, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Cloud region availability: Ingest Processor is now available in the following cloud regions: eu-south-1, eu-west-3. See About Ingest Processor for all cloud region availability.
September 10, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Support for sending your data from Ingest Processor to a Splunk platform metrics index destination: You can now send metrics data from Ingest Processor to a Splunk platform metrics index. Selecting a Splunk platform metrics index as a destination involves selecting a metrics destination and a corresponding metrics index. For information about how to configure sending metrics data to a Splunk Platform index, see Send metrics data from Ingest Processor to a Splunk platform metrics index. For information about how to send metrics to multiple destinations, see Send metrics to multiple destinations.
August 7, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Improved user interface for configuring index routing: The user interface for configuring index routing has been updated to present the configuration options more clearly. For information about how to configure index routing, see Create pipelines for Ingest Processor. For information about how the destination index for your data is determined by a precedence order of configurations, see How does Ingest Processor know which index to send data to?
July 19, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Updates to custom function support in SPL2: When defining a custom SPL2 function in a pipeline, you must now declare mandatory parameters before optional parameters. See Custom eval functions in the SPL2 Search Manual for more information.
July 17, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Ingest Processor General Availability: The Ingest Processor solution is now publicly available to all Splunk Cloud Platform users. See Get started with the Ingest Processor solution.
- Support for Premier and Essentials tier subscriptions: The Ingest Processor Essentials tier is included with a Splunk Cloud Platform subscription, and accommodates a maximum Daily Processing Volume of 500 GB/day. The Premier tier is a priced SKU for Daily Processing Volumes over 500 GB/day. For more information, contact your Splunk Sales representative. For more information about licensing in Splunk Cloud Platform, see the Use the License Usage dashboards topic in the Splunk Cloud Platform Admin Manual. For more information about Splunk Cloud Platform subscriptions, see the Subscription types section of the Splunk Cloud Platform Service Details topic in the Splunk Cloud Platform manual.
- Cloud region availability: Ingest Processor is available in the following cloud regions: us-east-1, us-west-2, ap-northeast-1, ap-southeast-1, ap-southeast-2, ca-central-1, eu-central-1, eu-west-1, eu-west-2.
May 14, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Support for the branch SPL2 command: You can now use the branch command to process and route copies of the incoming data in different ways. See Routing data in the same Ingest Processor pipeline to different actions and destinations for more information.
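Conceptually, branch splits the stream into independently processed copies. A hedged sketch of the shape this takes (the bracket syntax and all names are illustrative; see the linked topic for the exact form):

```spl2
/* Copy 1: route server errors to an error destination.
   Copy 2: mask passwords, then send everything to the main destination. */
$pipeline = | from $source
    | branch
        [ | where status >= 500 | into $errors_destination ],
        [ | eval _raw = replace(_raw, /password=\S+/, "password=***")
          | into $main_destination ];
```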
April 17, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Availability on HIPAA, IRAP, and PCI DSS compliant cloud environments: Splunk Cloud Platform has attained a number of compliance attestations and certifications from industry-leading auditors as part of Splunk's commitment to adhere to industry standards worldwide and Splunk's efforts to safeguard customer data. Generally Available products and features that are currently in scope of Splunk's compliance program may not be a part of the third-party audit report until the next assessment cycle. The Ingest Processor solution is in scope of the following compliance programs and will be audited at the next assessment cycle.
Information Security Registered Assessors Program (IRAP): IRAP is an initiative of the Australian Signals Directorate (ASD) through the Australian Cyber Security Centre (ACSC), designed to provide cyber security assessments on Information and Communications Technology (ICT) services to government organizations. IRAP is also a recognized standard with robust security controls for cloud services in the private sector across Australia.
April 15, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Cloud region availability: Ingest Processor is now available in the following cloud regions: ap-southeast-2, eu-central-1, eu-west-1. See Get started with the Ingest Processor solution.
April 4, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Support for the mvappend and mvdedup SPL2 functions: You can now use the following evaluation functions in pipelines for the Ingest Processor: mvappend, mvdedup. See SPL2 evaluation functions for Ingest Processor pipelines for more information.
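The two functions compose naturally. A minimal sketch (field and dataset names are placeholders):

```spl2
/* Merge two multivalue fields and drop duplicate values. */
$pipeline = | from $source
    | eval all_tags = mvdedup(mvappend(tags_a, tags_b))
    | into $destination;
```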
March 26, 2024
The Ingest Processor solution now includes the following new features or enhancements.
- Updated workflow for configuring hashing functions: You can now use the Compute hash of action in the pipeline builder to add and configure hashing functions in your pipelines. See Hash fields using Ingest Processor for more information.
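Under the hood, the Compute hash of action adds a hashing eval to the pipeline. A hedged sketch using SHA-256 (field and dataset names are illustrative; the action may generate different SPL2):

```spl2
/* Store the SHA-256 digest of a sensitive field alongside the event. */
$pipeline = | from $source
    | eval card_hash = sha256(card_number)
    | into $destination;
```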
February 20, 2024
This is the first publicly available preview of the Ingest Processor solution. The following functionalities are available within this public preview to capture feedback from early adopters of Ingest Processor:
- Set up the Ingest Processor solution. See First-time setup instructions for the Ingest Processor solution.
- Process data using pipelines. See Create pipelines for Ingest Processor.
- Write metrics to Splunk Observability Cloud using pipelines. See Generate logs into metrics using Ingest Processor.
- Route data using pipelines. See Process a subset of data using Ingest Processor.
- View and configure destinations to route data to, including Splunk platform deployments, Splunk Observability Cloud environments, and Amazon S3 buckets. See Add or manage destinations.
- View the health status and data flow metrics of the Ingest Processor. See View data flow information about Ingest Processor.
Known issues
The Ingest Processor solution is subject to the following limitations.
Browsers
- Multiple browser sessions are not supported since it is possible for users to try to edit the same pipeline in more than one browser session and make conflicting edits.
Ingest Processors
The following limitations exist for Ingest Processors:
CAUTION: Ingest Processors provide no data delivery guarantees. Data loss can occur if an Ingest Processor experiences high back pressure on connections to destinations, or when a data destination has a prolonged outage.
- Only Splunk Cloud tenant administrators can create and view Ingest Processor pipelines.
Forwarders
- The following limitations exist for forwarders:
- The useACK property in outputs.conf must be disabled in forwarders that are sending data to Ingest Processor pipelines.
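On the forwarder side, this is a one-line outputs.conf change. A minimal sketch, where the output group name and server address are placeholders:

```conf
# outputs.conf on the forwarder sending to an Ingest Processor pipeline
[tcpout:ingest_processor_group]
server = ingest-processor.example.com:9997
useACK = false
```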
HTTP Event Collector (HEC)
- When you receive data through HEC, the Enable indexer acknowledgement setting on the HEC token must be turned off.
Lookups
- CIDR matching is not supported. When configuring your lookup definition, make sure that the Match type advanced option is not set to CIDR.
Metrics
- Historical metrics presented in the detailed view of an Ingest Processor pipeline do not include metrics for deleted pipelines.
Pipelines
- The following limitations exist for pipelines:
- Only tenant administrators can create, edit, delete, apply, or remove pipelines.
- Some SPL2 functions work differently in Ingest Processor pipelines than they do in searches. For example, regular expressions in functions are interpreted differently because Ingest Processor pipelines support Regular Expression 2 (RE2) syntax while Splunk searches support Perl Compatible Regular Expressions (PCRE) syntax. See Ingest Processor pipeline syntax for more information.
Splunk Cloud Experience tenants
When you go through the first-time setup process for the Ingest Processor solution, you create a connection between your Splunk Cloud Experience tenant and your Splunk Cloud Platform deployment. This connection enables the tenant to surface specific indexes from that deployment as pipeline destinations.
The following limitations exist for this initial connection between your Splunk Cloud Experience tenant and your Splunk Cloud Platform deployment:
- You cannot connect your tenant to more than one Splunk Cloud Platform deployment using this method. To send data from a pipeline to an index that belongs to a different Splunk Cloud Platform deployment, you must configure a destination that corresponds to the indexer tier of that deployment and then include an eval expression that specifies the target index in your pipeline.
- If you create additional indexes in your Splunk Cloud Platform deployment after completing the first-time setup process, you must refresh the connection in order to make those indexes available in the tenant.
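The workaround for sending data to a different deployment amounts to setting the index field explicitly in the pipeline. A minimal sketch (destination and index names are placeholders):

```spl2
/* Send events to a specific index on a non-connected deployment by
   setting the index field before routing to that destination. */
$pipeline = | from $source
    | eval index = "security_events"
    | into $other_deployment_destination;
```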
- Feb 12, 2026
- Date parsed from source: Feb 12, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
The Edge Processor solution
The Edge Processor in Splunk Cloud Platform has a steady stream of release notes through 2025 and 2026, adding real-world capabilities like larger source type sync limits, batch event aggregation, and new pipeline templates. It also brings decryption, PCRE2 support, HEC improvements, and expanded data handling options.
Note:
The Edge Processor solution is being gradually rolled out to Splunk Cloud Platform and may not be available immediately. If you have an urgent need for this capability and do not yet see it in your Splunk Cloud Platform environment, contact your Splunk Cloud Platform sales representative.
This page contains information about new features, known issues, and resolved issues for the Edge Processor solution, grouped by the generally available release date.
The Edge Processor solution is a service within Splunk Cloud Platform designed to help you manage data ingestion within your network boundaries. Use the Edge Processor solution to filter, mask, and transform your data close to its source before routing the processed data to external environments. For more information, see About the Edge Processor solution.
The Edge Processor solution is available on Splunk Cloud Platform version 9.0.2209 or higher. Updates are released frequently, and become available across all the supported Splunk Cloud Platform versions at the same time.
Note:
The release date indicates when updates to the Edge Processor solution were made available to Splunk Cloud Platform customers. For more information, contact your Splunk account representative.
Use these links to navigate to a specific section:
- New features, enhancements, and fixed issues
- Known issues
New features, enhancements, and fixed issues
Splunk releases frequent updates to the Edge Processor solution. This list is periodically updated with the latest functionality and changes to the product.
February 12, 2026
The Edge Processor solution now includes the following new features or enhancements.
- Improvements to source type syncing: The maximum number of source types that the Edge Processor service can sync from Splunk Cloud Platform has increased from 1000 to 4000. See Using source types to break and merge data in Edge Processors for more information.
January 27, 2026
- Support for the stats function: The Edge Processor solution now allows you to aggregate your event data in batches and reduce the volume of logs sent to your destination. See Aggregate event data using Edge Processor for more information.
January 26, 2026
- Bulk configuration of indexers for Splunk platform S2S destinations: When configuring a Splunk platform S2S destination, you can now enter a list of indexers or upload a list from a .txt or .csv file instead of specifying each indexer manually. See Send data from Edge Processors to non-connected Splunk platform deployments using S2S for more information.
December 19, 2025
- Additional pipeline templates: The Edge Processor solution has been updated with pipeline templates that process AWS CloudTrail logs, CrowdStrike FDR logs, and Microsoft Office 365 Management Activity events, as well as templates that demonstrate common data processing workflows using generic sample data. You can use these templates as starting points for your own pipelines, or as references to learn how to write SPL2 to fulfill various use cases. See Use templates to create pipelines for Edge Processors and SPL2 pipeline templates reference for more information.
December 4, 2025
- Incoming HTTP Event Collector (HEC) data size limit increased to 800 MB.
- Improved communication with Splunk indexers during unhealthy status, using indefinite retries with exponential backoff.
- Edge Processor support for JSON array format as input. See Get data into an Edge Processor using HTTP Event Collector for more information.
November 6, 2025
- Agnostic dot prefixes for datasets: The Edge Processor solution is now agnostic to dot prefixes for datasets, ensuring consistent handling.
October 15, 2025
- Decryption: The Edge Processor solution now allows you to send encrypted data through your pipelines and decrypt it before it reaches its destination. Apply the Decrypt command to your pipelines. See Use the Decrypt command to decrypt data in the Edge Processor solution for more information.
October 8, 2025
- xml_to_object and object_to_xml functions: Convert XML strings to JSON objects and vice versa to facilitate data manipulation and storage efficiency. These functions are not available for searches.
- Large lookups: Edge Processor now supports large lookup datasets up to 3 GB, enabling scalable data enrichment. Contact your Splunk sales representative to enable.
September 16, 2025
- Updated systemd configuration instructions to ensure more graceful shutdown procedures by specifying KillMode=mixed in the systemd unit file. See Install an instance and configure systemd section in Set up an Edge Processor for more information.
July 23, 2025
- Source type syncing from Splunk Cloud Platform to the Edge Processor service: Manage source types centrally in Splunk Cloud Platform, eliminating manual updates in Edge Processor service. See Using source types to break and merge data in Edge Processors for more information.
June 18, 2025
- Conversion to OCSF format: Use Edge Processors to convert incoming event data to the Open Cybersecurity Schema Framework (OCSF) format for effective use in security applications. See Convert data to OCSF format using an Edge Processor for more information.
June 11, 2025
- Edge Processor monitoring dashboards: Updated UI to visualize metrics and health of Edge Processors, including data volume, logs, CPU, memory, disk I/O, and disk space. Use dashboards to understand health and data flow.
June 5, 2025
- Expanded support for Perl-compatible Regular Expressions 2 (PCRE2): New and existing pipelines use PCRE2 syntax instead of RE2 syntax for regular expressions. See Convert RE2 regular expressions to PCRE2 regular expressions for more information.
May 27, 2025
- Common Vulnerabilities and Exposures (CVE) fixes: Fixes for CVE-2023-44487, CVE-2024-45339, CVE-2025-29786, CVE-2025-30204.
- Fixed issues include TLS cipher suite mismatches and supervisor log parsing errors.
May 6, 2025
- Parquet format for data sent to Amazon S3: Option to store data as .parquet files when sending from Edge Processor to Amazon S3. See Send data from Edge Processors to Amazon S3 for more information.
April 30, 2025
- Apply custom command function: Discover, select, and apply user-defined SPL2 functions to process incoming data before sending to destination.
April 18, 2025
- Data compression option in Splunk platform S2S destinations: Compress data to reduce bandwidth when sending from Edge Processor to Splunk platform using S2S protocol. Option is on by default for new destinations.
March 20, 2025
- Common Vulnerabilities and Exposures (CVE) fix: Fix for CVE-2025-22869.
March 6, 2025
- Improvements to pipeline previews: Upgrade to improve accuracy of pipeline previews, rolled out in phases starting March 6, 2025.
March 5, 2025
- Support for PCRE2 syntax in new pipelines starting March 5, 2025; all pipelines updated by June 5, 2025.
- Support for the exporter_error_count health metric.
- Fix for CVE-2024-45337.
- Fixed issues include optimized retry behavior for Amazon S3 data sending and reduced noisy error logs.
February 10, 2025
- Support for previewing up to a chosen action in pipeline statements: Preview pipeline statements at different actions in SPL2 editor.
November 19, 2024
- Support for gzip compression on data sent to Amazon S3.
September 26, 2024
- Edge Processor acknowledgement for HTTP Event Collector (HEC) data: Verify receipt of data sent through HEC.
- Support for Amazon Data Firehose events.
September 25, 2024
- Edge Processor queue resiliency: Edge Processors now apply back pressure to upstream clients, holding batches of data until the Edge Processor is ready to send them to the destination. This fixes a vCPU usage limitation.
September 16, 2024
- Fix for Time zone assignment option for syslog data not working as expected; specified time zone is now respected.
August 31, 2024
- Updated Warning status for Edge Processor instances to include cases with incomplete status information.
August 22, 2024
- Improved error handling and Edge Processor restart behavior; action required for existing users to set recommended limits on service account role.
August 7, 2024
- Improved user interface for configuring index routing.
July 30, 2024
- Time zone assignment for syslog data using RFC 3164 protocol.
July 19, 2024
- Updates to custom function support in SPL2: Mandatory parameters must be declared before optional parameters.
May 28, 2024
- Support for Amazon Linux 2 installation and running of Edge Processors.
May 14, 2024
- Support for thru and branch SPL2 commands to process and route copies of incoming data.
April 24, 2024
- Additional SPL2 functions supported: abs, exp, ln, log, pi, pow, sqrt, hypot.
April 18, 2024
- Renamed Global settings to Shared settings and updated side navigation.
April 4, 2024
- Support for json_valid, mvappend, mvdedup, and tojson SPL2 functions.
April 2, 2024
- HTTP Event Collector (HEC) token authentication support.
March 26, 2024
- Updated workflow for configuring hashing functions using Compute hash of action.
March 12, 2024
- Updated workflow for configuring lookups using Enrich events with lookup action.
February 27, 2024
- Updated configuration settings for TLS and mTLS; renamed configuration option for Splunk platform HEC destinations to HEC URI.
February 12, 2024
- Updated UI component for selecting data destinations in pipeline builder; renamed Append data to destination action to Send data to destination.
January 31, 2024
- Support for mvcount, mvrange, and mv_to_json_array SPL2 functions.
January 24, 2024
- Updated workflow for adding data processing actions to pipelines using plus icon in Actions section.
January 23, 2024
- Pipeline previews for multiple destinations; select specific destination to preview data.
January 22, 2024
- Updates to interpretation of where commands in pipelines; now consistently interpreted as filters, with data not matching dropped.
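In practice this means a where command now behaves as a strict filter. A minimal sketch (dataset and source type names are placeholders):

```spl2
/* Only events matching the condition continue through the pipeline;
   everything else is dropped. */
$pipeline = | from $source
    | where sourcetype == "cisco:asa"
    | into $destination;
```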
January 8, 2024
- Support for route SPL2 command to send subset of incoming data to different destination.
December 7, 2023
- Support for lookup SPL2 command to enrich incoming event data with CSV or KV Store lookup tables.
- Raw data ingestion using HTTP Event Collector (HEC) services/collector/raw endpoint.
November 17, 2023
- Updated workflow for configuring system connections through new System connections page.
- Additional pipeline partitioning options.
November 8, 2023
- Updated workflows for sending data to specific Splunk indexes using Target index action.
October 30, 2023
- Additional SPL2 functions including cryptographic, trigonometric, hyperbolic, and statistical eval functions.
October 27, 2023
- Updated diagnostic tool edge_diagnostic to fix omission of compressed log files; checksum value changed.
September 18, 2023
- Syslog data transmission support; configure Edge Processor to receive syslog data.
August 22, 2023
- Support for split SPL2 function.
August 9, 2023
- Additional SPL2 functions: json_delete, filter, map, reduce.
August 4, 2023
- Availability on HIPAA, IRAP, and PCI DSS compliant cloud environments; Edge Processor solution in scope of these compliance programs.
July 27, 2023
- New pipeline builder for streamlined pipeline creation.
June 1, 2023
- Data transmission using HTTP Event Collector (HEC) services/collector endpoint.
May 19, 2023
- Pipeline previews using parsed sample data in CSV format.
April 27, 2023
- Default Destination assignment for Edge Processor to route unprocessed data.
March 25, 2023
- Time extraction and normalization in pipeline editor.
March 16, 2023
- Search results as sample data using Copy field values option.
March 15, 2023
- Field extraction support with dedicated UI in pipeline editor.
February 13, 2023
- First generally available release of the Edge Processor solution with functionalities for setup, pipeline creation, source type configuration, data forwarding, destination management, and health monitoring.
Known issues
The Edge Processor solution is subject to the Tested and recommended service limits (Soft limits) in the Splunk Cloud Platform Service Details, as well as the following known issues.
Browsers
- Multiple browser sessions are not supported due to potential conflicting edits.
Edge Processors
- No data delivery guarantees; data loss can occur under high back pressure or prolonged destination outage.
- Uninstalling Edge Processor instances improperly causes Disconnected status in Manage instances panel.
- Only tenant administrators can create and view Edge Processors.
Forwarders
- useACK property in outputs.conf must be disabled in forwarders sending data to Edge Processors.
- props.conf configurations in forwarders and destination indexers can override or conflict with Edge Processor pipeline logic.
HTTP Event Collector (HEC)
- Enable indexer acknowledgement setting on HEC token must be turned off.
- Edge Processor may parse JSON-formatted event data sent to services/collector/raw endpoint as HEC event instead of raw string data.
Lookups
- CIDR matching is not supported; Match type advanced option must not be set to CIDR.
Metrics
- Historical metrics in detailed view do not include metrics for deleted pipelines.
Pipelines
- Only tenant administrators can manage pipelines.
- Some SPL2 functions behave differently in Edge Processor pipelines compared to searches.
Splunk Cloud Experience tenants
- Tenant can connect to only one Splunk Cloud Platform deployment via first-time setup.
- Additional indexes created after setup require connection refresh to be available in tenant.
- Feb 12, 2026
- Date parsed from source: Feb 12, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
Splunk Cloud Platform Maintenance Patch Release Information
Splunk Cloud Platform maintenance patch notes reveal a broad set of fixes across 10.2.2510.x and earlier, including Python 3.13.11 upgrade, improved stability, and performance enhancements. The page confirms a regular weekday release cadence and deployment windows.
10.2.2510.X Fixed Issues
This section includes information on fixed issues in 10.2.2510.X
10.2.2510.7
Publication Date: February 12, 2026
Fixed Issues:
Uncategorized issues
Issue Number | Description
SPL-294995 | Make go binary splunk-spotlight portable
SPL-295105 | Ingest Actions File System Memory deadlock issue
SPL-295364 | WLM is not logging placement rules triggers
SPL-295511 | Upgrade to Python 3.13.11
SPL-295600 | Restore full backtracing capability to Splunk (crashlogs, etc).
SPL-295649 | Skip KV Store invalidation for search‑session tokens to reduce logout‑invalidation load
SPL-295685 | Respect minFreeSpace for Bulk Data Move split operations
SPL-295856 | Contention on WLM can cause search head to hang
SPL-295973 | Remove AssertionConsumerServiceURL from unsigned SAML AuthnRequests
SPL-296086 | Create a migration tool that can help Splunk app authors update their apps to use Python 3.13 version
SPL-296094 | Debug Sync issues in GCP w/ ES ITSI
10.2.2510.6
Publication Date: February 02, 2026
Fixed Issues:
Admin and CLI issues
Issue Number | Description
SPL-293952 | Splunk calls systemctl with LD_LIBRARY_PATH pointed to $SPLUNK_HOME/lib
Distributed search and search head clustering issues
Issue Number | Description
SPL-294629 | Indexer cluster peer crashes at startup on Windows right after logging that it's downloaded its cluster bundle and it's about to restart.
Uncategorized issues
Issue Number | Description
SPL-292190 | SF/RF values shown wrong on the Monitoring console and Indexers
SPL-295087 | Fixed incorrect app update notifications caused by version comparison errors... [The release notes continue with detailed fixed issues for multiple patch versions including 10.1.2507.X, 10.0.2503.X, 9.3.2411.X, and others, listing publication dates, security fixes, and detailed issue numbers with descriptions.]
- Feb 5, 2026
- Date parsed from source: Feb 5, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
Cloud Monitoring Console
Promote metrics now appear on the Ingest and Workload dashboards alongside Federated Analytics and standard ingestion, including a 7-day total ingestion view and a new license usage filter. The release also fixes replication factor display, removes a false MC alert for lastchanceindex, and updates the Health dashboard to correctly show Splunk 10.2 forwarders.
Ingest and Workload dashboards
On the Ingest and Workload dashboards, users can view Promote: Amazon S3 ingestion metrics or SVC usage metrics alongside the Federated Analytics: AWS Security Lake and standard data ingestion scenarios. Ingest users can view Promote metrics on the Ingest dashboard. The Total ingestion volume card shows the total data ingestion from the last 7 days and the usage by Federated Analytics: AWS Security Lake, Promote: Amazon S3, and Standard ingestion. An Ingestion scenarios filter has been added to the license usage bar chart. Workload users can view Promote metrics on the Workload dashboard on the Indexing workload panel under the Workload metrics tab.
Fixes
Fixed an error that could cause incorrect Replication Factor values to appear in the Monitoring console and Indexers. False positive triggers for MC Alert - New Data in Index Specified as "lastchanceindex" have been removed. On the Health dashboard, Splunk 10.2 forwarders are now correctly shown as supported.
- Jan 13, 2026
- Date parsed from source: Jan 13, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
SPL2
SPL2 expands SPL with SQL syntax, a unified search and streaming engine, and parallel SPL compatibility. It simplifies data access across Splunk products and federated stores with a single language workflow.
SPL2 overview
SPL2 extends the existing SPL language by incorporating several powerful features. These features simplify data access and analysis while also providing support for complex investigations and data management workflows. With SPL2, you can write searches using either SPL or SQL syntax. This simplifies learning and using the language, and adds consistency to the language.
SPL2 is a unified search and streaming language, offering a single syntax for searching data in Splunk indexes, accessing federated data stores, and preparing data in-stream across various Splunk products. SPL2 is fully compatible, and can operate in parallel, with SPL.
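As a rough illustration of the dual syntax described above (a sketch only, not taken from this document; the index name main, the field names, and the exact statement syntax can differ by product and SPL2 version):

```
/* Pipeline-style SPL2 statement */
$errors = from main
| where status == 404
| stats count() by host;

/* A roughly equivalent SQL-style SPL2 statement */
$errors_sql = select host, count(*) from main where status = 404 group by host;
```

Both statements express the same question, so teams can adopt whichever syntax they already know.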
SPL2 release notes
For information about what's new, known issues, and fixed issues, see SPL2 release notes in the SPL2 Overview manual.
- Jan 13, 2026
- Date parsed from source: Jan 13, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
Known Issue: Changes to how the Splunk platform automatically maps SAML groups to Splunk roles
Splunk Cloud Platform briefly disabled auto-mapping of SAML groups to Splunk roles in 9.2.2403.102–104, causing login errors. The behavior is being reversed in 9.2.2403.105+ and a UI-based restore workflow is provided for users to re-enable auto-mapped roles. This change affects SAML only, not native Splunk users.
Product
Splunk Cloud Platform
Version(s)
9.2.2403.102 to 9.2.2403.104
Component
Authentication, Security Assertion Markup Language (SAML) protocol
Problem Class
Authentication failure, incorrect authorization
Problem
When you attempt to log into a Splunk Cloud Platform instance that uses the SAML protocol as an authentication scheme, you might receive an error message "No valid Splunk role found in local mapping." You might also log in successfully, but your account might not receive the roles you expect.
Cause
Splunk implemented a change on some Splunk Cloud Platform deployments where the Splunk platform no longer auto-maps SAML groups to Splunk roles by default. For more information on why Splunk made this change, see the "Background" section of this topic.
Prior to the change, the Splunk platform performed auto-mapping of groups that it retrieved from a SAML identity provider to Splunk roles with the same name. For example, if there is an "admin" group on the SAML IdP, the Splunk platform maps that group to the "admin" Splunk role, and any SAML user who is a member of the "admin" SAML group receives administrator-equivalent privileges on the Splunk platform instance through its "admin" role by virtue of the automatic role mapping.
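For contrast with auto-mapping, explicit group-to-role maps live in authentication.conf. A sketch, where the stanza suffix must match the SAML authSettings name in your configuration and the group names are hypothetical:

```ini
# authentication.conf: explicit SAML group-to-role mapping
# (stanza suffix "saml" and group names are hypothetical)
[roleMap_saml]
admin = SAML-Admins
user = SAML-Users;SAML-Contractors
```

Deployments that declare explicit maps like these do not depend on auto-mapping and are unaffected by the change described here.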
If you used SAML for authentication previously in your Splunk Cloud Platform deployment, and Splunk subsequently upgraded the deployment to versions 9.2.2403.102 to 9.2.2403.104, auto-mapping of groups to roles no longer occurs, which can result in authentication failures for SAML users in the deployment, as described in the "Problem" section of this topic.
This change only affects Splunk Cloud Platform instances that use SAML as an authentication scheme. It does not affect native Splunk users on the platform. Those users can continue to log in and have access to all Splunk roles you have assigned to them.
Splunk is reversing this change in Splunk Cloud Platform version 9.2.2403.105 and higher based on customer feedback. If you are currently experiencing the login problems that this topic describes, and Splunk has not yet reversed the change on your deployment, you can reverse it yourself by following the procedure in the "Solutions" section of the topic.
Background
In version 9.1.2312 of Splunk Cloud Platform, Splunk changed which SAML groups that the Splunk platform automatically mapped to Splunk roles by default. It eliminated auto-mapping of the "admin" and "power" Splunk roles and advised customers to either create unique alternative role maps or turn auto-mapping back on if it was necessary. It also provided an option in Splunk Web to turn auto-mapping on or off.
In version 9.2.2403 of Splunk Cloud Platform, Splunk eliminated auto-mapping of SAML groups to Splunk roles by default entirely. Splunk is reversing this change on Splunk Cloud Platform due to customer feedback.
Splunk implemented both of these changes to address concerns that multiple parties raised as a result of routine security assessments.
Solution
To restore the auto-mapping of SAML groups to Splunk roles, you can turn on auto-mapping of SAML groups to Splunk roles in Splunk Web.
- Log into the Splunk Cloud Platform instance as sc_admin.
- From the system bar, select Settings > Authentication Methods.
- In the Authentication Methods page, under External, select SAML.
- Select the Configure Splunk to use SAML link.
- In the SAML Configuration dialog box that appears, under General settings, select Enable Auto Mapped Roles.
- Select Save.
- Reload the authentication configuration. From the system bar, select Settings > Authentication Methods, and in the Authentication Methods page that appears, select Reload authentication configuration.
- Jan 13, 2026
- Date parsed from source: Jan 13, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
Splunk Cloud Platform Field alias behavior change
Splunk Cloud Platform 7.2.4 fixes field alias behavior by removing alias fields when no source exists and adds ASNEW to keep existing values intact. It also introduces Overwrite field values and guidance on using calculated fields to resolve conflicts.
When you upgrade to version 7.2.4+ of Splunk Cloud Platform, the behavior of certain field alias configurations changes.
A field alias is a way of setting up an alternate name for a field. You can then use that alternate name to search for events that contain that field. Ideally, you should be able to define multiple aliases for a single field, but each alias you define should apply only to one source field. Additionally, when you apply a field alias configuration to a search, the expectation is that the source field is in the events, but the alias field is not in the events.
This issue involves events that already include the alias field, but which are missing the source field or have no value for the source field.
Before the 7.2.4 field alias fix:
In versions of Splunk Cloud Platform previous to 7.2.4, when you applied a field alias configuration to events that had the alias field but not the source field, no changes were made to those events. The alias fields were allowed to stay.
This behavior is an erroneous application of the field alias concept. It allows users to have alias field values that do not correspond to source fields.
Example of the old field alias behavior:
Here are four events of sample log data. This is what they look like before we apply field alias processing to them. Each of these events has a sourcetype of st1 and a source of example.log.
[Sample events shown]
Now, say you want to apply this pair of props.conf field alias configurations to that set of events.
FIELDALIAS-class1 = uid AS user
FIELDALIAS-class2 = id AS user

With this pair of configurations, events that share a sourcetype of st1 and a source of example.log have the user field aliased to two different source fields: uid and id. These colliding configurations are problematic because field aliases are supposed to reference only one source field at a time.
In addition, you know that user, the alias field, already exists in the events. If your field alias configurations say that the value of user should match a value of either uid or id, but the user field in the event already has a value of jessica, how does the search head resolve this? It replaces the user field value with the value of one of the source fields, according to lexicographical sort order logic.
But the real issue here is with the fourth event, where the alias field exists, but no source field exists. The pre-7.2.4 rules allowed the alias field to stay in an event when there were no source fields in the event.
Here is what the sample events look like after field alias processing with the pre-7.2.4 rules.
[Sample processed events shown]
This results in the overwriting of user values in the first two events. The search head resolves the conflict between id and uid in the first event by selecting id.
Note: The search head resolves collisions between two or more AS configurations by applying each of the FIELDALIAS class names in lexicographical sort order. It uses the last class that it applies. So in the first event of this example, it first applies class1 and then applies class2. Because class2 is the last class applied, the user field takes on the value of the id field.
The third event gets a new field. Before processing, it had a source field, but no alias field. After processing it has an alias field with the value of the source field. This is how field aliases are supposed to work.
But the fourth event has an alias field without a source field. After the field alias configuration is applied, the alias field should not appear in events that do not have the corresponding source fields. The logic set by the configuration is not consistent.
After the 7.2.4 field alias fix:
In Splunk Cloud Platform 7.2.4, this bug was fixed. The fix changed the behavior when you apply a field alias configuration to an event where the alias field is already present but no source fields exist. This table explains how the behavior has changed and why.
[Table describing behavior changes]
Example of the 7.2.4 fix:
This example shows you how the 7.2.4 fix changed the results of some searches. Say you start with the same two field alias configurations:
FIELDALIAS-class1 = uid AS user
FIELDALIAS-class2 = id AS user

You apply those configurations to the same four events as the preceding examples. Here are the results:
[Sample events after fix shown]
As you can see, the difference in this set of events is that the user field is removed from the fourth event. It is removed because it is an alias field and there is no source field in the event.
The introduction of ASNEW in 7.2.4:
Version 7.2.4 of the Splunk Cloud Platform introduced the ASNEW field alias configuration.
ASNEW allows you to combine field aliases without overriding or removing values.
For example, say you have a search that runs over events that include the dst field, and you want to apply the following props.conf field alias configuration to it:
FIELDALIAS-classx = src AS dst

In the case of events that already have dst, you want the field and its values to be undisturbed by the field alias processing. You do not want dst to be removed, and you do not want the value of dst to be altered. In this case you must change the configuration from AS to ASNEW:
FIELDALIAS-classx = src ASNEW dst

When you apply this configuration, the search head passes over instances of dst that are already present in your events. It does not remove them or overwrite them.
Configure settings on Splunk Web:
You can use the Overwrite field values setting to determine how alias fields are treated when they are already present in events at the time that field alias processing takes place.
Select Overwrite field values to give a field alias the corrected field alias behavior. During processing, the search head then ensures that the alias fields in the event share the values of their corresponding source fields.
When Overwrite field values is not selected, the field alias uses the uncorrected behavior, which means that the alias field is not changed or removed if it exists in an event without a source field when field alias processing takes place.
When you create a new field alias, Overwrite field values is not selected by default.
Using calculated fields to apply an alias field to multiple source fields:
Calculated fields provide a more versatile method for applying an alias field to multiple source fields. Use eval functions such as coalesce to determine the order in which colliding source fields are applied to your alias fields.
Calculated fields that use functions like mvappend and mvdedup also enable you to deal with situations where your field alias configuration collides with a field extraction. For example, say you have this combination of a field alias configuration and a field extraction configuration:
FIELDALIAS-class1 = uid AS user
EXTRACT-class2 = 123(?<user>[0-9]+)789

During a search, the EXTRACT-class2 configuration extracts user field values for events with a source of example.log. Later in the search pipeline, the FIELDALIAS-class1 configuration applies a field alias to events with a source type of st1. FIELDALIAS-class1 gives the user field the same value as uid even when uid is null. As a result, events with a source of example.log and a source type of st1 have the extracted value of the user field overwritten by the contents of the uid field.
This configuration is fine if you intend for the extracted value of user to be overwritten. But if that is not the case, one of the following three calculated field configurations would be a better choice than the FIELDALIAS-class1 configuration, depending on the effect you are trying to achieve:
- EVAL-user = coalesce(user, uid): Retains only one field value. Prioritizes the extracted value over the aliased value.
- EVAL-user = mvappend(user, uid): Maintains both the extracted and aliased values. Could lead to duplicated values.
- EVAL-user = mvdedup(mvappend(user, uid)): Maintains both the extracted and aliased values. No duplicates.
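Putting the pieces together, the first option above might look like this in props.conf (the st1 stanza name and the extraction pattern come from the earlier examples):

```ini
# props.conf: keep the extraction, replace the colliding
# alias with a calculated field
[st1]
EXTRACT-class2 = 123(?<user>[0-9]+)789
EVAL-user = coalesce(user, uid)
```

Because calculated fields are evaluated after field extractions, coalesce sees the extracted user value first and falls back to uid only when user is null.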
- Jan 13, 2026
- Date parsed from source: Jan 13, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
Deprecated and removed in Splunk Cloud Platform
Splunk releases a deprecations and removals rundown outlining features that will be retired and what to migrate to. Highlights include moved Search API to v2, Federated Search, and dashboard modernization, plus removals of SPG, internal library settings, and legacy commands. Plan migrations now.
This page lists features for which Splunk Inc. has deprecated or removed support in this version of the Splunk Platform.
What are "deprecated" and "removed" features?
- Deprecated features continue to work and Splunk supports them until support is removed. However, customers need to begin planning now for the future removal of support.
- Removed features are features that Splunk no longer supports and no longer work with the Splunk platform. Customers must find alternatives to removed features.
Deprecated features
The following table summarizes the features that are deprecated. These features continue to be supported, but Splunk reminds customers that deprecated features might be removed in a future release.
Deprecated feature | What do I need to know? | First deprecated
Version 1.0 endpoints for the Search API are now disabled by default | Select version 1.0 endpoints for the Search API have been deprecated and disabled, and will be removed in a future release. Customers and app developers should upgrade usage of these disabled endpoints to the new API version, Search API version 2.0. These new semantically versioned REST API endpoints for search improve platform contracts and resiliency to platform updates. If your organization has business-critical apps that still need to use the disabled endpoints, you can turn them on for a limited time as a temporary fix. See Semantic API versioning in the Splunk Cloud Platform REST API Reference Manual. | Version 9.0.2208
Node.js support | Node.js support is deprecated and will be removed in a future release of Splunk Cloud Platform. As soon as possible, update any private apps that depend on Node.js. For more information about the Node.js deprecation and updating your apps, see Node.js deprecation FAQ in Splunk Lantern. | Version 10.0.2503
Hybrid search | Hybrid search reached end-of-life on October 30, 2024 and is no longer a supported feature. Customers who currently use hybrid search must migrate to Federated Search for Splunk. See Migrate from hybrid search to Federated Search for Splunk in Federated Search. | Version 10.0.2503
Exporting PDFs, scheduling PDF delivery, and printing PDFs with Classic Simple XML dashboards | Exporting dashboard PDFs, scheduling PDF delivery, and printing PDFs with Classic Simple XML dashboards is deprecated and will be removed in a future release. | Version 9.3.2408
The REST API spawn_process parameter | Do not use the spawn_process parameter. It is deprecated and will be removed in a future release. | Version 9.2.2403
The /services/search/commands REST API endpoint | The undocumented /services/search/commands REST API endpoint is deprecated and will be removed in a future release. If you have been inadvertently using this endpoint, stop using it. | Version 9.1.2312
Deprecated Splunk platform search execution methods | The phased_execution_mode setting is deprecated. Contact Splunk Support to remove this setting from the limits.conf file for your Splunk Cloud Platform deployment if your users get the following warning message: "Contact your administrator to remove the 'phased_execution_mode' setting in limits.conf, so this message is not displayed again." | Version 9.0.2305
jQuery 3.5 by default | Splunk Cloud Platform now uses jQuery 3.5 by default. The self-service toggle in the UI to re-enable the old jQuery libraries has been removed. Splunk Cloud administrators can no longer choose to enable lower versions in the Internal Library Settings. Users must use the version 3.5 jQuery libraries that are packaged with the Splunk platform by default. Splunk will remove support for all older versions of jQuery in a future release. | Version 9.0.2305
Use of the _reload action with the rest search command | Use of the _reload action with the rest command is deprecated. Do not use the _reload action with the rest command. | Version 9.0.2208
Disabled audit search command | The previously deprecated audit search command is now disabled for all customers as of 8.2.2203. | Version 8.2.2203
Disabled createrss command | The previously deprecated createrss command is now disabled for all customers as of 8.2.2203. | Version 8.2.2203
HTML Dashboards | As of Splunk Cloud Platform 8.2.2105 and Splunk Enterprise 8.2, Splunk has deprecated HTML Dashboards. If you choose to continue to use HTML dashboards, you are responsible for maintaining the dashboards. You can rebuild your HTML dashboards in Dashboard Studio. | Version 8.2.2105

Removed features
Removed feature | What do I need to know? | First deprecated
Splunk Product Guidance (SPG) application | The Splunk Product Guidance (SPG) application is removed in Splunk Cloud Platform versions 9.3.2411 and higher. | Version 9.3.2411
Internal Library Settings | The Internal Library Settings page is removed. Deprecated libraries and unsupported hotlinked imports are restricted, and Splunk Cloud Platform no longer offers a self-service option to use them. For more information about Internal Library Settings, see Control access to jQuery and other internal libraries in the jQuery Upgrade Readiness manual. | Version 9.2.2403
The relevancy command is removed. | Do not use the relevancy command. | Version 9.1.2312
The timeout argument for the append command is removed. | Do not use the timeout argument. It has no effect on searches. | Version 9.1.2312
Removal of the populate_lookup alert action | The legacy alert action, populate_lookup , has been removed. Use the lookup alert action instead. | Version 9.1.2308
Stats V1 removal | Version 1 of the stats command has been removed and replaced with version 2 of the stats command. | Version 9.0.2303
Removed file command | The previously disabled file command is now removed for all customers as of 9.1.2312. | Prior to version 8.2.2202
The etc/searchscripts directory | Support for the etc/searchscripts directory has been removed, as of version 8.2.2201. All search commands must now be declared in the commands.conf file. | Prior to version 8.2.2201
Offload UI state from SHC conf | The ability for apps to specify custom user interface preferences, such as the time picker, via ui-prefs.conf has been removed. This means that application-specific UI preferences will not be applied. Users can still set their own UI preferences. | Version 8.2.2105
Removed biased language | Biased language has been removed from the Splunk Web UI, in keeping with Splunk's commitment to equality in our actions and products. | Version 8.2.2105
Documentation set improvements | In response to customer feedback, the information in the Splunk Cloud User Manual has been added to the Splunk Cloud Platform Admin Manual and the Splunk Cloud Security Manual, and the Splunk Cloud User Manual has been removed from the documentation set. | Version 8.2.2105
Removed ability to convert dashboards to HTML | This option is no longer available to users in Splunk Web. | Prior to version 8.0.2004
- Jan 13, 2026
- Date parsed from source: Jan 13, 2026
- First seen by Releasebot: Feb 20, 2026
Splunk Cloud Platform by Splunk
Known and fixed issues for Splunk Cloud Platform
Splunk Cloud Platform 10.2.2510 release notes reveal current known issues and fixes. Highlights include restore reliability with SmartStore, PDF export accuracy for dashboards, and federated search and authentication fixes.
Known and fixed issues for Splunk Cloud Platform
This page lists selected known issues and fixed issues for this release of Splunk Cloud Platform. Use the Version drop-down list to see known issues and fixed issues for other versions of Splunk Cloud Platform.
See also the release notes for the Cloud Monitoring Console app and the Admin Configuration Service for their respective known and fixed issues.
Version 10.2.2510
This version includes the following known issues:
Date filed or added: 2025-12-15
Issue number: SPL-292559
Description: In deployments using the Azure Victoria Experience, during DDAA restores, buckets involved in an active restore might be removed from SmartStore. This can occur if the buckets were part of a recent restore that was just cleared, and it results in incomplete restores where those buckets don't appear in search results, even though the restore request completes successfully.
Note: This issue applies only to restored data. It doesn't affect the original archived data.
Workaround: Avoid quickly repeating the restore, clear, and restore sequences of operations that reuse the same buckets or time ranges in an index. Allow more than 2 hours for any previously scheduled remote freeze operations to complete before starting another restore involving the same buckets.
Date filed or added: 2025-06-13
Issue number: SPL-279299
Description: Some dashboard panels reflect inconsistent results when exported to PDF. If a dashboard panel uses a post-processing search that contains newline characters, the data shown in the exported PDF might not match what is shown in the dashboard itself. This happens because Splunk generates a different cache key for the PDF export compared to the dashboard view, which can cause the PDF to display incorrect or outdated results. For example, you might notice that the "Number of distinct users by privacy setting" column chart shows much lower values in the PDF than in the dashboard panel. You should check your exported PDFs carefully and look for errors in your system logs if your dashboards use post-processing searches with newlines.
Date filed or added: 2025-02-04
Issue number: SPL-270271
Description: Scheduled email exports of large dashboards compress images to approximately 1440 x 960 pixels, leading to blurry PDFs.
Workaround: Reduce the dimensions of the dashboard, or split up large dashboards into separate smaller dashboards. Scheduled export compresses the studio dashboard to a resolution of approximately 1440 x 960 pixels before it is screenshotted for the PDF. Reducing the dimensions of the dashboard closer to this resolution should improve the visibility and quality of the export. If you split the dashboard into smaller dashboards, and schedule them to export separately, this effectively reduces the dimensions of each dashboard and improves the quality of each exported PDF.
Date filed or added: 2024-02-02
Issue number: SPL-270072
Description: Federated Search for Splunk - Proxy bundles are not reaped when external authentication is enabled on the remote deployment.
Workaround: On the remote search head, turn external authentication on only for token authentication. On the remote deployment go to Settings, then Authentication methods, then SAML configuration, and then Authentication extensions. Select For Token Authentication Only. Set Get User Info time-to-live to 21600s.
Date filed or added: 2024-12-11
Issue number: SPL-267847
Description: Federated Analytics - Users cannot remove data lake indexes from a federated provider via the UI. When a data lake index is removed, it is no longer part of a federated provider definition, and it no longer ingests data from an Amazon Security Lake dataset. Removed data lake indexes can still be managed through the Indexes page.
Date filed or added: 2024-08-12
Issue number: SPL-260620
Description: When a dashboard's permissions are changed in Dashboard Studio, this creates a new version of the dashboard. If you revert to a previous version of the dashboard, permissions changes are not automatically reverted. To work around this issue, change permissions manually.
Date filed or added: 2024-08-06
Issue number: SPL-260273
Description: If you select the tips actions when comparing dashboards in the Monaco editor, the page might fail to render properly. Do not select the tips, represented by a lightbulb icon, in the dashboard version history source comparison view.
Date filed or added: 2024-06-04
Issue number: SPL-237180
Description: Saved searches on Splunk Cloud Platform that are owned by nobody are scheduled using the default time zone settings in the user-prefs.conf file instead of the system time zone in Splunk Cloud. But, searches are run internally as splunk-system-user, which is tied to system time in Splunk Cloud Platform and is based on UTC (Coordinated Universal Time). The mismatch between the default time zone settings in the user-prefs.conf file and Splunk Cloud system time can lead to potential discrepancies in search results under certain conditions when the time zones for nobody and splunk-system-user get out of sync.
Workaround: If you're experiencing mismatched time zones with nobody owned searches following migration from Splunk Enterprise to Splunk Cloud Platform, reassign searches to a user account attached to a role, so searches aren't assigned to nobody. An alternative workaround is to set the schedules for nobody-owned saved searches to UTC, which ensures that searches are the same as system time.
Date filed or added: 2024-05-09
Issue number: SPL-255559
Description: Federated Search for Amazon S3: IAM Managed Policy limit of 6144 chars exceeded due to large number of resource level policies. If a Federated Search for Amazon S3 provider configuration within a single provider (or across several providers) for a given user have a large number of AWS Glue data catalog or AWS S3 resources, the IAM policy that is generated for attachment to the stack's IAM role can, in some cases, exceed the default 6144 character limit that is enforced by AWS. When the character limit is exceeded the IAM policy can't be applied and the connection to AWS fails.
Workaround: To reduce the chance of generating an IAM managed policy that exceeds the 6144 character limit, when listing AWS Glue table names in the AWS Glue tables field, use wildcards to capture all tables that begin with the same prefix. For example, if you have several AWS Glue tables that begin with product_, enter product_* in the AWS Glue tables field. Do not do this for a given set of tables if there are tables beginning with the prefix to which you do not want to grant access.
Date filed or added: 2024-04-12
Issue number: SPL-254077
Description: CIDR match for tstats with IPv6 addresses isn't supported. The tstats command currently doesn't filter events with CIDR match on fields that contain IPv6 addresses. Running tstats searches containing IPv6 addresses might result in the following error indicating that the addresses are treated as non-exact queries: Error in 'TsidxStats': WHERE clause is not an exact query
Date filed or added: 2024-01-05
Issue number: SPL-240774
Description: The DELIMS setting or the kvdelim option might not be applied correctly when the key-value delimiter character appears two or more times in a field value.
Workaround: Perform field extractions by modifying your searches using other commands, such as the rex command or eval command.
Date filed or added: 2023-07-20
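For example, here is a sketch of extracting a key-value pair at search time with the rex command instead of relying on DELIMS; the sample event and field names are illustrative, not taken from the release notes:

```
| makeresults
| eval _raw="user=alice,role=admin,note=a,b"
| rex field=_raw "role=(?<role>[^,]+)"
```

Because the regular expression stops at the first comma after the key, it extracts role=admin correctly even though the delimiter character also appears inside the note value.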
Issue number: SPL-240969
Description: props and transforms created with 000-self-services (000-self-services/local/transforms.conf) as the destination app get removed during sync triggered by actions such as saving rulesets in Ingest Actions.
Workaround: Do not save search-time field transformations to the 000-self-services app. Move the existing 000-self-services/local/transforms.conf under a different app.
Date filed or added: 2023-05-30
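As an illustration of moving the transform out of 000-self-services, the stanza below assumes a hypothetical app named my_parsing_app and an illustrative extraction; adapt the app name and stanza contents to your deployment:

```
# etc/apps/my_parsing_app/local/transforms.conf
[my_field_extraction]
REGEX = user=(\w+)
FORMAT = user::$1
```

Any props.conf references to the transform move to the same app so the pairing stays intact.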
Issue number: Not applicable
Description: ACS endpoint connections fail after June 4, 2023 or HEC sessions fail after June 14, 2023 with error messages that mention SSL, TLS, or HTTP error 503 or 525. See Cloud Platform Discontinuing support for TLS version 1.0 and 1.1.
Date filed or added: 2022-08-23
Issue number: SPL-228969
Description: Federated Search: In the Splunk Web federated index UI, you cannot provide data model Dataset Name values that contain a dot ( . ) character.
Workaround: This is a limitation for users of standard mode federated search who want to set up federated indexes that map to data model datasets. It means that such users cannot set up federated indexes for data model datasets that are subordinate to a root dataset. For example, if the root data model dataset is Network_Traffic, you cannot map a federated index to the subordinate data model dataset Network_Traffic.All_Traffic. As a workaround, users can run tstats searches that use the nodename argument to filter out data that does not belong to a specific data model dataset: | tstats ... where nodename=Network_Traffic.All_Traffic
Date filed or added: 2022-07-29
Issue number: SPL-227633
Description: Error: Script execution failed for external search command 'runshellscript'
Workaround: You can set precalculate_required_fields_for_alerts=0 on saved searches that have no alert actions attached other than the "Run A Script" action to suppress the error. For saved searches that have multiple alert actions attached, this might not be safe, because it disables back propagation of required fields for all alert actions. The parent search might then extract more fields than required, which could negatively impact performance for that search.
Date filed or added: 2022-06-15
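The setting is applied per saved search in savedsearches.conf; the stanza name below is illustrative:

```
# savedsearches.conf
[My Script Alert]
precalculate_required_fields_for_alerts = 0
```

Apply it only to stanzas whose sole alert action is the script action, per the caveat above.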
Issue number: SPL-226877
Description: Federated Search UI Error: Cannot create saved search dataset for federated index if dataset name contains space
Workaround: Use REST API to create the federated saved search instead: curl -k -u : -X POST https://localhost:8089/servicesNS/nobody/search/data/federated/index -d name=federated:index_kathy -d federated.dataset='savedsearch:ss with space' -d federated.provider=remote_deployment_1. See Federated search endpoint descriptions in the REST API Reference Manual.
This version fixes the following issues:
Date filed or added: 2025-09-03
Issue number: SPL-285235
Description: SPL: Unbounded count for the makeresults command results in OOM
Date filed or added: 2023-03-02
Issue number: SPL-236780, SPL-245274
Description: Splunk Cloud Platform: Unable to delete SAML users and authorization tokens
- Jan 13, 2026
Splunk Cloud Platform by Splunk
Version 10.2.2510
Federated provider names are now case-insensitive across Federated Search and Federated Analytics, which is a potential breaking change on upgrade. This release also adds SPL2 support for searches and Dashboard Studio, TLS verification for inter-sidecar communication, DDAA support on Azure, a redesigned index archiving workflow, and targeted app installation on Victoria Experience.
Federated provider names are now case-insensitive
As of this release, federated provider names are case-insensitive for the following products:
- Federated Search for Splunk
- Federated Search for Amazon S3
- Federated Analytics for Amazon Security Lake
For example, say you have a provider named MyProvider and you try to create a new provider with a Provider name of myprovider. In this instance, Splunk software prevents you from creating the new provider until you choose a Provider name that is unique, regardless of alphabetical character case.
Note: If you are upgrading from a previous version of the Splunk platform, this might be a breaking change. If you have two or more federated providers in your Splunk platform deployment with names that differ only by case (such as one named MyProvider and another named myprovider), you must change the duplicate provider names to unique strings.
There are two ways to accomplish this:
- You can delete and recreate the federated providers with duplicate names.
- If you have access to the .conf files for your Splunk platform deployment, you can edit the duplicate federated provider names directly in federated.conf. You cannot edit federated provider names in Splunk Web.
If you choose to not delete or replace duplicate provider names, Splunk software uses the first name that appears in federated.conf. For example, if the MyProvider stanza appears before the myprovider stanza in federated.conf, Splunk software references only the MyProvider stanza when it receives any version of the string "myprovider".
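As an illustration of the federated.conf edit, suppose the file contained two provider stanzas whose names differ only by case; renaming one resolves the conflict. The stanza format and settings shown here are illustrative, so check your own federated.conf for the exact layout:

```
# federated.conf: rename one of the duplicate stanzas
[provider://MyProvider]
type = splunk

[provider://myprovider_s3]
type = aws_s3
```

After the rename, update any federated indexes that reference the old provider name.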
Federated Analytics for Amazon Security Lake: Optional workload optimization for data lake indexes
If you use Federated Analytics for Amazon Security Lake, you now have the option to drive down the storage cost of your data lake indexes by turning off raw term search on those indexes. With this optimization enabled, your data lake indexes have reduced indexing and storage cost, with the trade-off being that the optimized indexes support only key-value search.
For more information, see Set up data ingest and retention rules for data lake indexes.
SPL2
SPL2 extends the existing SPL language by incorporating several powerful features. These features simplify data access and analysis while also providing support for complex investigations and data management workflows. With SPL2, you can write searches using either SPL or SQL syntax. This simplifies learning and using the language, and adds consistency to the language.
SPL2 is a unified search and streaming language, offering a single syntax for searching data in Splunk indexes, accessing federated data stores, and preparing data in-stream across various Splunk products. SPL2 is fully compatible, and can operate in parallel, with SPL.
- For specific release notes for SPL2, see SPL2 release notes.
- For more information about SPL2, see What is SPL2? in the SPL2 Overview manual.
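As a rough illustration of the dual syntax (the index and field names are hypothetical, and the SPL2 Overview manual is the authoritative reference for exact syntax), the same search can be written in pipeline style or SQL style:

```
/* SPL2 pipeline style */
from web_index | where status = 500 | stats count() by host

/* SQL style */
SELECT count(*), host FROM web_index WHERE status = 500 GROUP BY host
```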
TLS verification for inter-sidecar communication
To enhance security, each sidecar uses a server data plane certificate when communicating with other sidecars through the direct port of the destination sidecar. Over a Transport Layer Security (TLS) connection on the direct port, the connecting sidecar verifies the certificate of the destination sidecar to ensure a trusted connection.
For more information, see Inter-sidecar communication.
DDAA supported on Azure
Splunk Dynamic Data Active Archive (DDAA), now supported on Azure, provides secure, long-term data retention for Splunk Cloud Platform. Using this Splunk-managed archive, you can restore your archived data within 24 hours and make it searchable for up to 30 days. DDAA eliminates the need for continuous indexing or additional infrastructure. It ensures data durability and security by extending retention beyond the searchable retention period.
Redesigned workflow: Index Archiving Configuration
To enhance user experience, the Add new index dialog box has been redesigned to offer a clearer and more intuitive workflow for configuring index archive settings. The dialog box now displays the Your Total Retention (days) section that indicates the number of days the archive is retained. The Splunk software calculates this number based on the Searchable retention (days) and Total retention settings that you specify.
For more information, see Configure archive settings for an index.
SPL2 support for Dashboard Studio
In Dashboard Studio, you can use SPL2 data sources in dashboards by doing one of the following:
- Create an SPL2 query from within a dashboard
- Reference an existing view from an SPL2 module
See Create search-based visualizations with SPL2.
Targeted app installation on Victoria Experience (AWS only) (controlled availability)
Splunk Cloud Platform on Victoria Experience now offers targeted app installation. Previously, Splunk Cloud Platform installed apps by default on all search heads across a Victoria Experience deployment. With targeted app installation, you can now install apps on specific search heads or search head clusters, making it easier to isolate apps and control user access.
This enhancement aligns app installation features in Victoria Experience with Splunk Cloud Platform Classic Experience and Splunk Enterprise.
See Targeted app installation on Victoria Experience.