- April 2026
- First seen by Releasebot: Apr 20, 2026
GitLab 18.6 Security Dashboards Update
GitLab introduces improved security dashboards with advanced vulnerability management, giving teams richer vulnerability trends, severity views, risk scores, CWE insights, filters, and PDF export across project and group security dashboards.
Security dashboards
GitLab 18.6 introduced an improved version of the security dashboards that use advanced vulnerability management.
The new dashboards are enabled by default on GitLab.com and GitLab Dedicated. GitLab Self-Managed users must enable advanced vulnerability management to access the new dashboards.
If your organization has not enabled advanced vulnerability management, see legacy security dashboards.
Use security dashboards to assess the security posture of your applications. GitLab provides you with a collection of metrics, ratings, and charts for the vulnerabilities detected by the security scanners run on your project. The security dashboards provide the following data:
- Vulnerability trends over a 30, 60, or 90-day time frame for all projects in a group.
- The total number of open vulnerabilities by severity.
- The total risk score to compare vulnerability risk across projects.
Prerequisites
To view the security dashboard for a project or a group you must have:
- The Developer role or higher for the group or project.
- At least one security scanner configured in your project.
- A successful security scan performed on the default branch of your project.
- At least one detected vulnerability in the project.
- Advanced vulnerability management with Advanced search enabled.
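As an illustration, the scanner prerequisite can be met by including one of GitLab's managed CI/CD templates in your `.gitlab-ci.yml`; this sketch uses the SAST template:

```yaml
# Adds GitLab's managed SAST jobs to the pipeline.
# A completed pipeline on the default branch then populates the dashboard.
include:
  - template: Security/SAST.gitlab-ci.yml
```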
The security dashboards show the results of scans from the most recently completed pipeline on the default branch. Dashboards are updated only with the results of completed pipelines run on the default branch; they do not include vulnerabilities discovered in pipelines from other, unmerged branches.
Viewing the security dashboard
The security dashboard shows filterable charts and panels built with data from vulnerabilities detected in the default branch. Charts include vulnerabilities over time and severity counts. The data in many charts is grouped into two categories:
- Open: Includes vulnerabilities with the Needs triage or Confirmed statuses
- Closed: Includes vulnerabilities with the Dismissed and Resolved statuses
Charts and panels include only open vulnerabilities unless otherwise noted.
You can view a security dashboard for a project or a group. Each dashboard provides a unique viewpoint into your security posture.
Both dashboards include:
- Charts:
  - Vulnerabilities over time
  - Vulnerability severity panels
  - Risk score
  - Vulnerabilities by age
  - Top 10 CWEs
- Filter the entire dashboard
- Export as PDF
To view a security dashboard:
- In the top bar, select Search or go to and find your project.
- Select Secure > Security dashboard.
Project security dashboard
The project security dashboard shows vulnerabilities detected in the project’s default branch. It includes:
- The Vulnerabilities over time chart, which includes up to 90 days of history.
- The Severity panels, which show open vulnerabilities by severity.
- The Risk score panel, which shows the overall security risk of the project.
- The Vulnerabilities by age chart, which groups open vulnerabilities by age buckets.
- The Top 10 CWEs chart, which shows the 10 most common CWEs.
Open vulnerabilities are those with Needs triage or Confirmed status. Closed vulnerabilities with Dismissed or Resolved status are not included in these charts.
Group security dashboard
The group security dashboard provides an overview of vulnerabilities found in the default branches of all projects in a group and its subgroups. The group security dashboard supplies the following:
- The Vulnerabilities over time chart, which includes up to 90 days of history.
- The Severity panels, which show open vulnerabilities by severity.
- The Risk score panel, which shows total risk and risk for each project.
- The Vulnerabilities by age chart, which groups open vulnerabilities by age buckets.
- The Top 10 CWEs chart, which shows the 10 most common CWEs.
Charts
Security dashboards include several charts that help you understand and act on vulnerabilities in your projects and groups.
Vulnerabilities over time
The Vulnerabilities over time chart is available on both project and group dashboards. It shows the open vulnerabilities trends over 30, 60, or 90-day periods. The default range is 30 days. GitLab retains vulnerability data for 365 days.
Use the chart to identify when vulnerabilities were introduced and how they change over time.
To view details:
- Hover over a data point to see the vulnerability count for that day.
- Use the time frame selector to switch between 30, 60, or 90 days.
- Drag the range handles (scroll-handle) to zoom in on a specific period.
- Use the dropdown to filter by Severity (for example, Critical, High, Medium).
- Use the buttons to group the data by either Severity (critical, high, medium, low, info, and unknown) or Report type (SAST, DAST, dependency scanning, and others).
- To explore data beyond 90 days, but within the last 365 days, use the SecurityMetrics.vulnerabilitiesOverTime GraphQL API.
- Vulnerabilities that are no longer detected are not automatically counted as closed. Use vulnerability management policies to automatically close them if needed.
Starting in GitLab 18.8 (available January 2026) on GitLab.com and in GitLab 18.9 (available February 2026) on GitLab Self-Managed and GitLab Dedicated, the Vulnerabilities over time chart excludes no longer detected vulnerabilities. This approach more accurately reflects the number of detected vulnerabilities that require attention. This change might result in a drop in the total number of vulnerabilities shown in the chart. This change applies automatically to vulnerabilities no longer detected in pipelines run from GitLab 18.9 onward. A background migration handles remaining vulnerabilities from earlier pipelines.
Due to issue 590022 and issue 590018, vulnerability counts in the Vulnerabilities over time chart may not be accurate. The first issue affects dependency scanning and container scanning vulnerabilities. The second issue affects vulnerabilities that were dismissed or resolved, and then confirmed.
Vulnerability severity panel
The vulnerability severity panel shows the total number of open vulnerabilities by severity.
To view details:
- In the severity panel, locate the severity you want to investigate.
- Select View.
- The vulnerability report opens and includes only vulnerabilities of that severity.
- Any page-level filters you have set are also applied.
Risk score panel
The risk score panel shows the overall security risk for the group or project. The panel has two views:
- The No grouping (default) view shows the total risk score of the group:
  - The circular gauge shows the calculated risk score in the center.
  - The color bars indicate the risk level:
    - Green: Low risk
    - Yellow: Medium risk
    - Orange: High risk
    - Red: Critical risk
- Select Project to compare risk scores for each project:
  - Each project tile is color-coded according to the project's risk level.
  - Hover over a tile to see details, including the project name and risk score.
  - Select a tile, then select the project's name to open that project's vulnerability report.
Risk scores are calculated from multiple factors, including:
- Severity of vulnerabilities
- Age of vulnerabilities
- KEV (Known Exploited Vulnerabilities) status
- EPSS (Exploit Prediction Scoring System) score
Vulnerabilities by age
The Vulnerabilities by age chart is available on group and project dashboards. It shows the distribution of unresolved vulnerabilities based on the amount of time since they were first detected. You can group vulnerabilities by severity or by report type, helping you identify where remediation activities may be needed.
To view details:
- Hover over a data point to see the vulnerability count for that age grouping.
- Use the dropdown list to filter by Severity (for example, Critical, High, Medium).
- Use the buttons to group the data by either Severity (critical, high, medium, low, info, and unknown) or Report type (SAST, DAST, dependency scanning, and others).
Top 10 CWEs
The availability of this feature is controlled by a feature flag.
The Top 10 CWEs chart is available on group and project dashboards. It shows the 10 most common CWE identifiers associated with the open vulnerabilities in the group or project.
To view details:
- Hover over a data point to see the total number of vulnerabilities of each CWE type.
- Use the dropdown list to filter by Severity (for example, Critical, Medium, or High).
Filter the entire dashboard
You can filter results at two levels:
- Dashboard filters: Apply to the entire dashboard. All charts update when you use these filters.
- Chart and panel filters: Apply only to the chart or panel you are viewing.
Available dashboard filters include:
- Report type: Filter by scanner, including SAST, DAST, dependency scanning, and others.
- Project: Limit results to specific projects. Available only for group security dashboards.
On the group security dashboard, you can also filter by:
- Security attributes: Filter by the security attributes applied to your projects, which include categories for business impact, application, business unit, internet exposure, and location. These filters can be inclusive (using the is one of operator) or exclusive (using the is not one of operator). To configure your security attributes and apply them to projects, see security attributes.
Dashboard filter behavior:
- Filters apply immediately across all dashboard charts and panels.
- Filters that you apply continue to apply throughout your session unless you remove them.
- When you open a vulnerability report from the dashboard, active filters are automatically applied to the vulnerability report.
To apply a filter to the whole dashboard:
- In the filter bar at the top of the dashboard, select Filter results…
- From the dropdown list, choose the filter type.
- Select one or more filter values.
Export as PDF
You can export the security dashboard as a PDF for use in reports and presentations. The export captures the current state of all of the charts and panels in the dashboard, including any active filters.
To export the dashboard as a PDF:
- In the top bar, select Search or go to and find your project or group.
- Select Secure > Security dashboard.
- Optional. Apply filters to customize the data included in the export.
- Select Export as PDF.
Legacy security dashboards
GitLab Self-Managed customers that have not enabled advanced vulnerability management cannot access the latest security dashboards. In this case, you still have access to the legacy security dashboards.
Security dashboards are used to assess the security posture of your applications. GitLab provides you with a collection of metrics, ratings, and charts for the vulnerabilities detected by the security scanners run on your project. The security dashboard provides data such as:
- Vulnerability trends over a 30, 60, or 90-day time frame for all projects in a group
- A letter grade rating for each project based on vulnerability severity
- The total number of vulnerabilities detected within the last 365 days including their severity
Use security dashboard data to improve your security posture. For example, the 365-day trend view shows which days had a spike in vulnerabilities. Examine the code changes from those days to perform a root-cause analysis and build better policies to prevent future vulnerabilities.
For an overview, see Security Dashboard - Advanced Security Testing.
Prerequisites for the legacy dashboards
To view the security dashboards, you must have:
- The Developer role for the group or project.
- At least one security scanner configured in your project.
- A successful security scan performed on the default branch of your project.
- At least one detected vulnerability in the project.
The security dashboards show the results of scans from the most recently completed pipeline on the default branch. Dashboards are updated with the results of completed pipelines run on the default branch; they do not include vulnerabilities discovered in pipelines from other, unmerged branches.
Viewing the legacy security dashboard
The security dashboard can be viewed at the project, group, and Security Center levels. Each dashboard provides a unique viewpoint of your security posture.
Project security dashboard
The Project security dashboard shows the total number of vulnerabilities detected over time, with up to 365 days of historical data for a given project. The dashboard is a historical view of open vulnerabilities in the default branch. Open vulnerabilities are those of only Needs triage or Confirmed status (Dismissed or Resolved vulnerabilities are excluded).
To view a project’s security dashboard:
- In the top bar, select Search or go to and find your project.
- Select Secure > Security dashboard.
- Filter and search for what you need.
- To filter the chart by severity, select the legend name.
- To view a specific time frame, use the time range handles (scroll-handle).
- To view a specific area of the chart, select the left-most icon (marquee-selection) and drag across the chart.
- To reset to the original range, select Remove Selection (redo).
Downloading the vulnerability chart
You can download an image of the vulnerability chart from the Project security dashboard to use in documentation, presentations, and so on. To download the image of the vulnerability chart:
- In the top bar, select Search or go to and find your project.
- Select Secure > Security dashboard.
- Select Save chart as an image (download).
You are prompted to download the image in SVG format.
Group security dashboard
The group security dashboard provides an overview of vulnerabilities found in the default branches of all projects in a group and its subgroups. The group security dashboard supplies the following:
- Vulnerability trends over a 30, 60, or 90-day time frame
- A letter grade for each project in the group according to its highest-severity open vulnerability. The letter grades are assigned using the following criteria:
  - Grade F: One or more critical vulnerabilities
  - Grade D: One or more high or unknown vulnerabilities
  - Grade C: One or more medium vulnerabilities
  - Grade B: One or more low vulnerabilities
  - Grade A: Zero vulnerabilities
To view the group security dashboard:
- In the top bar, select Search or go to and find your group.
- Select Security > Security dashboard.
- Hover over the Vulnerabilities over time chart to get more details about vulnerabilities.
  - You can display the vulnerability trends over a 30, 60, or 90-day time frame (the default is 90 days).
  - To view aggregated data beyond a 90-day time frame, use the VulnerabilitiesCountByDay GraphQL API. GitLab retains the data for 365 days.
- Select the arrows under the Project security status section to see which projects fall under a particular letter-grade rating:
  - You can see how many vulnerabilities of a particular severity are found in a project.
  - You can select a project's name to directly access its project security dashboard.
Vulnerability metrics in the value streams dashboard
Additional vulnerability metrics are available in the value streams dashboard comparison panel, which helps you understand security exposure in the context of your organization's software delivery workflows.
Related topics
- Security center
- Vulnerability reports
- Vulnerability Page
- April 2026
- First seen by Releasebot: Apr 20, 2026
CI/CD inputs
GitLab adds CI/CD inputs to make pipeline configuration more flexible, with typed parameters, built-in validation, reusable templates, and support for includes, triggers, and pipeline inputs alongside CI/CD variables.
Use CI/CD inputs to increase the flexibility of CI/CD configuration
Inputs and CI/CD variables can be used in similar ways, but have different benefits:
- Inputs provide typed parameters for reusable templates with built-in validation at pipeline creation time. To define specific values when the pipeline runs, use inputs instead of CI/CD variables.
- CI/CD variables offer flexible values that can be defined at multiple levels, but can be modified throughout pipeline execution. Use variables for values that need to be accessible in the job’s runtime environment. You can also use predefined variables with rules for dynamic pipeline configuration.
CI/CD Inputs and variables comparison
Inputs:
- Purpose: Defined in CI configurations (templates, components, or .gitlab-ci.yml) and assigned values when a pipeline is triggered, allowing consumers to customize reusable CI configurations.
- Modification: Once passed at pipeline initialization, input values are interpolated in the CI/CD configuration and remain fixed for the entire pipeline run.
- Scope: Available only in the file where they are defined, whether that is the .gitlab-ci.yml file or an included file. You can pass them explicitly to other files using include:inputs, or to a downstream pipeline using trigger:inputs.
- Validation: Provide robust validation capabilities including type checking, regex patterns, predefined option lists, and helpful descriptions for users.
CI/CD Variables:
- Purpose: Values that can be set as environment variables during job execution and in various parts of the pipeline for passing data between jobs.
- Modification: Can be dynamically generated or modified during pipeline execution through dotenv artifacts, conditional rules, or directly in job scripts.
- Scope: Can be defined globally (affecting all jobs), at the job level (affecting only specific jobs), or for the entire project or group through the GitLab UI.
- Validation: Simple key-value pairs with minimal built-in validation, though you can add some controls through the GitLab UI for project variables.
Define input parameters with spec:inputs
Use spec:inputs in the CI/CD configuration header to define input parameters that can be passed to the configuration file.
Use the $[[ inputs.input-id ]] interpolation format outside the header section to declare where to use the inputs.
Example:
```yaml
spec:
  inputs:
    job-stage:
      default: test
    environment:
      default: production
---
scan-website:
  stage: $[[ inputs.job-stage ]]
  script:
    - ./scan-website $[[ inputs.environment ]]
```

Inputs are mandatory if default is not specified.
Inputs are evaluated and populated when the configuration is fetched during pipeline creation.
A string containing an input must be less than 1 MB.
A string inside an input must be less than 1 KB.
Inputs can use CI/CD variables, but have the same variable limitations as the include keyword.
If the file that defines spec:inputs also contains job definitions, add a YAML document separator (---) after the header.
Set the values for the inputs when you:
- Trigger a new pipeline using this configuration file. Always set default values when using inputs to configure new pipelines with any method other than include. Otherwise the pipeline could fail to start if a new pipeline triggers automatically, including in merge request pipelines, branch pipelines, and tag pipelines.
- Include the configuration in your pipeline. Any inputs that are mandatory must be added to the include:inputs section, and are used every time the configuration is included.
Input configuration
To configure inputs, use:
- spec:inputs:default to define default values for inputs when not specified. When you specify a default, the inputs are no longer mandatory.
- spec:inputs:description to give a description to a specific input. The description helps people understand the input details or expected values.
- spec:inputs:options to specify a list of allowed values for an input.
- spec:inputs:regex to specify a regular expression that the input must match.
- spec:inputs:type to force a specific input type, which can be string (default), array, number, or boolean.
- spec:inputs:rules to define conditional options and default values based on the values of other inputs.
You can define multiple inputs per CI/CD configuration file, and each input can have multiple configuration parameters.
Example in scan-website-job.yml:
```yaml
spec:
  inputs:
    job-prefix:
      description: "Define a prefix for the job name"
    job-stage:
      default: test
    environment:
      options: ['test', 'staging', 'production']
    concurrency:
      type: number
      default: 1
    version:
      type: string
      regex: ^v\d\.\d+(\.\d+)$
    export_results:
      type: boolean
      default: true
---
$[[ inputs.job-prefix ]]-scan-website:
  stage: $[[ inputs.job-stage ]]
  script:
    - echo "scanning website -e $[[ inputs.environment ]] -c $[[ inputs.concurrency ]] -v $[[ inputs.version ]]"
    - if $[[ inputs.export_results ]]; then echo "export results"; fi
```

In this example:
- job-prefix is a mandatory string input and must be defined.
- job-stage is optional. If not defined, the value is test.
- environment is a mandatory string input that must match one of the defined options.
- concurrency is an optional numeric input. When not specified, it defaults to 1.
- version is a mandatory string input that must match the specified regular expression.
- export_results is an optional boolean input. When not specified, it defaults to true.
Input types
You can specify that an input must use a specific type with the optional spec:inputs:type keyword.
The input types are:
- array
- boolean
- number
- string (default)
When an input replaces an entire YAML value in the CI/CD configuration, it is interpolated into the configuration as its specified type.
Example:
```yaml
spec:
  inputs:
    array_input:
      type: array
    boolean_input:
      type: boolean
    number_input:
      type: number
    string_input:
      type: string
---
test_job:
  allow_failure: $[[ inputs.boolean_input ]]
  needs: $[[ inputs.array_input ]]
  parallel: $[[ inputs.number_input ]]
  script: $[[ inputs.string_input ]]
```

When an input is inserted into a YAML value as part of a larger string, the input is always interpolated as a string.
Example:
```yaml
spec:
  inputs:
    port:
      type: number
---
test_job:
  script:
    - curl "https://gitlab.com:$[[ inputs.port ]]"
```

Array type
The content of the items in an array type can be any valid YAML map, sequence, or scalar. More complex YAML features like !reference cannot be used.
When using the value of an array input in a string, the array input is converted to its string representation, which might not match your expectations for complex YAML structures such as maps.
Example:
```yaml
spec:
  inputs:
    rules-config:
      type: array
      default:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
          when: manual
        - if: $CI_PIPELINE_SOURCE == "schedule"
---
test_job:
  rules: $[[ inputs.rules-config ]]
  script: ls
```

Array inputs must be formatted as JSON when passing inputs manually for manually triggered pipelines, the pipeline triggers API, the pipelines API, Git push options, and pipeline schedules.
Access individual array elements
Use bracket notation with an index number to access individual elements of an array input. Array items are indexed starting at [0].
Example:
```yaml
spec:
  inputs:
    supported_versions:
      type: array
      default:
        - '2.0'
        - '1.0'
        - '0.1'
---
job:
  script:
    - echo 'Latest version is $[[ inputs.supported_versions[0] ]]'
```

You can chain array indexing with dot notation to access nested values.
Example:
```yaml
spec:
  inputs:
    servers:
      type: array
      default:
        - host: server1.example.com
          port: 8080
---
job:
  script:
    - curl "https://$[[ inputs.servers[0].host ]]:$[[ inputs.servers[0].port ]]"
```

For multi-dimensional arrays, use multiple indices in a row.
Example:
```yaml
spec:
  inputs:
    matrix:
      type: array
      default:
        - ['a', 'b']
        - ['c', 'd']
---
job:
  script:
    - echo $[[ inputs.matrix[0][1] ]]
```

You can chain together a maximum of 5 indices per segment.
Multi-line input string values
Inputs support different value types. You can pass multi-line string values using a YAML block scalar.
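For example, a sketch of a multi-line default using a literal block scalar (the `message` input name is illustrative):

```yaml
spec:
  inputs:
    message:
      type: string
      default: |          # literal block scalar preserves line breaks
        This default value
        spans multiple lines.
---
print-job:
  script:
    - echo "$[[ inputs.message ]]"
```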
Define conditional input options with spec:inputs:rules
Use spec:inputs:rules to define different options and default values for an input based on the values of other inputs.
Rules are evaluated in order. The first rule with a matching if condition is used. The last rule without an if condition acts as a fallback.
Example:
```yaml
spec:
  inputs:
    cloud_provider:
      options: ['aws', 'gcp', 'azure']
      default: 'aws'
      description: 'Cloud provider'
    environment:
      options: ['development', 'staging', 'production']
      default: 'development'
      description: 'Target environment'
    instance_type:
      description: 'VM instance type'
      rules:
        - if: $[[ inputs.cloud_provider ]] == 'aws' && $[[ inputs.environment ]] == 'development'
          options: ['t3.micro', 't3.small']
          default: 't3.micro'
        - if: $[[ inputs.cloud_provider ]] == 'aws' && $[[ inputs.environment ]] == 'production'
          options: ['t3.xlarge', 't3.2xlarge', 'm5.xlarge']
          default: 't3.xlarge'
        - if: $[[ inputs.cloud_provider ]] == 'gcp'
          options: ['e2-micro', 'e2-small', 'e2-standard-4']
          default: 'e2-micro'
        - if: $[[ inputs.cloud_provider ]] == 'azure'
          options: ['Standard_B1s', 'Standard_B2s', 'Standard_D2s_v3']
          default: 'Standard_B1s'
        - default: 'small'
---
deploy:
  script: |
    echo "Deploying to $[[ inputs.cloud_provider ]]"
    echo "Environment: $[[ inputs.environment ]]"
    echo "Instance: $[[ inputs.instance_type ]]"
```

Allow user-entered values with default: null
Use spec:inputs:rules with default: null and without options to allow users to enter their own value for an input.
Example:
```yaml
spec:
  inputs:
    deployment_type:
      options: ['standard', 'custom']
      default: 'standard'
    custom_config:
      description: 'Custom configuration value'
      rules:
        - if: $[[ inputs.deployment_type ]] == 'custom'
          default: null
---
deploy:
  script:
    - echo "Config: $[[ inputs.custom_config ]]"
```

Use boolean inputs with spec:inputs:rules
Boolean values can be compared using boolean literals (true/false) in rule conditions.
Example:
```yaml
spec:
  inputs:
    publish:
      type: boolean
      default: true
    publish_stage:
      rules:
        - if: $[[ inputs.publish ]] == true
          default: 'publish'
        - if: $[[ inputs.publish ]] == false
          default: 'test'
---
job:
  stage: $[[ inputs.publish_stage ]]
  script:
    - echo "Publishing is $[[ inputs.publish ]]"
```

Set input values
For configuration added with include, use include:inputs to set the values for inputs when the included configuration is added to the pipeline.
Example:
```yaml
include:
  - local: 'scan-website-job.yml'
    inputs:
      job-prefix: 'some-service-'
      environment: 'staging'
      concurrency: 2
      version: 'v1.3.2'
      export_results: false
```

Inputs for the included configuration:
| Input | Value | Details |
|-------|-------|---------|
| job-prefix | some-service- | Must be explicitly defined. |
| job-stage | test | Not defined in include:inputs, so the value comes from spec:inputs:default in the included configuration. |
| environment | staging | Must be explicitly defined, and must match one of the values in spec:inputs:options in the included configuration. |
| concurrency | 2 | Must be a numeric value to match the spec:inputs:type set to number in the included configuration. Overrides the default value. |
| version | v1.3.2 | Must be explicitly defined, and must match the regular expression in the spec:inputs:regex in the included configuration. |
| export_results | false | Must be either true or false to match the spec:inputs:type set to boolean in the included configuration. Overrides the default value. |

With multiple include entries
Inputs must be specified separately for each include entry.
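For instance, the same input value can only be passed to two included files by repeating it under each entry; in this sketch, `scan-api-job.yml` is a hypothetical second configuration file:

```yaml
include:
  - local: 'scan-website-job.yml'
    inputs:
      environment: 'staging'   # applies only to this include entry
  - local: 'scan-api-job.yml'  # hypothetical second file
    inputs:
      environment: 'staging'   # must be repeated for this entry
```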
For a pipeline
Inputs provide advantages over variables, including type checking, validation, and a clear contract. Unexpected inputs are rejected. Inputs for pipelines must be defined in the spec:inputs header of the main .gitlab-ci.yml file. You cannot use inputs defined in included files for pipeline-level configuration.
In GitLab 17.7 and later, pipeline inputs are recommended over passing pipeline variables. For enhanced security, you should disable pipeline variables when using inputs.
You should always set default values when defining inputs for pipelines. Otherwise the pipeline could fail to start if a new pipeline triggers automatically.
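A minimal sketch of a pipeline-level input with a default, so automatically triggered pipelines still start (the `deploy_environment` name is illustrative):

```yaml
spec:
  inputs:
    deploy_environment:
      default: 'staging'   # default keeps branch and tag pipelines working
---
deploy:
  script:
    - echo "Deploying to $[[ inputs.deploy_environment ]]"
```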
You can set input values with:
- Downstream pipelines
- Manually triggered pipelines
- The pipeline triggers API
- The pipelines API
- Git push options
- Pipeline schedules
- The trigger keyword
A pipeline can take up to 20 inputs.
You can pass inputs to downstream pipelines if the downstream pipeline’s configuration file uses spec:inputs.
Example with trigger:inputs:
```yaml
trigger-job:
  trigger:
    strategy: mirror
    include:
      - local: path/to/child-pipeline.yml
        inputs:
          job-name: "defined"
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
```

Define pipeline inputs in external files
You can reuse pipeline input definitions across multiple CI/CD configurations by defining them in external files and including them in a project's pipeline configuration with spec:include.
Example shared-inputs.yml:
```yaml
inputs:
  environment:
    description: "Deployment environment"
    options: ['staging', 'production']
  region:
    default: 'us-east-1'
```

Include it in .gitlab-ci.yml:
```yaml
spec:
  include:
    - local: /shared-inputs.yml
---
deploy:
  script:
    - echo "Deploying to $[[ inputs.environment ]] in $[[ inputs.region ]]"
```

If the file is stored outside your project, you can use:
- project for files in another GitLab project. Use the full project path and define the filename with file. You can optionally also define the ref to fetch the file from.
- remote for file on another server. Use the full URL to the file.
You can also include multiple input files at the same time.
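Combining these options, a header might pull input definitions from several sources at once; in this sketch, the project path, the second file name, and the remote URL are hypothetical:

```yaml
spec:
  include:
    - local: /shared-inputs.yml
    - project: my-group/ci-templates        # hypothetical project path
      file: /inputs/deploy-inputs.yml       # hypothetical file
      ref: main
    - remote: https://example.com/inputs.yml  # hypothetical URL
---
job:
  script:
    - echo "Environment is $[[ inputs.environment ]]"
```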
You cannot use spec:include for CI/CD component inputs.
Override inputs from an external file
Input keys must be unique across all included files and inline specifications. Defining an input with the same key in multiple included files or both an included file and the inputs: section in the .gitlab-ci.yml configuration returns an error.
Specify functions to manipulate input values
You can specify predefined functions in the interpolation block to manipulate the input value.
Format:
```yaml
$[[ inputs.input-id | <function1> | <function2> | ... <functionN> ]]
```

Functions:
- Only predefined interpolation functions are permitted.
- A maximum of 3 functions may be specified in a single interpolation block.
- Functions are executed in the sequence they are specified.
Example:
```yaml
spec:
  inputs:
    test:
      default: 'test $MY_VAR'
---
test-job:
  script:
    - echo $[[ inputs.test | expand_vars | truncate(5,8) ]]
```

In this example, assuming $MY_VAR is an unmasked project variable with value "my value":
- expand_vars expands the value to "test my value".
- truncate applies to "test my value" with a character offset of 5 and length 8.
- The script line becomes echo my value, so the job outputs "my value".
Predefined interpolation functions include expand_vars, truncate, posix_escape.
expand_vars
Use expand_vars to expand CI/CD variables in the input value.
Only variables usable with the include keyword and which are not masked can be expanded.
Nested variable expansion is not supported.
Example:
```yaml
spec:
  inputs:
    test:
      default: 'test $MY_VAR'
---
test-job:
  script:
    - echo $[[ inputs.test | expand_vars ]]
```

If $MY_VAR is unmasked with the value "my value", the input expands to "test my value".
truncate
Use truncate to shorten the interpolated value.
Format: truncate(<offset>,<length>)
Example:
```yaml
$[[ inputs.test | truncate(3,5) ]]
```

If inputs.test is "0123456789", the output is "34567".
posix_escape
Use posix_escape to escape any POSIX Bourne shell control or meta characters in input values.
It escapes characters by inserting \ before relevant characters.
Example:
```yaml
spec:
  inputs:
    test:
      default: |
        A string with single ' and double " quotes and blanks
---
test-job:
  script:
    - printf '%s\n' $[[ inputs.test | posix_escape ]]
```

This escapes any characters that could be shell control or meta characters.
Do not rely on posix_escape for security purposes with untrusted input values.
It makes a best-effort attempt to preserve the input value exactly, but some character combinations could still cause undesired results.
For security, ensure inputs are trusted. Use spec:input:type number or boolean, spec:input:regex, or spec:input:options to prevent problematic inputs.
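For example, a sketch that constrains inputs so only safe values can reach the shell (the input names and patterns are illustrative):

```yaml
spec:
  inputs:
    replicas:
      type: number               # numeric type rejects arbitrary strings
      default: 1
    image_tag:
      type: string
      regex: ^[a-zA-Z0-9._-]+$   # restricts the tag to safe characters
      default: latest
---
deploy:
  script:
    - echo "Deploying $[[ inputs.replicas ]] replicas of tag $[[ inputs.image_tag ]]"
```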
If combining posix_escape with expand_vars, set expand_vars first. Otherwise posix_escape would escape the $ in the variable, preventing expansion.
Example:
```yaml
test-job:
  script:
    - echo $[[ inputs.test | expand_vars | posix_escape ]]
```

Troubleshooting
YAML syntax errors when using inputs in rules
When using input to modify rules:if expressions, you might get syntax errors related to string handling in CI/CD variable expressions.
Expressions in rules:if expect a CI/CD variable compared to a quoted string or another variable.
When input values are inserted into rules configuration at pipeline runtime, the resulting value might not be a quoted string or variable, causing errors.
Example:
```yaml
spec:
  inputs:
    branch:
      default: $CI_DEFAULT_BRANCH
    branch2:
      default: $CI_DEFAULT_BRANCH
---
job-name:
  rules:
    - if: $CI_COMMIT_REF_NAME == $[[ inputs.branch ]]
    - if: $CI_COMMIT_REF_NAME == $[[ inputs.branch2 ]]
```

In the main configuration:
```yaml
include:
  inputs:
    branch: $CI_DEFAULT_BRANCH  # Valid
    branch2: main               # Invalid
```

Using branch: $CI_DEFAULT_BRANCH is valid: the if clause evaluates to if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH.
Using branch2: main is invalid: the if clause evaluates to if: $CI_COMMIT_REF_NAME == main, which is invalid because main is a string but not quoted.
To resolve, ensure expressions remain properly formatted after input values are inserted, possibly adding quotes.
Example:
```yaml
rules:
  - if: $CI_COMMIT_REF_NAME == "$[[ inputs.branch2 ]]"
```

For interpolation functions like expand_vars, you might need to quote the entire if expression.
Example:
```yaml
spec:
  inputs:
    environment:
      default: "$ENVIRONMENT"
---
job:
  script:
    - echo $[[ inputs.environment | expand_vars ]]
  rules:
    - if: '"$[[ inputs.environment | expand_vars ]]" == "production"'
```

Quoting both the input and the entire if expression ensures valid syntax after evaluation.
When quotes are nested, use " for inner quotes and ' for outer quotes, or vice versa.
Job names do not need to be quoted.
- April 2026
- No date parsed from source.
- First seen by Releasebot:Apr 20, 2026
Work items
GitLab expands Work items into a unified planning hub for issues, epics, tasks, objectives, key results, and test cases. It replaces separate Issues and Epics pages in GitLab 18.10+, adds filtering, sorting, display controls, and Markdown references.
Work items
Work items are the core elements for planning and tracking work in GitLab. Planning and tracking product development often requires breaking work into smaller, manageable parts while maintaining connection to the bigger picture. Work items are designed around this fundamental need, providing a unified way to represent units of work at any level, from strategic initiatives to individual tasks.
The hierarchical nature of work items enables clear relationships between different levels of work, helping teams understand how daily tasks contribute to larger goals and how strategic objectives break down into actionable components.
This structure supports various planning frameworks like Scrum, Kanban, and portfolio management approaches, while giving teams visibility into progress at every level.
Work item types
GitLab supports the following work item types:
- Issues: Track tasks, features, and bugs.
- Epics: Manage large initiatives across multiple milestones and issues.
- Tasks: Track small units of work.
- Objectives and key results: Track strategic goals and their measurable outcomes.
- Test cases: Integrate test planning directly into your GitLab workflows.
View all work items
The Work items list is the central place to view and manage all work item types (such as issues, epics, and tasks) for a project or group. Use this view to understand the full scope of work in your project or group and prioritize effectively.
In earlier versions of GitLab, issues and epics had separate list pages under Plan > Issues and Plan > Epics. In GitLab 18.10 and later, these pages are replaced by Plan > Work items, which consolidates all work item types in a single view. If you had pinned Issues or Epics in the sidebar, Work items is pinned in their place. URLs that contain /epics/:iid or /issues/:iid automatically redirect to /work_items/:iid.
To view work items for a project or group:
- In the top bar, select Search or go to and find your project or group.
- Select Plan > Work items.
Filter work items
The Work items list shows all work item types by default. To view a specific type (for example, only issues or only epics), use the Type filter.
To filter the work items list:
- At the top of the page, from the filter bar, select a filter, operator, and its value. For example, to view only epics, select the filter Type, operator is, and value Epic.
- Optional. Add more filters to refine your search.
- Press Enter or select the search icon (🔍).
Available filters
These filters are available for work items:
- Assignee. Operators: is, is not one of, is one of
- Author. Operators: is, is not one of, is one of
- Confidential. Values: Yes, No
- Contact. Operators: is
- Status. Operators: is
- Health status. Operators: is, is not
- Iteration. Operators: is, is not
- Label. Operators: is, is not one of, is one of
- Milestone. Operators: is, is not
- My reaction. Operators: is, is not
- Organisation. Operators: is
- Parent. Operators: is, is not. Values: Any, Issue, Epic, Objective
- Release. Operators: is, is not
- Search within. Values: Titles, Descriptions
- State. Values: Any, Open, Closed
- Type. Values: Issue, Incident, Task, Epic, Objective, Key Result, Test case
- Weight. Operators: is, is not
To access filters you’ve used recently, on the left side of the filter bar, select the Recent searches (🕘) dropdown list.
Sort work items
Sort the list of work items by the following:
- Created date
- Updated date
- Start date
- Due date
- Title
- Status
- Weight
To change the sorting criteria:
- On the right of the filter bar, select the Created date dropdown list.
To toggle the sorting order between ascending and descending:
- On the right of the filter bar, select Sort direction (⬇️ or ⬆️).
For more information about sorting logic, see sorting and ordering issue lists.
Configure list display preferences
Customize how work items are displayed on the list pages by showing or hiding specific metadata fields and configuring view preferences.
GitLab saves your display preferences at different levels:
- Fields: Saved per namespace. You can have different field visibility settings for different groups and projects based on your workflow needs. For example, you can show assignee and labels in one group or project, but hide them in another.
- Your preferences: Saved globally across all projects and groups. This ensures consistent behavior for how you prefer to view work items.
To configure display preferences:
- In the top bar, select Search or go to and find your group.
- Select Plan > Work items.
- On the right of the filter bar, select Display options (⚙️).
- Under Fields, turn on or turn off the metadata you want to display:
- Status (for issues)
- Assignee
- Labels
- Weight (for issues)
- Milestone
- Iteration (for issues)
- Dates: Due dates and date ranges
- Health: Health status indicators
- Blocked/Blocking: Blocking relationship indicators
- Comments: Comment counts
- Popularity: Popularity metrics
- Under Your preferences, turn on or turn off Open items in side panel to choose how items open when you select them:
- On (default): Items open in a drawer on the right side of the screen.
- Off: Items open in a full page view.
Your preference is saved and remembered across all your sessions and devices.
Work item Markdown reference
You can reference work items in GitLab Flavored Markdown fields with [work_item:123]. For more information, see GitLab-specific references.
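For example, a description or comment could reference a work item like this (the ID 123 is illustrative):

```markdown
This rollout is tracked in [work_item:123].
```

When rendered, GitLab replaces the reference with a link to the matching work item.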
Related topics
- Linked issues
- Linked epics
- Issue boards
- Labels
- Iterations
- Milestones
- Custom fields
SAST False Positive Detection Flow
GitLab adds SAST False Positive Detection Flow to automatically analyze critical and high severity SAST vulnerabilities, reduce report noise, and show a confidence score, explanation, and visual badge in the vulnerability report. The analysis can run automatically or be triggered manually.
The SAST False Positive Detection Flow automatically analyzes critical and high severity SAST vulnerabilities to identify potential false positives. This process reduces noise in your vulnerability report by flagging vulnerabilities that are likely not actual security risks.
When a SAST security scan runs, GitLab Duo automatically analyzes each vulnerability to determine the likelihood that it’s a false positive. Detection is available for vulnerabilities from GitLab-supported SAST analyzers.
The GitLab Duo assessment includes:
- Confidence score: A numerical score indicating the likelihood that the finding is a false positive.
- Explanation: Contextual reasoning about why the finding may or may not be a true positive.
- Visual indicator: In the vulnerability report, a badge that shows the assessment.
Results are based on AI analysis and should be reviewed by security professionals. This feature requires GitLab Duo with an active subscription.
For a click-through demo, see SAST False Positive Detection Flow.
You can’t trigger this flow by mentioning, assigning, or requesting a review from its service account. The flow runs automatically after security scans complete. You can run it manually from the vulnerability report by clicking the Check for false positive button.
Run SAST False Positive Detection
The flow runs automatically when:
- A SAST security scan completes successfully on the default branch.
- The scan detects Critical or High severity vulnerabilities.
- GitLab Duo features are enabled for the project or group.
You can also manually trigger analysis for existing vulnerabilities:
- In the top bar, select Search or go to and find your project.
- Select Secure > Vulnerability report.
- Select the vulnerability you want to analyze.
- In the upper-right corner, select Check for false positive.
Related topics
- SAST false positive detection
- Vulnerability report
- SAST
Code Review Flow
GitLab adds Code Review Flow, an agentic GitLab Duo feature that analyzes merge requests, linked issues, and code changes to deliver contextual review comments, custom instructions, and automatic reviews from the UI with project, group, or application settings.
Depending on your add-on, GitLab runs one of two code review features:
- Code Review Flow: the agentic version, part of GitLab Duo Agent Platform.
- GitLab Duo Code Review: the non-agentic version, available only for users with the GitLab Duo Enterprise add-on.
This page describes the agentic version. Learn how the two features compare.
The Code Review Flow helps you streamline code reviews with agentic AI.
This flow:
- Analyzes code changes, merge request comments, and linked issues.
- Provides enhanced contextual understanding of repository structure and cross-file dependencies.
- Delivers detailed review comments with actionable feedback.
- Supports custom review instructions tailored to your project.
This flow is available in the GitLab UI only.
Use the flow
Prerequisites:
- Ensure you meet the Agent Platform prerequisites.
- Ensure Allow foundational flows and Code Review are turned on for the top-level group.
- Ensure you have the Developer, Maintainer, or Owner role for the project.
To use the Code Review Flow on a merge request:
- In the left sidebar, select Code > Merge requests and find your merge request.
- Use one of these methods to request a review:
- Assign @GitLabDuo as a reviewer.
- In a comment box, enter the quick action /assign_reviewer @GitLabDuo.
After you request a review, Code Review Flow starts a session that you can monitor until the review is complete.
Interact with GitLab Duo in reviews
In addition to assigning GitLab Duo as a reviewer, you can interact with GitLab Duo by:
- Replying to its review comments to ask for clarification or alternative approaches.
- Mentioning @GitLabDuo in any discussion thread to ask follow-up questions.
Interactions with GitLab Duo can help to improve the suggestions and feedback as you work to improve your merge request.
Feedback provided to GitLab Duo does not influence later reviews of other merge requests. There is a feature request to add this functionality; see issue 560116.
Custom code review instructions
Customize the behavior of Code Review Flow with repository-specific review instructions. You can guide GitLab Duo to:
- Focus on specific code quality aspects (such as security, performance, and maintainability).
- Enforce coding standards and best practices unique to your project.
- Target specific file patterns with tailored review criteria.
- Provide more detailed explanations for certain types of changes.
To configure custom instructions, see customize instructions for GitLab Duo.
Automatic reviews from GitLab Duo for a project
Automatic reviews from GitLab Duo ensure that all merge requests in your project receive an initial review. After a merge request is created, GitLab Duo reviews it unless:
- It’s marked as draft. For GitLab Duo to review the merge request, mark it ready.
- It contains no changes. For GitLab Duo to review the merge request, add changes to it.
Prerequisites:
- You must have at least the Maintainer role in a project.
To enable @GitLabDuo to automatically review merge requests:
- In the top bar, select Search or go to and find your project.
- Select Settings > Merge requests.
- In the GitLab Duo Code Review section, select Enable automatic reviews by GitLab Duo.
- Select Save changes.
For information on how credit usage is attributed for automatic reviews, see determine which code review feature runs.
Automatic reviews from GitLab Duo for groups and applications
Use group or application settings to enable automatic reviews for multiple projects.
Prerequisites:
- To turn on automatic reviews for groups, have the Owner role for the group.
- To turn on automatic reviews for all projects, be an administrator.
To enable automatic reviews for groups:
- In the top bar, select Search or go to and find your group.
- Select Settings > General.
- Expand the Merge requests section.
- In the GitLab Duo Code Review section, select Enable automatic reviews by GitLab Duo.
- Select Save changes.
To enable automatic reviews for all projects:
- In the upper-right corner, select Admin.
- In the left sidebar, select Settings > General.
- In the GitLab Duo Code Review section, select Enable automatic reviews by GitLab Duo.
- Select Save changes.
Settings cascade from application to group to project. More specific settings override broader ones.
For information on how credit usage is attributed for automatic reviews, see determine which code review feature runs.
Troubleshooting
Error DCR4000
You might get an error that states Code Review Flow is not enabled. Contact your group administrator to enable the foundational flow in the top-level group. Error code: DCR4000.
This error occurs when either foundational flows or Code Review Flow are turned off.
Contact your administrator and ask them to turn on Code Review Flow for your top-level group.
Error DCR4001
You might get an error that states Code Review Flow is enabled but the service account needs to be verified. Contact your administrator. Error code: DCR4001.
This error occurs when Code Review Flow is turned on, but the service account for the top-level group is not ready or is still being created.
Wait a few minutes for the service account to activate, then try again. If the error persists, contact your administrator and ask them to verify that a service account has been created in the top-level group with the Developer role.
Error DCR4002
You might get an error that states No GitLab Credits remain for this billing period. To continue using Code Review Flow, contact your administrator. Error code: DCR4002.
This error occurs when you have used all of your allocated GitLab Credits for the current billing period.
Contact your administrator to purchase additional credits or wait for your credits to reset at the start of the next billing period.
Error DCR4003
You might get an error that states you don't have permission to create a pipeline for Code Review Flow in this project. Contact your administrator to update your permissions. Error code: DCR4003.
This error occurs because Code Review Flow runs on a CI/CD pipeline, and you don’t have permission to create pipelines in this project.
Contact your administrator and ask them to give you the required permissions to execute pipelines.
Error DCR4004
You might get an error that states you need to set a default GitLab Duo namespace to use Code Review Flow in this project. Please set a default GitLab Duo namespace in your preferences. Error code: DCR4004.
This error occurs when GitLab Duo cannot identify a default GitLab Duo namespace for the user that started the review.
Set a default GitLab Duo namespace in your preferences, then request a review again.
Error DCR4005
You might get an error that states Code Review Flow could not obtain the required authentication tokens to connect to the GitLab AI Gateway and the GitLab API. Please request a new review. If the issue persists, contact your administrator. Error code: DCR4005.
Code Review Flow requires authentication tokens to connect to the GitLab AI Gateway and the GitLab API. This error occurs when those tokens cannot be generated, usually due to an incorrect GitLab Duo setup or a transient infrastructure issue.
For self-managed instances, ask your administrator to verify the GitLab Duo configuration.
Error DCR4006
You might get an error that states Code Review Flow could not add the service account to this project. Contact your administrator to verify that the service account has the required project access. Error code: DCR4006.
This error occurs when the service account cannot be added as a member of the project. This can happen when a group membership lock is enabled or the service account does not have the required access.
Contact your administrator and ask them to verify that the service account can be added to the project as a developer.
Error DCR4007
You might get an error that states Code Review Flow is not available for this project. Contact your administrator to verify that the flow is enabled and the required configuration is in place. Error code: DCR4007.
This error occurs when the flow is disabled or the required configuration is missing for the project.
Contact your administrator and ask them to verify that the flow is enabled for the project.
Error DCR4008
You might get an error that states Code Review Flow could not create the required CI/CD pipeline. Please request a new review. If the problem persists, contact your administrator. Error code: DCR4008.
This error occurs when Code Review Flow cannot create or configure the CI/CD pipeline to run the review because of runner availability issues or internal configuration problems.
Try to restart the review. If the error persists, contact your administrator.
Error DCR4009
You might get an error that states Code Review Flow could not retrieve the source branch for this merge request. Please request a new review. Error code: DCR4009.
This error occurs when Code Review Flow is unable to retrieve the source branch for the merge request.
Try to restart the review.
Error DCR5000
You might get an error that states Something went wrong while starting Code Review Flow. Please try again later. Error code: DCR5000.
This error occurs when GitLab Duo Agent Platform is unable to start Code Review Flow due to an internal error.
Try to restart the review. If the error persists, contact your administrator.
Related topics
- GitLab Duo in merge requests
- Agent Platform AI models
Editor extensions
GitLab brings editor extensions that put GitLab and GitLab Duo directly into development environments, helping users manage projects, write and review code, track issues, and optimize pipelines without leaving the editor.
GitLab editor extensions bring the power of GitLab and GitLab Duo directly into your preferred development environments. Use GitLab features and GitLab Duo AI capabilities to handle everyday tasks without leaving your editor. For example:
- Manage your projects.
- Write and review code.
- Track issues.
- Optimize pipelines.
Our extensions boost your productivity and elevate your development process by bridging the gap between your coding environment and GitLab.
Available extensions
GitLab offers IDE extensions, such as those for VS Code, Visual Studio, JetBrains IDEs, and Neovim, with access to GitLab Duo and other GitLab features used to manage projects and applications.
If you prefer a command-line interface, try the GitLab CLI (glab).
Security considerations
To learn about the security risks of running agents locally in editor extensions and how to protect your local development environment, see security considerations for editor extensions.
Editor extensions team runbook
Use the editor extensions team runbook to learn more about debugging all supported editor extensions. For internal users, this runbook contains instructions for requesting internal help.
Feedback and contributions
We value your input on both the traditional and AI-native features. If you have suggestions, encounter issues, or want to contribute to the development of our extensions:
- Report issues in the respective extension's GitLab project.
- Submit feature requests by creating a new issue in the editor-extensions project.
- Submit merge requests in the respective GitLab projects.
Related topics
- GitLab Duo Agent Platform
- GitLab Duo (non-agentic)
- How we created an extension for VS Code
- GitLab for Visual Studio
- GitLab for JetBrains and Neovim
- Put glab at your fingertips with the GitLab CLI
ClickHouse
GitLab adds ClickHouse support for advanced analytics, including runner fleet dashboards, contribution analytics, GitLab Duo and SDLC trends, plus GraphQL API access for AI metrics. It also covers Cloud and self-managed setup, HA, migrations, monitoring, and upgrade guidance.
ClickHouse
ClickHouse is an open-source column-oriented database management system. It can efficiently filter, aggregate, and query across large data sets.
GitLab uses ClickHouse as a secondary data store to enable advanced analytics features such as GitLab Duo, SDLC trends, and CI Analytics. GitLab only stores data that supports these features in ClickHouse.
You should use ClickHouse Cloud to connect ClickHouse to GitLab.
Alternatively, you can bring your own ClickHouse. For more information, see ClickHouse recommendations for GitLab Self-Managed.
Analytics available with ClickHouse
After you configure ClickHouse, you can use the following analytics features:
- Runner fleet dashboard: Displays runner usage metrics and job wait times. Provides export of CSV files containing job counts and executed runner minutes by runner type and job status for each project.
- Contribution analytics: Provides analytics of group member contributions (push events, issues, merge requests) over time. ClickHouse reduces the likelihood of timeout issues for large instances.
- GitLab Duo and SDLC trends: Measures the impact of GitLab Duo on software development performance. Tracks development metrics (deployment frequency, lead time, change failure rate, time to restore) alongside AI-specific indicators (GitLab Duo seat adoption, Code Suggestions acceptance rates, and GitLab Duo Chat usage).
- GraphQL API for AI Metrics: Provides programmatic access to GitLab Duo and SDLC trend data through the AiMetrics, AiUserMetrics, and AiUsageData endpoints. Provides export of pre-aggregated metrics and raw event data for integration with BI tools and custom analytics.
Supported ClickHouse versions
The supported ClickHouse version differs depending on your GitLab version:
- GitLab 17.7 and later supports ClickHouse 23.x. To use either ClickHouse 24.x or 25.x, use the workaround.
- GitLab 18.1 and later supports ClickHouse 23.x, 24.x, and 25.x.
- GitLab 18.8 and later supports ClickHouse 23.x, 24.x, 25.x, and the Replicated database engine.
- Older clusters require an additional permission (dictGet); see the snippet.
ClickHouse Cloud is always compatible with the latest stable GitLab release.
ClickHouse 25.12 introduced a backward-incompatible change to ALTER MODIFY COLUMN that breaks the migration process for the GitLab ClickHouse integration in GitLab versions earlier than 18.8. To use ClickHouse 25.12, upgrade GitLab to 18.8 or later.
Set up ClickHouse
Choose your deployment type based on your operational requirements:
- ClickHouse Cloud (Recommended): Fully managed service with automatic upgrades, backups, and scaling.
- ClickHouse for GitLab Self-Managed (BYOC): Complete control over your infrastructure and configuration.
After setting up your ClickHouse instance:
- Create the GitLab database and user.
- Configure the GitLab connection.
- Verify the connection.
- Run ClickHouse migrations.
- Enable ClickHouse for Analytics.
Set up ClickHouse Cloud
Prerequisites:
- Have a ClickHouse Cloud account.
- Enable network connectivity from your GitLab instance to ClickHouse Cloud.
- Be an administrator of your GitLab instance.
To set up ClickHouse Cloud:
- Sign in to ClickHouse Cloud.
- Select New Service.
- Choose your service tier:
- Development: For testing and development environments.
- Production: For production workloads with high availability.
- Select your cloud provider and region. Choose a region close to your GitLab instance for optimal performance.
- Configure your service name and settings.
- Select Create Service.
- Once provisioned, note your connection details from the service dashboard:
- Host
- Port (usually 9440 for secure connections)
- Username
- Password
ClickHouse Cloud automatically handles version upgrades and security patches. Enterprise Edition (EE) customers can schedule upgrades to control when they occur, and avoid unexpected service interruptions during business hours. For more information, see upgrade ClickHouse.
After you create your ClickHouse Cloud service, you then create the GitLab database and user.
Set up ClickHouse for GitLab Self-Managed (BYOC)
Prerequisites:
- Have a ClickHouse instance installed and running. If ClickHouse is not installed, see:
- ClickHouse official installation guide.
- ClickHouse recommendations for GitLab Self-Managed.
- Have a supported ClickHouse version.
- Enable network connectivity from your GitLab instance to ClickHouse.
- Be an Administrator for both ClickHouse and your GitLab instance.
For ClickHouse for GitLab Self-Managed, you are responsible for planning and executing version upgrades, security patches, and backups. For more information, see Upgrade ClickHouse.
Configure High Availability
For a multi-node, high-availability (HA) setup, GitLab supports the Replicated table engine in ClickHouse.
Prerequisites:
- Have a ClickHouse cluster with multiple nodes. A minimum of three nodes is recommended.
- Define a cluster in the remote_servers configuration section.
- Configure the following macros in your ClickHouse configuration: cluster, shard, replica.
When configuring the database for HA, you must run the statements with the ON CLUSTER clause.
For more information, see ClickHouse Replicated database engine documentation.
Configure Load balancer
The GitLab application communicates with the ClickHouse cluster through the HTTP/HTTPS interface. For HA deployments, use an HTTP proxy or load balancer to distribute requests across ClickHouse cluster nodes.
Recommended load balancer options:
- chproxy - ClickHouse-specific HTTP proxy with built-in caching and routing.
- HAProxy - General-purpose TCP/HTTP load balancer.
- NGINX - Web server with load balancing capabilities.
- Cloud provider load balancers (AWS Application Load Balancer, GCP Load Balancer, Azure Load Balancer).
Basic chproxy configuration example provided.
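As a sketch of such a setup, a minimal chproxy configuration fronting a three-node cluster might look like the following (the listen address, node hostnames, user names, and passwords are placeholders, not values from this document):

```yaml
# chproxy listens here for HTTP requests from GitLab
server:
  http:
    listen_addr: ":9090"

# Credentials GitLab uses to authenticate to chproxy,
# mapped to a cluster and a ClickHouse user
users:
  - name: "gitlab"
    password: "PROXY_PASSWORD"
    to_cluster: "gitlab-clickhouse"
    to_user: "gitlab"

# The ClickHouse nodes chproxy balances requests across
clusters:
  - name: "gitlab-clickhouse"
    scheme: "http"
    nodes: ["ch-node-1:8123", "ch-node-2:8123", "ch-node-3:8123"]
    users:
      - name: "gitlab"
        password: "CLICKHOUSE_PASSWORD"
```

GitLab would then be configured with the chproxy URL (for example, http://chproxy-host:9090) rather than an individual node address.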
When using a load balancer, configure GitLab to connect to the load balancer URL instead of individual ClickHouse nodes.
For more information, see chproxy documentation.
After you configure your ClickHouse for GitLab Self-Managed instance, create the GitLab database and user.
Verify ClickHouse installation
Before configuring the database, verify ClickHouse is installed and accessible:
- Check ClickHouse is running:
- clickhouse-client --query "SELECT version()"
- If ClickHouse is running, you see the version number (for example, 24.3.1.12).
- Verify you can connect with credentials:
- clickhouse-client --host your-clickhouse-host --port 9440 --secure --user default --password 'your-password'
- If you have not configured TLS yet, use port 9000 without the --secure flag for initial testing.
Create database and user
To create the necessary user and database objects:
- Generate a secure password and save it.
- Sign in to:
- For ClickHouse Cloud, the ClickHouse SQL console.
- For ClickHouse for GitLab Self-Managed, the clickhouse-client.
- Run the provided SQL commands, replacing PASSWORD_HERE with the generated password.
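Based on the steps above, the commands take roughly this shape (the database name gitlab_clickhouse_main_production and the grant list are assumptions; consult your GitLab version's documentation for the exact statements):

```sql
-- Create the database for GitLab analytics data (name is illustrative)
CREATE DATABASE gitlab_clickhouse_main_production;

-- Create the application user with the password you generated
CREATE USER gitlab IDENTIFIED WITH sha256_password BY 'PASSWORD_HERE';

-- Grant the privileges GitLab needs on that database
GRANT SELECT, INSERT, ALTER, CREATE, UPDATE, DROP, TRUNCATE, OPTIMIZE
  ON gitlab_clickhouse_main_production.* TO gitlab;
```

For an HA cluster, run each statement with the ON CLUSTER clause, as described in Configure High Availability.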
Configure the GitLab connection
To provide GitLab with ClickHouse credentials:
- Edit /etc/gitlab/gitlab.rb with the appropriate database, url, username, and password.
- Replace the URL with the correct endpoint depending on your deployment.
- Save the file and reconfigure GitLab with sudo gitlab-ctl reconfigure.
For production deployments, configure TLS/SSL on your ClickHouse instance and use https:// URLs. For GitLab Self-Managed installations, see the Network Security documentation.
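As a sketch of the gitlab.rb entries described above (the gitlab_rails['clickhouse_databases'] key shape, database name, and URL are assumptions based on Linux package conventions; verify the exact keys against your version's documentation):

```ruby
# /etc/gitlab/gitlab.rb -- ClickHouse connection (illustrative values)
gitlab_rails['clickhouse_databases']['main']['database'] = 'gitlab_clickhouse_main_production'
gitlab_rails['clickhouse_databases']['main']['url']      = 'https://example.clickhouse.cloud:8443'
gitlab_rails['clickhouse_databases']['main']['username'] = 'gitlab'
gitlab_rails['clickhouse_databases']['main']['password'] = 'PASSWORD_HERE'
```

After editing, apply the change with sudo gitlab-ctl reconfigure.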
Verify the connection
To verify that your connection is set up successfully:
- Sign in to the Rails console.
- Execute the command: ClickHouse::Client.select('SELECT 1', :main)
- If successful, the command returns [{"1"=>1}].
If the connection fails, verify:
- ClickHouse service is running and accessible.
- Network connectivity from GitLab to ClickHouse. Check that firewalls and security groups allow connections.
- Connection URL is correct (host, port, protocol).
- Credentials are correct.
- For HA cluster deployments: Load balancer is properly configured and routing requests.
Run ClickHouse migrations
To create the required database objects, execute:
```shell
sudo gitlab-rake gitlab:clickhouse:migrate
```

Enable ClickHouse for Analytics
After your GitLab instance is connected to ClickHouse, you can enable features that use ClickHouse:
Prerequisites:
- You must have administrator access to the instance.
- ClickHouse connection is configured and verified.
- Migrations have been successfully completed.
To enable ClickHouse for Analytics:
- In the left sidebar, at the bottom, select Admin.
- Select Settings > General.
- Expand ClickHouse.
- Select Enable ClickHouse for Analytics.
- Select Save changes.
Disable ClickHouse for Analytics
To disable ClickHouse for Analytics:
Prerequisites:
- You must have administrator access to the instance.
To disable:
- In the left sidebar, at the bottom, select Admin.
- Select Settings > General.
- Expand ClickHouse.
- Clear the Enable ClickHouse for Analytics checkbox.
- Select Save changes.
Disabling ClickHouse for Analytics stops GitLab from querying ClickHouse but does not delete any data from your ClickHouse instance. Analytics features that rely on ClickHouse will fall back to alternative data sources or become unavailable.
Upgrade ClickHouse
ClickHouse Cloud automatically handles version upgrades and security patches. No manual intervention is required.
For information about upgrade scheduling and maintenance windows, see ClickHouse Cloud upgrades.
ClickHouse Cloud notifies you in advance of upcoming upgrades. Review the ClickHouse Cloud changelog to stay informed about new features and changes.
For ClickHouse for GitLab Self-Managed, you are responsible for planning and executing version upgrades.
Prerequisites:
- Have administrator access to the ClickHouse instance.
- Back up your data before upgrading. See Disaster recovery.
Before upgrading:
- Review the ClickHouse release notes for breaking changes.
- Check compatibility with your GitLab version.
- Test the upgrade in a non-production environment.
- Plan for potential downtime, or use a rolling upgrade strategy for HA clusters.
To upgrade ClickHouse:
- For single-node deployments, follow the ClickHouse upgrade documentation.
- For HA cluster deployments, perform a rolling upgrade to minimize downtime:
- Upgrade one node at a time.
- Wait for the node to rejoin the cluster.
- Verify cluster health before proceeding to the next node.
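The rolling-upgrade steps above can be sketched as a loop over the cluster nodes. This is an illustrative outline only: the hostnames are placeholders, and the `ssh` and `clickhouse-client` commands are commented out because they require real nodes:

```shell
# Rolling-upgrade sketch for a three-node cluster (hypothetical hostnames).
NODES="ch-node-1 ch-node-2 ch-node-3"
for node in $NODES; do
  echo "upgrading $node"
  # ssh "$node" 'sudo apt-get update && sudo apt-get install --only-upgrade clickhouse-server clickhouse-client'
  # ssh "$node" 'sudo systemctl restart clickhouse-server'
  # Wait until the node answers queries again before touching the next one:
  # until clickhouse-client --host "$node" --query 'SELECT 1' >/dev/null 2>&1; do sleep 5; done
done
```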
Always ensure the ClickHouse version remains compatible with your GitLab version. Incompatible versions might cause indexing to pause and features to fail. For more information, see supported ClickHouse versions.
For detailed upgrade procedures, see the ClickHouse documentation on updates.
Operations
Check migration status
Prerequisites:
- You must have administrator access to the instance.
To check the status of ClickHouse migrations:
- In the left sidebar, at the bottom, select Admin.
- Select Settings > General.
- Expand ClickHouse.
- Review the Migration status section if available.
Alternatively, check for pending migrations using the Rails console with the provided Ruby command.
Retry failed migrations
If a ClickHouse migration fails:
- Check the logs for error details. ClickHouse-related errors are logged in the GitLab application logs.
- Address the underlying issue (for example, insufficient memory, connectivity problems).
- Retry the migration using the appropriate command.
Migrations are designed to be idempotent and safe to retry. If a migration fails partway through, running it again resumes from where it left off or skips already-completed steps.
ClickHouse Rake tasks
GitLab provides several Rake tasks for managing your ClickHouse database.
The following Rake tasks are available:
- sudo gitlab-rake gitlab:clickhouse:migrate: Runs all pending ClickHouse migrations to create or update database schema.
- sudo gitlab-rake gitlab:clickhouse:drop: Drops all ClickHouse databases. Use with extreme caution as this deletes all data.
- sudo gitlab-rake gitlab:clickhouse:create: Creates ClickHouse databases if they do not exist.
- sudo gitlab-rake gitlab:clickhouse:setup: Creates databases and runs all migrations. Equivalent to running create and migrate tasks.
- sudo gitlab-rake gitlab:clickhouse:schema:dump: Dumps the current database schema to a file for backup or version control.
- sudo gitlab-rake gitlab:clickhouse:schema:load: Loads the database schema from a dump file.
For self-compiled installations, use bundle exec rake instead of sudo gitlab-rake and add RAILS_ENV=production to the end of the command.
Common task examples
Verify ClickHouse connection and schema
To verify that your ClickHouse connection is working:
- For installations that use the Linux package: sudo gitlab-rake gitlab:clickhouse:info
- For self-compiled installations: bundle exec rake gitlab:clickhouse:info RAILS_ENV=production
This task outputs debugging information about the ClickHouse connection and configuration.
Re-run all migrations
To run all pending migrations:
- For installations that use the Linux package: sudo gitlab-rake gitlab:clickhouse:migrate
- For self-compiled installations: bundle exec rake gitlab:clickhouse:migrate RAILS_ENV=production
Drop and recreate the database
This deletes all data in your ClickHouse database. Use only in development or when troubleshooting.
To drop and recreate the database:
- For installations that use the Linux package:
sudo gitlab-rake gitlab:clickhouse:drop
sudo gitlab-rake gitlab:clickhouse:setup
- For self-compiled installations:
bundle exec rake gitlab:clickhouse:drop RAILS_ENV=production
bundle exec rake gitlab:clickhouse:setup RAILS_ENV=production
You can use environment variables to control Rake task behavior:
- VERBOSE (Boolean): Set to true to see detailed output during migrations. Example: VERBOSE=true sudo gitlab-rake gitlab:clickhouse:migrate
Performance tuning
For resource sizing and deployment recommendations based on your user count, see system requirements.
For information about ClickHouse architecture and performance tuning, see the ClickHouse documentation on architecture.
Disaster recovery
Backup and restore
You should perform a full backup before upgrading the GitLab application. ClickHouse data is not included in GitLab backup tooling.
Backup and restore strategy depends on the choice of deployment.
ClickHouse Cloud
ClickHouse Cloud automatically:
- Manages the backups and restores.
- Creates and retains daily backups.
No additional configuration is required.
For more information, see ClickHouse Cloud backups.
ClickHouse for GitLab Self-Managed
If you manage your own ClickHouse instance, you should take regular backups to ensure data safety:
- Take initial full backups of tables (excluding system tables like metrics or logs) to an object storage bucket, for example AWS S3.
- Take incremental backups after this initial full backup.
Every full backup duplicates data, but this is the easiest approach for restoring data.
Alternatively, use clickhouse-backup. This is a third-party tool that provides similar functionality with additional features like scheduling and remote storage management.
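The full-plus-incremental strategy above can be expressed with ClickHouse's built-in BACKUP statement. This is a sketch only: the table name, bucket URL, and credentials are placeholders for your environment.

```sql
-- Initial full backup to object storage (placeholders throughout)
BACKUP TABLE gitlab_clickhouse_main_production.ci_finished_builds
    TO S3('https://my-bucket.s3.amazonaws.com/clickhouse/full/', '<access-key-id>', '<secret-key>');

-- Later incremental backups reference the full backup as their base
BACKUP TABLE gitlab_clickhouse_main_production.ci_finished_builds
    TO S3('https://my-bucket.s3.amazonaws.com/clickhouse/incr-001/', '<access-key-id>', '<secret-key>')
    SETTINGS base_backup = S3('https://my-bucket.s3.amazonaws.com/clickhouse/full/', '<access-key-id>', '<secret-key>');
```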
Monitoring
To ensure the stability of the GitLab integration, you should monitor the health and performance of your ClickHouse cluster.
ClickHouse Cloud
ClickHouse Cloud provides a native Prometheus integration that exposes metrics through a secure API endpoint.
After generating the API credentials, you can configure collectors to scrape metrics from ClickHouse Cloud. For example, a Prometheus deployment.
ClickHouse for GitLab Self-Managed
ClickHouse can expose metrics in Prometheus format. To enable this:
- Configure the prometheus section in your config.xml to expose metrics on a dedicated port (default is 9363).
- Configure Prometheus or a similar compatible server to scrape http://<clickhouse-host>:9363/metrics.
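The config.xml change in step 1 might look like the following (standard ClickHouse Prometheus settings; adjust the port for your environment):

```xml
<clickhouse>
    <prometheus>
        <endpoint>/metrics</endpoint>
        <port>9363</port>
        <metrics>true</metrics>
        <events>true</events>
        <asynchronous_metrics>true</asynchronous_metrics>
    </prometheus>
</clickhouse>
```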
Metrics to monitor
You should set up alerts for the following metrics to detect issues that may impact GitLab features:
- ClickHouseMetrics_Query: Number of queries currently executing. A sudden spike might indicate a performance bottleneck. Alert threshold: Baseline deviation (for example > 100).
- ClickHouseProfileEvents_FailedSelectQuery: Number of failed select queries. Alert threshold: Baseline deviation (for example > 50).
- ClickHouseProfileEvents_FailedInsertQuery: Number of failed insert queries. Alert threshold: Baseline deviation (for example > 10).
- ClickHouseMetrics_ReadonlyReplica: Indicates if a replica has gone into read-only mode (often due to ZooKeeper connection loss). Alert threshold: > 0 (take immediate action).
- ClickHouseProfileEvents_NetworkErrors: Network errors (connection resets or timeouts). Frequent errors might cause GitLab background jobs to fail. Alert threshold: Rate > 0.
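As a sketch, the read-only replica alert above could be expressed as a Prometheus alerting rule. This is illustrative: metric names exposed by ClickHouse's Prometheus endpoint vary by version, so verify the exact name against your /metrics output.

```yaml
groups:
  - name: clickhouse
    rules:
      - alert: ClickHouseReadonlyReplica
        expr: ClickHouseMetrics_ReadonlyReplica > 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: A ClickHouse replica is in read-only mode
```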
Liveness check
If ClickHouse is available behind a load balancer, you can use the HTTP /ping endpoint to check for liveness. The expected response is Ok with HTTP Code 200.
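One way to wire this check into a load balancer, sketched here for HAProxy (illustrative fragment; the backend name, addresses, and port are placeholders):

```
backend clickhouse_http
    mode http
    option httpchk GET /ping
    http-check expect status 200
    server ch1 10.0.1.10:8123 check
    server ch2 10.0.1.11:8123 check
```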
Security and auditing
To secure your data and ensure auditability, use the following security practices.
Network security
- TLS encryption: Configure ClickHouse servers to use TLS to encrypt connections.
- When configuring the connection URL in GitLab, you should use the https:// protocol (for example, https://clickhouse.example.com:8443) to specify this.
- IP Allow lists: Restrict access to the ClickHouse port (default 8443 or 9440) to only the GitLab application nodes and other authorized networks.
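For a self-managed ClickHouse server, the allow list can also be enforced in the user definition. This is an illustrative fragment; the user name and CIDR range are placeholders for your environment:

```xml
<clickhouse>
    <users>
        <gitlab_app>
            <networks>
                <!-- Placeholder CIDR: allow only the GitLab application subnet -->
                <ip>10.0.0.0/24</ip>
            </networks>
        </gitlab_app>
    </users>
</clickhouse>
```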
Audit logging
The GitLab application does not maintain a separate audit log for individual ClickHouse queries. To satisfy requirements about data access (who queried what, and when), enable query logging on the ClickHouse side.
ClickHouse Cloud
In ClickHouse Cloud, query logging is enabled by default. You can access these logs by querying the system.query_log table.
ClickHouse for GitLab Self-Managed
For self-managed instances, ensure the query_log configuration parameter is enabled in your server configuration:
- Verify that the query_log section exists in your config.xml or users.xml.
- Once enabled, all executed queries are recorded in the system.query_log table, providing an audit trail.
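Once query logging is active, an audit review can be a simple query against the log table; for example, using standard system.query_log columns:

```sql
-- Who ran what in the last 7 days, newest first
SELECT event_time, user, query
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_date >= today() - 7
ORDER BY event_time DESC
LIMIT 100;
```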
System requirements
The recommended system requirements change depending on the number of users.
Deployment decision matrix quick reference
- 1K Users: ClickHouse Cloud Basic, Managed.
- 2K Users: ClickHouse Cloud Basic, Managed or Single Node. Alternative recommendations for GitLab Self-Managed include AWS: m8g.xlarge (4 vCPU, 16 GB), GCP: c4a-standard-4 or n4-standard-4 (4 vCPU, 16 GB), Azure: Standard_D4ps_v6 (4 vCPU, 16 GB), Storage: 20 GB with low-medium performance tier.
- 3K Users: ClickHouse Cloud Scale, Managed or Single Node. Alternative recommendations for GitLab Self-Managed include AWS: m8g.2xlarge (8 vCPU, 32 GB), GCP: c4a-standard-8 or n4-standard-8 (8 vCPU, 32 GB), Azure: Standard_D8ps_v6 (8 vCPU, 32 GB), Storage: 100 GB with medium performance tier. HA deployments are not cost-effective at this scale.
- 5K Users: ClickHouse Cloud Scale, Single node recommended. Alternative recommendations for GitLab Self-Managed include AWS: m8g.4xlarge (16 vCPU, 64 GB), GCP: c4a-standard-16 or n4-standard-16 (16 vCPU, 64 GB), Azure: Standard_D16ps_v6 (16 vCPU, 64 GB), Storage: 100 GB with high performance tier.
- 10K Users: ClickHouse Cloud Scale, Managed or Single Node/HA. Alternative recommendations for GitLab Self-Managed include AWS: m8g.4xlarge (16 vCPU, 64 GB), GCP: c4a-standard-16 or n4-standard-16 (16 vCPU, 64 GB), Azure: Standard_D16ps_v6 (16 vCPU, 64 GB), Storage: 200 GB with high performance tier. HA Option: 3-node cluster becomes viable for critical workloads.
- 25K Users: ClickHouse Cloud Scale or ClickHouse for GitLab Self-Managed. Single Node and HA Deployment recommendations provided with specific instance types and storage.
- 50K Users: ClickHouse for GitLab Self-Managed HA or ClickHouse Cloud Scale. HA Deployment (Preferred) recommendations provided with specific instance types and storage.
HA considerations for ClickHouse for GitLab Self-Managed deployment
- HA setups become cost-effective only at 10,000 users or above.
- Minimum: Three ClickHouse nodes for quorum.
- ClickHouse Keeper: Three nodes for coordination (can be co-located or separate).
- LoadBalancer: Recommended for distributing queries.
- Network: Low-latency connectivity between nodes is critical.
Glossary
- Cluster: A collection of nodes (servers) that work together to store and process data.
- MergeTree: A table engine designed for high data ingest rates and large data volumes. It is the core storage engine in ClickHouse, providing columnar storage, custom partitioning, sparse primary indexes, and support for background data merges.
- Parts: A physical file on a disk that stores a portion of the table’s data. A part is different from a partition, which is a logical division of a table’s data that is created using a partition key.
- Replica: A copy of the data stored in a ClickHouse database. You can have any number of replicas of the same data for redundancy and reliability. Replicas are used in conjunction with the ReplicatedMergeTree table engine, which enables ClickHouse to keep multiple copies of data in sync across different servers.
- Shard: A subset of data. ClickHouse always has at least one shard for your data. If you do not split the data across multiple servers, your data is stored in one shard. Sharding data across multiple servers can be used to divide the load if you exceed the capacity of a single server.
- TTL (Time To Live): Time To Live (TTL) is a ClickHouse feature that automatically moves, deletes, or rolls up columns/rows after a certain time period. This allows you to manage storage more efficiently because you can delete, move, or archive the data that you no longer need to access frequently.
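For example, the TTL feature from the glossary can be declared directly on a MergeTree table (hypothetical table and retention period):

```sql
CREATE TABLE events
(
    event_date Date,
    payload String
)
ENGINE = MergeTree
ORDER BY event_date
TTL event_date + INTERVAL 90 DAY;  -- rows are removed roughly 90 days after event_date
```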
Troubleshooting
Database schema migrations on GitLab 18.0.0 and earlier
On GitLab 18.0.0 and earlier, running database schema migrations for ClickHouse may fail for ClickHouse 24.x and 25.x with the error message:
Code: 344. DB::Exception: Projection is fully supported in ReplacingMergeTree with deduplicate_merge_projection_mode = throw. Use 'drop' or 'rebuild' option of deduplicate_merge_projection_mode
Without running all migrations, the ClickHouse integration will not work.
To work around this issue and run the migrations:
- Sign in to the Rails console.
- Execute the command to insert specific versions into schema_migrations.
- Migrate the database again with sudo gitlab-rake gitlab:clickhouse:migrate.
This time the database migration should successfully finish.
Database dictionary read support
From GitLab 18.8, GitLab uses ClickHouse dictionaries for data denormalization. The GRANT statements issued before 18.8 did not give the gitlab user permission to query dictionaries, so a manual modification step is needed:
- Sign in to:
- For ClickHouse Cloud, the ClickHouse SQL console.
- For ClickHouse for GitLab Self-Managed, the clickhouse-client.
- Run the GRANT dictGet ON gitlab_clickhouse_main_production.* TO gitlab_app; command.
Without granting the permission, the ClickHouse migration (CreateNamespaceTraversalPathsDict) will fail with the error: DB::Exception: gitlab: Not enough privileges.
After granting the permission, the migration can be safely retried (ideally, wait 1-2 hours until the distributed migration lock clears).
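For reference, the grant described above as a single statement:

```sql
GRANT dictGet ON gitlab_clickhouse_main_production.* TO gitlab_app;
```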
ClickHouse CI job data materialized view data inconsistencies
In GitLab 18.5 and earlier, duplicate data could be inserted into ClickHouse tables (such as ci_finished_pipelines and ci_finished_builds) when Sidekiq workers retried after network timeouts. This issue caused materialized views to display incorrect aggregated metrics in analytics dashboards, including the runner fleet dashboard.
This issue was fixed in GitLab 18.9 and backported to 18.6, 18.7, and 18.8. To resolve this issue, upgrade to GitLab 18.6 or later.
If you have existing duplicate data, a fix to rebuild the affected materialized views is planned for GitLab 18.10 in issue 586319. For assistance, contact GitLab Support.
Fine-grained permissions for personal access tokens
GitLab adds fine-grained personal access tokens with scoped resource and permission controls, letting users define exactly which API operations a token can access and when it expires.
Fine-grained personal access tokens are scoped to only access the specific resources and permissions you define. When creating the token, you define the following attributes:
- Resources: A collection of API operations. Resources are grouped into larger boundaries (Group and project, and User).
- Permissions: The specific actions the token can perform on a resource. Generally, this conforms to Create, Read, Update, and Delete actions.
To create a fine-grained personal access token:
- In the upper-right corner, select your avatar.
- Select Edit profile.
- In the left sidebar, select Access > Personal access tokens.
- From the Generate token dropdown list, select Fine-grained token.
- In Token name, enter a name for the token.
- In Token description, enter a description for the token.
- In Expiration date, enter an expiry date for the token.
- The token expires at midnight UTC on that date.
- If you do not enter a date, the expiry date is set to 365 days from today.
- By default, the expiry date cannot be more than 365 days from today. On GitLab 17.6 and later, administrators can modify the maximum lifetime of access tokens.
- Define the scope of the personal access token.
- In the left panel, select one or more resources.
- If including group or project resources, select an option in the Group and project access section.
- In the right panel, select an available permission for each resource.
- Select Generate token.
A personal access token is displayed. Save the personal access token somewhere safe. After you leave or refresh the page, you cannot view it again.
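Once saved, a fine-grained token is used like any other personal access token with the REST API, by passing it in the PRIVATE-TOKEN header. In this sketch, the instance URL and token are placeholders, and the `curl` call is commented out because it requires a live instance:

```shell
GITLAB_URL="https://gitlab.example.com"        # placeholder instance URL
ENDPOINT="$GITLAB_URL/api/v4/projects"
echo "$ENDPOINT"
# curl --header "PRIVATE-TOKEN: <your-fine-grained-token>" "$ENDPOINT"
```

The request succeeds only if the token's resources and permissions cover the endpoint being called.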
Available fine-grained permissions
Fine-grained personal access tokens can access the following REST API endpoints:
[The page then lists a comprehensive set of resources and their associated permissions and API endpoints, including Application Security resources, Compliance Policy Setting, Dependency, Dependency List Export, SBOM Occurrence, Security Setting, Vulnerability, Vulnerability Export, CI/CD resources, Artifact, CI Config, CI Minute, Catalog Version, Cluster, Cluster Agent, Cluster Agent Token, Cluster Agent URL Configuration, Deployment, Environment, Feature Flag, Freeze Period, Job, Job Artifact, Merge Train, Merge Train Merge Request, Pipeline, Pipeline Schedule, Protected Environment, Pull Mirror, Repository Storage Move, Resource Group, Runner, Runner Registration Token, Secure File, Terraform State, Trigger, Variable, Compliance resources, Audit Event, External Status Check, External Status Check Service, Duo resources, Chat Completion, Code Suggestion Completion, Code Suggestion Connection Detail, Code Suggestion Direct Access, Code Suggestion Enabled Status, Duo Workflow, Geo resources, Geo Node, Geo Site, Groups resources, Activity, Admin Member Role, Association, Avatar, Follower, Following, GPG Key, Group, Member Role, Namespace, Preference, SSH Certificate, Status, Support PIN, Template, Topic, Integrations resources, Webhook, Monitoring resources, Sidekiq Job, Sidekiq Metric, Note resources, Vulnerability Note, Notifications resources, Todo, Orbit resources, Knowledge Graph, Packages And Registry resources, Container Registry Protection Tag Rule, Container Repository, Container Repository Protection Rule, Debian Distribution, Dependency Proxy Cache, Package, Package Pipeline, Virtual Registry, Virtual Registry Cleanup Policy, Project Features resources, Alias, Badge, Release, Release Link, Remote Mirror, Remote Mirror Public Key, Snapshot, Snippet, Project Model Registry And Experiments resources, MLflow Artifact, MLflow Run, Project Planning resources, Custom Attribute, Feature Flag User List, Internal Event, Label, Service Ping, Usage Data Metric, Work Item, Projects resources, Markdown Upload, Page, Pages Domain, Project, Repository resources, Approval Configuration, Approval Rule, Approval Setting, Branch, Code, Commit, Merge Request, Merge Request Approval Rule, Merge Request Dependency, Protected Branch, Protected Tag, Push Rule, Repository, Repository Submodule, Repository Tag, Tag, Search resources, Global Search, Search Migration, Zoekt Index, Zoekt Namespace, Zoekt Node, Subscription And Licensing resources, GitLab Subscription, License, License Billable User, System Access resources, Access Request, Application Appearance, Counts, Deploy Key, Deploy Token, Email, Enterprise User, Experiment, Invitation, Job Token Scope, Job Token Scope Allowlist, LDAP Group Link, LDAP Group Sync, Member, Metadata, Notification Setting, OAuth Application, Personal Access Token, Plan Limit, Provisioned User, Resource Access Token, SAML Group Link, SAML Identity, SAML User, SCIM Identity, SSH Key, Service Account, Service Account Personal Access Token, Statistic, Usage Data Query, User, System Migration And Integration resources, Batched Background Migration, Database Dictionary, Database Migration, Export, Import, Placeholder Reassignment, Wiki resources, Wiki, Always accessible endpoints, Unavailable endpoints]
The page also notes that some endpoints are always accessible without authentication and some endpoints are unavailable for fine-grained tokens due to alternative authentication mechanisms.
Vulnerability management policy
GitLab adds vulnerability management policies that automatically resolve no-longer-detected issues, dismiss matches by file, directory, or identifier criteria, and override severity levels. The update helps teams triage vulnerabilities more consistently while applying policy rules on the default branch.
Use a vulnerability management policy to automatically resolve vulnerabilities that are no longer detected, automatically dismiss vulnerabilities that match specific criteria, or override vulnerability severity levels.
This can help reduce the workload of triaging vulnerabilities.
When a scanner detects a vulnerability on the default branch, the scanner creates a vulnerability record with the status Needs triage. After the vulnerability has been remediated and the next security scan runs, the scan adds No longer detected to the record’s activity log but the record’s status does not change. You can change the status to Resolved either manually or by using a vulnerability management policy.
Vulnerability management policies ensure that rules are applied consistently. For example, you can create policies that:
- Automatically resolve vulnerabilities that meet all of the following criteria:
- No longer detected on the default branch.
- Found by a SAST scan.
- Low risk.
- Automatically dismiss vulnerabilities found in test files with the reason Used in tests.
- Dismiss vulnerabilities with specific CVE identifiers with the reason False positive.
- Override (or change) the severity to critical for vulnerabilities that match specific CVE patterns in production code.
A vulnerability management policy only affects vulnerabilities with the status Needs triage or Confirmed.
The vulnerability management policy is applied when a pipeline runs against the default branch or when vulnerabilities are detected by advisory scanning.
When policies use auto-resolve, for each vulnerability that is no longer detected by the same scanner and matches the policy’s rules:
- The GitLab Security Policy Bot user sets the vulnerability record’s status to Resolved.
- A note about the status change is added to the vulnerability’s record.
When policies use auto-dismiss, for each vulnerability that matches the policy’s criteria:
- The GitLab Security Policy Bot user sets the vulnerability record’s status to Dismissed.
- The dismissal reason is set according to the policy configuration.
- A note about the status change is added to the vulnerability’s record.
Policies can identify vulnerabilities that match a set of criteria and override their severity:
- The GitLab Security Policy Bot user sets, increases, or decreases the severity level, according to the policy configuration.
- A note about the severity change is added to the vulnerability’s record.
To limit the pipeline load and duration, a maximum of 1,000 vulnerabilities per pipeline are processed for auto-resolve or auto-dismiss actions. The auto-resolve or auto-dismiss actions resume in subsequent pipelines, up to the maximum, until all matching vulnerabilities are processed.
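For illustration, an auto-resolve policy matching the first example above might look like this in a security policy project's policy file. This is a sketch only: field names follow the vulnerability management policy schema and should be verified against your GitLab version.

```yaml
vulnerability_management_policy:
  - name: Auto-resolve low-severity SAST findings
    description: Resolve vulnerabilities no longer detected on the default branch
    enabled: true
    rules:
      - type: no_longer_detected
        scanners:
          - sast
        severity_levels:
          - low
    actions:
      - type: auto_resolve
```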
Restrictions
- You can assign a maximum of five rules to each policy.
- You can assign a maximum of five vulnerability management policies to each security policy project.
- When a secret detection scan finds that a previously detected secret key is no longer detected, the vulnerability is not auto-resolved. Instead, it remains in Needs triage, because the removed secret key has already been exposed. Manually resolve the vulnerability only after the secret key is revoked or rotated.
Auto-dismiss policies
Auto-dismiss policies support the following criteria:
- File path: Match vulnerabilities based on the file path where they were found. Supports glob patterns like test/**/*.
- Directory: Match vulnerabilities found in specific directories. Supports glob patterns like vendor/*.
- Identifier: Match vulnerabilities based on their identifiers (CVE, CWE, or scanner-specific IDs). Supports wildcard patterns like CVE-2023-*.
With the security_policies_severity_customize feature flag enabled, identifier criteria also support:
- Specifying an identifier type (cve, cwe, or owasp) to match against specific identifier formats.
- Using a values array to match multiple identifiers with OR logic.
You can combine multiple criteria using:
- AND logic. To be dismissed, the vulnerability must match all of the criteria.
- OR logic. To be dismissed, the vulnerability can match any of the rules.
The following dismissal reasons are supported:
- Acceptable risk: The vulnerability is known and accepted as a business risk.
- False positive: The vulnerability is incorrectly reported.
- Mitigating control: Equivalent protection is provided by other controls.
- Used in tests: The vulnerability is part of test code or test data.
- Not applicable: The vulnerability is in code that is no longer updated.
Severity override policies
The availability of this feature is controlled by a feature flag. For more information, see the history.
Policies that override the severity of vulnerabilities use the same criteria as auto-dismiss policies:
- File path: Match vulnerabilities based on the file path where they were found.
- Directory: Match vulnerabilities found in specific directories.
- Identifier: Match vulnerabilities based on their identifiers.
For identifier criteria, you can optionally specify an identifier type to match only specific identifier formats:
- CVE ID: Match CVE identifiers like CVE-2021-44228 or patterns like CVE-2023-*.
- CWE ID: Match CWE identifiers like CWE-79 or patterns like CWE-*.
- OWASP: Match OWASP identifiers like A1 or A03:2021.
The following severity operations are supported:
- Set: Sets the severity to a specific level (info, low, medium, high, or critical).
- Increase: Increases the severity by one level.
- Decrease: Decreases the severity by one level.
Create a vulnerability management policy
Create a vulnerability management policy to automatically resolve or dismiss vulnerabilities matching specific criteria.
Prerequisites:
- By default, only group, subgroup, or project Owners have the permissions required to create or assign a security policy project. This can be changed using custom roles.
To create a vulnerability management policy:
- In the top bar, select Search or go to and find your project.
- Go to Secure > Policies.
- Select New policy.
- In Vulnerability management policy, select Select policy.
- Complete the fields and set the policy’s status to Enabled.
- Select Create policy.
- Review and merge the merge request.
After the vulnerability management policy has been created, the policy rules are applied to pipelines on the default branch.
Edit a vulnerability management policy
Edit a vulnerability management policy to change its rules.
To edit:
- In the top bar, select Search or go to and find your project.
- Go to Secure > Policies.
- In the policy’s row, select Edit.
- Edit the policy’s details.
- Select Save changes.
- Review and merge the merge request.
The vulnerability management policy has been updated. When a pipeline next runs against the default branch, the policy’s rules are applied.
Schema
When a vulnerability management policy is created or edited, it’s checked against the vulnerability management policy schema to confirm it’s valid.
Self-hosted models
GitLab adds self-hosted GitLab Duo and AI Gateway options for running AI features on your own infrastructure, with support for private or on-prem models, hybrid deployments, and GitLab-managed models. It keeps inference data inside your network while offering flexible billing and deployment choices.
Host your own AI infrastructure to use GitLab Duo features with the LLMs of your choice. Use a self-hosted AI Gateway to keep all request and response data in your own environment, avoid external API calls, and manage the full lifecycle of requests to your LLM backends.
Deployment options
You can use self-hosted models with different deployment options.
GitLab Duo Agent Platform
Use GitLab Duo Agent Platform Self-Hosted for on-premise models or private cloud-hosted models in the GitLab Duo Agent Platform.
For customers with an offline license, billing is seat based and you must have the GitLab Duo Agent Platform Self-Hosted add-on.
For customers with an online license, billing is usage based. You can also use GitLab-managed models in a hybrid deployment.
Data transmission
The following billing metadata is sent to GitLab for usage billing:
- Anonymized instance ID
- Call count
- User ID
Inference data, including code inputs, model prompts, and model responses, does not leave the customer network.
GitLab does not capture which model or model provider the customer uses.
GitLab Duo
GitLab Duo Self-Hosted is for customers with GitLab Duo Enterprise who are using GitLab Duo features. You can use:
- On-premise models or private cloud-hosted models
- GitLab-managed models in a hybrid deployment
This option uses seat-based pricing.
Feature versions and status
The following table lists:
- The GitLab version required to use the feature.
- The feature status. The status in your deployment might differ from the status listed for the feature.
To use GitLab Duo features with GitLab Duo Self-Hosted, you must have the GitLab Duo Enterprise add-on. This applies even if you can use these features with GitLab Duo Core or GitLab Duo Pro when GitLab hosts and connects to those models through the cloud-based AI Gateway.
AI Gateway configurations
After you choose a product option, configure how your AI Gateway connects to LLMs:
- Self-hosted AI Gateway and LLMs: Use your own AI Gateway and models for full control over your AI infrastructure.
- Hybrid AI Gateway and model configuration: For each feature, use either your self-hosted AI Gateway with self-hosted models, or the GitLab.com AI Gateway with GitLab-managed models.
- GitLab.com AI Gateway with default GitLab external vendor LLMs: Use GitLab-managed AI infrastructure.
Self-hosted AI Gateway and LLMs
In a fully self-hosted configuration, you deploy your own AI Gateway and use only supported LLMs in your infrastructure, without using GitLab infrastructure or AI vendor models. This gives you full control over your data and security.
This configuration only includes models configured through your self-hosted AI Gateway. If you use GitLab-managed models for any features, those features connect to the GitLab-hosted AI Gateway instead of your self-hosted gateway, making it a hybrid configuration rather than fully self-hosted.
While you deploy your own AI Gateway, you can still use cloud-based LLM services like AWS Bedrock or Azure OpenAI as your model backend; these services still connect through your self-hosted AI Gateway.
If you have an offline environment with physical barriers or security policies that prevent or limit internet access, and comprehensive LLM controls, you should use this fully self-hosted configuration.
Hybrid AI Gateway and model configuration
In this hybrid configuration, you deploy your own AI Gateway and self-hosted models for most features, but configure specific features to use GitLab-managed models. When a feature is configured to use a GitLab-managed model, requests for that feature are sent to the GitLab-hosted AI Gateway instead of your self-hosted AI Gateway.
This option provides flexibility by allowing you to:
- Use your own self-hosted models for features where you want full control.
- Use GitLab-managed vendor models for specific features where you prefer the models GitLab has curated.
When features are configured to use GitLab-managed models:
- All calls to those features use the GitLab-hosted AI Gateway, not the self-hosted AI Gateway.
- Internet connectivity is required for these features.
- This is not a fully self-hosted or isolated configuration.
GitLab managed models
Use GitLab managed models to connect to AI models without the need to self-host infrastructure. These models are managed entirely by GitLab.
You can select the default GitLab model to use with an AI-native feature. For the default model, GitLab uses the best model based on availability, quality, and reliability. The model used for a feature can change without notice.
When you select a specific GitLab managed model, all requests for that feature use that model exclusively. If the model becomes unavailable, requests to the AI Gateway fail and users cannot use that feature until another model is selected.
When you configure a feature to use GitLab managed models:
- Calls to those features use the GitLab-hosted AI Gateway, not the self-hosted AI Gateway.
- Internet connectivity is required for these features.
- The configuration is not fully self-hosted or isolated.
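The routing behavior described above can be sketched as a short function. This is an illustrative sketch only: the configuration shape, gateway URLs, and function names are assumptions for the example, not GitLab's actual schema.

```python
# Illustrative sketch of the gateway routing rule described above.
# The configuration shape and URLs are hypothetical, not GitLab's schema.
SELF_HOSTED_GATEWAY = "https://ai-gateway.example.internal"
GITLAB_GATEWAY = "https://cloud.gitlab.com/ai"

def gateway_for_feature(feature_config: dict) -> str:
    """Return the AI Gateway a feature's requests are sent to.

    A feature configured with a GitLab-managed model always calls the
    GitLab-hosted AI Gateway (and needs internet connectivity); a feature
    configured with a self-hosted model calls your self-hosted AI Gateway.
    """
    if feature_config.get("model_provider") == "gitlab_managed":
        return GITLAB_GATEWAY
    return SELF_HOSTED_GATEWAY

def is_fully_self_hosted(features: list[dict]) -> bool:
    """A deployment is fully self-hosted only if every feature routes to
    the self-hosted gateway; otherwise it is a hybrid configuration."""
    return all(gateway_for_feature(f) == SELF_HOSTED_GATEWAY for f in features)
```

The key point the sketch captures: a single feature configured with a GitLab-managed model is enough to make the whole configuration hybrid rather than fully self-hosted.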
GitLab.com AI Gateway with default GitLab external vendor LLMs
Add-on: GitLab Duo Core, Pro, or Enterprise
If you do not meet the use case criteria for GitLab Duo Self-Hosted, you can use the GitLab.com AI Gateway with default GitLab external vendor LLMs.
The GitLab.com AI Gateway is the default Enterprise offering and is not self-hosted. In this configuration, you connect your instance to the GitLab-hosted AI Gateway, which integrates with external vendor LLM providers, including:
- Anthropic
- Fireworks AI
- Google Vertex
These LLMs communicate through the GitLab Cloud Connector, offering a ready-to-use AI solution without the need for on-premise infrastructure.
To set up this infrastructure, see how to configure GitLab Duo on a GitLab Self-Managed instance.
Set up a private infrastructure
If you have an offline license, you can set up a fully private infrastructure:
- Install a Large Language Model (LLM) serving infrastructure.
- GitLab supports various platforms for serving and hosting your LLMs, such as vLLM, AWS Bedrock, and Azure OpenAI. For more information about each platform, see supported LLM platforms documentation.
- GitLab provides a matrix of supported models with their specific features and hardware requirements. For more information, see the supported models and hardware requirements documentation.
- Install the AI Gateway to access GitLab Duo features.
- Configure your GitLab instance for features to use self-hosted models.
- Enable logging to track and manage your system’s performance.
Related topics
- Troubleshooting
- Install the GitLab AI Gateway
- Supported models
- Supported platforms
- Tutorial: AWS Bedrock BYOM deployment guide
- April 2026
- First seen by Releasebot: Apr 20, 2026
Agent Platform AI models
GitLab expands GitLab Duo Agent Platform model selection, with default models for Agentic Chat, Code Review Flow, and other agents, plus support for choosing from a broader set of Claude and GPT models in top-level groups.
Every GitLab Duo feature uses a default model. GitLab might update default models to optimize performance. For some features, you can select a different model, which persists until you change it.
Default models
This table lists the default model for each feature in the Agent Platform.
Feature | Model
GitLab Duo Agentic Chat | Claude Sonnet 4.6 Vertex
Code Review Flow | Claude Sonnet 4.6 Vertex
All other agents | Claude Sonnet 4.5 Vertex
Supported models
This table lists the models you can select for features in the Agent Platform.
Model | GitLab Duo Agentic Chat | All other agents
Claude Sonnet 4 | Yes | Yes
Claude Sonnet 4 Vertex | Yes | Yes
Claude Sonnet 4.5 | Yes | Yes
Claude Sonnet 4.5 Vertex | Yes | Yes
Claude Sonnet 4.6 | Yes | Yes
Claude Haiku 4.5 | Yes | Yes
Claude Opus 4.5 | Yes | Yes
Claude Opus 4.6 | Yes | Yes
Claude Opus 4.7 | Yes | Yes
GPT-5 | Yes | Yes
GPT-5 Codex | Yes | Yes
GPT-5.2 Codex | Yes | Yes
GPT-5.3 Codex | Yes | Yes
GPT-5 Mini | Yes | Yes
GPT-5.2 | Yes | Yes
Select a model for a feature
Offering: GitLab.com
History
You can select a model for a feature in a top-level group. The model that you select applies to that feature for all child groups and projects.
Prerequisites:
- You have the Owner role for the group.
- The group that you select models for is a top-level group.
- In GitLab 18.3 or later, if you belong to multiple GitLab Duo namespaces, you must assign a default namespace.
To select a model for a feature:
- In the top bar, select Search or go to and find your group.
- Select Settings > GitLab Duo.
- Select Configure features.
- Go to the GitLab Duo Agent Platform section.
- Select a model from the dropdown list.
- Optional. To apply the model to all features in the section, select Apply to all.
In the IDE, model selection for GitLab Duo Agentic Chat is applied only when the connection type is set to WebSocket.
To specify a model for the GitLab Duo CLI, see select a model.
Troubleshooting
When selecting models other than the default, you might encounter the following issues.
Model is not available
If you are using the default GitLab model for a GitLab Duo AI-native feature, GitLab might change the default model without notice to maintain optimal performance and reliability.
If you have selected a specific model for a GitLab Duo AI-native feature, and that model is not available, there is no automatic fallback. The feature that uses this model is unavailable.
No default GitLab Duo namespace
When using a GitLab Duo feature with a selected model, you might get an error that states that you have not selected a default GitLab Duo namespace. For example, on:
- GitLab Duo Code Suggestions, you might get Error 422: I'm sorry, you have not selected a default GitLab Duo namespace. Please go to GitLab and in user Preferences - Behavior, select a default namespace for GitLab Duo.
- GitLab Duo Chat, you might get Error G3002: I'm sorry, you have not selected a default GitLab Duo namespace. Please go to GitLab and in user Preferences - Behavior, select a default namespace for GitLab Duo.
This issue occurs when you belong to multiple GitLab Duo namespaces, but have not chosen one as your default namespace.
To resolve this, set a default GitLab Duo namespace.
- April 2026
- First seen by Releasebot: Apr 20, 2026
GitLab Credits and usage billing
GitLab adds GitLab Credits for usage-based billing across the Duo Agent Platform, with included credits, monthly commitment pools, on-demand billing, usage caps, dashboards, and export tools to help teams monitor and control consumption.
GitLab Credits are the standardized consumption currency for usage-based billing. Credits are used for GitLab Duo Agent Platform, where each usage action consumes a number of credits.
GitLab Duo Pro and Enterprise and their associated GitLab Duo features are not billed based on usage and do not consume GitLab Credits.
Credits are calculated based on the features and models you use, as listed in the credit multiplier tables. You are billed for features that are generally available.
Billing occurs at the root namespace or top-level group level, not at the project level. Credit usage is attributed to the user who performs the action, regardless of which project they are using the features in. All usage in a root namespace or top-level group is consolidated for billing purposes.
GitLab provides three ways to obtain credits:
- Included credits
- Monthly Commitment Pool
- On-Demand credits
For a click-through demo, see GitLab Credits.
For information about credit pricing, see GitLab pricing.
Included credits
Included credits are allocated to all users on a Premium or Ultimate tier. These credits are individual and cannot be shared between users. Included credits reset at the beginning of each month. Unused credits do not roll over to the next month.
Community program subscriptions do not receive included credits.
For more information about included credits, see GitLab Promotions Terms & Conditions.
Monthly Commitment Pool
The Monthly Commitment Pool is a shared pool of credits available to all users in the subscription. Users can draw from this shared pool after they have consumed their included credits.
You can purchase the Monthly Commitment Pool as a recurring annual or multi-year term. The number of credits purchased for the year is divided by 12.
For example, when you purchase a monthly commitment pool of 1,000 credits, you will have 1,000 credits available each month for the contract term.
You can increase your commitment at any time through your GitLab account team. The additional commitment applies for the remainder of your contract term. You can decrease your commitment only at the time of renewal.
You can purchase a commitment of credits with built-in tiered discounting. The commitment is billed up front at the start of the contract term.
Credits become available immediately after purchase, and reset on the first of every month. Unused credits do not roll over to the next month.
When purchasing a monthly commitment pool, you accept the usage billing terms, including On-Demand credit usage. After you accept the terms, On-Demand billing stays active for the rest of your subscription and subsequent self-serve renewals, and you cannot opt out.
On-Demand credits
On-Demand credits cover usage incurred after you have used all included credits and the credits in the Monthly Commitment Pool. On-Demand credits are billed monthly.
On-Demand credits are consumed at the list price of $1 per credit used.
On-Demand credits can be used after you have accepted usage billing terms. You can accept these terms when you purchase your monthly commitment, or directly in the GitLab Credits dashboard. By accepting usage billing terms, you agree to pay for all On-Demand charges already accrued in the current monthly billing period, and any On-Demand charges incurred going forward.
If you haven’t accepted usage billing terms, you can’t use GitLab Duo Agent Platform and consume On-Demand credits. You can regain access to GitLab Duo Agent Platform by either purchasing a monthly commitment or accepting the usage billing terms.
For example, a subscription has a monthly commitment of 50 credits per month. If 75 credits are used in that month, the first 50 credits are part of the monthly commitment pool, and the additional 25 are billed as on-demand usage.
Usage order
GitLab Credits are consumed in the following order:
- Included credits are used by each user first.
- Monthly Commitment Pool of credits are used after all included credits have been consumed.
- On-Demand credits are used after all other available credits (included credits and Monthly Commitment Pool, if applicable) are depleted and usage billing terms are signed.
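The consumption order above, together with the 50/75-credit example earlier in this section and the $1 On-Demand list price, can be sketched as a small calculation. The function name and shapes are illustrative, not a GitLab API.

```python
def settle_usage(used: float, included: float, pool: float,
                 on_demand_enabled: bool = True) -> dict:
    """Split a month's credit usage across the three sources, in the
    order described above: included credits first, then the Monthly
    Commitment Pool, then On-Demand credits.

    Illustrative sketch only; names and shapes are not a GitLab API.
    """
    from_included = min(used, included)
    remaining = used - from_included
    from_pool = min(remaining, pool)
    remaining -= from_pool
    if remaining > 0 and not on_demand_enabled:
        # Without accepted usage billing terms, there are no On-Demand credits.
        raise RuntimeError("Usage billing terms not accepted")
    return {
        "included": from_included,
        "commitment_pool": from_pool,
        "on_demand": remaining,
        "on_demand_charge_usd": remaining * 1.0,  # list price: $1 per credit
    }

# The example from the text: with a 50-credit monthly commitment and 75
# credits used, 50 come from the pool and 25 are billed as On-Demand usage.
```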
Temporary evaluation credits
If you have not purchased the Monthly Commitment Pool or accepted the usage billing terms for On-Demand credits, you can request a free temporary pool of credits to evaluate GitLab Duo Agent Platform features.
Credits are allocated based on the number of users you request for the evaluation, and added to a shared pool for those users. Credits are valid for 30 days, and cannot be used after they expire.
To request credits, contact the Sales team.
If you’re on the Free tier and want to try credits, you can start an Ultimate trial.
For the Free tier on GitLab.com
Users on the Free tier on GitLab.com can purchase a Monthly Commitment Pool of GitLab Credits for their group namespace. This provides access to a set of GitLab Duo Agent Platform features, without needing a Premium or Ultimate subscription.
On-demand usage for Free namespaces is capped at $25,000 for each calendar month. Upon reaching this limit, on-demand usage is automatically turned off and resets at the beginning of the following month.
Buy GitLab Credits
You can buy GitLab Credits for your Monthly Commitment Pool in Customers Portal.
Prerequisites:
- You must be a billing account manager.
- Sign in to Customers Portal.
- On the relevant subscription card, select GitLab Credits dashboard.
- Select Purchase monthly commitment or Increase monthly commitment.
- Enter the number of credits you want to buy.
- Select Review order. Verify that the number of credits, customer information, and payment method are correct.
- Select Confirm purchase.
Your GitLab Credits are displayed in the subscription card in Customers Portal, and in the GitLab Credits dashboard.
Credit multipliers
Credit usage is calculated based on the features and models you use. Some features offer multiple model options, while other features use only one model.
A request represents a single (billable) action initiated by a user (for example, sending a chat message or requesting code generation). This represents one interaction from the user’s perspective.
A model call represents the underlying API calls made to LLMs to fulfill a user request. A single user request might trigger multiple model calls. For example, one call to understand context and another call to generate a response.
Models
The following table lists the number of requests you can make with one GitLab Credit for different models. Newer, more complex models have a higher multiplier and require more credits. A request is made anytime a model is called.
For self-hosted models, you can make eight requests for one credit for any supported or compatible model.
For subsidized models with basic integration:
Model | Requests with one credit
claude-3-haiku | 8.0
codestral-2501 | 8.0
gemini-2.5-flash | 8.0
gpt-5-mini | 8.0
gpt-5-4-nano | 8.0
For premium models with optimized integration:
Model | Requests with one credit
claude-4.5-haiku | 6.7
gpt-5-4-mini | 6.7
gpt-5-codex | 3.3
gpt-5 | 3.3
gpt-5.2 | 2.5
gpt-5.2-codex | 2.5
gpt-5.3-codex | 2.5
claude-3.5-sonnet | 2.0
claude-3.7-sonnet | 2.0
claude-sonnet-4¹ | 2.0
claude-sonnet-4.5¹ | 2.0
claude-sonnet-4.6 | 2.0
claude-opus-4.5 | 1.2
claude-opus-4.6 | 1.1
claude-opus-4.7 | 1.1
claude-sonnet-4² | 1.1
claude-sonnet-4.5² | 1.1
Footnotes:
1. Prompts with up to 200,000 tokens.
2. Prompts with more than 200,000 tokens.
Features
The following table lists the number of requests or model calls you can make with one GitLab Credit for different features. This pricing applies to all models (including self-hosted models) available for the feature.
Feature | Requests or calls with one credit
GitLab Duo Code Suggestions | 50 requests
Code Review Flow | 4 calls
SAST False Positive Detection Flow | 1 call
SAST Vulnerability Resolution Flow | 0.25 calls
Each message sent to GitLab Duo Agentic Chat counts as one billable request. One conversation window can include multiple messages, and so multiple billable requests.
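The tables above express cost as "requests (or calls) with one credit", so credits consumed equal requests divided by that value. A minimal sketch, with an excerpt of the published values (the dictionary keys and helper are illustrative, not a GitLab API):

```python
# "Requests (or calls) with one credit" values, excerpted from the
# tables above. Keys and the helper function are illustrative only.
REQUESTS_PER_CREDIT = {
    "claude-3-haiku": 8.0,       # subsidized model
    "gpt-5": 3.3,                # premium model
    "claude-sonnet-4.6": 2.0,    # premium model
    "code-suggestions": 50.0,    # feature-priced: 50 requests per credit
    "code-review-flow": 4.0,     # feature-priced: 4 model calls per credit
}

def credits_consumed(key: str, requests: int) -> float:
    """Credits used for a number of requests or model calls.

    The tables give requests per credit, so the cost of one request
    is the reciprocal of the table value.
    """
    return requests / REQUESTS_PER_CREDIT[key]

# For example, 100 Code Suggestions requests cost 2 credits, and
# 8 model calls in a Code Review Flow also cost 2 credits.
```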
Usage caps
Status: Beta
The availability of this feature is controlled by a feature flag. For more information, see the history.
You can set a monthly GitLab Credits cap at the subscription and user level to prevent unexpected overage charges. When credit consumption reaches the configured cap, access to features that consume GitLab Credits (for example, GitLab Duo Agent Platform) is automatically suspended until the next billing period begins, or until an administrator adjusts or disables the cap.
The following cap types are available:
Cap type | Applies to | Credit sources counted | Managed through
Subscription cap | All users on the subscription | On-Demand only | Customers Portal
Flat user cap | Individual users (default limit) | All | GraphQL API
Per-user override | Specific users (overrides the flat cap) | All | GraphQL API
When on-demand usage in the current billing period reaches or exceeds the configured cap, all Agent Platform features (Duo Chat, Code Suggestions, Flows, and Agents) are suspended for all users on that subscription or instance. For user-level caps, only the individual user who reached their cap is suspended.
Users who have reached their cap are unable to access Agent Platform features until the cap is raised or the next billing period begins.
Usage counters reset automatically at the start of each billing period. Cap values persist across billing periods unless changed.
Caps are enforced using the most recent usage data available. Because data is not real time, limited additional GitLab Credits usage may occur before enforcement takes effect.
When subscription on-demand usage reaches the configured cap, GitLab sends an email notification to billing account managers.
Set a subscription-level usage cap
Prerequisites:
- You must be a billing account manager.
- Sign in to Customers Portal.
- On the subscription card, select GitLab Credits dashboard.
- In the On-demand Credit Cap panel, turn on the Monthly On-demand Credits cap toggle.
- Enter the maximum number of on-demand GitLab Credits allowed per billing period.
- Select Save.
If the cap is set below the currently reported total on-demand usage for the current billing period, the cap is considered reached immediately on the next enforcement check.
To disable the cap, turn off the Monthly On-demand Credits cap toggle. When disabled, no subscription-level on-demand GitLab Credits cap is enforced, and behavior falls back to existing billing behavior.
You can use the GraphQL API to view usage caps and set a flat user-level cap or a per-user override cap.
GitLab Credits dashboard
Offering: GitLab.com, GitLab Self-Managed
The GitLab Credits dashboard displays information about your usage of GitLab Credits. Use the dashboard to monitor credit consumption, track trends, and identify usage patterns.
On the dashboard, used credits represent deductions from available credits. For overages (On-Demand credits), used credits represent on-demand usage that will be paid later, if you have agreed to the usage billing terms.
To help you manage credit consumption, GitLab emails the following information to administrators and subscription owners:
- Monthly credit usage summaries
- Notifications when credit usage thresholds are at 50%, 80%, and 100%
You can access the dashboard in the Customers Portal and in GitLab.
Usage data is not displayed in real time. Data is synchronized to the dashboards periodically, so usage data should appear within a few hours of actual consumption. This means your dashboard shows recent usage, but might not reflect actions taken in the last few hours.
In Customers Portal
The GitLab Credits dashboard in the Customers Portal provides the most detailed view of your usage and costs.
The dashboard displays summary cards of key metrics:
- Current month usage: Total GitLab Credits used in the current month (if you have a monthly commitment)
- Included credits: Total credits included with your subscription (if you have a monthly commitment)
- Committed credits: Credits from your Monthly Commitment Pool (if applicable)
- Monthly waivers: Remaining credits from waivers (if applicable)
- On-Demand usage: Credits consumed beyond your included and committed amounts. If you have enough waiver credits to offset all On-Demand credits, the GitLab Credits Dashboard hides the On-Demand card and displays the Monthly Waiver card instead.
- Usage control status: Whether individual users have been blocked from Agent Platform access due to reaching their per-user credit cap.
In GitLab
The GitLab Credits dashboard in GitLab provides operational visibility into the usage of credits in your organization. Use the dashboard to understand which users, groups, or projects are driving usage, and make informed decisions about resource allocation.
The dashboard displays the following information:
- Organization usage: Total credit usage across your GitLab instance or group
- Detailed credit usage by user: Number of credits used by each user
- User drill-down view: Individual usage events for each user, with links to GitLab Duo Agent Platform session details
View the GitLab Credits dashboard
By default, individual user data is not displayed in the GitLab Credits dashboard. To display it, you must enable this setting for your group or instance.
Usage control status
When per-user credit caps are enabled, the Usage by user tab on the GitLab Credits dashboard displays a Usage control status column. This column shows whether each user can access GitLab Duo Agent Platform features or is blocked because they reached their credit cap.
The column displays one of the following statuses:
Status | Description
Regular | The user has not reached their credit cap and can use GitLab Duo Agent Platform features.
Blocked - subscription cap reached | The user reached the flat per-user cap set at the subscription level.
Blocked - user cap reached | The user reached a per-user override cap set specifically for them.
Unblock a user who reached their credit cap
You can restore access for a blocked user by using the per-user override GraphQL API.
To unblock a user, either:
- Increase the cap: Set a higher per-user override cap so the user’s usage falls below the new limit.
- Remove the cap: Delete the per-user override so the user is no longer subject to an individual cap.
After you update the cap, the user’s status changes to Regular and they can use GitLab Duo Agent Platform features again.
View user credit usage details
To view a user’s individual usage events in a drill-down view:
- In the GitLab Credits dashboard, select the Usage by user tab.
- In the User column, select the user you want to view.
- To view session details, in the Action column, select the action you want to view.
Session links are available only for GitLab Duo Agent Platform usage events that are triggered in a project and have an associated session ID. Usage events triggered in a group, legacy events, and actions outside Agent Platform don’t have links.
Export usage data
You can export the credit usage data for a subscription as a CSV file in Customers Portal. The CSV file lists the usage events and credits used on each day of the current month.
Prerequisites:
- You must be a billing account manager.
- Sign in to Customers Portal.
- On the subscription card, select GitLab Credits dashboard.
- From the Usage period dropdown list, select the period you want to export data for.
- Select Export usage data.
- April 2026
- First seen by Releasebot: Apr 20, 2026
Foundational agents
GitLab introduces foundational agents for GitLab Duo Chat, bringing domain-specific AI help for planning, security analysis, data analysis, and CI/CD workflows across the GitLab UI and IDEs. It also adds options to duplicate agents and control foundational agent availability at the group or instance level.
Foundational agents are specialized AI assistants that extend the capabilities of GitLab Duo Chat with domain-specific expertise and context awareness.
Unlike the general-purpose GitLab Duo agent, foundational agents understand the unique workflows, frameworks, and best practices of their specialized domains. Each agent combines deep knowledge of GitLab features with role-specific reasoning to provide targeted help that aligns with how practitioners actually work.
Foundational agents are built and maintained by GitLab and display a GitLab-maintained badge (tanuki-verified).
Prerequisites
- Meet the prerequisites for the GitLab Duo Agent Platform.
- Have foundational agents turned on.
Available foundational agents
The following foundational agents are available in the GitLab UI, VS Code, and JetBrains IDEs. Tier availability varies by agent. For details, see each agent’s page.
- Planner, for product management and planning workflows.
- Security Analyst, for security analysis and vulnerability management.
- Data Analyst, for analysis and visualization of platform data.
- CI Expert, for creating, debugging, and optimizing GitLab CI/CD pipelines.
Duplicate an agent
To make changes to a foundational agent, create a copy of it.
Prerequisites:
- You must have the Maintainer or Owner role for the project.
To duplicate an agent:
- In the top bar, select Search or go to > Explore.
- Select AI Catalog, then select the Agents tab.
- Select the agent you want to duplicate.
- In the upper-right corner, select Actions (ellipsis_v) > Duplicate.
- Under Visibility & access:
- From the Managed by dropdown list, select a project for the agent.
- For Visibility, select Private or Public.
- Optional. Edit any fields you want to change.
- Select Create agent.
A custom agent is created. To use it, you must enable it.
Turn foundational agents on or off
By default, foundational agents are turned on. You can turn them on or off for a top-level group (namespace) or for an instance.
If you turn foundational agents off by default:
- Foundational agents that use the default configuration, including newly released agents, are turned off.
- You can still use the default GitLab Duo Agent.
For GitLab.com
Prerequisites:
- You must have the Owner role for the group.
- In the top bar, select Search or go to and find your group.
- Select Settings > GitLab Duo.
- Select Change configuration.
- Under Foundational agents, for Default availability, select one of the following:
- On
- Off
- Under Availability settings, for each agent, select one of the following:
- On
- Off
- Use default (On) or Use default (Off)
- Select Save changes.
These settings apply to:
- Users who have the top-level group as the default GitLab Duo namespace.
- Users without a default namespace, and who visit a namespace that belongs to the top-level group.
If you turn off foundational agents for a top-level group, users with that group as their default GitLab Duo namespace can’t access foundational agents in any namespace.
- April 2026
- First seen by Releasebot: Apr 20, 2026
Agentic SAST Vulnerability Resolution
GitLab adds agentic SAST vulnerability resolution in GitLab Duo, automatically analyzing high and critical findings, generating merge requests with context-aware fixes, and validating them with pipelines. It also supports manual triggering and confidence scoring.
GitLab Duo automatically analyzes SAST vulnerabilities and generates merge requests with context-aware code fixes. This agentic approach uses multi-shot reasoning to resolve vulnerabilities with minimal human intervention, reducing remediation time and improving security outcomes.
Unlike the single-shot vulnerability resolution, agentic vulnerability resolution uses iterative reasoning to:
- Analyze vulnerability context across the codebase.
- Generate high-quality fixes that address root causes.
- Validate fixes through automated testing.
- Provide confidence scoring for proposed solutions.
Agentic SAST vulnerability resolution can run automatically, or you can run it manually.
For a click-through demo, see Agentic SAST Vulnerability Resolution.
Automatic resolution
When a SAST security scan completes on the main branch, GitLab Duo automatically completes the following actions:
- Analyzes each High and Critical severity SAST vulnerability.
- Checks if false positive detection has run.
- If the vulnerability is not a likely or possible false positive, GitLab Duo creates a merge request with the proposed fix.
- Runs the pipeline to validate that the fix resolves the vulnerability.
The process runs in the background with no manual triggering required. Results appear in the vulnerability report once processing is complete.
Manual resolution
You can manually trigger agentic vulnerability resolution for any SAST vulnerability at any time, regardless of severity. See manual trigger for instructions.
Automatic resolution conditions
Automatic agentic vulnerability resolution runs when all of the following conditions are met:
- A SAST security scan completes successfully on the main branch.
- The scan detects high or critical severity vulnerabilities.
- False positive detection has run and determined the vulnerability is not a false positive.
- Agentic SAST Vulnerability Resolution flow is enabled for the project.
- GitLab Duo features are enabled for the project.
- The vulnerability is from a supported SAST analyzer.
The analysis happens in the background and results appear in the vulnerability report after processing is complete.
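The conditions above amount to an eligibility check that runs after each SAST scan on the default branch. A minimal sketch of that check; the field names are assumptions for illustration, not GitLab's internal data model.

```python
def should_auto_resolve(vuln: dict, project: dict) -> bool:
    """Sketch of the automatic-resolution conditions listed above.

    Field names are illustrative; the real checks happen inside GitLab
    after a SAST scan completes successfully on the default branch.
    """
    return (
        vuln.get("severity") in {"high", "critical"}
        and vuln.get("fp_detection_ran", False)
        # Conservative default: an unchecked finding is not auto-resolved.
        and not vuln.get("likely_false_positive", True)
        and vuln.get("analyzer_supported", False)
        and project.get("agentic_resolution_enabled", False)
        and project.get("duo_enabled", False)
    )
```

Manual resolution skips the severity check: you can trigger it for any SAST vulnerability, as described in the next section.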
Manual trigger
To manually run agentic vulnerability resolution for any existing SAST vulnerability:
- In the top bar, select Search or go to and find your project.
- Select Secure > Vulnerability report.
- Select the vulnerability you want to resolve.
- In the upper-right corner, select AI vulnerability management > Resolve with AI.
GitLab Duo analyzes the vulnerability and generates a merge request if a fix can be produced. Manual resolution works on any SAST vulnerability regardless of severity.
Configuration
To use agentic vulnerability resolution, ensure the following requirements are met:
- A GitLab Duo add-on subscription (GitLab Duo Core, Pro, or Enterprise).
- A default GitLab Duo namespace set in your user preferences.
- GitLab 18.9 or later.
- Turn on experiment and beta GitLab Duo features is enabled in your top-level group.
- Agentic vulnerability resolution allowed for the group and turned on for the project.
- SAST False Positive Detection flow enabled for the top-level group and project.
Allow foundational flow for a group
You can allow all the projects in a top-level group to use the foundational flow. Individual projects must still turn on the feature in their project settings.
To allow agentic vulnerability resolution for all projects in a top-level group:
- In the left sidebar, select Search or go to and find your top-level group.
- Select Settings > GitLab Duo.
- Turn on the Allow flow execution toggle (enabled by default).
- Under Allow foundational flows, select the Resolve SAST Vulnerability checkbox.
- Select Save changes.
Turn on for a project
To turn on the feature for a specific project:
- In the left sidebar, select Search or go to and find your project.
- Select Settings > General.
- Expand GitLab Duo.
- Turn on the Turn on SAST vulnerability resolution workflow toggle.
- Select Save changes.
When you allow agentic vulnerability resolution for the top-level group and turn it on for the project, the feature works automatically with your existing SAST scanners.
Reviewing generated merge requests
The following occurs when GitLab Duo generates a merge request for a vulnerability:
- The merge request is created with the proposed fix.
- The description includes the following:
- The vulnerability details and severity
- Explanation of the fix approach
- Links to relevant security resources
- Confidence score for the proposed solution
- The pipeline runs automatically to validate the fix.
- Reviewers review the changes and pipeline results.
- Users with the ability to merge the merge request do so according to your workflow.
Troubleshooting
Agentic vulnerability resolution sometimes cannot generate a suggested fix. Common causes include:
- Insufficient context: The vulnerability occurs in complex code patterns that require additional context or manual intervention.
- False positive detected: The AI model assesses whether the vulnerability is valid. The model may decide that the vulnerability is not a true vulnerability, or isn’t worth fixing.
- If you agree that the vulnerability is a false positive or is not worth fixing, you should dismiss the vulnerability and select a matching reason.
- Temporary or unexpected error: The error message may state that an unexpected error has occurred, the upstream AI provider request timed out, something went wrong, or a similar cause.
- These errors may be caused by temporary problems with the AI provider or with GitLab Duo.
- A new request may succeed, so you can try to resolve the vulnerability again.
- If you continue to see these errors, contact GitLab for assistance.
Providing feedback
We welcome your feedback on agentic vulnerability resolution. If you encounter issues or have suggestions for improvement, please provide feedback in issue 585626.
Related topics
- Vulnerability Resolution
- SAST False Positive Detection
- Vulnerability details
- Vulnerability report
- SAST
- GitLab Duo
- Apr 16, 2026
- Date parsed from source: Apr 16, 2026
- First seen by Releasebot: Apr 17, 2026
GitLab 18.11 release notes
GitLab releases 18.11 with major AI, security, CI/CD, and platform upgrades, including Agentic SAST vulnerability resolution, the Data Analyst and CI Expert agents, finer-grained access controls, new security dashboard insights, expanded Kubernetes and Gitaly support, and Runner 18.11.
On April 16, 2026, GitLab 18.11 was released with the following features.
In addition, we want to thank all of our contributors, including this month's notable contributor.
This month’s Notable Contributor: Rinku C
We are excited to recognize Rinku C, a Level 4 contributor with over 80 merged improvements across GitLab since joining in September 2025.
Nominated by Arianna Haradon, Senior Fullstack Engineer on the Developer Relations team, this award celebrates his sustained and meaningful impact over time. Rinku has strengthened security-sensitive flows by requiring scopes on project and group access token creation forms. He has also improved the everyday GitLab experience with numerous updates, such as next/previous navigation in job logs, excluding empty searches from recent search history, and reducing file tree clutter through thoughtful UI refinements that make common workflows clearer and easier to navigate. Rinku tackles the work that often goes unclaimed, keeping the codebase healthy and compounding into meaningful, lasting value. Thank you for your contributions!
Primary features
Vulnerability resolution generally available on GitLab Duo Agent Platform
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated
Links: Documentation, Related issue
Agentic SAST Vulnerability Resolution is now generally available in GitLab 18.11 on the GitLab Duo Agent Platform. It runs as part of your SAST scan, after SAST false positive detection runs, or when manually triggered for individual SAST vulnerabilities.
Agentic SAST Vulnerability Resolution:
- Autonomously analyzes the finding and reasons through the surrounding code context.
- Automatically creates a ready-to-review merge request with proposed code fixes for critical and high severity SAST vulnerabilities.
- Provides quality assessments so reviewers can quickly gauge confidence in the proposed remediation.
- Allows you to apply resolutions directly from vulnerability details pages.
We welcome your feedback in issue 585626.
GitLab Data Analyst Foundational Agent now generally available
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated
Links: Documentation, Related epic
The Data Analyst Agent is a specialized AI chat assistant that helps you query, visualize, and surface data across the GitLab platform.
Backed by the GitLab Query Language (GLQL), the Data Analyst Agent can retrieve and analyze data from each of the supported data sources and provide clear, actionable insights about your software development health and engineering efficiency.
These insights can be visualized directly in the agent output and embedded into issues and epics for further evaluation.
CI Expert Agent launches in beta
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
The AI-powered CI Expert Agent is now available in beta. This agent helps teams get from code in GitLab to a first working pipeline without starting from a blank .gitlab-ci.yml.
Using GitLab Duo Agent Platform, the agent inspects your repository, asks a few guided questions about your build and test process, and generates a ready-to-run pipeline you can review, edit, and commit.
This turns pipeline creation into a conversational, context-aware experience, while still letting you take full control of the YAML when you’re ready to evolve and optimize your configuration.
Automated vulnerability severity overrides
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
Default vulnerability severities don’t always reflect your organization’s actual risk. A critical CVE in an internal-only service might not warrant the same urgency as one in a public-facing application, yet teams spend significant time triaging findings that don’t match their risk model.
Vulnerability management policies can now automatically adjust the severity of vulnerabilities based on conditions like CVE ID, CWE ID, file path, and directory. When applied, the policy updates the severity of any vulnerability that matches the criteria on the default branch. Manual overrides still take precedence, and all changes are logged in the vulnerability’s history and audit events.
This reduces triage work and ensures developers focus on the findings that matter most to your business.
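As a rough illustration of the concept, a severity-override policy could match findings by CWE and path and then adjust their severity. The rule and action field names below are assumptions for illustration only, not the published policy schema; consult the GitLab security policies documentation for the exact syntax.

```yaml
# Hypothetical sketch of a vulnerability management policy that lowers
# severity for findings in an internal-only service. Field names such
# as 'override_severity' are assumptions, not the documented schema.
vulnerability_management_policy:
  - name: Lower severity for internal-only tooling
    enabled: true
    rules:
      - type: detected                 # assumed rule type
        identifiers: ["CWE-79"]        # match by CWE ID
        paths: ["internal-tools/**"]   # match by directory
    actions:
      - type: override_severity        # assumed action name
        severity: low
```

Manual overrides would still take precedence over a policy like this, and each change is recorded in the vulnerability's history.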
Create Service Account in subgroups and projects
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
Teams can now create service accounts in subgroups and projects. Instead of broad, top-level group bots, you can attach a dedicated service account to a single subgroup or project and manage its access like any other member of that namespace. Group and subgroup service accounts can be invited to the group where they were created or to any descendant subgroups and projects. Project service accounts are limited to their own project.
Service Accounts available on GitLab Free
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
Service accounts are now available on GitLab.com in all tiers. Previously limited to Premium and Ultimate, service accounts let you perform automated actions, access data, or run scheduled processes without tying credentials to individual team members. They’re commonly used in pipelines and third-party integrations where credentials must stay stable regardless of team changes. On GitLab Free, you can create up to 100 service accounts per top-level group, including those created in subgroups or projects.
Fine-grained permissions for personal access tokens now available (Beta)
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
Fine-grained personal access tokens (PATs) are now available in beta. Unlike legacy PATs, which grant access to every project and group you belong to, fine-grained PATs let you limit each token to specific resources and actions. This reduces the potential impact of a leaked or compromised token.
Your existing PATs continue to work as before, and you can still create legacy PATs without fine-grained permissions.
This beta release covers approximately 75% of the GitLab REST API. Full REST API coverage, GraphQL enforcement, and administrator policy controls are planned for the GA release.
To share feedback, see epic 18555.
Top CWE chart in security dashboards
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
The top CWE chart is now available on the new security dashboards. Use it to find the most common CWEs across your project or instance and identify opportunities for training, improvement, or program optimization. You can group the dashboard data by severity and filter it by severity, project, and report type.
Deploy Gitaly on Kubernetes
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
You can now deploy Gitaly on Kubernetes as a fully supported deployment method. This gives you greater flexibility in managing your GitLab infrastructure by using Kubernetes orchestration capabilities for scaling, high availability, and resource management. Previously, Kubernetes deployments required custom configurations and weren’t officially supported, making it difficult to maintain reliable Gitaly clusters in containerized environments.
Reconfigure inputs when manually running MR pipelines
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
A powerful aspect of CI/CD inputs is that you can manually run new pipelines with new values for runtime customization. This was not previously available in merge request (MR) pipelines, but in this release you can customize inputs in MR pipelines too.
After you configure inputs for MR pipelines, you can optionally modify those inputs and change the pipeline behavior any time you run a new pipeline for a merge request.
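As a minimal sketch of the underlying configuration, a pipeline declares inputs in a header section of .gitlab-ci.yml and interpolates them in jobs; the input name and values below are illustrative:

```yaml
# Header section declares the inputs; jobs interpolate them with
# $[[ inputs.<name> ]]. The 'environment' input here is an example.
spec:
  inputs:
    environment:
      default: staging
      options: [staging, production]
      description: "Target environment for this pipeline run"
---
deploy:
  script:
    - echo "Deploying to $[[ inputs.environment ]]"
```

When you manually run a new pipeline for a merge request, you can then supply a different value for environment instead of the default.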
Agentic Core
Default model for GitLab Duo Agentic Chat updated from Haiku 4.5 to Sonnet 4.6
Available in: Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated
Links: Documentation, Related issue
We’ve made an update to improve your Agentic Chat experience in GitLab. The default model for Agentic Chat was upgraded from Claude Haiku 4.5 to Claude Sonnet 4.6, hosted on Vertex AI. Claude Sonnet 4.6 offers improved reasoning and response quality but uses a higher GitLab Credit multiplier than Haiku 4.5.
You can select an alternative model, including Haiku, using the model selection setting. If you’ve already selected a specific model, your choice is preserved. This update only affects the default and will not override any existing selections. For information about credit multipliers by model, see the GitLab Credits documentation.
Configure tools in custom flow definitions
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
You can now configure tool options and parameter values directly in your custom flow definitions to supersede the LLM default values. This gives you more precise, consistent control over how tools behave within a custom flow, making it easier to enforce guardrails and specific parameter values across that flow.
Mistral AI now supported as a self-hosted model in GitLab Duo Agent Platform
Available in: Premium, Ultimate
Offerings: GitLab Self-Managed
Links: Documentation, Related issue
GitLab Duo Agent Platform now supports Mistral AI as an LLM platform for self-hosted model deployments. GitLab Self-Managed customers can configure Mistral AI alongside existing supported platforms, including AWS Bedrock, Google Vertex AI, Azure OpenAI, Anthropic, and OpenAI. This gives teams more choice in how they run AI-powered features.
Scale and Deployments
View historical months in GitLab Credits dashboard
Available in: Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
The GitLab Credits dashboard in the Customers Portal now supports historical month navigation. Billing managers can browse past billing months to review daily usage trends, compare consumption patterns across periods, and reconcile usage with invoices. Previously, the dashboard only displayed the current billing month. With this improvement, administrators can make more informed decisions about credit allocation and forecast future needs based on historical data.
Set subscription-level usage cap for GitLab Credits
Available in: Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation
Administrators can now set a monthly usage cap for On-Demand Credits at the subscription level. When total on-demand credit consumption reaches the configured cap, GitLab Duo Agent Platform access is automatically suspended for all users on that subscription until the next billing period begins or the admin adjusts the cap. This setting gives organizations a hard guardrail against unexpected overage bills, removing a key barrier to broader Agent Platform rollout. Caps reset automatically each billing period, and administrators receive an email notification when the cap is reached.
Set per-user GitLab Credits cap
Available in: Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation
Administrators can now set an optional per-user usage cap for GitLab Credits per billing period. When an individual user’s total credit consumption reaches the configured limit, GitLab Duo Agent Platform access is suspended only for that user, while other users continue unaffected. This prevents any single user from consuming a disproportionate share of the organization’s credit pool, and gives administrators fine-grained control over usage distribution. Per-user usage caps work alongside subscription-level usage caps; whichever cap is reached first applies.
Linux package improvements
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed
Links: Documentation, Related issue
In GitLab 19.0, the minimum-supported version of PostgreSQL will be version 17. To prepare for this change, on instances that don’t use PostgreSQL Cluster, upgrades to GitLab 18.11 will attempt to automatically upgrade PostgreSQL to version 17.
If you use PostgreSQL Cluster or opt out of this automated upgrade, you must manually upgrade to PostgreSQL 17 to be able to upgrade to GitLab 19.0.
Backup and Restore Support for Container Registry Metadata Database
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed
Links: Documentation, Related issue
The GitLab backup Rake task for Linux package installations and the backup-utility for Cloud Native (Helm) installations now support the container registry metadata database. You can now back up references to blobs, manifests, tags, and other data stored in the metadata database, enabling recovery in the event of malicious or accidental data corruption.
New navigation experience for groups in Explore
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
We’re excited to announce improvements to the groups list in Explore, making it easier to discover groups across your GitLab instance. The redesigned interface introduces a tabbed layout with two views:
- Active tab: Browse all accessible groups, helping you discover relevant communities and projects.
- Inactive tab: View archived groups and groups pending deletion for visibility into group lifecycle status.
These changes streamline group discovery and provide clearer visibility into which groups are available to join.
Asynchronous transfer of projects
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
In previous versions of GitLab, transfers of large groups and projects could time out. As we move groups and projects to a unified state model for operations such as transfer, archive, and deletion, you get more consistent behavior, better visibility into state history and audit details, and fewer timeouts for long-running transfer operations, thanks to asynchronous processing.
Unified DevOps and Security
ClickHouse is generally available for Self-Managed deployments
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated
Links: Documentation, Related issue
For GitLab Self-Managed instances, we now have improved recommendations and configuration guidance for the GitLab ClickHouse integration. Customers have options to bring their own cluster, or use the ClickHouse Cloud (recommended) setup option. This integration powers multiple dashboards and unlocks access to various API endpoints within the analytics space.
This scalable, high-performance database is part of the larger architectural improvements planned for the GitLab analytics infrastructure.
Enhanced GitLab Duo Agent Platform analytics on Duo and SDLC trends dashboard
Available in: Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated
Add-ons: Duo Pro, Duo Enterprise
Links: Documentation, Related epic
The GitLab Duo and SDLC trends dashboard delivers improved analytics capabilities to measure the impact of GitLab Duo on software delivery. The dashboard now includes new single stat panels for monthly Agent Platform unique users and Agentic Chat sessions. Additionally, metrics previously displayed as a percentage of seat assignments have been updated to strictly report usage counts. This change resolves the issue where counts were missing Agent Platform usage controlled under the new usage billing model.
GLQL now has access to projects, pipelines, and jobs data sources
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated
Links: Documentation
The GitLab Query Language (GLQL) now has access to three new data sources: projects, pipelines, and jobs. These new data sources are also available as embedded views, letting teams surface pipeline results, job statuses, and project overviews directly in wikis, issue and merge request descriptions, and repository Markdown files. GLQL also powers the Data Analyst Agent. With these new types, the agent can inspect CI/CD job results, debug failures, and provide detailed overviews of pipeline execution, as well as provide an accurate overview of projects in a namespace.
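As an illustration, an embedded GLQL view in a wiki page or description is a fenced block of this general shape. The field names and the Pipeline type shown here are assumptions for the new data sources; check the GLQL documentation for the supported fields.

```glql
display: table
fields: name, status, duration
limit: 10
query: type = Pipeline and project = "my-group/my-project"
```

When rendered, GitLab replaces the block with a live table of matching records.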
Dependency resolution for Maven and Python SBOM scanning
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
GitLab dependency scanning using SBOM now supports generating a dependency graph automatically for Maven and Python projects. Previously, dependency scanning required users to provide a lock file or a graph file to get an accurate dependency analysis. Now, when a lock file or graph file is not available, the analyzer automatically attempts to generate one. This improvement makes it easier for Maven and Python projects to enable dependency scanning without requiring a lock file.
Incremental scanning for Advanced SAST
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
You can now perform incremental scans that analyze only the changed parts of the codebase with GitLab Advanced SAST, significantly reducing scan times compared to full repository scans. This feature is a further iteration of diff-based scanning because it produces full results for the codebase.
By scanning just the code that has changed rather than the entire codebase, your teams can integrate security testing more seamlessly into their development workflow without sacrificing speed or adding friction.
Unverified vulnerabilities (Beta)
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
Advanced SAST can now surface unverified vulnerabilities (findings that cannot be fully traced from source to sink) directly in the vulnerability report. Enable this feature if you have a higher tolerance for false positives than for false negatives.
This feature is in beta status. Provide feedback in issue 596512.
Kubernetes 1.35 support
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
GitLab now fully supports Kubernetes version 1.35. If you want to deploy your applications to Kubernetes and access all features, upgrade your connected clusters to the most recent version. For more information, see supported Kubernetes versions for GitLab features.
Prefer mode for the container registry metadata database
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed
Links: Documentation, Related issue
You can now set the container registry metadata database to prefer mode, a new configuration option alongside the existing true and false values. In prefer mode, the registry automatically detects whether it should use the metadata database or fall back to legacy storage based on the current state of your installation.
If your registry has existing filesystem metadata that has not been imported to the database, the registry continues to use legacy storage until you complete a metadata import. If the database is already in use, or on a fresh installation, the registry uses the database directly.
In a later release, prefer mode will become the default for new Linux package installations. Existing installations will not be affected. For more information, see issue 595480.
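In the registry's own config.yml, this amounts to a one-word change to the database section. The sketch below is illustrative; the connection values are placeholders, and you should confirm the exact setting name against the registry configuration reference.

```yaml
# Registry config.yml fragment (sketch). With 'prefer', the registry
# decides at startup whether to use the metadata database or fall back
# to legacy filesystem metadata, based on the installation's state.
database:
  enabled: prefer                    # alternatives: true, false
  host: registry-db.example.internal # placeholder connection details
  port: 5432
  dbname: registry
```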
Package protection rules now support Terraform modules
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
Teams publishing Terraform modules through the built-in GitLab Terraform module registry had no way to restrict who could push new module versions. Package protection rules supported several package formats but did not include terraform_module, leaving infrastructure teams without a project-level push control.
You can now create package protection rules scoped to terraform_module, restricting push access based on minimum role. Support is available in the UI package type dropdown, the REST API, the GraphQL API, and the GitLab Terraform provider resource.
Release evidence now includes packages
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
When creating a GitLab Release, packages published to the package registry were not automatically associated with it. Teams had to manually construct package URLs and attach them as release links through the API or pipeline scripts, adding friction and risk of incomplete release records.
GitLab now automatically includes packages in release evidence when the package version matches the release tag. This creates a verifiable, auditable link between your release and its associated packages without any manual steps, keeping source code, artifacts, and packages together in one complete release snapshot.
Wiki sidebar toggle repositioned for easier access
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
The wiki sidebar toggle is now positioned on the left side, directly next to the sidebar it controls.
When the sidebar is collapsed, the toggle remains visible as a floating control so you can reopen it without scrolling back to the top of the page.
Sticky action bar on wiki pages
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
The action bar on wiki pages is now sticky, so it remains visible as you scroll through a page. Previously, you had to scroll back to the top to access actions like editing, viewing page history, or managing templates. Now the page title and key actions, including Edit, New page, Templates, Page history, and more, stay within reach no matter how far down the page you are.
Epic weights
Available in: Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
Epics now support weights, making it easier to estimate and prioritize large-scale initiatives during planning.
Before breaking down an epic into child issues, you can assign a preliminary weight to represent your initial estimate. As you decompose the epic, the weight automatically updates to reflect the rolled-up total from all child issues. This is consistent with how weight rollup works for issues and tasks.
On the epic detail page, you can see both the preliminary weight and the rolled-up weight from child issues, giving you the insight needed to refine estimates over time.
Block merge requests with high exploitability risk
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
Previously, merge request (MR) approval policies could block MRs based on vulnerability severity, but not all vulnerabilities carry the same risk. CVSS severity alone doesn’t tell you whether a CVE is being exploited or how likely exploitation is. This leads to noisy approval policies and wasted time for developers and security teams.
You can now configure MR approval policies using Known Exploited Vulnerability (KEV) and Exploit Prediction Scoring System (EPSS) data. Block or require approval when a finding is in the KEV catalog (actively exploited in the wild), or when its EPSS score is above a threshold. Policy violations in the MR include KEV and EPSS context so developers understand why the security gate was triggered.
This gives security teams precise control over which findings block or warn, reduces alert fatigue, and keeps enforcement aligned with the current threat landscape.
Assign CVSS 4.0 scores to vulnerabilities
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
CVSS 4.0 is the latest version of the industry standard used to assess and rate the severity of a vulnerability. You can now view the CVSS 4.0 score in the UI, including on the vulnerability details page and in the vulnerability report. You can also query the score through the API.
Improved row interaction in the vulnerability report
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
Previously, you had to select the row description to navigate to a vulnerability details page from the vulnerability report.
You can now select anywhere in the row to go directly to its details. Link styling for the vulnerability description and file location only appears when you hover over each link, and keyboard navigation has been improved.
These changes make the vulnerability report more intuitive and accessible.
Export a security dashboard as a PDF
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
You can export the security dashboard as a PDF for use in reports and presentations. The export captures the current state of all of the charts and panels in the dashboard, including any active filters.
SAST scanning in security configuration profiles
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
In GitLab 18.9, we introduced security configuration profiles with the Secret Detection - Default profile. In GitLab 18.11, profiles now extend to SAST with the Static Application Security Testing (SAST) - Default profile, giving you a unified control surface to apply standardized static analysis coverage across all your projects without touching a single CI/CD configuration file.
The profile activates two scan triggers:
- Merge Request Pipelines: Automatically runs a SAST scan each time new commits are pushed to a branch with an open merge request. Results only include new vulnerabilities introduced by the merge request.
- Branch Pipelines (default only): Runs automatically when changes are merged or pushed to the default branch, providing a complete view of your default branch’s SAST posture.
Security attribute filters in group security dashboards
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related epic
You can now filter the results in a group security dashboard based on the security attributes that you have applied to the projects in that group.
The available security attributes include the following:
- Business impact
- Application
- Business unit
- Internet exposure
- Location
Security Manager role (Beta)
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation
The Security Manager role is now available as a beta feature, providing a new default set of permissions designed specifically for security professionals. Security teams no longer need Developer or Maintainer roles to access security features, eliminating over-privileging concerns while maintaining separation of duties.
Users with the Security Manager role have the following access:
- Vulnerability management: View, triage, and manage vulnerabilities across groups and projects, including vulnerability reports and security dashboards.
- Security inventory: View a group’s security inventory to understand scanner coverage across all projects.
- Security configuration profiles: View security configuration profiles for a group.
- Compliance tools: View audit events, compliance center, compliance frameworks, and dependency lists for a group or project.
- Secret push protection: Enable secret push protection for a group.
- On-demand DAST: Create and run on-demand DAST scans for a group.
To get started, go to a group and select Manage > Members to invite and assign members to the Security Manager role.
Identifier list popover in the vulnerability report
Available in: Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation, Related issue
The vulnerability report now shows the primary CVE identifier as a clickable link in each row. When multiple identifiers exist, a +N more popover lists all of the identifiers. Each identifier in the list links to its external reference (for example, in the CVE, CWE, or WASC databases) so you can quickly access more details without leaving the report.
GitLab Runner 18.11
Available in: Free, Premium, Ultimate
Offerings: GitLab Self-Managed, GitLab.com, GitLab Dedicated, GitLab Dedicated for Government
Links: Documentation
We’re also releasing GitLab Runner 18.11 today! GitLab Runner is the highly scalable build agent that runs your CI/CD jobs and sends the results back to a GitLab instance. GitLab Runner works in conjunction with GitLab CI/CD, the open-source continuous integration service included with GitLab.
What’s New:
- Create concrete helper image with bundled dependencies
- Read the job router feature flag from the runner configuration instead of an environment variable
Bug Fixes:
- Incorrect runner binary path after refactoring
- Pipeline hangs on cache operations
- The docker-machine binary in GitLab Runner 18.9.0 references CVE-2025-68121
- Runner silently falls back to job payload credentials when credential helper binary is missing from DOCKER_AUTH_CONFIG
- CONCURRENT_PROJECT_ID not unique in different jobs, which causes a conflict in the builds directory
- Artifact upload fails with timeout awaiting response headers
- User-defined after_script executes after failed pre_build_script and bypasses post_build_script
The list of all changes is in the GitLab Runner CHANGELOG.