Argo CD 3.0 is meant to be a low-risk upgrade containing only minor breaking changes. For each change, the next section will describe how to quickly determine if you are impacted, how to remediate the breaking change, and (if applicable) restore Argo CD 2.x default behavior.
Once 3.0 is released, no more 2.x minor versions will be released. We will continue to cut patch releases for the two most recent minor versions (so 2.14 until 3.2 is released and 2.13 until 3.1 is released).
The default behavior of fine-grained policies has changed so they no longer apply to sub-resources. Prior to v3, policies granting `update` or `delete` to an Application also applied to any of its sub-resources. Starting with v3, the `update` or `delete` actions only apply to the Application itself. New policies must be defined to allow the `update/*` or `delete/*` actions on an Application's managed resources.

The v2 behavior can be preserved by setting the config value `server.rbac.disableApplicationFineGrainedRBACInheritance` to `false` in the Argo CD ConfigMap `argocd-cm`.

Read the RBAC documentation for more detailed information.
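As an illustration, a role that previously relied on `update`/`delete` inheritance can be granted the new fine-grained actions explicitly. This is a minimal sketch of an `argocd-rbac-cm` entry; the role and project names are placeholders, not part of the original guide:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # Acting on the Application itself (same as before)
    p, role:deployer, applications, update, my-project/*, allow
    p, role:deployer, applications, delete, my-project/*, allow
    # New in v3: explicitly allow acting on the Application's managed resources
    p, role:deployer, applications, update/*, my-project/*, allow
    p, role:deployer, applications, delete/*, my-project/*, allow
```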
2.4 introduced `logs` as a new RBAC resource. In 2.3 and lower, users with `applications, get` access automatically got logs access. In 2.4, it became possible to enable logs RBAC enforcement with a flag in the `argocd-cm` ConfigMap:
server.rbac.log.enforce.enable: 'true'
Starting from 3.0, this flag is removed and logs RBAC is enforced by default, meaning the logs tab on the pod view will not be visible without granting explicit `logs, get` permissions to the users/groups/roles requiring it.

Users who have `server.rbac.log.enforce.enable: "true"` in their `argocd-cm` ConfigMap are unaffected by this change.
Users who have `policy.default: role:readonly` or `policy.default: role:admin` in their `argocd-rbac-cm` ConfigMap are unaffected.

Users who don't have a `policy.default` in their `argocd-rbac-cm` ConfigMap, and either have `server.rbac.log.enforce.enable` set to `false` or don't have this setting at all in their `argocd-cm` ConfigMap, are affected and should perform the remediation steps below.
After the upgrade, it is recommended to remove the setting `server.rbac.log.enforce.enable` from the `argocd-cm` ConfigMap if it was there before the upgrade.
- For users with an existing default policy with a custom role, add this policy to `policy.csv` for your custom role: `p, role:<YOUR_DEFAULT_ROLE>, logs, get, */*, allow`.
- For users without a default policy, add this policy to `policy.csv`: `p, role:global-log-viewer, logs, get, */*, allow` and add the default policy for this role: `policy.default: role:global-log-viewer`.
- Explicitly add a `logs, get` policy to every role that has a policy for `applications, get` or for `applications, *`. This is the recommended way to maintain the principle of least privilege.
Similar to the way access to Applications is currently managed, access to logs can be granted either at the Project scope (Project resource) or at the `argocd-rbac-cm` ConfigMap level. See this example for more details.
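As a sketch of the second remediation option above (the role name comes from the policy shown earlier; everything else is boilerplate), the `argocd-rbac-cm` ConfigMap could look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:global-log-viewer
  policy.csv: |
    # Grant read access to logs of all applications in all projects
    p, role:global-log-viewer, logs, get, */*, allow
```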
The Argo CD manifest now contains a default configuration for `resource.exclusions` in the `argocd-cm` ConfigMap to exclude resources that are known to be created by controllers and not usually managed in Git. The exclusions contain high-volume and high-churn objects which are excluded for performance reasons, reducing connections and load on the Kubernetes API servers of managed clusters.
The excluded Kinds are:

- Kubernetes Resources: `Endpoints`, `EndpointSlice`, `Lease`, `SelfSubjectReview`, `TokenReview`, `LocalSubjectAccessReview`, `SelfSubjectAccessReview`, `SelfSubjectRulesReview`, `SubjectAccessReview`, `CertificateSigningRequest`, `PolicyReport` and `ClusterPolicyReport`.
- Cert Manager: `CertificateRequest`.
- Kyverno: `EphemeralReport`, `ClusterEphemeralReport`, `AdmissionReport`, `ClusterAdmissionReport`, `BackgroundScanReport`, `ClusterBackgroundScanReport` and `UpdateRequest`.
- Cilium: `CiliumIdentity`, `CiliumEndpoint` and `CiliumEndpointSlice`.
The default `resource.exclusions` can be overridden or removed in the ConfigMap to preserve the v2 behavior. Read the Declarative Setup documentation for more detailed information on configuring `resource.exclusions`.
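For illustration, restoring the v2 behavior or trimming the defaults amounts to setting your own `resource.exclusions` value in `argocd-cm`. This is a minimal sketch assuming the `apiGroups`/`kinds`/`clusters` exclusion schema described in the Declarative Setup documentation; the selected groups and kinds are examples only:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Keep only the exclusions you actually want; whatever you set here replaces the shipped default.
  resource.exclusions: |
    - apiGroups:
        - cilium.io
      kinds:
        - CiliumIdentity
        - CiliumEndpoint
        - CiliumEndpointSlice
      clusters:
        - "*"
```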
The `argocd_app_sync_status`, `argocd_app_health_status` and `argocd_app_created_time` metrics, deprecated and disabled by default since 1.5.0, have been removed. The information previously provided by these metrics is now available as labels on the `argocd_app_info` metric.

Starting with 1.5.0, these metrics are only available if `ARGOCD_LEGACY_CONTROLLER_METRICS` is explicitly set to `true`.
If it is not set to true, you can safely upgrade with no changes.
If you are using these metrics, you will need to update your monitoring dashboards and alerts to use the new metric and labels before upgrading.
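For example, dashboards and alerts can usually be migrated by filtering `argocd_app_info` on its `health_status` and `sync_status` labels instead of the removed metrics. The sketch below is a hypothetical Prometheus rule assuming those label names; verify them against your own scrape output before relying on it:

```yaml
groups:
  - name: argocd
    rules:
      - alert: ArgoCDAppOutOfSync
        # Previously based on argocd_app_sync_status; now derived from argocd_app_info labels.
        expr: argocd_app_info{sync_status!="Synced"} == 1
        for: 15m
        labels:
          severity: warning
```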
When using Dex, the `sub` claim returned in the authentication was used as the subject for RBAC. That value depends on the Dex internal implementation and should not be considered an immutable value that represents the subject.

The new behavior will request the `federated:id` scope from Dex, and the new value used as the RBAC subject will be based on the `federated_claims.user_id` claim instead of the `sub` claim.
If you were using the Dex `sub` claim in RBAC policies, you will need to update them to maintain the same access. You can find the correct `user_id` to use by decoding the current `sub` claims defined in your policies. You can also configure which value is used as `user_id` for some connectors.
$> echo "ChdleGFtcGxlQGFyZ29wcm9qLmlvEgJkZXhfY29ubl9pZA" | base64 -d
example@argoproj.iodex_conn_i%
# Policies based on the Dex sub claim (wrong)
- g, ChdleGFtcGxlQGFyZ29wcm9qLmlvEgJkZXhfY29ubl9pZA, role:example
- p, ChdleGFtcGxlQGFyZ29wcm9qLmlvEgJkZXhfY29ubl9pZA, applications, *, *, allow
# Policies now based on federated_claims.user_id claim (correct)
- g, example@argoproj.io, role:example
- p, example@argoproj.io, applications, *, *, allow
If authenticating with the CLI, make sure to use the new version as well to obtain an authentication token with the appropriate claims.
Before repositories were managed as Secrets, they were configured in the argocd-cm ConfigMap. The argocd-cm option has been deprecated for some time and is no longer available in Argo CD 3.0.
To check whether you have any repositories configured in argocd-cm, run the following command:
kubectl get cm argocd-cm -o=jsonpath="[{.data.repositories}, {.data['repository.credentials']}, {.data['helm.repositories']}]"
If you have no repositories configured in `argocd-cm`, the output will be `[, , ]`, and you are not impacted by this change.
To convert your repositories to Secrets, follow the documentation for declarative management of repositories.
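For reference, a repository configured declaratively as a Secret looks roughly like the sketch below; the repository name and URL are placeholders, and credential fields for private repositories are described in the declarative management documentation:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/example-org/example-repo.git
  # username/password or sshPrivateKey fields go here if the repository is private
```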
Setting the `spec.applyNestedSelectors` field in an ApplicationSet resolves counter-intuitive behavior where filters in nested selectors were not applied. Starting in Argo CD 3.0, the field is ignored, and the behavior is always the same as if `applyNestedSelectors` were set to `true`. In other words, nested selectors are always applied.

To detect if you are impacted, search your ApplicationSet controller logs for this string: `ignoring nested selector`. If there are no logs with this string, you are not impacted.

Another way to detect if you are impacted is to run the following command:
Another way to detect if you are impacted is to run the following command:
kubectl get appsets -o=json | jq -r '.items[] | select(
.spec.applyNestedSelectors != true and
.spec.generators[][].generators[][].generators[].selector != null
) | .metadata.name'
The command will print the name of any ApplicationSet that has `applyNestedSelectors` unset or set to `false` and has one or more nested selectors.

Since `applyNestedSelectors` is `false` by default, you can safely remove the nested selectors on ApplicationSets where `applyNestedSelectors` has not been explicitly set to `true`. After the selectors are removed, you can safely upgrade.
For example, you should remove the selector in this ApplicationSet before upgrading to Argo CD 3.0.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - matrix:
        mergeKeys: ['test-key']
        generators:
          - list:
              elements:
                - test-key: 'test-value'
                  cluster: staging
                - test-key: 'test-value'
                  cluster: production
          - merge:
              generators:
                - list:
                    elements:
                      - another-key: 'another-value'
                - cluster: {}
                  selector:
                    matchLabels:
                      app: guestbook
  template:
    metadata:
      name: '{{.cluster}}-guestbook'
    spec:
      project: my-project
      source:
        repoURL: https://github.com/infra-team/cluster-deployments.git
        targetRevision: HEAD
        path: guestbook/{{.cluster}}
      destination:
        server: '{{.url}}'
        namespace: guestbook
Helm was upgraded to 3.17.1. This may require changing your `values.yaml` files for subcharts, if the `values.yaml` contains a section with a `null` object.

- See the related issue in the Helm GitHub repository.
- See the Helm 3.17.1 release notes.
- Example of such a problem and resolution.
Explanation:

- Prior to Helm 3.17.1, a `null` object in `values.yaml` resulted in a warning, `cannot overwrite table with non table`, upon performing `helm template`, and the resulting K8s object was not overridden with the invalid `null` value.
- In Helm 3.17.1, this behavior changed: a `null` object in `values.yaml` still results in this warning upon performing `helm template`, but the resulting K8s object will be overridden with the invalid `null` value.
- To resolve the issue, identify `values.yaml` files with `null` object values, and remove those `null` values.
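To illustrate the kind of value that triggers this, here is a hypothetical parent-chart `values.yaml`; the subchart and key names are made up, the point is the `null` override of a whole section:

```yaml
# values.yaml of the parent chart (hypothetical example)
my-subchart:
  # Before Helm 3.17.1 this null only produced a warning and was not applied;
  # with Helm 3.17.1 it overrides the rendered object. Remove the null to resolve the issue.
  resources: null
```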
The default behavior for tracking resources has changed to use annotation-based tracking instead of label-based tracking. Annotation-based tracking is more reliable and less prone to errors caused by external code copying tracking labels from one resource to another.
To detect if you are impacted, check the `argocd-cm` ConfigMap for the `application.resourceTrackingMethod` field. If it is unset or set to `label`, you are using label-based tracking. If it is set to `annotation`, you are already using annotation-based tracking and are not impacted by this change.
kubectl get cm argocd-cm -n argocd -o jsonpath='{.data.application\.resourceTrackingMethod}'
For most users, it is safe to upgrade to Argo CD 3.0 and use annotation-based tracking. Labels will be replaced with annotations on the next sync. Applications will not be marked as out-of-sync if labels are not present on the resources.
!!! warning "Potential for orphaned resources"
There is a known edge case when switching from label-based tracking to annotation-based tracking that may cause
resources to be orphaned. If the first sync operation after switching to annotation-based tracking includes a
resource being deleted, Argo CD will fail to recognize that the resource is managed by the Application and will not
delete it. To avoid this edge case, it is recommended to perform a sync operation on your Applications, even if
they are not out of sync, so that orphan resource detection will work as expected on the next sync.
Some users rely on label-based tracking to track resources that are not managed by Argo CD. They may set annotations to have Argo CD ignore the resource as extraneous or to disable pruning. If you are using label-based tracking to track resources that are not managed by Argo CD, you will need to construct tracking annotations instead of tracking labels and apply them to the relevant resources. The format of the tracking annotation is:
argocd.argoproj.io/tracking-id: <app name>:<resource group>/<resource kind>:<resource namespace>/<resource name>
For cluster-scoped resources, the namespace is set to the value in the Application's `spec.destination.namespace` field.
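For example, a manually tracked Deployment might carry an annotation like the one in this sketch; the application, namespace, and resource names are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    # <app name>:<resource group>/<resource kind>:<resource namespace>/<resource name>
    argocd.argoproj.io/tracking-id: my-application:apps/Deployment:my-namespace/my-app
```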
!!! warning
Manually constructing and applying tracking labels and annotations is not an officially supported feature, and Argo
CD's behavior may change in the future. It is recommended to manage resources with Argo CD via GitOps.
If you are not ready to use annotation-based tracking, you can opt out of this change by setting the `application.resourceTrackingMethod` field in the `argocd-cm` ConfigMap to `label`. There are no current plans to remove label-based tracking.
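A minimal sketch of that opt-out in `argocd-cm`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Preserve the Argo CD 2.x default of label-based tracking
  application.resourceTrackingMethod: label
```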
When `cluster.inClusterEnabled: "false"` is explicitly configured, Applications currently configured to sync to the in-cluster cluster will now be in an Unknown state, without the possibility to sync resources. It will not be possible to create new Applications using the in-cluster cluster. Deleting an existing Application will not delete its previously managed resources.

When in-cluster is disabled, it is recommended to perform any cleanup or migration of existing in-cluster Applications before upgrading. To perform cleanup post-migration, the in-cluster cluster will need to be enabled temporarily.
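Assuming the setting lives in the `argocd-cm` ConfigMap as in the declarative setup documentation (verify against your own installation), temporarily re-enabling the in-cluster cluster for cleanup is a one-line change:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Temporarily re-enable the in-cluster cluster to clean up or migrate Applications,
  # then set it back to "false" once the migration is complete.
  cluster.inClusterEnabled: "true"
```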
The Argo CD manifest now contains a default configuration for `resource.customizations.ignoreResourceUpdates` in the `argocd-cm` ConfigMap to exclude common resources that are often mutated in Kubernetes. These mutations are known to cause unnecessary load on Argo CD. When a watched resource is modified, Argo CD will always ignore `.status` changes.

The default `resource.customizations.ignoreResourceUpdates` configurations can be overridden or removed in the ConfigMap to preserve the v2 behavior.
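As a sketch, assuming the `jsonPointers`/`jqPathExpressions` customization schema documented under Reconcile Optimization, overriding one of these entries in `argocd-cm` looks roughly like this; the chosen kind and pointer are examples only:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Override the shipped default for a single resource kind
  resource.customizations.ignoreResourceUpdates.Endpoints: |
    jsonPointers:
      - /subsets
```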
The health status of each object used to be persisted under `/status` in the Application CR by default. Any health churn in the resources deployed by the Application put load on the application controller. Now, the health status is stored externally.

You can revert this behavior by setting `controller.resource.health.persist` to `true` in the Argo CD `argocd-cmd-params-cm` ConfigMap.
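A minimal sketch of that revert in `argocd-cmd-params-cm`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Persist per-resource health under .status in the Application CR, as in Argo CD 2.x
  controller.resource.health.persist: "true"
```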
Example of a status field in the Application CR persisting health:
status:
  health:
    status: Healthy
  resources:
    - group: apps
      health:
        status: Healthy
      kind: Deployment
      name: my-app
      namespace: foo
      status: OutOfSync
      version: v1
  sync:
    status: OutOfSync
Example of a status field in the Application CR not persisting health:
status:
  health:
    status: Healthy
  resourceHealthSource: appTree
  resources:
    - group: apps
      kind: Deployment
      name: my-app
      namespace: foo
      status: OutOfSync
      version: v1
  sync:
    status: OutOfSync
- Check the `argocd-cmd-params-cm` ConfigMap for `controller.resource.health.persist`. If the value is empty or `true`, the health status is persisted in the Application CR.
kubectl get cm argocd-cmd-params-cm -n argocd -o jsonpath='{.data.controller\.resource\.health\.persist}'
- Check any Application CR for the `resourceHealthSource` field. If you see a blank value, the health status is persisted in the Application CR.
kubectl get applications.argoproj.io <my app> -n argocd -o jsonpath='{.status.resourceHealthSource}'
Any tools or CLI commands parsing `.status.resources[].health` need to be updated to use the Argo CD CLI/API to get the health status, for example:
argocd app get <my app> -o json
In Argo CD 3.0, empty environment variables are now passed to config management plugins.
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  source:
    plugin:
      name: example-plugin
      env:
        - name: VERSION
          value: '1.2.3'
        - name: DATA # Even though this is empty, it will be passed to the plugin as ARGOCD_ENV_DATA="".
          value: ''
By default, the existing system-level `ignoreDifferences` customizations will be added to ignore resource updates as well. Logically, if differences in a field are configured to be ignored, there is no reason to generate a diff for the application when that field changes.

To disable this behavior and preserve the v2 default, `ignoreDifferencesOnResourceUpdates` can be set to `false`:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.compareoptions: |
    ignoreDifferencesOnResourceUpdates: false
More details on ignored resource updates can be found in the Reconcile Optimization documentation.
By default, the compare option to ignore the status field has been changed from `crd` to `all` resources. This means that differences will no longer be detected for fields that are part of the status.

If you rely on the status field being part of your desired state, the `ignoreResourceStatusField` setting can be used to preserve the v2 default:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.compareoptions: |
    ignoreResourceStatusField: crd
More details on status-field comparison can be found in the Diffing Customization documentation.