Impact
Upon cluster creation, if the administrator does not specify a cluster token, the bootstrap data for the cluster is encrypted with a key derived from the empty string, rather than the randomly-generated cluster token. This means that any user with direct access to the datastore, or a copy of a datastore backup, would be able to extract the cluster's confidential keying material (cluster certificate authority private keys, secrets encryption configuration passphrase, etc) and decrypt it, without having to know the token value.
Note that this confidential material is present on the disks of server (control-plane) nodes and accessible to anyone with root access; the additional exposure path is via a user with read access to the datastore or datastore backups.
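The core of the exposure is that PBKDF2 is deterministic: a key derived from the empty string can be recomputed by anyone, since no secret input is involved. The sketch below illustrates this with Python's standard-library PBKDF2; the salt and iteration count are illustrative assumptions, not K3s's actual parameters.

```python
import hashlib

# Hypothetical parameters for illustration only; K3s's real salt and
# iteration count may differ.
SALT = b"example-salt"
ITERATIONS = 4096

def derive_key(passphrase: str) -> bytes:
    """Derive a 32-byte key from a passphrase via PBKDF2 (RFC 2898)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), SALT,
                               ITERATIONS, dklen=32)

# With no token configured, the bootstrap key is derived from the empty
# string, so an attacker with datastore access can recompute it exactly.
attacker_key = derive_key("")
cluster_key = derive_key("")
assert attacker_key == cluster_key

# A randomly generated token yields a key the attacker cannot recompute
# without knowing the token value.
assert derive_key("s3cret-token") != attacker_key
```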
Am I affected?
This bug presents itself under the following circumstances:
- Server (control-plane) nodes were not initially started with a token provided via the --token CLI flag or config file key.
Remediation
Upgrade K3s on all server nodes. On startup, updated versions of K3s will automatically re-encrypt the bootstrap data using the cluster token, and delete the copy encrypted with the incorrect key.
Due to this change, it is no longer possible to add additional K3s servers to the cluster without specifying the token on the CLI or in the config file. Previously, K3s did not enforce the use of a token when using external etcd or SQL datastores. Other datastore types (standalone, or using embedded etcd) do not require any action, as they always required a token when joining.
Administrators may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/k3s/server/token
⚠️ Note ⚠️
If servers are in an auto-scaling group, ensure that the server image is updated to include the token value before upgrading. If existing nodes are upgraded and then subsequently deleted prior to an administrator retrieving the randomly-generated token, there will be no nodes left from which to recover the token.
Mitigation
If it is not possible to upgrade K3s, administrators may rebuild the cluster from scratch on a new datastore, ensuring that a token is passed during initial cluster creation.
Background
K3s encrypts the cluster bootstrap data (cluster CA keys, secrets encryption configuration, etc) at rest within the datastore, using AES in GCM mode with a 32-byte key derived from the cluster token using PBKDF2, as defined in RFC 2898. In addition to having valid credentials to access the datastore, servers must also know the cluster token in order to decrypt the bootstrap data and successfully join the cluster.
In affected versions, the raw token flag value from the CLI or configuration file was passed to PBKDF2, rather than the passphrase portion of the final cluster token (either specified by the user, or randomly generated on initial cluster startup). Fixed versions of K3s now detect this condition, properly encrypt the data at all times, and correct datastores previously affected by this issue.
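The distinction between the raw flag value and the passphrase portion of the final token can be sketched as follows. The token format and PBKDF2 parameters here are illustrative assumptions rather than K3s's exact values; a full secure token is assumed to look roughly like K10<ca-hash>::<username>:<password>, with the password acting as the passphrase.

```python
import hashlib

def passphrase_from_token(token: str) -> str:
    # Assumed token shape for illustration: "K10<ca-hash>::<user>:<password>".
    # The password is the passphrase portion used for key derivation.
    if token.startswith("K10"):
        return token.rsplit(":", 1)[-1]
    return token  # a short token is used as the passphrase directly

def derive_key(passphrase: str) -> bytes:
    # Hypothetical salt/iteration parameters, not K3s's real ones.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                               b"example-salt", 4096, dklen=32)

token = "K10abc123::server:supersecret"

# Fixed behavior: derive the key from the passphrase portion of the
# final cluster token, whether user-specified or randomly generated.
fixed_key = derive_key(passphrase_from_token(token))

# Buggy behavior: the raw --token flag value was passed in. With no
# flag set, that value is "", regardless of the generated token.
buggy_key = derive_key("")

assert fixed_key != buggy_key
```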