
EAR Support #516

Open · wants to merge 14 commits into main
Conversation


@fitzthum fitzthum commented Sep 27, 2024

This is the very beginning of this PR. I still need to fix some issues with the underlying crate, add support for verifying tokens in the KBS, and fixup all the tests, examples, and default values.

See commit messages for more info.

Fixes: #353

Progress so far:

  • Miscellaneous Fixups
  • Implement EAR token generation in the AS
  • Implement EAR verification in KBS
  • Update docker compose
  • Update k8s (hard to test fully without building the images, but should be smooth when we do)
  • Fix integration tests
  • Fixup RVPS
  • Fixup policy_ids
  • Improve default policy to validate the TCB of all platforms (well, for a couple of platforms; the rest will hopefully be added by others)
  • Update docs and examples

Since we reference this error enum from mod.rs, it should not
be rego-specific. The error variants are not specific to OPA,
so lift them into mod.rs.

Now, someone writing an alternative policy engine
can use the same errors.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
@fitzthum fitzthum requested a review from a team as a code owner September 27, 2024 23:08
@fitzthum fitzthum marked this pull request as draft September 27, 2024 23:09
This commit allows the AS to issue EAR tokens with the
help of the rust-ear crate.

EAR tokens require particular claims. This creates a binding
between the AS policy and the EAR token.
Specifically, the policy engine must return an EAR appraisal.
The policy engine is still generic. Multiple policy engines
could be implemented as long as they create an appraisal.

Token generation is no longer generic.
Since the policy engine will always return an appraisal,
we must generate an EAR token.
This commit removes the simple token issuer
and replaces the TokenProvider trait with a struct.

The KBS will still be able to validate many different tokens,
but this commit changes the AS to only issue EAR tokens.

There are a few other changes, including that the policy engine
no longer takes multiple policies. For now, we only evaluate
the first policy in the policy list, but future commits will
change this convention so that we only ever think about one
policy for the attestation service (until we introduce support
for validating multiple devices at once).

This commit also removes the flattening of the tcb claims.
With the EAR tokens, we store the TEE pubkey in the tcb claims.
If these claims are flattened, we will need to do some extra
work to deserialize the key.

The TCB claims are currently flattened so that we can use the
key names as the input to the RVPS. This commit breaks this
functionality, but a future commit will change the way
the RVPS works to accommodate this. There isn't a direct pairing
between claim names and reference values, so there is no
reason to keep flattening all the claims, especially
because the flattening code has some corner cases
that it does not support.
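To illustrate the flattening this commit removes, here is a minimal, self-contained sketch; the Claim enum is a hypothetical stand-in for the real JSON value type:

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for a JSON claim value (the real code works
// with serde_json-style values).
enum Claim {
    Str(String),
    Map(BTreeMap<String, Claim>),
}

// Flattening turns nested claims into dotted keys ("snp.measurement").
// Structured values like the TEE pubkey would also get shredded into
// flat keys, which is why this commit stops flattening.
fn flatten(prefix: &str, claim: &Claim, out: &mut BTreeMap<String, String>) {
    match claim {
        Claim::Str(s) => {
            out.insert(prefix.to_string(), s.clone());
        }
        Claim::Map(m) => {
            for (k, v) in m {
                let key = if prefix.is_empty() {
                    k.clone()
                } else {
                    format!("{prefix}.{k}")
                };
                flatten(&key, v, out);
            }
        }
    }
}

fn main() {
    let mut inner = BTreeMap::new();
    inner.insert("measurement".to_string(), Claim::Str("abc".to_string()));
    let mut top = BTreeMap::new();
    top.insert("snp".to_string(), Claim::Map(inner));

    let mut out = BTreeMap::new();
    flatten("", &Claim::Map(top), &mut out);
    assert_eq!(out.get("snp.measurement"), Some(&"abc".to_string()));
}
```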

This commit also adds the init_data_claims and
runtime_data_claims to the tcb claims as long as
the corresponding claims about the hashes are already there.
This will allow the init_data to travel with the token,
which will be convenient except if the init_data is too big.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
For EAR tokens we require the public key to be set.
There is no option to deserialize a token without
validating the signature.

The EAR verifier returns the submods as JSON.
This means that some information, such as the verifier id
is not propagated.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
A keypair is required to sign and validate the attestation token.
In the past this was optional, but now it is not.

Update the docker-compose manifest and configs to pass in this
new keypair and update the docs to tell people how to generate it.

This does complicate the user experience, but things are not secure
without it. That said, we may be able to implement this automatically
in a future PR.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Now we need to provision a keypair for signing and
validating the attestation tokens.

Add this keypair to the docker e2e test

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
The sample attester is enabled by default.
Remove setting the environment variable that used to
enable it.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
We now require a keypair for signing/validating the attestation
token. Add this keypair to our k8s deployment tooling.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
We now require a keypair to sign/validate the attestation
token. Add this keypair to the e2e test.

Interestingly, we were using a keypair for validating
the old CoCo token in this test, but only for the
passport mode. Even in background check mode, this keypair
is required or the token won't be validated at all.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Previously we expected the caller of the RVPS to provide
a name for the reference value that they wanted.
In the AS we were flattening the TCB claims to get this name.
Ultimately, the names of the TCB claims do not map directly onto
the names of the required reference values.

This changes the interface to have the RVPS determine which
reference values to send. At the moment, it simply sends all of them.

This allows the reference values that are used to mostly be set within
the policy itself, which is probably a good idea.

In the future, the RVPS should be improved to include a context
abstraction that allows groups of reference values to be provided to the
AS.
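A minimal sketch of the interface change; the Rvps struct and its string-valued store are hypothetical simplifications of the real service:

```rust
use std::collections::HashMap;

// Hypothetical, simplified RVPS holding string reference values.
struct Rvps {
    store: HashMap<String, String>,
}

impl Rvps {
    // Before: the caller (the AS) named the reference value it wanted,
    // which forced the AS to flatten TCB claims to produce names.
    fn get_reference_value(&self, name: &str) -> Option<&String> {
        self.store.get(name)
    }

    // After: the RVPS decides which values to send. For now it simply
    // sends all of them, and the policy selects what it actually uses.
    fn get_reference_values(&self) -> &HashMap<String, String> {
        &self.store
    }
}

fn main() {
    let mut store = HashMap::new();
    store.insert("svn".to_string(), "3".to_string());
    let rvps = Rvps { store };

    assert_eq!(rvps.get_reference_value("svn"), Some(&"3".to_string()));
    assert!(rvps.get_reference_value("missing").is_none());
    assert_eq!(rvps.get_reference_values().len(), 1);
}
```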

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
When generating EAR tokens, it seems best to only use one policy
at a time (per-submod).

In the commit that introduces EAR token generation in the AS,
we simply ignore all policies in the policy_ids list except
the first one.

Here, we change the interface so that only one policy can be
provided in an attestation request.

The KBS always sets one policy ("default"), anyway.
In the future, we should figure out how to set this policy id
more dynamically.
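A hypothetical before/after of the request shape (the type and field names here are assumed for illustration, not taken from the PR):

```rust
// Hypothetical request types illustrating the interface change:
// the AS used to accept a list of policy ids, of which only the
// first was evaluated; now a request carries exactly one policy id.
struct RequestBefore {
    evidence: Vec<u8>,
    policy_ids: Vec<String>,
}

struct RequestAfter {
    evidence: Vec<u8>,
    policy_id: String,
}

fn main() {
    let before = RequestBefore {
        evidence: vec![],
        policy_ids: vec!["default".to_string(), "ignored".to_string()],
    };

    // Only the first policy id was ever used, so the list collapses
    // to a single id ("default" unless the caller overrides it).
    let after = RequestAfter {
        evidence: before.evidence,
        policy_id: before.policy_ids[0].clone(),
    };
    assert_eq!(after.policy_id, "default");
    assert!(after.evidence.is_empty());
}
```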

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
@fitzthum
fitzthum commented Oct 4, 2024

This is ready for review. I tried to make this as simple as possible, but there are a lot of interlocking pieces. Please read the commit messages; they will hopefully explain what is happening and highlight a couple of significant changes (like not flattening the tcb claims anymore, not supporting multiple policies, etc.).

We still have some issues to resolve in the underlying crate before we can merge.

  • Appraisal is not Send veraison/rust-ear#19 - without this we can't create extensions, which we aren't using yet anyway. We could probably go ahead without a fix, but we would have to pin the crate to a hash from right before extensions were added or it won't compile (and we will need more commits on top of this for the other issues).
  • JWT has no expiration field veraison/rust-ear#20 - without this the tokens never expire. Interestingly, guest-components is fine with this. The code works, but it's a security problem since the KBS can't verify that the token hasn't expired.
  • RawValue has no variant for null veraison/rust-ear#21 - without this, JSON deserialization of some values into EAR RawValues causes everything to explode. This may even be triggered by the TDX or SGX claims, which would be bad.

@fitzthum fitzthum marked this pull request as ready for review October 4, 2024 22:33
The skeleton for a policy that can be used to validate the TCB
claims of all platforms in the context of confidential
containers.

Only sample and snp are supported currently, but this should give
a good idea of how to extend the policy to other platforms.

There are a few tweaks we can make later, such as supporting
`>` or `<` comparisons.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Update the attestation service policy docs to describe the
requirements for policies that will generate EAR tokens.

Also update various example and default policies.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
@Xynnn007 Xynnn007 left a comment


Thanks Tobin. Nice work on the code, tests, and documents. I am excited to see the design that leaves the trusted anchor calculation to users.

I have not dived into the code too deeply; this is just a quick take on the architecture design.

This PR abandons the former AS token format. Maybe we could still keep the token plugins. The difficulty is probably where OPA should live. My suggestion is to make OPA a common lib for both the legacy AS token and EAR.

In this way, the token type could be configured in the launch toml of AS. Such as

[token]
type = "ear"
# other configs for ear

or

[token]
type = "simple"
# other configs for simple

This can be implemented by

#[serde(tag = "type", rename_all = "lowercase")]
pub enum AttestationTokenConfig {
    Ear(EarConfig),

    Simple(SimpleConfig),
}

pub type AttestationTokenBroker = Arc<dyn AttestationTokenBrokerTrait + Send + Sync>;

impl TryFrom<AttestationTokenConfig> for AttestationTokenBroker {
    type Error = Error;

    fn try_from(value: AttestationTokenConfig) -> Result<Self, Self::Error> {
        match value {
            AttestationTokenConfig::Ear(config) => {
                let ear_broker = Arc::new(EarBroker::new(config)?);
                Ok(ear_broker as _)
            }
            AttestationTokenConfig::Simple(config) => {
                let simple_broker = Arc::new(SimpleBroker::new(config)?);
                Ok(simple_broker as _)
            }
        }
    }
}
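To make the suggestion concrete, here is a self-contained, runnable sketch of that dispatch; EarConfig, SimpleConfig, and the broker types are hypothetical stand-ins, and serde parsing is omitted:

```rust
use std::sync::Arc;

// Hypothetical stand-ins for the real config and broker types.
struct EarConfig { duration_min: u64 }
struct SimpleConfig { duration_min: u64 }

enum AttestationTokenConfig {
    Ear(EarConfig),
    Simple(SimpleConfig),
}

trait AttestationTokenBrokerTrait {
    fn issue(&self, claims: &str) -> String;
}

struct EarBroker { config: EarConfig }
struct SimpleBroker { config: SimpleConfig }

impl AttestationTokenBrokerTrait for EarBroker {
    fn issue(&self, claims: &str) -> String {
        format!("ear-token({claims}, ttl={}m)", self.config.duration_min)
    }
}

impl AttestationTokenBrokerTrait for SimpleBroker {
    fn issue(&self, claims: &str) -> String {
        format!("simple-token({claims}, ttl={}m)", self.config.duration_min)
    }
}

// The tagged config selects a broker implementation at startup;
// the rest of the AS only sees the trait object.
fn broker_from(
    config: AttestationTokenConfig,
) -> Arc<dyn AttestationTokenBrokerTrait + Send + Sync> {
    match config {
        AttestationTokenConfig::Ear(c) => Arc::new(EarBroker { config: c }),
        AttestationTokenConfig::Simple(c) => Arc::new(SimpleBroker { config: c }),
    }
}

fn main() {
    let broker = broker_from(AttestationTokenConfig::Ear(EarConfig { duration_min: 5 }));
    assert_eq!(broker.issue("tcb"), "ear-token(tcb, ttl=5m)");
}
```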

This would give enough flexibility to downstream users to define their own token formats, although they might not be as standard as EAR. But for CoCo, we can support EAR by default.

@@ -26,6 +26,8 @@ cfg-if = "1.0.0"
chrono = "0.4.19"
clap = { version = "4", features = ["derive"] }
config = "0.13.3"
#ear = "0.2.0"
Member


Is this a leftover?

Member Author


This is because the latest commits to rust-ear broke the crate (Appraisal isn't Send anymore). Before we merge this, the crate will need to be fixed and I will fix the version.

.policy_engine
.evaluate(reference_data_map.clone(), tcb_json, policy_ids.clone())
.evaluate(reference_data_map.clone(), tcb_claims, policy_id.clone())
Member


The reference_data_map and policy_id seem to be consumed here, so maybe we do not need the .clone() here.

@@ -295,7 +258,8 @@ fn parse_data(
) -> Result<(Option<Vec<u8>>, Value)> {
match data {
Some(value) => match value {
Data::Raw(raw) => Ok((Some(raw), Value::Null)),
// Ear RawValue does not support NULL, so use an empty string
Data::Raw(raw) => Ok((Some(raw), Value::String("".to_string()))),
Member


Would this cause ambiguity? The user might not give RuntimeData/InitData input to the AS (null), or the data might be an empty string (""). In the former case, the verifier will not check the binding of the data against the value inside the evidence, while in the latter case, the verifier will calculate the digest of "" and compare it with the claim inside the TEE evidence.

When we see a "" value in the EAR, how can we distinguish non-given runtime data from "" runtime data?
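The collision can be shown in a few lines (a schematic of the mapping, not the actual parse_data code):

```rust
// Schematic of the problem: mapping `None` to "" destroys the
// distinction between "no runtime data supplied" and "runtime data
// that happens to be the empty string".
fn to_ear_raw_value(data: Option<&str>) -> String {
    match data {
        Some(s) => s.to_string(),
        // EAR RawValue has no null variant, so null becomes "".
        None => String::new(),
    }
}

fn main() {
    // Both cases produce the same encoded value, so a consumer of the
    // token cannot tell them apart.
    assert_eq!(to_ear_raw_value(None), to_ear_raw_value(Some("")));
    assert_ne!(to_ear_raw_value(None), to_ear_raw_value(Some("abc")));
}
```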

Member Author


This is a bug in the rust-ear crate, imo, that will hopefully be fixed. I made an issue about it there: veraison/rust-ear#21

@jialez0
Member

jialez0 commented Oct 8, 2024

This PR abandons the former AS token format. Maybe we could leave the token plugins still. [...] This would give enough flexibility to downstream users to define their own token formats. But for CoCo, we can support EAR by default.

I concur that the format for generating tokens should be modular. We can default to supporting EAR, but simultaneously, retain the previous simple AS token mode as a plugin option. This approach will provide ample room for customization to the users of this project.

@fitzthum

fitzthum commented Oct 10, 2024

@Xynnn007 @jialez0 You are both right that this PR makes our token provisioning less generic. In fact, it's no longer generic at all. There are some reasons for this.

First, with EAR tokens the policy is fundamentally tied to the token. Our CoCo token supports just about any policy claims, but with EAR we need to generate an Appraisal. This means that 1) the policy is specific to the type of token we use, and 2) a bunch of the code surrounding the policy is specific to the type of token we are using.

The first thing isn't a huge problem. It's not a great user experience to require totally different policies for different tokens, but at least that is configurable at runtime. The second thing is a bit more of a problem. There is code in both the policy engine and lib.rs that is specific to the type of policy that we are using. For instance, the policy engine parses the claims differently, and lib.rs composes the token differently. Supporting generic tokens would require changing some abstractions and adding complexity.

One option would be to move most of the logic into the token generation code. The policy engine would always process all the claims and return the full results. The results would be provided as-is to the token broker. The token broker would also get the TCB claims and any other pieces of the token. I don't really like broadening the scope of the token broker like this. If we want to keep the token broker abstraction the same, we would probably have to add a bunch of conditional logic to the policy engine and lib.rs, which also doesn't seem like a great idea.

It's possible to keep both tokens, but there would be costs. Keep in mind that we would need to create more tests, more examples, extend the docs, etc. Besides the costs, I actually think there are advantages to supporting fewer things. So far we have tried to make Trustee as generic as possible. In general I think this is good, but currently I think we are in danger of giving users too many ways to configure things without showing them any particular way that actually works end-to-end. I think part of the reason why we have some gaps with policies, the RVPS, etc is that we haven't been prescriptive enough. I think EAR tokens are a good fit because they are supposed to be generic, but they are more opinionated.

Anyway, this change wasn't an accident, but I understand that it is kind of scary. I am open to trying to support both, but my question for you is what is the value of keeping the CoCo token?

@fitzthum

Another way we could potentially compromise is by keeping the CoCo token but populating the policy results with an EAR-like Appraisal. I'm not sure if this really makes sense though.

@Xynnn007

Hi @fitzthum

Anyway, this change wasn't an accident, but I understand that it is kind of scary. I am open to trying to support both, but my question for you is what is the value of keeping the CoCo token?

Actually, we are trying to define our own token format in downstream pre-production, which benefits from the original plugin design.

Let's go back to your worries. The core issues are 1. token generation code and 2. user interface.

For 1,
I tried to look into the code and the design of the current policy module. Some ideas:

  • Change the API of the policy engine so that it takes a policy id, data, and input as parameters, just as OPA defines. We could have only regorus as a backend and abandon the extensibility of the policy engine (I have a strong hunch we need to do this because so far there is no indication that we will have a strategic replacement other than rego).
  • Change the AttestationTokenBroker's issue API to receive the parameters reference_data_map, tcb_claims, and policy_id. Then the module itself decides how to interact with regorus.
  • To help the KBS recognize the concrete token type, we could add another return value to the AS's evaluate function.

For 2, the current config of the AS looks like
{
    "work_dir": "/opt/confidential-containers/attestation-service",
    "policy_engine": "opa",
    "rvps_config": {
        "remote_addr": "http://rvps:50003"
    },
    "attestation_token_broker": "Simple",
    "attestation_token_config": {
        "duration_min": 5
    }
}

After some change, it would be

{
    "work_dir": "/opt/confidential-containers/attestation-service",
    "policy_engine": "opa",
    "rvps_config": {
        "remote_addr": "http://rvps:50003"
    },
    "attestation_token_broker": {
        "type": "XXX",
        ... // Other options
    },
    "attestation_token_config": {
        "duration_min": 5
    }
}

We could set the default to EAR, together with all the other e2e examples. Although we provide "too" many ways to run Trustee, I believe most users will directly use the default configuration from our e2e tests (probably the EAR or Simple token), which we need to ensure runs normally. Some advanced users might want to do extensions; that is where extensibility has value.

@fitzthum

Actually, we are trying to define our own token format in downstream pre-production, which benefits from the original plugin design.

Ok. This is a valid consideration. Ideally we could converge on one token, but I'm not sure what your requirements are and they might not be public. Keep in mind that the EAR token is in some ways very similar to our CoCo token. For instance, they are both JWTs. The EAR token stores a public key in a similar way, but at a different path. Consumers can somewhat ignore parsing the AR4SI Trust Claims by just looking at the status field of the Appraisal.

We could have only regorus as a backend and abandon the extensibility of the policy engine

I think we should try to keep the policy engine modular. My opinion of rego/opa is declining and I would love to provide another option (as long as we don't invent it ourselves).

Btw, this PR also switches from evaluating multiple policies with the AS, to only taking one policy id. If we keep support for the CoCo token, can it just take one policy or does it need to do multiple?

@Xynnn007

Btw, this PR also switches from evaluating multiple policies with the AS, to only taking one policy id. If we keep support for the CoCo token, can it just take one policy or does it need to do multiple?

One policy is ok for now.

If we want to support multiple types of tokens,
we'll need to decouple the claims that are evaluated in the policy
from the policy engine.

This allows the token broker to specify a set of rules that
the policy engine will evaluate. Then, the token broker
will get the unprocessed output from the policy engine
and do what it needs to.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
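A rough sketch of that decoupling; the trait and method names are hypothetical, not the actual interfaces in this PR:

```rust
// Hypothetical sketch of the decoupling described above: the token
// broker supplies the rules to evaluate and consumes the raw,
// unprocessed policy output.
trait TokenBroker {
    // Rules (e.g. rego rule names) this token format needs evaluated.
    fn policy_rules(&self) -> Vec<String>;
    // Build the token from the unprocessed policy-engine output.
    fn issue(&self, raw_policy_output: &str) -> String;
}

struct EarBroker;

impl TokenBroker for EarBroker {
    fn policy_rules(&self) -> Vec<String> {
        // "executables" and "hardware" are illustrative rule names.
        vec!["executables".to_string(), "hardware".to_string()]
    }
    fn issue(&self, raw_policy_output: &str) -> String {
        format!("ear({raw_policy_output})")
    }
}

fn main() {
    let broker = EarBroker;
    let rules = broker.policy_rules();
    // The policy engine would evaluate `rules` and hand back raw results.
    let raw = format!("{}=ok", rules.join(","));
    assert_eq!(broker.issue(&raw), "ear(executables,hardware=ok)");
}
```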
Add back the support for the EAR token. This commit is still rough.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
@fitzthum

ok @Xynnn007 I have added two more commits which demonstrate how we could support both EAR and the simple tokens. IMO these make the code significantly worse. The commits are a bit rough. The tests won't pass, but it should give a pretty good idea of what is required. I will clean it up if we decide to go this way. PTAL and see what you think. Also consider if there is any way we can get rid of the simple token requirement.

Successfully merging this pull request may close these issues.

Adopt EAR in Attestation Service
3 participants