Use of FAPI with mandatory MTLS

Issue #670 resolved
Joseph Heenan created an issue

We are continuing to see ecosystems that adopt FAPI but also mandate the use of MTLS on all endpoints.

Certainly in FAPI1 we have seen this with ConnectID, Brazil, UK and CDR - i.e. all the ecosystems we have certification tests for, but we also have indications that other ecosystems might adopt similar approaches.

If MTLS sender-constrained access tokens are used (which is the only option in FAPI1 and I think still a common choice in FAPI2), this potentially only affects a few endpoints (see the metadata sketch after this list):

  1. Discovery
  2. Server JWKS
  3. PAR
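
For illustration, a rough sketch (all URLs hypothetical) of the server metadata this tends to imply: with MTLS-bound access tokens only the endpoints that actually consume the client certificate need an entry under mtls_endpoint_aliases, which leaves discovery, the JWKS URI and (absent an ecosystem mandate) PAR reachable without a client certificate.

```python
# Hypothetical AS metadata when access tokens are MTLS sender constrained.
# Only endpoints that consume the TLS client certificate need an alias;
# discovery, the JWKS URI and (by default) PAR sit outside of that.
server_metadata = {
    "issuer": "https://as.example.com",
    "jwks_uri": "https://as.example.com/jwks",
    "authorization_endpoint": "https://as.example.com/authorize",
    "pushed_authorization_request_endpoint": "https://as.example.com/par",
    "token_endpoint": "https://as.example.com/token",
    "mtls_endpoint_aliases": {
        "token_endpoint": "https://mtls.as.example.com/token",
        # An ecosystem mandating MTLS everywhere would typically also expose
        # (and require) an aliased PAR endpoint here:
        # "pushed_authorization_request_endpoint": "https://mtls.as.example.com/par",
    },
}
```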

Several ecosystems mandate the use of MTLS even when using private_key_jwt. I think it is a ‘Defence in depth’ type approach, with many security teams having a preference for enforcing MTLS at the perimeter but traffic within the perimeter still having some protection (i.e. the client authentication assertion).
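
As a rough sketch of that arrangement (hypothetical client id, URLs and key file names; assuming the requests and PyJWT libraries): the client authenticates at the OAuth layer with a private_key_jwt assertion while also presenting a transport certificate at the TLS layer, which is what allows the perimeter to enforce MTLS on every endpoint.

```python
import time
import uuid

import jwt        # PyJWT
import requests

PAR_ENDPOINT = "https://mtls.as.example.com/par"  # hypothetical

# OAuth-layer protection: client authentication via a private_key_jwt assertion.
client_assertion = jwt.encode(
    {
        "iss": "my-client-id",
        "sub": "my-client-id",
        "aud": "https://as.example.com",  # acceptable aud values depend on the AS
        "jti": str(uuid.uuid4()),
        "exp": int(time.time()) + 60,
    },
    open("client-signing-key.pem").read(),
    algorithm="PS256",
)

# Transport-layer protection: the same request also presents a TLS client
# certificate, so a perimeter that mandates MTLS everywhere is satisfied.
response = requests.post(
    PAR_ENDPOINT,
    data={
        "client_id": "my-client-id",
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": client_assertion,
        "response_type": "code",
        "redirect_uri": "https://client.example.com/cb",
        "scope": "openid",
        # ...plus PKCE and the remaining authorization request parameters.
    },
    cert=("transport-cert.pem", "transport-key.pem"),
)
request_uri = response.json()["request_uri"]
```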

The fact that all ecosystems and the main spec diverge in this area is one of the reasons we end up having to create ecosystem specific certification test variants. I think we should discuss it in the WG and consider whether this kind of setup should be included as an option in the specs & certification tests.

(We did sort of discuss this a little bit here: https://bitbucket.org/openid/fapi/issues/493/certification-query-supply-of-tls-client )

Comments (21)

  1. Ralph Bragg

    I would really welcome a discussion on this point. We are seeing this time and time again with PAR especially.

  2. Filip Skokan

    We are seeing this time and time again with PAR especially.

    If only ecosystems stopped laying down landmines in their own fields. If they require the use of mtls everywhere then they should also prohibit the use of AS mtls_endpoint_aliases.

  3. Filip Skokan

    To my previous point, FAPI 2.0 Security Profile already has a section for this kind of setup, one that accounts for endpoints that may end up in a state where the client has no reason to select an mtls endpoint alias but one may be provided on the http client layer.

    … however in the interests of interoperability this document states that when using TLS as a transport level protection in this manner, authorization servers should expect clients to call the endpoints located in the root of the server metadata, and not those found in mtls_endpoint_aliases.
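
    A minimal sketch of what that guidance implies for a client (the helper below is mine, not from the spec): fall back to an mtls_endpoint_aliases entry only when the client actually uses MTLS at that endpoint, and otherwise call the endpoint from the root of the server metadata.

    ```python
    def select_endpoint(server_metadata: dict, name: str, uses_mtls_here: bool) -> str:
        """Pick which URL a client should call for a given endpoint.

        Per the FAPI 2.0 language quoted above, a client with no reason to use
        MTLS at an endpoint (no MTLS client auth, no certificate to present for
        token binding there) should use the endpoint from the root of the
        metadata rather than the mtls_endpoint_aliases entry.
        """
        if uses_mtls_here:
            aliases = server_metadata.get("mtls_endpoint_aliases", {})
            return aliases.get(name, server_metadata[name])
        return server_metadata[name]

    # e.g. a private_key_jwt client pushing a PAR request:
    # select_endpoint(metadata, "pushed_authorization_request_endpoint", False)
    # -> the root PAR endpoint, even if an alias is advertised.
    ```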

    @Joseph Heenan Are you saying that ecosystem specific plans exist solely because of this MTLS-everywhere (but not really everywhere [or even discovery, jwks]?) requirement?

  4. Joseph Heenan reporter

    @Joseph Heenan Are you saying that ecosystem specific plans exist solely because of this MTLS-everywhere (but not really everywhere [or even discovery, jwks]?) requirement?

    It is not the only reason we currently have profiles, but the WG has been trying to solve some/most of the other reasons with, e.g., the grant management spec replacing custom pre-lodged intent / consent management APIs.

  5. Filip Skokan

    FYI today’s conformance suite behaviour allows non-profiled FAPI 2.0 PAR requests with private_key_jwt auth to target either of the exposed endpoints (mtls and non-mtls). That in itself creates a scenario in which certified AS/client may not be able to interoperate if the AS’s mtls endpoints are set up to reject requests that have no reason to use an alias but the client wrongly chooses to use an aliased PAR endpoint.
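
    Roughly, that failure mode looks like the sketch below (hypothetical URL; assuming the requests library): a private_key_jwt-only client has no certificate to present at the TLS layer, so targeting the aliased PAR endpoint fails either during the handshake or with an HTTP-level rejection, depending on the deployment.

    ```python
    import requests

    # Hypothetical aliased PAR endpoint whose TLS vhost expects a client certificate.
    ALIASED_PAR = "https://mtls.as.example.com/par"

    try:
        # A client authenticating with private_key_jwt (and not sender constraining
        # via MTLS) posts here without a certificate; the request is rejected
        # before the PAR payload is ever evaluated by the AS.
        response = requests.post(ALIASED_PAR, data={"client_id": "my-client-id"})
        response.raise_for_status()
    except requests.exceptions.RequestException as exc:
        print("PAR rejected despite both sides being certified:", exc)
    ```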

  6. Joseph Heenan reporter

    FYI today’s conformance suite behaviour allows non-profiled FAPI 2.0 PAR requests with private_key_jwt auth to target either of the exposed endpoints (mtls and non-mtls). That in itself creates a scenario in which certified AS/client may not be able to interoperate if the AS’s mtls endpoints are set up to reject requests that have no reason to use an alias but the client wrongly chooses to use an aliased PAR endpoint.

    Thanks! Tracking that as a bug here:

    https://gitlab.com/openid/conformance-suite/-/issues/1299

  7. Joseph Heenan reporter

    So with the existing language in place, what do you believe is missing?

    I think the minimum I am looking for is for the working group to make a decision on whether, if an MTLS certificate is included in the test configuration, it is acceptable for the conformance suite to just try to present it at all endpoints even when testing ‘generic FAPI’.

    If it isn’t, then a decision on whether we should create an “mtls always” (better names gratefully accepted) certification profile or not.

    What if anything we mention in the spec probably depends on the answers to those questions.

  8. Filip Skokan

    is it acceptable for the conformance suite to just try to present it at all endpoints even when testing ‘generic FAPI’.

    It is not, as evidenced by the PAR endpoint interactions when configured with an mtls cert just for sender constraining, not client auth.

    If it isn’t, then a decision on whether we should create an “mtls always” (better names gratefully accepted) certification profile or not.

    What, if anything, would such a profile achieve for the ecosystems? Such a configuration/expectation is already part of the ecosystem-specific profiles that, at least for now, have other reasons to exist. Whether this is refactored in the suite to be a true “variant” (in the sense the conformance software has), with a preconfigured value (false) for the existing non-ecosystem profiles, is in my opinion at the certification team’s own discretion.

    I would recommend not expanding the non-ecosystem profile list with such an option. Between SP and MS there are already 16; adding another variant would make that 32 (and I’m only counting MS with both JARM and JAR, not on/off combinations thereof, and that’s without any support for HTTP Message Signatures 🤕).

  9. Joseph Heenan reporter

    What, if anything, would such a profile achieve for the ecosystems?

    The certification team have had a clear direction from various parts of the OpenID Foundation, including the FAPI working group, that certification tests should not be ecosystem specific if at all possible, as creating versions of the tests for every ecosystem that adopts FAPI is not scalable.

    If the FAPI working group are not going to do something about this ticket, there is no way I can achieve that goal.

  10. Filip Skokan

    The certification team have had a clear direction from various parts of the OpenID Foundation, including the FAPI working group, that certification tests should not be ecosystem specific if at all possible, as creating versions of the tests for every ecosystem that adopts FAPI is not scalable.

    But how does this particular issue translate to non-scalable effort on the certification software? Seeing how a “require mtls everywhere” condition is already in the certification code, having this in FAPI directly would merely change the source of that condition’s value from either the name of the plan or the fapi_profile variant (which is already present and possibly controls this behaviour?) to just another variant.

    I do understand that custom ecosystem-specific behaviours, endpoints and prerequisites are to be avoided, but this doesn’t feel like the one that’s causing effort in the software’s development. I am not trying to be obtuse here, so please go easy on me.

  11. Joseph Heenan reporter

    The software development effort to create a new variant for each new ecosystem that simply requires mtls everywhere is relatively low, although also non-zero.

    The existence of a variant for each and every ecosystem causes a ton of work elsewhere, with multiple tables in the certification page, the implication that vendors feel the need to certify for each and every profile, etc.

  12. Filip Skokan

    Sharing my notes from today’s meeting

    • Adding a general “Plain FAPI 2.0 with MTLS Everywhere on top” option to the fapi_profile variant selector is a good idea, given that for some ecosystems this option would be their baseline
    • More guidance is needed in the MTLS Protection of all endpoints section which will need to be discussed over a concrete PR which I’ll open with my suggestions (edit: https://bitbucket.org/openid/fapi/pull-requests/458)
    • Doing any of the above does not actually solve any short-term issues but should provide a better starting point for emerging ecosystems to take off from.

  13. Dima Postnikov

    I’ve tried to summarise the use of MTLS with the FAPI profile here:

    1. MTLS everywhere / MTLS and private_key_jwt

    MTLS is used by existing FAPI ecosystems for 3 different reasons:

    • Client authentication
    • Token binding
    • Additional ecosystem access control

    Most of the existing ecosystems (CDR, Brazil, ConnectID and some banks in the UK) use both MTLS (token binding and additional ecosystem access control) and private_key_jwt (client authentication). Most ecosystems require participants to access services using specific certificates, i.e. certificates that are issued and bound to the client, or that can be validated as bound to the organisation to which the client belongs (Brazil, UK eIDAS).

    Recommendations: 

    1. OIDF specs and conformance testing need to support MTLS + private_key_jwt as a first-class citizen
    2. The specs should provide clearer guidance for future ecosystems on when this is a good idea and when it is not, and this guidance should be backed by formal security analysis.

    2. MTLS everywhere / including PAR

    The PAR specification doesn’t require the use of MTLS (for client authentication). It’s silent on token binding and ecosystem control.

    Most of the existing ecosystems (CDR, Brazil, ConnectID) chose to require MTLS for the PAR endpoint for transport security and consistency reasons: all sensitive data is always transferred over an MTLS channel.

    Recommendation: OIDF specs and conformance testing need to support MTLS + private_key_jwt for PAR endpoint invocation. @Filip Skokan says it’s already in place.

    3. MTLS and TLS discovery

    FAPI WG and its standards always aim to facilitate the re-use of infrastructure for OPs and RPs.

    If an AS uses its infrastructure to support, for the same endpoints, regulatory ecosystems that require MTLS and other ecosystems that don’t, we need to provide a standard discovery mechanism for clients to learn where an MTLS version of an endpoint is available or preferred.

    Another example where support for both MTLS and non-MTLS channels is required for the same endpoint: an MTLS FAPI2 or FAPI1+PAR ecosystem expands to use OpenID4VC issuance, which recommends using PAR via a non-MTLS channel (the request comes from a mobile device).

    Most of the existing ecosystems (UK?, Brazil and ConnectID) chose to require mtls_endpoint_aliases as the discovery mechanism for endpoints that have MTLS preference.

    Recommendation: 

    1. Continue AS discovery support using mtls_endpoint_aliases. Don’t proceed with https://bitbucket.org/openid/fapi/pull-requests/458 in its current shape.
    2. Clarify in the FAPI2 security profile which parts of RFC 8705 are still applicable even if MTLS authentication is not used.
    3. @Filip Skokan's recommendation: add additional metadata so a client can be aware of an MTLS-everywhere requirement.

    @Ralph Bragg @Filip Skokan @Joseph Heenan anything to add?

  14. Filip Skokan

    FWIW I made a suggestion to pursue alternatives to PR 458. Not a recommendation.

    In the absence of a mechanism that tells clients they participate in a “we use MTLS for transport even if you don’t need MTLS for OAuth” kind of ecosystem, PR 458 is the best guidance we can give to avoid falling into a trap like PAR + pkjwt.

    Such a mechanism may be the aforementioned client metadata and a normative section covering its use.
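
    Purely as a sketch of one possible shape for that (the mtls_required_on_all_endpoints parameter below is invented for illustration and not defined by any spec): an AS metadata flag that a client’s endpoint selection could honour alongside mtls_endpoint_aliases.

    ```python
    def resolve_endpoint(server_metadata: dict, name: str, uses_mtls_here: bool) -> str:
        """Endpoint selection honouring a hypothetical 'MTLS everywhere' signal.

        'mtls_required_on_all_endpoints' is an invented metadata parameter used
        purely for illustration; nothing like it is currently specified.
        """
        aliases = server_metadata.get("mtls_endpoint_aliases", {})
        if uses_mtls_here or server_metadata.get("mtls_required_on_all_endpoints"):
            return aliases.get(name, server_metadata[name])
        return server_metadata[name]
    ```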

  15. Nat Sakimura
    • changed status to open

    Waiting for Dima and Ralph to come up with an alternative PR, which should be available next week.

  16. Tom Jones

    MTLS and token binding are different. This issue now seems to be wandering outside of FAPI to OIDF generally. I have been opposed to token binding because large systems do not use MTLS to application endpoints, but just to gateways. Wide adoption of token binding has been opposed by Google and similar large organizations that separate TLS endpoints from application endpoints. This issue also seems to be morphing into the idea of cross-site or first-party sets. I suspect that including such requirements will basically kill any widespread adoption of standards that require them for conformance.

    We have other conformance issues like this in OIDF that cause similarly lax CORS settings. This is not a security improvement IMHO. This is an issue with OAuth that I described in the following comment:

    The point has been made elsewhere that the common Authorization Code Flow usage will not have an issue if the Authorization Server handles the authz grant, the token issuance and the user resource all from the same origin, as they all need to share data. However, it is not required by the spec that these endpoints all be in the same origin, as that was not an issue when the spec was written. If they all share a common origin, there should be no problem, at least with the currently deployed CORS policies.
