OIDCC appears to override single-use nature of auth code in RFC6749

Issue #1057 wontfix
Joseph Heenan created an issue

https://openid.net/specs/openid-connect-core-1_0.html#TokenRequestValidation says:

If possible, verify that the Authorization Code has not been previously used.

However https://tools.ietf.org/html/rfc6749#section-10.5 says:

Authorization codes MUST be short lived and single-use.

My reading of this is that OAuth2 requires authorisation codes to be single use, and that OIDCC is weakening this requirement. My understanding is that OIDCC should generally extend OAuth2 and should not conflict with the underlying RFCs. (I searched for previous discussion on this point but failed to find any. The certification suite seems to have a test called OP-OAuth-2nd which I think requires that authorisation codes are single use, but I'm not 100% sure.)

I think for consistency the 'if possible' should be removed from OIDCC and replaced with a 'MUST'.

Comments (7)

  1. Brian Campbell

    I don't think I can point to the specific previous discussions about it, but it is certainly something that has been a point of discourse during the development of RFC 6749 and in the years since.

    There were a non-trivial number of people who wanted to allow an authorization code to be a self-contained, cryptographically secured thing rather than a reference to data kept server-side. This is why, for example, there's no hard length limit on authorization codes. Strictly enforcing one-time use for such self-contained codes can be prohibitively difficult, and it can be hard with reference-style codes too in a large distributed system (see the sketch at the end of this comment).

    Arguments around that led to text in RFC 6749 that is perhaps a little awkward. And I know some have argued since that the single-use wording on the authorization code in RFC 6749 applies to the client, which would leave some wiggle room on the AS side for detecting and rejecting an authorization code that is presented more than once.

    I believe the wording in OIDC core that you quote was intentional and meant to acknowledge and allow for the reality of things in deployments/implementations rather than overriding RFC 6749 per se. (but yeah, it depends on interpretation).

    To the best of my recollection, the certification suite test was only a warning on inability to detect a second use of a code rather than a test failure. And that too was to allow for the situation described above.
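
    For illustration only, here is a rough, hypothetical sketch of the kind of self-contained, HMAC-protected code described above. Because any replica can validate the code from the signing key alone, the AS keeps no per-code record, and detecting a second use requires separate machinery (for example a replicated "seen codes" list). None of this is defined by either spec; the names and key handling are made up.

        import base64
        import hashlib
        import hmac
        import json
        import time

        SIGNING_KEY = b"hypothetical-as-signing-key"

        def mint_code(client_id, redirect_uri, subject):
            # Everything the token endpoint needs travels inside the code itself.
            payload = base64.urlsafe_b64encode(json.dumps({
                "client_id": client_id,
                "redirect_uri": redirect_uri,
                "sub": subject,
                "iat": int(time.time()),
            }).encode())
            tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest().encode()
            return (payload + b"." + tag).decode()

        def parse_code(code):
            # Any replica holding the key can validate the code without a lookup,
            # which is also why it cannot tell on its own whether the code was
            # already redeemed somewhere else.
            payload, tag = code.encode().rsplit(b".", 1)
            expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest().encode()
            if not hmac.compare_digest(tag, expected):
                raise ValueError("invalid authorization code")
            return json.loads(base64.urlsafe_b64decode(payload))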

  2. Michael Jones

    I agree with Brian's analysis. It doesn't seem that a spec change is needed.

    This will be closed after allowing a period for more discussion.

  3. Joseph Heenan

    Thanks Brian & Mike for the extra context. I guess that interpretation makes some sense, and if anything needs clarifying it's RFC6749.

  4. Joseph Heenan reporter
    • changed status to open

    I think it's worth re-opening this one, as Nat pointed out that RFC6749 states:

         If an authorization code is used more than
         once, the authorization server MUST deny the request and SHOULD
         revoke (when possible) all tokens previously issued based on
         that authorization code.  The authorization code is bound to
         the client identifier and redirection URI.
    

    To me, the text in OIDCC ("If possible, verify that the Authorization Code has not been previously used.") is currently weakening that RFC 6749 requirement, which I'm not sure it should (a minimal sketch of strict enforcement is at the end of this comment).

    Also further discussed on https://bitbucket.org/openid/fapi/pull-requests/113/fapi-r-clarify-authorization-code-reuse/diff and https://bitbucket.org/openid/fapi/issues/86/the-read-only-spec-should-not-contain-if
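
    By contrast with the "if possible" wording, here is a minimal sketch of what strict single-use enforcement could look like, assuming a server-side store keyed by the code value. CodeStore, TokenError and revoke_token are hypothetical names, not anything defined by either spec.

        import time

        class TokenError(Exception):
            """Signals an invalid_grant error response from the token endpoint."""

        def revoke_token(token):
            # Placeholder for whatever revocation mechanism the AS supports.
            print("revoking", token)

        class CodeStore:
            def __init__(self, lifetime_seconds=60):
                self._codes = {}  # code value -> issuance record
                self.lifetime = lifetime_seconds

            def issue(self, code, client_id, redirect_uri):
                self._codes[code] = {
                    "client_id": client_id,
                    "redirect_uri": redirect_uri,
                    "issued_at": time.time(),
                    "used": False,
                    "tokens": [],  # tokens minted when this code was redeemed
                }

            def redeem(self, code, client_id, redirect_uri):
                record = self._codes.get(code)
                if record is None or time.time() - record["issued_at"] > self.lifetime:
                    raise TokenError("invalid_grant")  # unknown or expired code
                # The code is bound to the client identifier and redirection URI.
                if (record["client_id"], record["redirect_uri"]) != (client_id, redirect_uri):
                    raise TokenError("invalid_grant")
                if record["used"]:
                    # Second use: deny the request and revoke the tokens previously
                    # issued from this code, as the RFC 6749 text quoted above requires.
                    for token in record["tokens"]:
                        revoke_token(token)
                    raise TokenError("invalid_grant")
                record["used"] = True
                return record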

  5. Michael Jones

    Per the OAuth Behaviors section of https://openid.net/wordpress-content/uploads/2018/06/OpenID-Connect-Conformance-Profiles.pdf, the working group made an explicit decision to make code reuse a WARNING for the first 30 seconds after issuance and to make it a FAILURE if reused after 30 seconds. This reflected the reality that for some highly distributed implementations, an explicit engineering decision was made not to require tight synchronization between all replicas, but instead to use eventual, non-blocking consistency algorithms. This is also the reason for the “if possible” language in the specification (a rough sketch of this grace-window check is at the end of this comment).

    This has been the behavior in the Core specification for the last 5+ years and the behavior explicitly enforced by certification for ~4 years. I don’t believe that we have the luxury to change it now. Requiring a hard error after 30 seconds is an engineering tradeoff explicitly adopted by the working group.

    Per the discussion on the calls, I believe that this issue was decided long ago, and should therefore be closed.
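
    A rough sketch of the 30-second grace-window classification described above; this is not the actual certification suite code, and the names are hypothetical.

        WARNING, FAILURE, PASS = "WARNING", "FAILURE", "PASS"
        GRACE_WINDOW_SECONDS = 30

        def classify_code_reuse(second_use_accepted, seconds_since_issuance):
            """Classify how an OP handled a second presentation of the same code."""
            if not second_use_accepted:
                return PASS  # the OP rejected the reused code outright
            if seconds_since_issuance <= GRACE_WINDOW_SECONDS:
                # Replicas may not have converged yet; tolerated as a warning.
                return WARNING
            # Accepting a reused code well after issuance fails the test.
            return FAILURE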

  6. Michael Jones

    Reflecting on this some more, the other historical context that's relevant is that many of the widely deployed distributed OAuth 2 implementations still in use today were in production years before RFC 6749 was completed in 2012. These were based on earlier drafts, including OAuth WRAP, and made engineering choices that seemed reasonable at the time - including basing authorization code reuse prevention on eventual consistency algorithms, rather than (much more expensive and delay-inducing) ACID consistency algorithms.

    An explicit goal of Connect was enabling its deployment on existing OAuth 2.0 implementations. I believe that was the right engineering choice, as it led to widespread adoption. And yes, that means that Connect intentionally tolerates limited authorization code reuse, within specified bounds.

    I hope this helps readers understand why we are where we are and why it’s OK.

    Cheers,
    -- Mike

  7. Michael Jones

    This was an intentional choice at the time we wrote the OpenID Connect specification. We don't plan to change this.

    The Certification tests OP-OAuth-2nd and OP-OAuth-2nd-30s cover these behaviors, from the point of view of the working group.
