Browser swap attack explained on 2022-09-28

Issue #543 resolved
Nat Sakimura created an issue

Daniel explained this attack on the 2022-09-29 call.

The specifics of (a) how the Attacker forwards the Authz response to itself and (b) how it stops the redirect were not given.

(a) is necessary for the attack. (b) increases the probability of success by stopping the race condition.

Suggested ways to achieve (a):

  1. Attacker controlled proxy server between the Victim and the AS
  2. Leaking browser history/logs

Comments (13)

  1. Nat Sakimura reporter

    Source code for the above diagram

    title Browser Mixup Attack

    autonumber 1

    participant "Attacker\nBrowser" as A
    participant "Victim\nBrowser" as V
    participant Client as C
    participant AS as S
    participant Resource as R

    A->C: Service request
    C->S: PAR request
    S-->C: request_uri, exp
    C->C: create AuthZ req link
    C-->A: Authz Redirect
    A-->A: Stop redirect
    A->A: Create a clickable\nlink that reproduces\nthe Authz req
    A->V: Send the link
    V-->V: Clicks the link
    V->S: Authz Req
    opt User AuthN
    S-->V: Credential + Grant Please
    V->S: Credential + Grant
    end
    S->V: Authz Res (code, state)
    A->V: Stops redirect
    V->A: Authz Res forwarded
    A->C: Authz Res
    C->C: Verifies state, \ncookie, etc.
    C->S: token req
    S->C: token for the Victim
    C->R: Res Req w/token
    R->C: Victim's res
    C->A: Victim's res
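
    A note on why the client's "Verifies state, cookie, etc." step passes: the Attacker's browser initiated the flow, so the Attacker's session cookie holds the expected state, and the Victim's forwarded Authz response carries exactly that state. A minimal Python sketch of a hypothetical client callback handler (all names are illustrative, not from any real library) makes this concrete:

    import hmac

    def handle_authz_response(session: dict, params: dict) -> str:
        # session: looked up via the browser's session cookie; in this attack
        # it is the Attacker's session, created at "Service request" (step 1).
        # params: query parameters of the forwarded Authz response.
        expected = session["expected_state"]  # stored when the Authz req link was created
        if not hmac.compare_digest(params.get("state", ""), expected):
            raise ValueError("state mismatch")
        # The check passes: the Attacker reproduced the very same Authz request
        # for the Victim, so the Victim's response carries the state bound to
        # the Attacker's session. The code, however, is the Victim's.
        return params["code"]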

  2. Nat Sakimura reporter

    From the BCM Principles point of view, this attack occurs because it violates the following principles:

    1. All the participants in the protocol must be listed and authenticated.
    2. The message itself, the sender, and the receiver must be authenticated.

    That is, we are not identifying the participating browser in the initiation of the protocol and we are not authenticating the browser during the protocol run.

    • As was pointed out during the call, Token Binding for browsers would provide browser authentication and solve this, but we do not have it, so we cannot use it here. (We could negotiate with the browser vendors for the future.)
    • First-Party Sets would have helped, but we do not have them either.
    • Browser fingerprinting would help, but it may stir up privacy discussions. (Controlled sharing of cookies between the client and the server is much better.)

  3. Nat Sakimura reporter

    Question: When the attacker inserts itself as a proxy, it cannot see the query parameters, as they are TLS protected. Could it use the response as-is to send a valid authz response to the client?

  4. Nat Sakimura reporter

    Also, do our security assumptions not assume that the attacker cannot control the honest user's browser actions? If it can, it can pretty much do anything even after the fact, so it is useless to talk about this attack. That would mean the attacker cannot stop the honest browser's redirect.

  5. Daniel Fett

    To your questions:

    • The attacker inserting itself as a proxy is a different attack. Do you assume that TLS still works properly? If the attacker does not control or at least see the auth request, the attacker cannot forge the auth response.
    • The honest user’s browser does not need to be under the control of the attacker, but the network might be. That can be enough to stop a request from ever reaching its destination.
    • Yes, that would help, but it is generally unreliable and often suppressed for privacy reasons.

  6. Daniel Fett

    I discussed this attack with Pedram and we came to the conclusion that essentially, attacker A3b in its current form is too coarsely defined, making it stronger than it should be:

    In the current model, the attacker just receives any authorization response.

    In practice, there is not one single attack this attacker is capturing, but the following types of attacks:

    1. open redirector attacks and other things that lead to a “forwarding” of the auth response from the client to the attacker
    2. wrong redirect URLs, insufficient redirect URI checking, etc.
    3. leaks from proxy logs, i.e., log files either at the authorization server or at the client
    4. leaks from the browser history
    5. leaks on mobile operating systems

    Looking at these types of attacks, the following can be observed:

    • (1.) and (2.) can be captured in the model by assuming that all clients leak the authorization response to the attacker; also, there are already rules in FAPI to ensure correct redirect URI checking etc.
    • for (3.) and (4.), it can be assumed that such leaks will most likely not happen before the authorization response has reached the client (note that an attacker would need to observe the log files or browser history and then prevent a specific request at the TLS-protected network level - rather hard to pull off)
    • (5.) is something that needs to be taken care of, see https://danielfett.de/2020/11/27/improving-app2app/

    We therefore think that we should do two things in FAPI:

    • Add a rule that clients must accept any authorization code they encounter only once; i.e., blacklist it for at least the lifetime of the authorization codes from that AS (we might need metadata for that; see the sketch after this list)
    • Change the definition of A3b to say that authorization responses leak from the client (i.e., after the client had a chance to invalidate the authorization code)
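
    A minimal sketch of what the one-time-acceptance rule could look like on the client side, assuming a per-instance in-memory store (names here are hypothetical; a real deployment would need a store shared across all client instances, which is exactly the operational concern raised in the next comment):

    import time

    class AuthCodeReplayGuard:
        """One-time acceptance of authorization codes at the client (sketch only)."""

        def __init__(self, code_lifetime_seconds: int = 600):
            # Assumed lifetime of codes at the AS; metadata may be needed
            # to learn the real value, as noted above.
            self._lifetime = code_lifetime_seconds
            self._seen: dict[str, float] = {}  # code -> end of blacklist period

        def accept(self, code: str) -> bool:
            now = time.time()
            # Forget codes whose blacklist period has elapsed.
            self._seen = {c: t for c, t in self._seen.items() if t > now}
            if code in self._seen:
                return False  # second appearance: reject the authz response
            self._seen[code] = now + self._lifetime
            return True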

  7. Filip Skokan

    Add a rule that clients must accept any authorization code they encounter only once; i.e., blacklist it for at least the lifetime of the authorization codes from that AS (we might need metadata for that)

    Client software, and its deployments, have so far not required such a mechanism for their operation: effectively, a mechanism to synchronize state across instances (which can be, e.g., on different servers, VMs, localities, PaaS workers, on the CDN edge, or on a gateway).

    We have experience with a similar requirement: the one-time-code-use requirement for AS implementers. AS implementers are few, at least compared to client implementers, and we still cannot make it a normative MUST based on implementer feedback/pushback there. On the other hand, client implementers are many, and they may operate in even more varied and exotic software architectures. This mitigation suggestion is a very hard, I dare say even impossible, sell.

    Given that this mitigation is not 100% effective and the attack itself is very much impractical and unlikely (disclaimer: I am possibly not a good judge of its impracticality and likelihood), which is what we discussed in today's call, I would ask that we either search the realm of possibilities for a less taxing mitigation, adjust the attacker's capabilities to be slightly more realistic, or go very easy on the normative and certification requirements we end up using.

  8. Daniel Fett

    I discussed this with Pedram and Tim and we came to the conclusion that without any completely new techniques using browser-based trust, this attack cannot be mitigated. Any assumptions we can make in the model to exclude this attack would lead to unrealistic assumptions. It is therefore most likely the best approach to remove Attacker A3b (which can read all authorization responses) from the attacker model.

    Interestingly, this can be considered the first result of the analysis: when this response leaks, redirect-based in-browser protocols cannot be secured. In essence, the problem is the same as in cross-device flows: a redirection happens between two entities, here origins, and an attacker can, given the right tools, decouple the two entities and inject itself in the middle.

    Although the attack has already been documented, I think it makes sense to again discuss it with a broader audience and figure out if there can be new solutions in the future, e.g., using new browser features.
