Reconcile the mapping/processing between input descriptors and submitted inputs

Issue #1256 resolved
Daniel Buchner created an issue

There is currently a mapping from PE Descriptors to submitted creds/items in the PE Submission objects that is used in deterministic processing steps within PE. If this mapping is not present, the steps 1) can't be performed as specified (lookup moves from direct O(1) to an O(n) loop test for each item), and 2) don't retain the nested capability to drill through tiered items. I would love to find a way to use PE, or somehow compose the objects, such that they can adhere to the processing rules and retain these features, so perhaps we can talk this part through on a call?
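To make the processing concern concrete, here is a minimal Python sketch of the direct-mapping idea. The field names (`descriptor_map`, `path`, `path_nested`) loosely follow DIF Presentation Exchange, but the exact shapes here are illustrative assumptions, not normative:

```python
# Illustrative sketch of a PE-style submission mapping; field names
# loosely follow DIF Presentation Exchange but are assumptions here.

submission = {
    "definition_id": "example-definition",
    "descriptor_map": [
        {
            "id": "address_input",
            "format": "jwt_vp",
            "path": "$.verifiableCredential[1]",
            # A nested entry lets processing drill through tiered items,
            # e.g. a credential wrapped inside a presentation.
            "path_nested": {
                "format": "jwt_vc",
                "path": "$.vp.verifiableCredential[0]",
            },
        },
    ],
}

def find_with_map(submission, descriptor_id):
    """With the mapping: a direct, deterministic lookup per descriptor."""
    index = {entry["id"]: entry for entry in submission["descriptor_map"]}
    return index.get(descriptor_id)

def find_without_map(items, predicate):
    """Without the mapping: every descriptor forces an O(n) scan."""
    return [item for item in items if predicate(item)]
```

With the mapping present, `find_with_map(submission, "address_input")` resolves the matching entry directly; without it, the verifier has to test each submitted item against each descriptor.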

Comments (6)

  1. Stephane Durand

    This mapping approach was actually what concerned me most when I went through PE, and I'd welcome clarifications on what it is meant to achieve.

    Overall, I found the Presentation Definitions quite prescriptive, in the sense that they state quite precisely what they want, and at a granularity that is not especially fine. This may not necessarily be the case, but the supplied examples did not help dispel that impression (they show requests with lists of specific ID documents, and inputs following specified schemas).

    Another way to look at it is to consider the semantics used: a PD comes with an "input_descriptors" list, as opposed to an "output_descriptors" list, which would only state what the verifier needs to fulfil its objectives, rather than also carrying an assumption about how those objectives will be fulfilled.

    To illustrate with an example: a verifier needs to know the address of the holder (and that address must be receivable under a certain trust framework XYZ). Let's consider the two alternatives to see how each translates throughout the sharing process:

    1. It can ask for a PE Submission that will "output" [ { what: "address", trust_framework: "XYZ" } ] (not trying to follow a specific syntax)
    2. It can ask for a PE Submission that will be based on "input" [ { what: "utility bill", from: "did:example:1234" }, { what: "housing tax", … }, … ]
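    The two request shapes above can be sketched as plain data; the keys ("what", "trust_framework", "from") mirror the comment's ad-hoc syntax and are not taken from any spec:

```python
# Hypothetical request payloads for the two alternatives; the keys are
# ad-hoc illustrations, not part of PE or OIDC4VP.

# 1. "Output"-style: state the goal, leave the means to the holder.
output_style_request = [
    {"what": "address", "trust_framework": "XYZ"},
]

# 2. "Input"-style: enumerate the acceptable inputs up front.
input_style_request = [
    {"what": "utility bill", "from": "did:example:1234"},
    {"what": "housing tax"},
]
```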

    The verifier prepares the request:

    1. It just expresses what its operational or compliance needs require. Quite simple and straightforward.
    2. It analyses how its needs can be covered. It’s a possibly convoluted process.

    Looking at how the verifier is served:

    1. The burden of evaluating how the verifier's need can be met falls on the holder plus his sharing medium (which can be a SIOP instance or even a centralized, hub-like service). It is a burden, but it also gives the user more control over how he wants to address the request, and a better opportunity for data minimization.
    2. The user will have to choose from a finite list (if he has another type of input that would fit the purpose but is unfortunately not listed, maybe he cannot access whatever service he was trying to reach). Also, sharing a utility bill (in full) will disclose how much he spends on electricity, or…

    Then, option 2 has the advantage that if the request is served, there's less room for interpretation when determining whether it has been appropriately served (the verifier asked for (A, B or C, D and E) and got something that can be evaluated as following that grammar or not, as opposed to the verifier asking for something that does X, Y, Z and getting G, H, I, which the holder deems to fit the bill). If there is compensation associated with serving the request, that is quite desirable, because it will, on the technical plane, simplify dispute resolution.
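    A toy sketch of why a prescriptive request is easy to adjudicate: a grammar like (A, B or C, D and E) can be checked mechanically against what was submitted. The names A–E are hypothetical placeholders, not spec identifiers:

```python
# Each required group is a set of acceptable alternatives; a submission
# serves the request if every group has at least one member present.

def satisfies(required_groups, submitted):
    return all(group & submitted for group in required_groups)

# "(A, B or C, D and E)" expressed as groups of alternatives.
request = [{"A"}, {"B", "C"}, {"D"}, {"E"}]

assert satisfies(request, {"A", "C", "D", "E"})   # served: C covers "B or C"
assert not satisfies(request, {"A", "D", "E"})    # not served: B and C missing
```

    A descriptive request ("something that proves the address") has no such mechanical check, which is the dispute-resolution trade-off described above.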

    The verifier processes the response:

    1. It analyzes the response to see how its requirements are covered. It’s a possibly convoluted process.
    2. It checks that all the inputs expressed in its request are covered, and relates them to the analysis performed before issuing the request.

    I'd say that on the complexity of mapping the verifier's requirements to a list of identity attributes / documents at the issuer, the net comparison gives an even score (it does the difficult part first or last, but it does it anyway, at about the same difficulty).

    Then, the respective benefits:

    1. A more user-centric control of disclosure, at the expense of duplicating the requirement analysis on the holder + sharing-medium side
    2. An easier resolution of disputes

    I described my concern as a semantic opposition between "input_descriptors" and "output_descriptors" because it felt easier that way, but my interpretation of the "mapping" this issue is about is that it fully supports, and even more tightly enforces, the "input_descriptors" vision.

    Looking at OIDC4VP, the id property of the PE Definition has been specified as optional, which allows the coupling with the PE Submission through the definition_id property to be left vacant (that property should then also have been specified as optional, for consistency's sake), and I was rather relieved to think that OIDC was not going to be as prescriptive as DIF seems to be on this aspect.

    There's indeed a "bug" in that the respective presence requirements of the Definition's id and the Submission's definition_id are not overridden consistently, but the way to fix it depends on whether OIDC wants to support 'prescriptive' requests only, or also allow 'descriptive' ones.

    Since I jumped into PE / OIDC rather recently, I lack some context on how this cooperation is built and how the respective visions support each other. I couldn't help but notice that the visions were not converging on all aspects (maybe there are also personal opinions involved).

    As I understand it, PE aims at defining how requests and associated responses can be structured, and the most likely field of application is implementing libraries that can be reused independently of the transport layer. In that case, I think there's a tricky balance to achieve: on the one hand, providing support complete and coherent enough that a library implementing the PE spec is as useful and relevant as possible; on the other hand, retaining enough flexibility that it doesn't exclude or conflict with use cases / application contexts. (By all means feel free to correct me if I have misunderstood things.)

    I see a clear benefit in leveraging the effort DIF has put into describing a definition and a submission, but I also see that not everything in PE may fit nicely with OIDC (there has been some talk about format; this issue is about mapping enforcement), and such points of divergence tend to be raised bottom-up, from the spec details to the spec objectives (they are generally not difficult to address as long as the objectives are clear, so it may be easier to clarify the objectives first).

    Is there something like an MoU / common statement somewhere that would help me understand the agreed vision between the two foundations on the objectives, and how things are expected to stack up in a practical implementation?

  2. Kristina Yasuda

    Hi Stephane, please find attached the Liaison Agreement between OIDF and DIF (only .jpg files were allowed in the comment, but I can also send you a PDF). It is quite general, and the current main scope of collaboration has been the OIDC4SSI specs (mainly SIOP V2 and OIDC4VP).

  3. Kristina Yasuda

    Regarding the two approaches brought up in this issue: on the 07-13-2021 SIOP call, it was discussed that the result being returned as an array is sufficient, and that having an object returned instead would make lookup unnecessarily difficult.

  4. Kristina Yasuda

    The comments on this issue have gone beyond its initial scope. Stephane agreed to resolve this issue on the 07-22-2021 SIOP call.

    The initial content of this issue is now covered by issue #1264.
