HEART / 2015-08-05

Attendees: Eve Maler, Justin Richer, Josh Mandel, Adrian Gropper, Thomas Sullivan, Debbie Bucci

We have decided to delineate between mechanical and semantic scope docs.

For the PCP <-> PHR use case:

The pre determined choice token confidential token choice and exactly what information needs (example: PHR's authorization endpoint) to be shared in advance between the PCP's EHR and Alice's PCP was left out of the discussion for now.

There is one basic, generic mechanical OAuth flow that occurs twice in the use case.

Given that the group has generally agreed that the SMART specifications are a good place to start ... for this particular use case, the only semantic FHIR scope that is necessary is the patient/*.read scope, which grants permission to read any resource for the current patient.
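As a concrete illustration, the patient/*.read scope would travel in the client's OAuth authorization request. This is only a sketch: the endpoint URLs, client_id, and state value below are made-up placeholders, not anything HEART or SMART has nailed down.

```python
# Sketch of a SMART on FHIR authorization request carrying patient/*.read.
# All URLs and identifiers are hypothetical placeholders.
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://ehr.example.org/oauth/authorize"  # assumed endpoint

params = {
    "response_type": "code",                    # OAuth authorization code flow
    "client_id": "phr-client-123",              # assigned at client registration
    "redirect_uri": "https://phr.example.org/callback",
    "scope": "patient/*.read",                  # read any resource for the current patient
    "state": "af0ifjsldkj",                     # CSRF protection
    "aud": "https://ehr.example.org/fhir",      # SMART: the FHIR server being accessed
}

authorize_url = AUTHORIZE_ENDPOINT + "?" + urlencode(params)
print(authorize_url)
```

The user's browser would be sent to this URL, and the EHR would authenticate Alice before issuing the authorization code.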

During the registration process Alice should be able to select, at a fine-grained level, which resources she is willing to share with the PHR. This mimics a specific process - Adrian please provide. This information will be used to generate the access token.

The one thing left at the end of the discussion is whether the patient record is implicit or explicitly stated. This is a design decision that may make a difference as we move towards our next use case in which delegation is a factor.

Eve added: Thanks for sending notes, Debbie! A couple of tweaks below:

Attendees: Eve Maler, Justin Richer, Josh Mandel, Adrian Gropper, Thomas Sullivan, Debbie Bucci

We have decided to delineate between mechanical and semantic scope docs.

(Just "semantic", I think. The UMA doc, at least, will have more than scopes, and the OAuth one might possibly, too.)

We decided to call the profiles that are just for security purposes "mechanical" so that we could name them right in the use case, everywhere the [PROFILING] tag appeared. I took a stab at defining what we mean by "semantic": "Related to either the FHIR API, or the content of the resources accessed through it" (or something like that).

For the PCP <-> PHR use case:

The pre determined choice token confidential token choice and exactly what information needs (example: PHR's authorization endpoint) to be shared in advance between the PCP's EHR and Alice's PCP was left out of the discussion for now.

I think we were talking here about a "confidential client" and what information it needs.

There is one basic, generic mechanical OAuth flow that occurs twice in the use case.

Given that the group has generally agreed that the SMART specifications are a good place to start ... for this particular use case, the only semantic FHIR scope that is necessary is the patient/*.read scope, which grants permission to read any resource for the current patient.

So in other words, we're suggesting that the client ask for "the whole enchilada"...

During the registration process Alice should be able to select, at a fine-grained level, which resources she is willing to share with the PHR. This mimics a specific process - Adrian please provide. This information will be used to generate the access token.

...and that Alice be able to uncheck scopes she doesn't want to grant. Adrian talked about an "ROI" form (and I know he didn't mean return on investment -- can't remember what it stands for).
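That unchecking step can be pictured as filtering a candidate scope list down to the resource types Alice left checked. This is a purely illustrative sketch: the resource-type list and the granted_scopes helper are invented for the example; only the "patient/&lt;ResourceType&gt;.read" naming pattern comes from SMART.

```python
# Sketch: narrow "the whole enchilada" down to the resources Alice checked
# on an ROI-style consent screen. Scope names follow the SMART pattern
# "patient/<ResourceType>.read"; everything else here is hypothetical.
CANDIDATE_SCOPES = [
    "patient/Observation.read",
    "patient/MedicationOrder.read",
    "patient/AllergyIntolerance.read",
    "patient/Immunization.read",
]

def granted_scopes(selected_resources):
    """Keep only the scopes whose resource type Alice agreed to share."""
    return [s for s in CANDIDATE_SCOPES
            if s.split("/")[1].split(".")[0] in selected_resources]

# Alice unchecks medications before approving:
print(granted_scopes({"Observation", "AllergyIntolerance", "Immunization"}))
```

The surviving scope list would then be baked into the access token the authorization server issues.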

(This isn't a tweak, just a comment:) One of the things that trust frameworks above might want to nail down, over and above our profiling, is requirements around the UX displayed to patients to ensure they understand what they're authorizing. Even at our profile level, if there are large numbers of scopes for Alice to consider (I don't know how many there are), we may want to consider different ways of bucketing them as OIDC has done for attributes.

The one thing left at the end of the discussion is whether the patient record is implicit or explicitly stated. This is a design decision that may make a difference as we move towards our next use case in which delegation is a factor.

Corrections/updates appreciated. Jim Kragh added: Thanks for sharing,... informative and constructive in reaching the patient end point.

May all have a nice evening! Adrian added: I've attached a very typical Release of Information authorization. I've annotated the 5 elements common to all such documents that I have ever seen. The stuff outside of the rectangles is more or less optional.

This form covers one direction of the EHR-PHR Use Case. It is presented to the Custodian (the patient or their designate) by the Resource Server, approved by them, and pre-filled with information supplied by the Client, if available.

In some cases, the Client information is not available at the time the Authorization form is signed. In that case, it will be up to the Authorization Server to consider the Client and User information and provide the authorization to the Resource Server.

The Resource Server has the final say in all cases and could decide to ignore the authorization based on local or jurisdictional policy. This is outside the control of the Resource Owner and likely to be out of scope for HEART in all use-cases.

This ROI Authorization Form is the only "consent" that I'm aware of in clinical IT. Patients are asked to sign other documents, including: Registration Form, Notice of Privacy Practices, and Treatment Consent but none of these has anything to do with sharing of health data (except for HIPAA TPO which we will not get into here.)

Justin added: Thank you, Adrian, this is a great reference! I think your annotations make sense as well, things should map pretty plainly to the OAuth process. The tricky part (that we got a start on today) is going to be the scopes bits and getting those right.

For an UMA flow, it's also similar, except that the "who can see it" is a set of claims instead of the client application.

Debbie added: @Eve - yes, I know it's the client, but I'm really hung up on the token generation/choices. Thanks for the tweaks.

I know we clarified that the release form is NOT consent in one of our earlier meetings but is this (release of information) what I have heard others refer to as simple consent? During this process would access to problems/meds/allergies be included in that authorization/consent flow? I visualized more than demographics in the conversation.

Adrian added: I have never heard the term "simple consent". There's nothing like "consent" in the context of data sharing that I can think of. HIPAA removed the patient's right of consent in 2002 https://patientprivacyrights.org/?s=HIPAA+Consent

There are consent forms for research but that's not part of the use cases we're tackling these days.

Does anyone have an example of consent for clinical data sharing to share with us?

John Moehrke added: At the federal level, under HIPAA alone, there is no need for consent for purposes of using the data within the Covered Entity for Treatment, Payment, and Normal operations.

BUT, there are plenty of states that require consent… Ignoring the reality of state regulations is not useful.

AND, there are some institutions that would rather have a consent that authorizes them to share beyond their Covered Entity boundary. Not everyone reads HIPAA ‘Treatment’ as an authorization to share with any treating provider.

AND, there are some ‘sensitive’ health topics covered by federal money that do come with a requirement for consent for sharing. This was the main focus of the DS4P efforts.

So, let’s not focus on HIPAA alone. Let’s expect ‘for whatever reason, an organization wants to have positive evidence that the patient desires sharing to happen’ as the trigger to allow it to happen (otherwise deny it from happening). This would seem more helpful to the community we are doing this work for.

An important aspect of all of this is how will the organization holding the data be able to legally defend that a UMA/OAuth token was valid evidence of consent that would hold up in a courtroom… We can’t address this in HEART, but it should not slow us down. We again, document this as a precondition to our work. One way this is done is that a paper trail is a part of the initial setup of a patient engaging with the system.

Debbie added: I know I am generalizing, but this flow augments or runs parallel to the opt-in/opt-out options I have seen for release of personal identifying information, or the options I am forced to acknowledge when installing/initializing/registering/authenticating to an app for the first time.

Asynchronously identifying these sorts of preferences moves us towards the more complicated DS4P UMA-like scenarios (PoF)

John added: Debbie, Yes, that is what I am proposing that we Assert. That there is some legally defendable ceremony that is done that gives assurance to all parties involved. But that this is a gross ceremony. The fine-grain, actual authorization, is done inside technology (UMA/OAuth). In this way the Covered Entities get their legal bases covered, while everyone gets a more dynamic solution for day-to-day, or activity-by-activity from HEART.

Adrian added: John is right. Debbie is right too. We did spend many months discussing consent with the VA during Privacy on FHIR. We used DS4P (Data Segmentation for Privacy). Justin was there and I hope he will now chip into this discussion with his joyous experience. Here we go again...

Consent, in the sense that John is using it is easiest to see with state health information exchanges (HIE) like the one I'm involved with in Massachusetts. I can provide much detail and color on how that evolved over two years. In my opinion, it's legal quicksand - but that only excites the institutional legal concerns that the VA and other Covered Entities (CE) live to deal with. I've had help from a real lawyer in working on some of this so I've cc'd Jim to this thread.

What the CEs seek is a safe harbor. What the CEs want to avoid is transparency. When HIPAA took away the right of consent in 2002, they introduced accountability in the form of Accounting for Disclosures (A4D). If you have consent without A4D, the only way privacy breaches become known is from whistle blowers and, as we see so often today, even security breaches are not discovered for months. The CEs have steadfastly refused to implement A4D as digital real-time notice because "it's too hard". The result is a privacy and security mess in healthcare that we don't see in finance or commerce.

Let me get to the point:

Consent, including for DS4P or HIE, implies a choice on the part of the subject. This choice can be represented by a form just like the ROI form (I've attached the correct annotated PDF. The one I uploaded before was corrupt.) The only difference is how the Client is specified in section 3 and whether the patient is aware that their information has just been transferred from 1 to 3.

After months of PoF and two dozen days of furious discussion about "consent", "consent directives", institutional, state, and federal jurisdictional restrictions... the matter still comes down to one or more forms just like the ROI form, and whether or not the Resource Server is responsible for contemporaneous notification to the subject that their data was sent from 1 to 3.

As far as the "paper trail" the lawyers would prefer around this ROI form, this is Jim's specialty but from where I stand it is absolutely nothing specific to healthcare and would be much better dealt with in OpenID or IDESG than in HL7.

John added: Adrian, This is very specifically my point… We, in HEART, acknowledge the problem and place it clearly as a pre-condition. There is much value we can add within this context, and little we can do about this problem.

Aaron Seib added: I tend to agree with John’s recommendation with a friendly amendment.

We should not mis-use the word consent. We should use the term – authorize for disclosure.

The primary reason being that the term consent has a lot of baggage and is defined in law for human research protections, while authorize for disclosure is more accurate to me. Consent – as the Kind Sir from Boston (Adrian) pointed out – meant something before 2002 that it doesn’t mean anymore.

In my opinion the notion of authorize for disclosure also conveniently aligns with my understanding of what a “UMA/OAuth token” would represent on a per-transaction basis.

In court, we would expect the entity accused of unauthorized disclosure to be able to produce a valid UMA/OAuth token as a sufficient defense against misrepresentations by trial lawyers.

John added: I agree with your proposal for ‘Authorize for Disclosure’ and to de-emphasize ‘Consent’… (although this problem with ‘Consent’ is only a USA problem)…

But I don’t think that a UMA/OAuth ‘token’ will be seen as legitimate evidence in a court. It would quickly be shown to be unintelligible to the layperson; I can barely read them. Thus it is not evidence of the ‘authorizing for disclosure’ ceremony. This is indeed a practice-of-law problem that we all hope changes, but I have little hope that it will change in the coming 10 years. This is why I want the gross ceremony to be a pre-condition, with the UMA/OAuth technology being the fine-grain solution. I expect that a gross ceremony can be shown in a court as evidence that all parties understood the technology would be used for fine-grain authorization. Note that if the courtroom antics change, then this pre-condition simply goes away. But by putting it there we enable it to be used, and thus make our solution more palatable to the legal folks at those custodian organizations that are afraid to release information today.

Josh Mandel added: As to the division between "gross ceremony" and "finer-grain adjustments", I want to suss out whether the following model (which readily applies to UMA, though not to vanilla OAuth) is consistent with what you have in mind, or whether this model is addressing a different question entirely:

  1. Gross ceremony consists of Alice introducing her resource server to an authorization server of her choice. For example, she might sign a document saying (effectively): "Dear Dr. Jones: please treat my authorization server, at https://authz.alice.org, as representing my wishes for disclosure of my health data. Use the decisions that server renders to guide your access control decisions about my data." This document is easily comprehensible, could serve as evidence in court, etc.

  2. And then the finer-grain adjustments would be made by Alice in concert with her authorization server (for example, establishing specific policies about who can access her data, and which data, and for what purposes, and under what conditions).
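The split above can be pictured in code. This is a purely illustrative Python sketch (the class names, policy fields, and decision logic are invented for the example, not drawn from the UMA spec): the gross ceremony is a small human-readable record, while fine-grain decisions are policy evaluations at Alice's authorization server.

```python
# Illustrative sketch of "gross ceremony" vs. fine-grain policy.
# All names are hypothetical; a real UMA AS would also expand wildcard
# scopes, check claims, etc., which this toy model skips.
from dataclasses import dataclass, field

@dataclass
class GrossCeremony:
    # The signed "Dear Dr. Jones" introduction: human-readable, filable.
    patient: str
    resource_server: str
    authorization_server: str

@dataclass
class Policy:
    # One fine-grain rule Alice configures at her authorization server.
    who: str
    scopes: set
    purpose: str

@dataclass
class AuthorizationServer:
    policies: list = field(default_factory=list)

    def decide(self, requester, requested_scopes):
        """Grant only the requested scopes some policy allows this requester."""
        allowed = set()
        for p in self.policies:
            if p.who == requester:
                allowed |= p.scopes
        return set(requested_scopes) & allowed

intro = GrossCeremony("Alice", "https://ehr.dr-jones.example",
                      "https://authz.alice.org")
alice_as = AuthorizationServer([
    Policy("Dr. Jones", {"patient/*.read"}, "treatment"),
])
print(alice_as.decide("Dr. Jones", ["patient/*.read"]))     # scope granted
print(alice_as.decide("Marketing Co", ["patient/*.read"]))  # nothing granted
```

The GrossCeremony record is the court-friendly artifact; everything inside AuthorizationServer.decide is the dynamic, day-to-day part.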

John added: YES!

Eve added: A couple of thoughts on this last Josh/John exchange, and the whole thread.

First, in this whole thread, we are assuming US-only in the impact of what we do. Our charter is international, though many of us work in the US sphere exclusively. So it's good to be mindful of state-specific requirements, but it's also good to be mindful of non-US jurisdiction needs too (which may require needs for extra stringency, extra care with the term "consent", etc.).

Second, it's true we definitely do rely on the technology layer to achieve specific effects of "authorization-ish stuff". I can definitely see the usefulness of distinguishing gross vs. fine ceremonies at different stages. It's also important to map what they apply to specifically in each technology. Here's an attempt:

  • Ceremony: Authz for client to get scoped access to use protected resources at resource server. OAuth: gross or fine (if unchecking is allowed); UMA: n/a. What Alice is authorizing: granting of access token with scopes.
  • Ceremony: Revocation of access token. OAuth: gross; UMA: n/a. What Alice is authorizing: revocation thereof.
  • Ceremony: Intro of authz server to resource server. OAuth: n/a; UMA: gross. What Alice is authorizing: use of UMA protection API, possibly Ts & Cs.
  • Ceremony: Revocation of intro of authz server to resource server. OAuth: n/a; UMA: gross. What Alice is authorizing: revocation thereof.
  • Ceremony: Intro of authz server/resource server to client. OAuth: n/a; UMA: gross. What Alice is authorizing: use of UMA authz API, possibly Ts & Cs.
  • Ceremony: Revocation of authz server/resource server to client. OAuth: n/a; UMA: gross. What Alice is authorizing: revocation thereof.
  • Ceremony: Configuring authz server (with policies) to allow or disallow access to protected resources at resource servers. OAuth: n/a; UMA: fine. What Alice is authorizing: access to protected resources, revocation of access, time periods of allowable access, possibly after-the-fact approvals of previously attempted accesses, possibly requirements for purpose of use that can only be enforced at a nontechnical layer...

Jeremy Maxwell added: How many patients do we expect to have the technical savvy to say this to their provider? In practice, where will these authorization servers reside?

Adrian added: I'm not sure what we're negotiating here. The current approach to interoperability does not work for many, maybe most people. Part of the reason it doesn't is that privacy approaches that work at a scale of 10K or 100K people don't work when the scale is 100 Million people. I've been a party to four or five generations of attempts at interoperability (IHE, NwHIN, CONNECT, DIRECT, BlueButton Plus) and we still don't have a clear solution. We've also seen that even completely centralized systems like the UK NHS can't deal with this problem very well, so I can't see why CommonWell or Carequality or Epic everywhere would succeed.

The one thing we haven't tried is patient-driven interoperability. Apple has shown us how patient-directed interoperability can work in a highly integrated system. UMA is the only standard we have that has the potential to introduce patient-driven interoperability to healthcare.

We have to give patients that understand UMA the option to use it. Patients who don't care will see no difference at all because the Resource Server will offer a default AS.

Once patients have the option to specify the AS the other interoperability issues, including scopes, will incrementally get fixed. But the first step is to agree that there's only one Alice and she has an AS. That is the only scalable and non-coercive solution.

Jeremy added: The only point I’m trying to make is that Alice should be able to exercise whatever legal rights she has to privacy protections regardless of her technical knowledge. Being able to use a web browser should be the only technical skill she needs. Alice knowing about UMA, authorization servers, and the like should not be a precondition.

John added: Those of us that are not in Mass. have great healthcare interoperability through the NwHIN… I agree that it is not patient directed, but my records are available anywhere in the NwHIN. I am not trying to disagree with you, but saying that “Apple has shown us how patient-directed interoperability can work in a highly integrated system” is just way too argumentative. A walled-garden is always well manicured.

I too have lost track of what the argument is… all this discussion around ‘technical savvy’ vs not is irrelevant to the work that we can accomplish in HEART.

Jeremy added: This thread began when I asked what does this look like for the patient? What information does Alice need to give her provider?

Adrian added: The only information Alice should have to give to her provider is the URL of her UMA Authorization Server (if she has one) or something else that resolves to her UMA Authorization Server. In many cases, this pointer to her UMA AS could be accessible as part of a federated IdP service but we may not be able to count on federation being readily acceptable to the providers.

Glen added: It appears that we are discussing a five-part problem:

  • The technical aspects, driven by use cases, of how to secure, communicate, and assure identity, authentication, authorizations, and obligations.
  • The technical aspects, also driven by use cases, of how to express, communicate, and assure patients', and their delegates', disclosure preferences at sufficient (TBD) granularity.
  • The human engineering aspects of how to minimize the effort and technical knowledge required for the end-users - both patients and providers.
  • The technical, business, and legal aspects of federating the participants and their automated systems.
  • The economic aspects of how to minimize costs while allocating those costs equitably.

I'm sure there may be other ways to slice and dice the problem space.
However, the first two points above seem to be well within our immediate capabilities. The human engineering aspects require additional expertise as well as end-user input. And the last two points need policy-level resolution, i.e., out of this group's scope.

I'd recommend we revisit these points before we engage in the next use case, but after we finalize the current one.

Aaron added: I completely agree with your assertions. It was my wishful thinking/oversimplification to state that a token would be sufficient. If you had both the ceremony (the consumer signed up and indicated they understood the impact of the fine-grain functionality and that the risk of mis-configuration was their liability) and the token, a Judge would be digressing from a lot of existing case law to favor the trial lawyer's complaint, lacking any extenuating circumstance (was the user coerced into using the fine-grain mechanism?).

Adrian added: This is exactly the problem Jim's Common Accord is designed to solve. It links human-readable documents with machine-based structures, à la GitHub. We also just launched a legal subgroup in UMA. All good stuff that HL7 and FHIR should not have to worry about.

James Hazard added: Adrian,

Thanks for including me.

John,

The gross ceremony / fine-grain issue is an interesting vocabulary that I rather like.

Many of the issues of law and consent can be viewed as problems of traceability. How do I know the text I'm being asked to sign or rely on is derived from a verified source, am I informed if someone spots an issue, which solutions are trusted by people I trust? This problem maps well to issues in source code management, and git/GitHub provides a really robust solution. The "legal" part of the problem is mostly a matter of getting communities of use around particular formulations.
The goal is shared repositories/wikis - a kind of 3.0 Civil Code.

Patient consents are one of the most interesting use cases because they are so important. We've done a number of examples. With Primavera De Filippi (of Berkman, who also coded the current parser) we did a 3-language machine-readable model patient consent based on the form of the Global Alliance for Genomics and Health. With Adrian, I did a swim lanes sketch. For Apple's ResearchKit (with John Wilbanks of Sage Bionetworks) - I did a form from one of their studies.

Global Alliance: http://ga4gh.commonaccord.org/index.php?action=list&file=./Demo/
Swimming with Adrian: http://www.commonaccord.org/index.php?action=list&file=/doc/roi/
ResearchKit: http://my.commonaccord.org/index.php?action=source&file=Research/Consent/Form/Research_Consent_Form.md

None of these yet have active communities, though there are a number of discussions at various stages.

There is a strong fit with peer-to-peer payments systems. The gross ceremony / fine-grain issue is a lot like the legal text vs "smart contract" discussion there.

Happy to point to more examples or make a new one.

Cheers, Jim

Jeremy added: The pre determined choice token confidential token choice and exactly what information needs (example: PHR's authorization endpoint) to be shared in advance between the PCP's EHR and Alice's PCP was left out of the discussion for now.

Perfectly fine with leaving this in the parking lot for now, but before we’re done we need to have very clear setup/configuration/implementation guidance. It needs to be clear and easy to set up and use. If we add a bunch of configuration steps, they will be additional hurdles to adoption. Remember, many folks have struggled with certificate and trust bundle management in Direct. So we need to at least be simpler than that.

Debbie added: +1!!!

Aaron added: I am not sure I even understand the statement highlighted in yellow accurately yet. :-) That might be a start. Predetermined choice token means? Confidential token choice?

Debbie added: My bad - it should be confidential client (as Eve corrected).

Josh added: Apologies if this was totally unclear. The "Dear Dr. Jones..." statement was meant to capture what it means for a resource owner to introduce a resource server to an authorization server (in UMA terms). Does this help at all?

Jeremy added: No worries. I’m just trying to understand how we think this will happen in practice. So for my wife (non-techie), what is her resource server? When she goes into her provider, what does she have to provide?

Sorry if I’m being dense or revisiting things that have already been discussed. I missed about 4 weeks of discussion so I’m trying to catch up. I’m trying to understand what the workflow looks like to a non-techie patient.

Debbie added: I think that is an important distinction. Not sure it's necessary for a non-techie to know there is an authorization server. Perhaps it's a feature of a PHR, HIE/ACO, or patient portal...

Josh added: Roughly: the resource server is what your healthcare provider hosts. Today, you might have various patient portals, provided by different healthcare organizations, each showing you different subsets of your data. The paradigm we're talking about with UMA is: each one of those portals is actually backed by a server that exposes your data through an API. Those servers are the resource servers. And the goal of UMA, as a protocol, is to get all of those resource servers to "work with" a single authorization server, so you can set permissions in one place.
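That arrangement can be sketched as a toy model (all class and server names invented; real UMA uses the protection and authorization APIs rather than direct method calls): several resource servers defer every access decision to the one authorization server Alice configured.

```python
# Toy sketch of "many resource servers, one authorization server".
# Names and the permission model are illustrative only.
class AuthorizationServer:
    def __init__(self):
        self.permissions = {}  # (requester, resource_server) -> allowed scopes

    def permit(self, requester, resource_server, scopes):
        """Alice records a permission once, at her single AS."""
        self.permissions[(requester, resource_server)] = set(scopes)

    def allowed(self, requester, resource_server, scope):
        return scope in self.permissions.get((requester, resource_server), set())

class ResourceServer:
    def __init__(self, name, authz):
        self.name, self.authz = name, authz  # every RS shares Alice's AS

    def handle_request(self, requester, scope):
        if self.authz.allowed(requester, self.name, scope):
            return "200 OK: data released"
        return "403 Forbidden"

alice_as = AuthorizationServer()
hospital = ResourceServer("hospital-portal", alice_as)
clinic = ResourceServer("clinic-portal", alice_as)

# Alice sets one permission, in one place:
alice_as.permit("phr-app", "hospital-portal", ["patient/*.read"])

print(hospital.handle_request("phr-app", "patient/*.read"))  # released
print(clinic.handle_request("phr-app", "patient/*.read"))    # denied
```

The point of the sketch is the shape, not the mechanics: permissions live in one place (alice_as), and each portal's API consults it rather than keeping its own consent records.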

To be honest, I remain skeptical that we can orchestrate this many moving parts into a user experience that non-technical healthcare consumers will be able to understand and leverage. I don't think it's impossible for any theoretical reason, but it's a very, very hard design problem. The systems that come the closest to achieving this vision today are systems that succeed because they control the whole stack (e.g. in Google Docs, the sharing/permissions settings are great, in part because they're tightly integrated into the document editing environment. When you try to standardize each step of the process and factor the architecture out into separate components, it becomes harder to build such a tightly integrated user experience).

John added: I agree with Josh, our problem is already infinitely impossible. Our ultimate goal should be the non-technical healthcare consumer, but in the interests of making incremental improvements, can we start with a goal of a healthcare consumer that DOES understand the technology? We can revise and improve as we go (being Agile).

Aaron added: I think we can assume a pseudo-tech person for this work group’s purposes but we need a glossary-type aid for those of us that are catching up.

BTW, I don’t think it is impossible to make this accessible to the typical user if we assume that user behavior will follow the patterns that evolved in moral/societal/institutional trust decisions made by our species.

I rely on people like me to help me decide who to trust in the world. If Deb Peel’s privacy posture is like mine, or John Halamka’s, I’d want a mechanism to start the configuration of my privacy preferences in a trustworthy Authorization Server so that they start off like his or hers, and then be able to modify them as I walked through a guided review of what the different configuration options mean and the pros and cons that matter. It isn’t necessarily something that I am expert in, but there are busloads of UI/UX people that could make this easier for consumers.

Any technology that is useful reflects human needs – figuring out how to make it adoptable is a distinct domain that we should all appreciate for making the stuff we do useful to the world. :-)

Jeremy added: I disagree. It has to be simple, straightforward, and easy to use by a non-techie consumer. Otherwise, this will be a niche technology that will only be used by individuals that (1) understand the technology and (2) care enough to restrict access to their health information. Personally, I satisfy #1 but not #2. So if even I don’t qualify, what is the actual size of the target user base? And who do we expect to adopt this stuff if we cater to such a small population?

I realize that it’s a challenging problem, but as an engineer I never shy away from problems simply because they’re challenging. :-)

Thanks,
