Sending a HubClientInfo update when a new platform registers (issue #3)

#15 Declined
Source repository
jrhender
Source branch
HubClientInfoPush
Destination repository
shareandcharge
Destination branch
develop
Author
John Henderson
Description

This is a partial implementation of HubClientInfo Push Notifications. Currently, the only implemented scenario is sending a notification when a new platform is registered. Other ClientInfo notification scenarios still need to be implemented, including the “still alive” check.
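
For context, the registration trigger conceptually looks something like the sketch below. The names here (the ClientInfo fields and HubClientInfoService shape) are illustrative assumptions, not the actual ocn-node API:

```kotlin
// Hypothetical sketch of the registration trigger; identifiers are
// illustrative, not the actual ocn-node API.
data class ClientInfo(
    val partyID: String,
    val countryCode: String,
    val role: String,
    val status: String,      // e.g. CONNECTED, PLANNED, SUSPENDED
    val lastUpdated: String  // ISO-8601 timestamp of the last change
)

class HubClientInfoService(private val connectedPlatformUrls: List<String>) {

    // Called once the credentials handshake with a new platform completes.
    fun notifyPartiesOfRegistration(newParty: ClientInfo) {
        connectedPlatformUrls.forEach { url ->
            // PUT the updated ClientInfo to each receiver's hubclientinfo endpoint
            putClientInfo(url, newParty)
        }
    }

    private fun putClientInfo(url: String, info: ClientInfo) {
        // HTTP PUT omitted for brevity
    }
}
```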

I’m not sure whether it makes sense to merge a partial implementation. Either way, I’m glad to keep working on it if it would be helpful.

This is related to issue #3 of ocn-node: https://bitbucket.org/shareandcharge/ocn-node/issues/3/hubclientinfo-push-requests

  • Issue #3: hubclientinfo PUSH requests (resolved)

Comments (9)

  1. Adam Staveley

    Hi John, I like what you’ve done so far. Perhaps the PR can be extended with those additional event triggers (e.g. deletion of credentials, still-alive check, etc.) before merging. I think that would satisfy the HubClientInfo requirements of a single node (PULL and PUSH for “local” parties). Then the next step would be to look at a network-wide solution.

    Just FYI, today we changed the license of the OCN Node from GPLv3 to Apache v2. As a result your future commits will need to be signed. More details here: https://shareandcharge.atlassian.net/wiki/spaces/OCN/pages/360611849/Contributing+to+the+Open+Charging+Network

    And lastly, perhaps for further discussions on the network-wide HubClientInfo solution we could move to our Gitter channel: https://gitter.im/shareandcharge/community#? It’s not active right now but hopefully this could spark further discussions and contributions.

  2. John Henderson (author)

    Hi Adam, thanks for the feedback. Sounds good, I will keep working on the “local” implementation.

    And thanks for the heads up regarding the license change and the Gitter channel 👍

  3. John Henderson (author)

    @Adam Staveley I’ve extended the pull request with the following HubClientInfo functionality:

    • Notifications when registering or deleting a platform
    • Status and “last updated time” updates for platforms sending a request or a response
    • Optionally enabled Still Alive Check with a configurable rate (a rough sketch follows the todo list below)

    Still todo:

    • Remote/cross-node updates
    • Limiting updates to whitelisted parties
    • Accommodation for load-balanced OCN Nodes
    • Pagination of HubClientInfo responses
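
    As a rough illustration of the still-alive check mentioned above, something along these lines could work with Spring scheduling. The property names and service shape here are assumptions, not the actual configuration keys in the branch:

    ```kotlin
    // Illustrative sketch only; the property names are hypothetical,
    // not the actual ocn-node configuration keys.
    import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty
    import org.springframework.scheduling.annotation.Scheduled
    import org.springframework.stereotype.Component

    @Component
    @ConditionalOnProperty(value = ["ocn.node.stillAliveEnabled"], havingValue = "true")
    class StillAliveCheck {

        // The polling rate comes from configuration, so operators can tune
        // how often each platform's versions endpoint is checked.
        @Scheduled(fixedRateString = "\${ocn.node.stillAliveRate}")
        fun checkConnectedPlatforms() {
            // For each registered platform: GET its versions endpoint and
            // update its status and lastUpdated accordingly (omitted).
        }
    }
    ```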

    I’ve added some unit and integration tests (thanks for the starting point, super handy!) but they could be refined and additional tests are probably needed.

    I’d love any feedback that you have! I’m glad to keep working on it though I understand if you want to handle the implementation differently and/or want to take over to accelerate the development.

  4. John Henderson (author)

    @Adam Staveley Hi Adam, I’ve updated the pull request with a bug fix and a larger refactor of the integration tests. The refactor is in a separate commit for clarity.
    I’ve also added some comments to the files. Let me know if more explanation would be helpful.

  5. Adam Staveley

    Hi John, thanks for this, amazing work!

    I’d like to first develop it under a feature branch and then merge into develop once it’s done. I created the feature/hubclientinfo branch for this and have already merged your changes into it. Our aim is to finish it up over the next 2-3 weeks. I’ll therefore take over, but I’m very happy to discuss changes with you and, as always, feel free to contribute. Let me know if you have any other thoughts about how we can optimally work on it together.

    Big thanks for improving the integration tests too! I added them very quickly before the 1.0 launch and didn’t know when I would be able to clean them up :)

  6. Adam Staveley

    Hi again John, I will close this PR as development has moved to the feature branch feature/hubclientinfo.

    Would be great to get your feedback on it before we merge to develop (perhaps by commenting on the commits, feel free to create pull requests too). I believe it’s pretty much there, though there are still a few todos.

    Key points are:

    • Added a PlannedPartySearch scheduled task which checks the registry at a regular interval to find newly planned parties belonging to a given node (see the sketch after this list)
    • Delete stored planned parties once they register with a node
    • HubClientInfoService notify method now sends out the changed ClientInfo to all other nodes on the network in addition to connected parties
    • ClientInfo PUSH requests across nodes require a signature by the node sending the PUSH update
    • The whitelist is checked on GET and PUSH requests to verify that the recipient is interested in a given party’s ClientInfo
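
    For the first point, a rough sketch of how such a scheduled registry check could be wired. RegistryClient, PlannedPartyNotifier and Party are placeholders for illustration, not the actual classes in the branch:

    ```kotlin
    // Sketch of the planned-party search idea; the interfaces and the
    // property name below are placeholders, not ocn-node classes.
    import org.springframework.scheduling.annotation.Scheduled
    import org.springframework.stereotype.Component

    data class Party(val id: String, val registered: Boolean)

    interface RegistryClient {
        fun partiesForThisNode(): List<Party>
    }

    interface PlannedPartyNotifier {
        fun notifyPlanned(party: Party)
    }

    @Component
    class PlannedPartySearch(
        private val registry: RegistryClient,
        private val notifier: PlannedPartyNotifier
    ) {
        @Scheduled(fixedRateString = "\${ocn.node.plannedPartySearchRate}")
        fun findPlannedParties() {
            // Parties that point to this node in the registry but have not
            // yet completed the credentials handshake are "planned".
            registry.partiesForThisNode()
                .filter { !it.registered }
                .forEach(notifier::notifyPlanned)
        }
    }
    ```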

    Todos:

    • Update the documentation with the newly available properties (as the configuration options are growing, it probably warrants moving them to a separate markdown file)
    • Implement pagination (the interim solution is to reply with the required “last page” pagination headers, X-Total-Count and X-Limit, while effectively ignoring pagination requests; see the sketch below). I don’t see this being too big of a deal right now as the list of participants is fairly small.
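
    As a concrete picture of that interim reply, always answering as if this were the last page. The helper shape here is invented for illustration, not the actual controller code:

    ```kotlin
    // Illustrative only: claim the full list fits in a single page by
    // setting the headers mentioned above. Not the actual ocn-node code.
    import org.springframework.http.ResponseEntity

    fun hubClientInfoResponse(all: List<Any>): ResponseEntity<List<Any>> =
        ResponseEntity.ok()
            .header("X-Total-Count", all.size.toString())
            .header("X-Limit", all.size.toString()) // "limit" equals the total
            .body(all)
    ```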

    I also believe the current implementation works for load-balanced nodes, but I must admit I didn’t put a lot of thought into that during development.

    And big thanks again for providing the groundwork for all this! I found it exceptionally easy to add to what you had already done.

  7. John Henderson (author)

    Wow, lots of progress! I think it's looking good! Sounds good re closing the pull request 👍

    A few thoughts:

    • I find the entity modelling of the NetworkClientInfo a little confusing. It seems as though the model now has two very similar sets of entities (NetworkClientInfo vs Platform/Role). I think I’m just confused about which entity should have the lastUpdated and connectionStatus attributes. I can see how these attributes are currently on the Platform entity (e.g. if the versions endpoint connected, then all roles are connected), but could nodes also store platform/role aggregates for remote parties?
    • I agree with you, I don't see why this wouldn't work for load-balanced nodes. Though I guess you'd only want to have one of the nodes running the scheduled tasks?
    • Having read through the code more thoroughly, I think the TODOs are good entry points for contribution. They are relatively small in scope and allow someone to contribute quickly. I didn’t know they were there when I started contributing, but I will definitely use them as quick targets for future contributions! Maybe other contributors could be tipped off to them if a note were added to the contributing wiki page: https://shareandcharge.atlassian.net/wiki/spaces/OCN/pages/360611849/Open%2BSource%2BDevelopment%2Band%2BContributions

  8. Adam Staveley

    Yes, I agree the modelling is getting a bit confusing and might in fact be wrong. In OCPI 2.2 it is possible for a platform to register with the hub, providing a single versions endpoint for multiple roles. However, it is also possible for a platform to be in control of a single role (e.g. one platform for the CPO division and another for the EMSP division), as in 2.1.1.

    The hubclientinfo module ties connection status to the role, yet it also states that the still-alive check can be carried out using the platform’s versions endpoint. In the scenario of a platform with multiple roles, we must assume that each role is online (connected) if the still-alive check succeeds. This could be incorrect if the role endpoints live on different servers or processes. It also contradicts our renew-connection method, where we validate the connection status based on a role’s request or response.

    Actually, our implementation works both ways: the platform dictates its roles’ status (via the still-alive check), and a role dictates its platform’s status (via renew connection). Putting the connection status on the role rather than the platform would solve the latter issue, but ultimately the still-alive check could still produce incorrect connection statuses.
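
    To make the trade-off concrete, the two shapes look roughly like this. The field names are illustrative, not our actual entities:

    ```kotlin
    // Two illustrative shapes for the same information; names are assumptions.

    // Option A: connection status lives on the platform and roles inherit it.
    // A successful still-alive check against the single versions endpoint
    // marks every role connected, which may be wrong if the roles actually
    // run on separate servers or processes.
    data class PlatformStatus(
        val versionsUrl: String,
        val status: String,
        val lastUpdated: String,
        val roles: List<String>
    )

    // Option B: connection status lives on each role, as hubclientinfo implies.
    // Renew-connection updates (driven by a role's own requests and responses)
    // fit naturally here, but the still-alive check can only infer each role's
    // status from the shared versions endpoint.
    data class RoleStatus(
        val partyID: String,
        val countryCode: String,
        val status: String,
        val lastUpdated: String
    )

    data class PlatformWithRoles(
        val versionsUrl: String,
        val roles: List<RoleStatus>
    )
    ```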

  9. Adam Staveley

    Regarding the other topics: yes, only one node should perform the still-alive check and planned-party search. I will include that in the documentation. I think the other events (like registering etc.) should only ever be triggered on one node anyway, so that wouldn’t be an issue.
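
    One possible way to enforce that in a load-balanced deployment would be to gate scheduling behind a flag that only one instance sets. The property name here is hypothetical, not an existing configuration key:

    ```kotlin
    // Hypothetical property; enable scheduling on exactly one instance.
    import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty
    import org.springframework.context.annotation.Configuration
    import org.springframework.scheduling.annotation.EnableScheduling

    @Configuration
    @EnableScheduling
    @ConditionalOnProperty(value = ["ocn.node.scheduledTasksEnabled"], havingValue = "true")
    class ScheduledTasksConfig
    // Start only one node with ocn.node.scheduledTasksEnabled=true so the
    // still-alive check and planned-party search run in a single place.
    ```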

    And yes good idea regarding the TODOs, I will also add some of the larger ones to the issue tracker.