2023-07-24 Architecture WG Meeting Notes
ZOOM Meeting Information:
Monday, July 24, 2023, at 11:30am PT/2:30pm ET.
Join Zoom Meeting
Meeting ID: 790 499 9331
Attendees:
- Sean Bohan (openIDL)
- Mason Wagoner (AAIS)
- Ash Naik (AAIS)
- Yanko Zhelyazkov (Senofi)
- Peter Antley (AAIS)
- Nathan Southern (openIDL)
- Ken Sayers (AAIS)
- Josh Hershman (openIDL)
- James Madison (Hartford)
- Joseph Nibert (AAIS)
- Brian Hoffman (Travelers)
- Tsvetan G (Senofi)
- Jeff Braswell (openIDL)
- David Reale (Travelers)
- Satish Kasala (Hartford)
Agenda:
Notes:
Ken's Diagram:
- Link to Lucidchart - https://lucid.app/lucidchart/d4a62c98-12a2-4229-a27c-18319eed1272/edit?view_items=JtvX69%2BqToi%2FujG2gR7V3eDu4VM%3D&invitationId=inv_5414fb08-f84c-42a9-a5d7-a84d2b9d13f5
- answers the question: what is going across the wire
- HLF (Hyperledger Fabric) as the communications across the carrier boundary
- KS updated the interaction diagram, focused it on a specific interaction between two objects, and put information on that
- updated the high-level footprint diagram to represent when we use different configurations
- added context to the config options - not just what each is, but likely scenarios and when to use SaaS (a rough code sketch of these options follows this list)
- Where and what is hosted - Hosted Cloud
- SaaS - full trust; carrier wants its own node but can't maintain it in house or doesn't have internal staff; option of everything in one block, system as a service
- Multi-tenant - one hosted node serves multiple carriers, no footprint in the carrier cloud; JB mentioned a hybrid approach to multi-tenant (each carrier keeps its own HDS) but it is not ready to discuss; for carriers that can't or won't host their own node and are OK with data managed in a shared DB; open question: who hosts?
- Split/Hybrid - small footprint (loading + extraction) in the carrier cloud, one hosted node for each carrier; carrier app code is just enough to run EPs; when to use: privacy still paramount
- Self-hosted - fully hosted by the carrier
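The following is a minimal Go sketch of the four hosting configurations above, added only as a reading aid. All names here (HostingMode, NodeFootprint, FootprintFor) are hypothetical and are not taken from any openIDL codebase; the placement of the HDS and node in each mode reflects the notes above, not a confirmed design.

```go
package deployment

// HostingMode is a hypothetical enumeration of the node-hosting
// configurations discussed for the footprint diagram.
type HostingMode string

const (
	// SaaS: everything runs as one hosted block for a carrier that
	// wants its own node but can't maintain it in house.
	ModeSaaS HostingMode = "saas"

	// MultiTenant: one hosted node serves multiple carriers; no
	// footprint in the carrier cloud, data managed in a shared DB.
	ModeMultiTenant HostingMode = "multi-tenant"

	// SplitHybrid: a small footprint (loading + extraction) stays in
	// the carrier cloud, with one hosted node per carrier.
	ModeSplitHybrid HostingMode = "split-hybrid"

	// SelfHosted: the full node is hosted by the carrier.
	ModeSelfHosted HostingMode = "self-hosted"
)

// NodeFootprint sketches where the major pieces live for each mode.
type NodeFootprint struct {
	Mode                 HostingMode
	HDSInCarrierCloud    bool // HDS stays inside the carrier's cloud
	NodeHostedExternally bool // node runs outside the carrier's cloud
	SharedDatabase       bool // data managed in a shared DB (multi-tenant)
}

// FootprintFor returns the rough placement discussed in the meeting.
func FootprintFor(mode HostingMode) NodeFootprint {
	switch mode {
	case ModeSaaS:
		return NodeFootprint{Mode: mode, NodeHostedExternally: true}
	case ModeMultiTenant:
		return NodeFootprint{Mode: mode, NodeHostedExternally: true, SharedDatabase: true}
	case ModeSplitHybrid:
		return NodeFootprint{Mode: mode, HDSInCarrierCloud: true, NodeHostedExternally: true}
	default: // ModeSelfHosted
		return NodeFootprint{Mode: mode, HDSInCarrierCloud: true}
	}
}
```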
- JB's Diagram:
- what goes across the barrier
- member data stores on the left (green) flow to the analytics node; the application talks to the default channel
- compatible variation
- UIs, proxies for applications
- represents the HDS (loading shown in green); pink corresponds to requests for data calls; added the openIDL logo in front of HLF - chaincode sitting on the HLF peers, application chaincode for data calls and for transferring data in a PDC (private data collection)
- yellow - extraction engine - originally fairly automatic: when a data call is consented to, the request is run and the data extracted
- there is an additional consent process where data calls might be reviewed, one UI for network 1 and another for network 2 (2 different but related UIs)
- data center or trusted cloud space
- "making calls" - issuing a data call request; notion of the AAIS node and default channels where data call requests and consents are managed by whoever is requesting calls
- "making calls" sounds like an execution of communications across the network
- better: issuing a data call request, managing the data call across the ledger
- all nodes have visibility into the default channel; based on the role of who is logged in, they can do different things with a data call
- overview - managing execution of a data call
- linkage - looking at the data call request; not getting into that level of detail yet
- execution - not exactly how it occurred; in the simplest case it is executed when requested, in others scheduled; in the future there may be some notion of when a job is due or when data call execution is needed, with some type of operational control approval (execution consent)
- 2 types of interfaces - execution in the PDC (private channel) and managing data calls on the default channel (see the chaincode sketch after this list)
- still have the notion that there is a network node, interacting, as the source of the view of current data call requests
- notion of an API - provides the linkage between the data execution being requested and the extraction being performed by the carrier and sent over
- a channel could be the complete connection, but then it becomes more like a web service - too complicated for now
- good to identify what all that means port-wise
- expand and drill down
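As a reading aid for the two interface types above, here is a hedged Go sketch using the Hyperledger Fabric contract API: data call requests and consents live in the default channel's world state, while the extraction result is handed off through a PDC. The contract and method names (DataCallContract, CreateDataCall, SubmitExtractionResult), the collection name "carrierAnalyticsPDC", and the transient key "result" are all illustrative assumptions, not the actual openIDL chaincode.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// DataCall is a simplified data call record kept on the default channel,
// visible to every node on that channel.
type DataCall struct {
	ID          string `json:"id"`
	Description string `json:"description"`
	Consented   bool   `json:"consented"`
}

// DataCallContract is an illustrative contract, not the openIDL chaincode.
type DataCallContract struct {
	contractapi.Contract
}

// CreateDataCall records a new data call request on the default channel.
func (c *DataCallContract) CreateDataCall(ctx contractapi.TransactionContextInterface, id, description string) error {
	call := DataCall{ID: id, Description: description}
	raw, err := json.Marshal(call)
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(id, raw)
}

// ConsentToDataCall marks a data call as consented; per the notes, this is
// roughly the point where the extraction engine would pick up the request.
func (c *DataCallContract) ConsentToDataCall(ctx contractapi.TransactionContextInterface, id string) error {
	raw, err := ctx.GetStub().GetState(id)
	if err != nil {
		return err
	}
	if raw == nil {
		return fmt.Errorf("data call %s not found", id)
	}
	var call DataCall
	if err := json.Unmarshal(raw, &call); err != nil {
		return err
	}
	call.Consented = true
	updated, err := json.Marshal(call)
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(id, updated)
}

// SubmitExtractionResult hands off an extracted report through a private
// data collection (PDC) so it is shared only with the intended parties,
// not the whole channel. The payload arrives in the transient map so it
// never reaches the ordering service.
func (c *DataCallContract) SubmitExtractionResult(ctx contractapi.TransactionContextInterface, id string) error {
	transient, err := ctx.GetStub().GetTransient()
	if err != nil {
		return err
	}
	result, ok := transient["result"]
	if !ok {
		return fmt.Errorf("extraction result missing from transient map")
	}
	// "carrierAnalyticsPDC" is a placeholder collection name.
	return ctx.GetStub().PutPrivateData("carrierAnalyticsPDC", id, result)
}

func main() {
	chaincode, err := contractapi.NewChaincode(&DataCallContract{})
	if err != nil {
		panic(err)
	}
	if err := chaincode.Start(); err != nil {
		panic(err)
	}
}
```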
Time | Item | Who | Notes |
---|---|---|---|