openIDL - Architecture - Working Group Backlog
Here we track the items in the backlog for architecture:
POC Scope
- current stat plan
- accepted architecture (as described here)
- can be a single upload
- for 2020 data
- validate against individual carrier data, only for carriers participating in the POC
- ?? reconciliation - quarterly / annual
- ?? edit package
Success Criteria
- the two identified personal auto reports match for the participating carriers' data
- manual integration between the hosted node and the carrier adapter / HDS
Question
- Is there a staging area in the HDS?
Readiness for Production
When is the architecture ready to be used in production?
- When all backlog items are adopted and implemented
- Move data to published when requested (this leads to massive duplication)
High Level Requirements
- Privacy for Data Owners
- carrier raw data is private
- Data Quality for the Consumers
- able to participate in the process
- auditable participation in the process
- carrier raw data is in a common format
- data flow to final report must be secured
Backlog
Description | Decisions | Adoption Status | Date of Status | Spike/POC
---|---|---|---|---
Data Verification | How do we verify the data is available for a data call / regulatory report? This is not a measure of its quality (see below). | | |
Data Quality Validation | What validation of the data is provided? Is it all part of openIDL? Is it shared across the community? Is there some provided by the carrier/member? | | |
Edit Package | SDMA provides a way to identify and fix errors as they occur before submission of the data. Does this belong in openIDL as a community component? If so, how do we provide it? | | |
Adapter Hosting Approach | The adapter runs multiple components required by participants in openIDL. How can we host these in member environments? | | |
Adapter Components | What are the components in the adapter? | | |
Extraction Technology | What technology is used to execute the extractions in the member environment? | | |
Data Standard Format | What is the format for the data? Is the data at rest in the HDS the same as or different from the data as it is being validated? | | |
Harmonized Data Store and History | What history is maintained in the harmonized data store? | | |
Discussion
Data Verification
When data is inserted into the harmonized data store, the insertion is registered on the ledger as having passed all validation and as supporting some level of extraction.
Is there a need to capture a hash of the data?
The data inserted into the HDS is captured per policy. The history of the transactions is captured, so the "state" of the policy at a given date may change between the time data is initially inserted and when it is extracted.
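If a hash is captured, one option (purely illustrative; nothing here is specified by openIDL) is to hash a canonical serialization of each policy record at insertion time, so the ledger entry could later show that extracted data matches what was inserted. The record fields below are hypothetical:

```python
import hashlib
import json

def policy_record_hash(record: dict) -> str:
    """Return a SHA-256 hex digest of a canonical JSON serialization.

    Sorting keys and stripping whitespace makes the digest stable across
    writers, so the same logical record always hashes to the same value.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical HDS record; field names are illustrative only.
record = {"policy_id": "P-1001", "state": "OH", "line": "personal_auto"}
digest = policy_record_hash(record)
```

Because the serialization is canonical, two writers inserting the same logical record produce the same digest, regardless of field order.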
The data extraction will state what level of data is required. How does the member attest that the data meets that expectation? Does the consent constitute that attestation?
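One way consent could double as an attestation (a sketch under assumed semantics, not an openIDL design) is for the member to sign a small statement binding the extraction request to the data level it claims the HDS can support. The key handling and field names are assumptions for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical member signing key; a real deployment would use the
# member's ledger identity (e.g. a private key), not a shared secret.
MEMBER_KEY = b"member-demo-key"

def build_attestation(extraction_id: str, data_level: str, as_of: str) -> dict:
    """Create a consent record that also attests to a data level.

    The HMAC ties the member's consent to the specific extraction and
    the level of data it claims the HDS can support as of a given date.
    """
    statement = {
        "extraction_id": extraction_id,
        "data_level": data_level,
        "as_of": as_of,
    }
    payload = json.dumps(statement, sort_keys=True).encode("utf-8")
    statement["signature"] = hmac.new(MEMBER_KEY, payload, hashlib.sha256).hexdigest()
    return statement

# Hypothetical extraction identifier and level.
att = build_attestation("EXT-2020-PA-01", "policy-level", "2020-12-31")
```

Anyone holding the verification key can recompute the signature over the statement fields, so the consent record itself serves as evidence of what the member attested to and when.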