This page tracks the architecture backlog items:
Backlog
| Item | Description | Decision | Adoption | Spike/POC |
|---|---|---|---|---|
| Data Verification | How do we verify that the data is available for a data call / regulatory report? | | | |
| Data Validation | What validation of the data is provided? Is it all part of openIDL? Is it shared across the community? Is some provided by the carrier/member? | | | |
| Edit Package | SDMA provides a way to identify and fix errors as they occur, before submission of the data. Does this belong in openIDL as a community component? If so, how do we provide it? | | | |
| Adapter Hosting Approach | The adapter runs multiple components required by participants in openIDL. How can we host these in member environments? | | | |
| Adapter Components | What are the components in the adapter? | | | |
| Extraction Technology | What technology is used to execute the extractions in the member environment? | | | |
| Data Standard Format | What is the format for the data? Is the data at rest in the HDS the same as, or different from, the data as it is being validated? | | | |
| Data Standard Levels | What are the levels of the data standard? How are they identified by extractions? What do they cover (timeframe, data)? | | | |
Discussion
Data Verification
When data is inserted into the harmonized data store (HDS), the insertion is registered on the ledger as having passed all validation and as supporting some level of extraction.
Is there a need to capture a hash of the data?
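A minimal sketch of what such a ledger registration entry could look like, including an optional hash of the inserted payload. The record shape and field names (policyId, validationPassed, extractionLevel, dataHash) are illustrative assumptions, not the openIDL schema:

```typescript
// Illustrative sketch only: the record shape and field names below are
// assumptions, not the actual openIDL ledger schema.
import { createHash } from "node:crypto";

// Hypothetical shape of the entry registered on the ledger when a
// policy record is inserted into the HDS.
interface InsertionRegistration {
  policyId: string;          // policy the inserted data belongs to
  insertedAt: string;        // ISO-8601 timestamp of the insertion
  validationPassed: boolean; // result of running all validations
  extractionLevel: string;   // level of extraction the data supports
  dataHash: string;          // SHA-256 of the inserted payload, if a hash is required
}

// Build the registration entry, capturing a hash of the raw payload so the
// ledger could later prove exactly which bytes were validated.
function registerInsertion(
  policyId: string,
  payload: string,
  extractionLevel: string,
): InsertionRegistration {
  return {
    policyId,
    insertedAt: new Date().toISOString(),
    validationPassed: true, // insertion is only registered after validation
    extractionLevel,
    dataHash: createHash("sha256").update(payload).digest("hex"),
  };
}

console.log(registerInsertion("POL-123", '{"premium": 1000}', "stat-level-1"));
```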
Data inserted into the HDS is captured per policy. The transaction history is also captured, so the "state" of a policy as of a given date may change between the time data is initially inserted and the time it is extracted, as illustrated below.
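One way to picture this: if each HDS row is a transaction against a policy, the state of the policy on a given date is derived by replaying all transactions effective on or before that date. A transaction inserted later but effective earlier changes the answer. A sketch under an assumed, simplified transaction model (not the openIDL HDS schema):

```typescript
// Assumed, simplified transaction model; not the actual openIDL HDS schema.
interface PolicyTransaction {
  policyId: string;
  effectiveDate: string; // date the change takes effect (YYYY-MM-DD)
  insertedAt: string;    // date the row landed in the HDS
  premium: number;       // the field whose "state" we reconstruct
}

// State of a policy as of `asOf`: replay every transaction effective on or
// before that date, in effective-date order, and keep the last value seen.
function premiumAsOf(
  txns: PolicyTransaction[],
  policyId: string,
  asOf: string,
): number | undefined {
  return txns
    .filter((t) => t.policyId === policyId && t.effectiveDate <= asOf)
    .sort((a, b) => a.effectiveDate.localeCompare(b.effectiveDate))
    .at(-1)?.premium;
}

const txns: PolicyTransaction[] = [
  { policyId: "POL-123", effectiveDate: "2023-01-01", insertedAt: "2023-01-05", premium: 1000 },
  // Inserted later, but effective earlier than the extraction date: it
  // retroactively changes the state of the policy "as of" 2023-02-01.
  { policyId: "POL-123", effectiveDate: "2023-01-15", insertedAt: "2023-03-01", premium: 1200 },
];

console.log(premiumAsOf(txns, "POL-123", "2023-02-01")); // 1200, not 1000
```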
An extraction will state what level of data it requires. How does the member attest that their data meets that expectation? Does consenting to the extraction constitute that attestation?
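If consent is to double as the attestation, the consent record would need to echo the level the extraction demands. A sketch of that coupling; the shapes and field names are illustrative assumptions, not the openIDL consent model:

```typescript
// Illustrative only: field names are assumptions, not the openIDL consent schema.
interface ExtractionRequest {
  dataCallId: string;
  requiredLevel: string; // the data-standard level the extraction needs
}

interface Consent {
  dataCallId: string;
  memberId: string;
  attestedLevel: string; // level the member claims their HDS data meets
  consentedAt: string;   // ISO-8601 timestamp of the consent
}

// Consent only constitutes attestation if it explicitly covers the level
// the extraction requires; otherwise a separate attestation step is needed.
function consentCoversRequest(consent: Consent, request: ExtractionRequest): boolean {
  return (
    consent.dataCallId === request.dataCallId &&
    consent.attestedLevel === request.requiredLevel
  );
}
```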