This page covers the backlog for openIDL.
# | Item | Description |
---|---|---|
1 | Extraction pattern - tech | What technology should back the extraction pattern: map/reduce, something optimized for scale, GraphQL, or another approach? |
2 | How to assert data integrity | How do we assert data integrity? Options include computing a checksum after a record is locked and written to the chain, and storing the acknowledgement from HLF in a control database mapped to the record set (see the integrity sketch after this table). |
3 | How to assert data quality | How do we run technical and business validations on the data and certify it? |
4 | Common rule set | Can we provide a common set of rules that all carriers run against their own data before making it available for extraction (see the rule-set sketch after this table)? |
5 | Data quality error threshold | Current practice allows an error rate of up to 5%. Should openIDL allow this? If so, how should it be designed and implemented? |
6 | Reference data validation | Where should the reference data service be hosted: within the member's enterprise or within the node? It must be applied before extraction (a tenet). |
7 | Reference data lookup services/APIs | Which lookup APIs are needed? For example, USPS state/ZIP validation, Carfax, etc. (see the lookup sketch after this table). |
8 | Reference data lookup services/APIs - pricing model | Who pays (assumption: whoever owns the data pays), and how are consumers charged (via assigned accounts, via a centralized billing account prorated to consumption, etc.)? Who signs the vendor contracts? |
9 | Separating the Hyperledger Fabric network from data access | Can a carrier participate in the network from a hosted node without putting its data there? That is, can we give a carrier access to the network without hosting the data access portion in the same node? The HLF runtimes are not required to run at the carrier, and only a simple API would be made available for extraction. |
10 | Simplify the technical footprint | Can we simplify the architecture so that fewer technologies are required? |
11 | Hosted nodes | Should we consider hosted nodes for the HLF network instead of requiring every carrier that wants data privacy to host its own node? |
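
Item 2 describes a concrete integrity mechanism: checksum the locked record set, write it to the chain, and map the HLF acknowledgement back to the record set in a control database. Below is a minimal TypeScript sketch of that flow. The `RecordSet` shape, the `Ledger`/`ControlDb` interfaces, and the canonical-JSON hashing are assumptions for illustration, not openIDL's actual implementation.

```ts
// Minimal sketch of the item 2 integrity flow. Interfaces and shapes are
// assumptions, not the real openIDL data model or HLF integration.
import { createHash } from "crypto";

interface RecordSet {
  id: string;        // hypothetical record-set identifier
  records: object[]; // the locked records to be written to the chain
}

// Compute a deterministic SHA-256 checksum over the locked record set.
// Assumes JSON.stringify produces a stable serialization of the records.
function checksum(recordSet: RecordSet): string {
  const canonical = JSON.stringify(recordSet.records);
  return createHash("sha256").update(canonical).digest("hex");
}

// Hypothetical interfaces standing in for the real HLF submit call and
// whatever control database a node operates.
interface Ledger {
  submit(recordSetId: string, digest: string): Promise<string>; // returns an HLF ack/tx id
}
interface ControlDb {
  save(entry: { recordSetId: string; digest: string; ackId: string }): Promise<void>;
}

// Lock-and-write flow: hash, write to the chain, then map the HLF
// acknowledgement to the record set in the control DB as an audit trail.
async function writeWithIntegrity(rs: RecordSet, ledger: Ledger, db: ControlDb): Promise<void> {
  const digest = checksum(rs);
  const ackId = await ledger.submit(rs.id, digest);
  await db.save({ recordSetId: rs.id, digest, ackId });
}
```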
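For items 4 and 5, one way to combine a common rule set with the 5% threshold from current practice is to run every shared rule against each record locally and certify the data set only if the failure rate stays under the configured threshold. The rule and record shapes below are assumptions, and the example rules are illustrative, not an actual openIDL rule set.

```ts
// Minimal sketch for items 4 and 5: a shared rule set gated by a
// configurable error threshold. The Rule shape and field names are
// assumptions for illustration only.
interface Rule {
  name: string;
  check: (record: Record<string, unknown>) => boolean; // true = record passes
}

// Example rules a common set might contain (illustrative).
const commonRules: Rule[] = [
  { name: "premium is non-negative", check: r => typeof r.premium === "number" && r.premium >= 0 },
  { name: "state code present", check: r => typeof r.state === "string" && r.state.length === 2 },
];

// Run every rule over every record and decide whether the data set may be
// made available for extraction, given a maximum tolerated error rate
// (5% per current practice, if the network decides to allow it).
function certify(records: Record<string, unknown>[], rules: Rule[], maxErrorRate = 0.05) {
  const failures = records.filter(r => rules.some(rule => !rule.check(r)));
  const errorRate = records.length === 0 ? 0 : failures.length / records.length;
  return {
    errorRate,
    failureCount: failures.length,
    certified: errorRate <= maxErrorRate,
  };
}
```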
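For items 6 and 7, reference data validation could call a lookup service before any extraction runs, honoring the tenet in item 6 that validation precedes extraction. The host URL, endpoint path, and response shape below are hypothetical stand-ins; a real integration would use the chosen vendor's actual API (USPS, Carfax, etc.) under whatever pricing model item 8 settles on.

```ts
// Minimal sketch for items 6 and 7: pre-extraction reference validation
// against a lookup service. The URL and response shape are hypothetical.
const LOOKUP_BASE_URL = "https://reference.example.org"; // hypothetical service host

async function isValidStateZip(state: string, zip: string): Promise<boolean> {
  const query = `state=${encodeURIComponent(state)}&zip=${encodeURIComponent(zip)}`;
  const res = await fetch(`${LOOKUP_BASE_URL}/v1/state-zip?${query}`); // assumed endpoint
  if (!res.ok) throw new Error(`lookup service error: ${res.status}`);
  const body = (await res.json()) as { valid: boolean }; // assumed response shape
  return body.valid;
}

// Tenet from item 6: reference validation must run before extraction,
// so a failed lookup blocks the record set from being made available.
async function validateBeforeExtraction(records: { state: string; zip: string }[]): Promise<void> {
  for (const r of records) {
    if (!(await isValidStateZip(r.state, r.zip))) {
      throw new Error(`reference validation failed for ${r.state}/${r.zip}`);
    }
  }
}
```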