Grouping | ID | Date | Requirement | Notes |
Data and Data Integrity | D.1 | 5/23/22 | Data contained in the carrier data store will conform to OpenIDL data model standards | |
Data and Data Integrity | D.2 | 6/1/22 | OpenIDL data model standards shall exist for all Property & Casualty lines of business except Workers Compensation (List out lines of business). Domestic business for now. | |
Data and Data Integrity | D.3 | 5/23/22 | Minimum data attributes available in the carrier data store shall consist of the "Day 1" OpenIDL data model fields; other attributes in the OpenIDL data model are populated at the option of the carrier | |
Data and Data Integrity | D.4 | 5/23/22 | Data shall consist of policy and loss transactions over the course of the policy term and lifetime of any associated claims based on source system activity | |
Data and Data Integrity | D.5 | 5/23/22 | Data shall be current to the Prior Month + 45 days | |
Data and Data Integrity | D.6 | 5/23/22 | Companies shall maintain data in the carrier data store for 5 prior years plus current year | |
Data and Data Integrity | D.7 | 6/1/22 | All data contained in the carrier data store is solely owned and controlled by that carrier | |
Data and Data Integrity | D.8 | 6/1/22 | Data shall remain accurate as of a point in time and may be corrected over time if errors in the transmission of data occur, with no obligation to restate prior uses of the data. Once data leaves the carrier node, that data is assumed to be published/accepted. | |
Data and Data Integrity | D.9 | 6/1/22 | OpenIDL shall maintain (specification and implementation) an edit package to be available and used by carriers to test conformance to data model standards and data point interactions, similar to the functioning of the AAIS SDMA portal. Implementation is part of the HDS solution. OpenIDL will audit and certify conformance of the edit package implementation. | |
Data and Data Integrity | D.10 | 5/23/22 | Data must pass through the OpenIDL edit package and be within a 5% error tolerance per line and state, similar to acceptance by AAIS through the SDMA portal | |
Data and Data Integrity | D.11 | 6/1/22 | The OpenIDL data model standards will foster effective and efficient data extractions such that queries of data can be satisfied within 24 hours of commitment to participate in an information request | |
Data and Data Integrity | D.12 | 6/1/22 | Any changes to NAIC-required fields in the OpenIDL data model will require a minimum of 18 months' notice for carriers to conform | |
Information Requests | IR.1 | 6/1/22 | Requests for information shall be specific in detail and communicated through a secured protocol | |
Information Requests | IR.2 | 6/1/22 | A forum shall be established for carriers and regulators to discuss and agree to the intent and interpretation of an information request | |
Information Requests | IR.3 | 6/1/22 | Requests for information shall be for aggregated information only; no individual policy, claim, or personally identifiable information shall be requested or honored | Need for info @ a policy level or vehicle, obfuscation of VIN ways these requests are added to or validated? KS: exceptions known when extrax is requested JB: at policy level, info from policies CANT be extracted (they might be useful) or some level of aggregation. Data contributed from each carrier to prevent identification DH: requests - none ask for policy or claim info up until today DH: straight to regulator? fine w/ providing info. Analytics node others have access to and can pull? NO JB: only regulators would have access to information DH: person or group making reports for reg? Concerned. Controls so they cant do anything with data DR: blurring lines from compliance-style store to transaction processing, requires higher standards, conflating 2 systems, holding to other standard can make a lot of reqs messy JB: not matter of timeliness or responsiveness, matter of scope and level of aggregation, level by which info is agg or identified, only collected for purpose of sending to regulator, covenants needed DR: purely a regulator - not LE or Insurance Commissioners. Caveat - not bulk. PII should never be requested in Bulk. If a specific question, then yay/nay "coverage exists" but leery of "give me all VINs" just because DH: dont want to open up our books JB: PII in general not involved. ND is VIN not necessarily person involved. JB - another requirement applies to Data Requests |
Information Requests | IR.4 | 6/1/22 | Information requests shall identify who has access to the data and the resulting reports | DH - not naming people, data within the node. For specific info request KS: To the analytics node or to the specific report? Doesn't change from request to request. Dont add new user to analytics node DH: req from Reg, no interim body in between, just us and Reg have access to data. If AAIS has access to openIDL, and create extraction pattern, need to understand WHICH bodies will have access to that data - needs to be spelled out. Get to 3rd party: AAIS + Carpe Diem, wants to know ahead of time JB: access to the data, the results, the report DH: access to the data AND the report, outside of carrier node KS: aggregated, extracted data DH: Carrier, claim, policy, PII - I need to know who has access to it KS: anything you say is OK to be in the results, you want to know who JB - qual and credentialing DR: need for simple data lifecycle, provenance. For this request, this all lives in the extraction request, for this request - this raw data - the result shall be X and visible to Y folks for Duration Z. No unfettered access to HDS, only with some purpose. Even analytics node shouldn't be used for other purposes without consent KS: Definition of what data should be used for JB: Categories: privileged, etc. DR: a lot may not be funct. but when we get to approval of extraction patterns, might be more implementation JB - term sheet of a request DR - adapters can see these X raw elements, can turn them into Z elements for ABC. Routine if using same data, but should be explicit KS: Nuance, part info request and part how it works. Who can see uncombined data should be part of system architecture. Refined results are what we are talking about. KS: When a carrier consents their data is run thru extrax, their data is recognizable UNTIL it is processed DR: 3 steps: RAW, Semi-Agg/not anonymized, Anonymized. Bake in now |
Information Requests | IR.5 | 5/23/22 | Information requests shall define timeframes for data to be included in the aggregation | JB: talking about lifetime use of info - historical or one purpose, number of uses, number of purposes DH: when you make a req for info, request must be specific (time parameters, types, etc.) for the request of the data - query range |
Information Requests | IR.6 | 5/23/22 | Information requests shall define the attributes to be used in aggregation | JB: Nature of the data call request? KS: should be redundant - dont see people reading code JB: query, results in aggregated things, 2 parts of a request-report. Req will identify the things selected and need to be accessed. If you did this via Wizard or screen, those criteria included at that level. Translated into extraction KS: at least, big declarative is a heavy lift. Not sure short term target, right now map-reduce function Peter - attaching meta data to the calls, human readable - will need that clarity JB - not talking NLP, but request-translated-terms/types requested and accessed in raw data. KS: whats going over the wire a result of an agg routine. Will return written premium by x and y. DR: acceptance criteria - request, tells us XYZ, approve/reject - some plain language explanation of what is being asked for. REG: these premiums these lines - should come out in the aggregate. Who writes the query? Analytics node? REGs? Here is what the output looks like, whats needed to gen output. Prob run test execution, these elements were accessed, accepted. DH: also want to know if there is extraneous data requested thats a backwards way to get some data PA: not quite sure what actor will write Extrax Pattern, will be run on certain analytics node, who owns that query JB: if in fact, aggregating total premiums per zip, other criteria involved wouldn't show up in req. If you asked "give me total premium on house on Main street" - different thing. Providing info in aggregate JM: solutioning - req is clear, if you use elements, tell me what you want to use DR: needs to be a req JM: might be hard, implementation JB - nature of query will specify types of data JM - by def, person capable of reading code will be able to answer question. Must be operable by human beings PA: I will write an Extrax pattern to calc premium on X. Who will come in and validate that query is doing what it is supposed to be doing? DR: solution prob for how to verify, on us to solution for, need to know what was supposed to be requested PA: person running Analytics node needs to validate JB: query request, what you can request, minimal set to expand, translates to extraction logic DR: someone writing query should be responsible, result A and Inputs B - need to be able to verify only B was touched and ONLY A came out. what data pulled for what end - must be defined - should be trivial for whoever is writing the query JB - specifying the things the query is for and validating thats what it does JM - saying you can block someone AND block/report. "I reject this request" vs "You said you needed 5 things and we see you requested 7 so..." JB specifying what it is intended to do is a starting point DR - then governance JB - glossary DR - not thousands of elements |
Information Requests | IR.7 | 5/23/22 | Information requests shall define the logic for extracting and aggregating data | DR: interpretation - doesn't need to be pseudocode level or extremely detailed but has some detail JB - business justification request? DH/JM - yes JB - specifies purpose, what elements, who its for, how done - human understandable JM - will be metadata page, very descriptive, processable by humans JB - logical request KS - human TRANSLATABLE (understandable) (an illustrative request term-sheet sketch appears after this table) |
Information Requests | IR.8 | 5/23/22 | Information requests shall identify and define the calculations to be used in aggregations, analysis, and reporting | JB - similar to logic. Combine with IR.7 |
Information Requests | IR.9 | 5/23/22 | Information requests shall define the specific use of the information | JB - use and access - REG only, single use? JM - who in the sense of roles not names, will know what they want to do with it. Privacy +. Different than "WHO". KS - restriction/constraint. If you say you use it for that, thats all you can use it for. JB "specific purpose and not other things" - like licensing JM - commercial vs personal all the time. |
Information Requests | IR.10 | 6/1/22 | Information requests shall define the permitted accessors to the information and users of data | JB: the WHO. Use declarative, WHO is a restriction JM - redundant with IR.4 DH - who has access to final report JB - other was access in transit. RELATED to IR.4. JM - lifecycle flow - who has access throughout DR - implementation has that data in the same place, doesn't hurt to be explicit with requirement JB - tempted - come up with a draft of template of a term sheet for this DR - few weeks ago - definition of that request template. DH: beyond the smart contract - business level |
Information Requests | IR.11 | 5/23/22 | Information requests shall communicate the proportion of individual carrier information to the population of data in the extraction prior to final commitment to participate | JB - keep carriers protected from self-detection. Data can't be deidentified. Provided to each contributor. JB - only know when you have the total DR - requirement: maximum acceptable, sep req that says "no data will be pulled or aggregated UNLESS it can be confirmed. Might have to do pseudo-extraction to get a rough size. JB - consent to request, what it is asking, data is contrib to the analytics node as "pending" but not approved for use until such time there is sufficient data to let the node say what the totals were DR - maybe do with a lighter weight. Shallow (25% of WHAT) JB - general metrics, so many policies outstanding JM - language of "prior to final commitment to participate" DH - two step - what portion you will have (query all avail carriers, who will participate) then when there is a sense of what % of the total WILL we participate. Others face same thing KS - time problem - bartering back and forth JM - regardless of how we do it, data wont be seen until we meet the threshold. We won't see data unless X%. Multi-stage scares me a lot. DR - once extracted have we lost control? Governance. In Analytics node. Lost effective technical control. Def recourse. Affirmative tech control is lost. JM - governance level requirement. Whole solution requires not release data w/o reaching threshold. You pull one carrier then ouch KS - micro-req - define participation threshold - then argue governance DH - 2-step process, another requirement below, set at 15%? KS - % of what? premium, loss? KS - reports just dont tell one thing, define that and then deal JB - requires more thought IMPORTANT ONE |
Information Requests | IR.12 | 5/23/22 | Information requests shall be for one time use only. Additional uses for data will require a new request. | JB - licensing of its use, one use, baseline, maybe beyond 1-time use. Use can be controlled or specified JM - what if you know something is 1/4 or annual. Each submitted as a sep request DH - 1 req per year or some timeframe sufficient PA - some indication - has your org approved before? changed year to year? JM - grand vision - if you did have something monthly, set as monthly recurring, could be useful DH - specific req recurring, do it on a time period - this month X next month similar but not the same. Dont want scope of any req expanded beyond what was agreed to JM - RECURRING important but maybe out of scope for now JB - data not being used without consent, without approval, who is using it |
Information Requests | IR.13 | 6/1/22 | Information requests shall identify the path information will flow from its raw form through final reporting (e.g. carrier data store to private analytics node to Multi-Carrier aggregation node to Regulator) | PA - path: REG makes request, to analytics node, ANode requests data DH - clear where info is flowing, no side trips the data goes to, not aware of as carrier TE - openIDL will deploy everything from point you say OK, Data calls - fields that define purpose TE - combined and anonymized, presumptive TE - reqs on openIDL now, on reqs on carrier's perspective, on the ANode now, reqs for openIDL operating the analytics node for phase 2 obligations, committed to "what we do with data we got" DR - contracts are the data calls themselves TE - real world contracts DR - blanket TOS, defines things like SLAs and counterparties TE - carriers and openIDL DR - can't imagine no TOS JZ - w/in openIDL there will be SLA for stat reporting TE - SLA as part of the network, openIDL needs to become an agent DR - same verbiage can hit both reqs DH - concerned with deviating from normal path BH - data leaves company, know what it is going to do/go to PA - consider running ANode will be offering TOS JB |
Information Requests | IR.14 | 6/1/22 | Information requests shall identify the form information will take from its raw form through final reporting (raw data; carrier-summarized, aggregated and anonymized data; reported data) | JB - similar to prev, relates to spec on anonymization, agg vs anon, abstract detail identifiable. Text is one thing, code is another, some way to formalize/codify nature of call, what being requested, identifies things other than narrative statement (nature of req), analysis of metadata interface DH - one is path this is form KS - how much in the prose vs Extract Pattern, EP has gory details. When filling in req, fields req/fields output? right now data call, fill out, explain what trying, that form extended for deeper info - these are the items req, agg will happen, etc.? JB - this req indicates KS - loose prose and no form? PA - structured stuff, table to fill out KS - asking struct questions, all the things you have to ask JB - design, how to design metadata DH - may not be part of initial, will be part of final ask before final approval is provided KS - get the gist but before I approve this tell me why/what fields DH - person doing extract pattern would be able to JB - could be done in some form of survey of a page (here's what I want, looking for, data-specific not necc technical). anyone implementing call will need to know exactly what regulator wants anyway |
Information Requests | IR.15 | 5/23/22 | Information requests have an expiration date and time from which consent is needed, if applicable | DH - deadline for responding, no response = no (comes up later down pg) (addressed above) JB - what is the time bracket, time bracket use of info. Basis of analyzing when new reqs made - can I do this? when can I do this? If I do this when? Stages of Consent (not single date/time) PA - defining what % of the data you are submitting - raw #s? amount of cars? % total records? DH - % of whatever is being requested PA - explicit JB - # of diff ways, not fully detailed PA - by carrier, etc. |
Information Requests | IR.16 | 6/1/22 | All requests for information, its approval, the disposition of data from its raw form through final reporting shall be tracked, recorded and archived within OpenIDL | PA - where tracked and recorded? Private channels between carrier and analytics nodes? on chain? KS - everything on-chain except raw data, every interaction, consent, etc. PA - Eric in VA, makes data call for auto stuff KS - Eric creates data call, goes on ledger, uses UI, fills out form, data call on ledger, as diff orgs interact with that (like/dislike) recorded on ledger PA - how will ind carriers know % of their data vs total data on a data call? KS - TBD TE - captured, outside of this group KS - extract pattern put into data call on the ledger, json file with map-reduce, consents registered and stored with the data call PA - actors consenting or not: KS: Carriers PA sign in? JB Alerts and pushes. NEEDS BREAKDOWN OF REQUEST TYPES (an illustrative extraction-pattern sketch appears after this table) |
Information Requests | IR.17 | 5/23/22 | Carriers who participate in information requests shall receive a copy of the final information presented as well as their individual carrier results | PA - receipt + copy of the full payload DH - whatever is shared with ANYONE I want a copy JB - inc Regulator? DH - anonymized, should be able to see the whole thing, concerned about 25%, wants to see their OWN results JB - every call? clear the benefit of anon agg data is benefit to carriers AND regs PA - using openIDL creating any calls that would be bad for DH to see the whole pic DH - aggregated data only, not detail JZ - can't anticipate all, from beginning, agg data is made avail to carriers, state reports are public info, fund principals, value to carriers and they get to see reports. Can have Robin weigh in, everyone needs to know when states get info, one of the reasons why they use stat reporters in past, anything that goes to state entity can be given to anyone who requests. NOT private enterprise when discussing stat reporting JB - how would that data be returned JZ - data thru channel to analytics node where anonymized TE - goal, from arch, make it so each node can be a data owner node and analytics node so that transactions can be chained together. Chain req together from data source to delivery. Look at arch as actors: data owner/info receiver/network governance. Can resp to EP, stat reporting network, agg data in analytics node needs to look at that like another data set. Anon-Agg-Test for final delivery (out of visibility of regulator). Should automate AAIS role, so timeliness much faster, so EP happens, is transparent, give the Regs. JB - sharing of anon/agg data, one place could be shared is the PDC of the common channel TE - which common channel? NOW - default channel and peer to peer channels. Idea - one default channel (openIDL) or another one (other networks). Default channel can't be everything to everyone unless super lightweight. JB - means for returning info to submitters and dedicated channel for that purpose (better in openIDL) - not the default channel (used for comms) but some channel dedicated for returning results TE - stat agent, executing rules for annual stat report for each state, combined data doesn't have value for submitting carriers today. How do we give more value back not just info reported and compliant acc rules, but all this data that could be used by the states (loss valuation, etc) should be best data product avail (benchmarking, trends in market, etc.). Giving data back to that reporting member. Carrier could have own analytics node, have own EP that dug into field x |
Information Requests | IR.18 | 6/1/22 | Carriers decide in which information requests they will participate | JB - given with the disc around consent, summary of reqs DH - up to the carriers to participate OR assumed to participate |
Information Requests | IR.19 | 6/1/22 | Carriers must provide an affirmative response prior to any information being extracted to the private analytics node | JB - along with IR.18 (consent on record) |
Information Requests | IR.20 | 5/23/22 | Final reports shall be archived by OpenIDL for 3 years | JB - network of communication and collab, who is doing archiving (analytic node? carriers have their copy? cloud archive?) - identify is every member responsible for their own archiving. openIDL is the network. DH do we need a data center? PA - archive means a place for archiving JB - ID how accomplished, more than one requester of info, what is a final report, mult requestors, people providing info to diff requestors, one of the issues - is private data collection used for things in transit, complexity DH - is openIDL just a network or is it also an intermediary? JB - resp for maintaining, monitoring is this something that becomes a cost factor, if it is archived does it need to be accessible? cheaper ways to do that if not on chain all the time. Need to look at who might provide archival process. Role question. SB - risk and liability? JB - if archiving is of interest, each node archived, each org could do that - WHY? what reasons for archiving. Needs more detail JZ - diff conversation, idea of archiving beyond scope of openIDL, behind carriers holding data, disappears after the fact and hash - outside of scope of RR JB - outside of initial scope PA - three years after time generated DH - published |
Information Requests | IR.21 | 6/13/22 | Information requests should be testable. Should be able to execute a dry run and know exactly what would be returned if the data call executed | JB - seem to occur anyway if you have something to be run to begin with, ought to be able to do it in HDS and test PA - setting up testnet for us to und cost to op network - talk about a POC HDS or generic HDS, test environment? JB - intent of this item, a per req basis, request should be testable - talking about if you do get a data call or info request, test locally to see if it runs - looking for test facility for data calls and extracts? or verify executable? DH - didn't add it KS - consent to something, need to know what you will return before you consent PA - dev/UAT/Prod looking to maintain in openIDL? JB - sep subject - know what you return on a req by req basis KS - fits a prev req - see just what they are returning, a dry run |
Information Requests | IR.22 | 6/21/22 | NOTIFICATIONS: Carriers, Regulators: New Data Calls, Consents, etc. (*TBD) | what groups of actors would receive them, approve vs evaluate will generate more reqs |
Access and Security | AS.1 | 5/23/22 | Carrier's raw data will be "walled off" from other entities with access only through permissioned protocols | Straightforward requirement, w/in Carrier HDS KS: multi-tenant node as well? logical JB - yes SK: analytical node? same concept? per carrier? JB - raw data IN the carrier node KS: know the data comes to Analytics node carrier-identified, want to make sure no one has access to that data w/in the ANode JB - NO access to raw data, doesn't apply to analytics node DR - once on ANode, not wide open, still some permissioning, implementation and access will be different KS: Sep req - aggregated data, what shows up on ANode, confusing raw data DR - still a need, just b/c outside CarrierNODE still needs to be defined JB - qualification - raw data, implies on CarrierNODE SK: clarification to Dale - raw data on carrier side or raw data could mean ANode, aggregated? DR - catch all term - carrier identifiable data only accessed by permissioned protocol JB - best to deal with life cycle, when data does move DR - if Lifecycle changes, dont need to keep changing requirements DR - ANY carrier data must have permissioned access pattern of some kind - never just open - still needs controls (even in ANode) |
Access and Security | AS.2 | 6/1/22 | Carriers raw data shall not leave its control - a secured limited access "private analytics node" may be established for processing information requests | DR - think DH referring to the adapter, raw data shouldn't leave but might need to be a mechanism to access raw data JB - API adapter to access the data PA - hold this for a tenant, how does ND with the VINs go? Fact we hash the VINs, make this still workable? DR - not sure ND is a violation of the tenet PA wants to revisit KS - is a VIN PII? Heard "no" it is not, could be returned as a result of an extrax, not have to be returned hashed JB - anonymize w/ encryption, comparison with DMV, compare equiv VINs and policy data KS - heard not necessary JB - raw data not anonymized KS - stuck on "private analytics node" - raw data? JB - adapter that interfaces with the HDS at the Carrier node, in the Carrier perimeter, separates Fabric request by not having directly on CarrierNode, but thru extrax pattern to get results. KS - boundaries still in CarrierNODE still? JB - some adapter with API, where reqs are made thru well-defined channels, nature of which not entirely clear (get data in serialized fashion?) - not that difficult once est extraction request and get the data |
Access and Security | AS.3 | 6/1/22 | If multiple information requests are being processed at the same time, separate "private analytics nodes" with separate access shall be employed | KS - "private analytical nodes" ? DR - concern, if you approve mult reqs, access diff data is fine, in theory AGGREGATION could pull datasets together KS - no crosstalk with extractions JB - sep logical workflow of each request SK - for each data call data set is different, 1st = combined prem for zip, or 2nd could be something else, - saying those two cannot combine the data while the data call is being serviced? JB - think of it as sep channels KS - logically separate JB - API not fleshed out, needs to be, est conversation ID for a data call SK: little bit of solutioning - can one API service all data calls - flesh out - how do we sep all data calls KS - function gets result JB - same API, mult instances DR - preclude - long lived node, caching every data call ever made, prohibited by this req - ability to return. Req would throw that part of the arch out JB - adapter not a cache DR - not getting to how JB - stipulation how data utilized, combos occur, lifetimes |
Access and Security | AS.4 | 6/1/22 | If multiple information requests are being processed at the same time, the data for each request will be segregated | JB - saying the same thing as AS.3 DR - maybe not just the node but in transit, maybe broader DALE - not having the data commingled and access to that data (in flight, not raw) commingled with other information requests KS - dont want two extraction patterns to interact or crosstalk - cant talk to each other about what they have |
Access and Security | AS.5 | 6/1/22 | Carrier data may be transmitted to a private analytics node only as the result of an approved data request via a permissioned access protocol | JB - goes back to the concept of consent - we might want to suggest a substitution for private analytics node for "API CHANNEL" - avoid sep node per se - unless we all accept priv analytics node DR - INTERFACE KS - heard this as not the same as before - what we called the adapter in prev - this is the ANode KS - "interface" in that req means DESTINATION in terms of data CLARIFICATION FROM DALE - it is the interface, transmitting of data beyond HDS, basic fund of thru permissioned access and thru a data request THAT HAS BEEN APPROVED JB - priv channel between carrier and ANode - priv channel? YES Dale - convo Ken and Dale have had, little bit of solutioning - where does the data land when it leaves the HDS JB - connection/relationship between carrier and ANode where it is kept private KS - PRIVATE CHANNEL JB - not the adapter, Private channel to ANode Dale - not leaving HDS w/o permission |
Access and Security | AS.6 | 5/23/22 | Carrier data that has been aggregated and anonymized may be transmitted to a private analytics node through a secured protocol | JB - already talked about and accepted, maybe AND/OR anonymized, def have to have some means of disintermediating them |
Access and Security | AS.7 | 6/1/22 | Carrier data in the private analytics node shall only be used for the purposes for which permissioned access has been granted | JB - similar to reqs above (SEAN) |
Access and Security | AS.8 | 6/1/22 | Carrier data in the private analytics node shall be immediately purged upon completion of the processing for which permissioned access was granted | JB - similar, certain period of time was allowed to use that data along with permissioned access - license to use for reporting purposes JB - node collecting this for analysis on behalf of carriers SK - does this mean Data purged after every data call is serviced? JB - period of time intended for data (ad hoc, ongoing report) - use is only for request, nothing else - can see working on long running report, data every quarter, not just when you first receive it - concern - not to accumulate lots of data b/c available - must be specific for request SK - timeframe? Data calls perpetual? DC today, how long is it needed? JB - talked about specification of meta data that subscribed request (retention, etc.) - Recurring call, mult times per year or adhoc for incident, would be described and part of the making of the data call - longer running or recurring, understand but not used for anything else |
Access and Security | AS.9 | 5/23/22 | No Personally identifiable information (PII) data shall be transmitted | agreed SK - exceptions? meaningful dataset w/o some PII JB - provenance of PII, out of your control if it leaves your perimeter - PII not transmitted is a safe assumption KS - changes transmitted to "outside carrier control" JB - NO PII shall be required to leave the carrier KS - dont want to say "cant be in HDS" because it can be - when it leaves HDS it would not be in there - HDS has data avail to extraction, PII could be in there |
Access and Security | AS.10 | 5/23/22 | No altering or embellishing data including appending outside data is permitted throughout the processing of the information request unless approved by carrier | JB - carrier may have outside info it can use, if willing to submit, but once collected it would not be done AFTER carrier released it KS - carrier has to approve it SK - good requirement KS - would be in the extraction pattern - known thing that has to happen, approve ExtraxPattern you know the embellishment would happen - embellishment would be part of the extrax pattern |
Access and Security | AS.11 | 5/23/22 | No changes to request, attributes used, extraction patterns, accessors, users, or specific use of the data is permitted post consent | KS - works diff right now - not really consent that makes it immutable - makes it the issuing of it that makes it immutable - locked down - after ISSUANCE it is immutable JB - no changes to req can be made after its issued, could be when a req is issued, modification of request based on feedback KS - thought thru during prev design sessions about flow - when you issue that vehicle version it becomes immutable on a blockchain JB - procedure for revising - versioning of the requests? KS - make that a requirement SK - making it immutable through life cycle is a challenge, putting digital rights on a payload JB - not the results it is the REQUEST KS - Dale discussing request data |
Access and Security | AS.12 | 5/23/22 | Only authorized approvers may commit carrier to a data request | KS - two layers - auth org (carrier) and then the users INSIDE the org - Dale looking for permissions, credentialed roles, etc. JB - will involve identity and credentialing - needs review |
Access and Security | AS.13 | 5/23/22 | Data request communication shall be through a communications protocol within OpenIDL and archived within OpenIDL | JB - what Fabric does with chaincodes sending out and getting responses KS - second half adds something JB - written on chain in gen channel, where archive of the requests is KS - log of comms JB - general channel of the Fabric blockchain would have it, artifact of comms protocol being used KS - could this be said as the "communications are auditable or logged" JB - instead of archived? KS - archive is specific, hard to get to JB - through an auditable comms protocol, opp to say "hey lets do this on blockchain" comes with the request KS - application needs to use blockchain correctly to do this |
Access and Security | AS.14 | 5/23/22 | Individual carrier contribution to a data request will not exceed 15% of the population of premium, losses, exposures, etc. for a given information request | SK - good one, how to measure KS - have to provide what metric to say "15%", has to be specific to data call which threshold not crossing JB - may want to say "defined % of contrib based on nature of data call" KS - metric has to be specified, dont care about premiums then needs to be something else JB - % AND metric SK: unless carrier provides data will not know 15 or 20% JB - 2 phase consent - generate data set, then look at it compared to others and decide if you agree to continue ANode would have to perform that service - 2 phase consent SK - "as of this date, this is the % of..." JB "in this slice of time, these are the results" (an illustrative threshold-check sketch appears after this table) |
Access and Security | AS.15 | 6/1/22 | OpenIDL is responsible for fulfilling multi-carrier information requests including extraction patterns, aggregations and formatting of final reports | JB - monitoring the network, saying openIDL is responsible is misstated - needs to be rephrased - DESIGNED to fulfill SK - given, implicit JB - openIDL governing network - mult sub-roles to be fulfilled |
Communication | C.1 | 6/1/22 | All requests for information via OpenIDL will be through a secured communications portal within OpenIDL | PA - Angular JS? KS - requests for information? Extraction? Data Call? Extraction Pattern triggered by data call? PA - why/how being secured Break it apart tomorrow |
Communication | C.2 | 6/1/22 | All communications will be written (electronic) and be archived by OpenIDL for 10 years | PA - kinds? Nodes talking PA - banking, held all for 7 years, is 10 years industry standard for Insurance? DH - put it down, up for debate PA - bound, delete everything? garbage collection after set date DH - internal record retention reqs, not sure if industry standard JB - archived by openIDL - who is the party, actor w/in who would do that? Comms or requests, on common channel, written on blockchain and stay there but data transferred to ANodes, sent thru private data collection repos, used as buffers to send data, who would be resp for archiving data payloads sent for reporting purposes? ANode? any ANode involved? PA - seems like a funct of ANode JB - openIDL is the network DH - question last time - is openIDL a network or is it also an intermediary? JB - openIDL, org governing and certifying / monitoring network, archival process agreed upon by the producers and consumers of data - gets into agreements that exist between makers of data extract reqs, and receivers PA - come back to, mult ANodes, person in charge of a specific node has control over what it is doing. AAIS is one, doing state auto coverage reports, resp for keeping those records (just like today). State of VA, making adhoc calls, would hold levers and switches for those calls and results JB - requirement may not be able to be sustained for all openIDL participants - MORE SPECIFICITY NEEDED KS - better define communications, lot of diff comms happening in this process, some are def happening on ledger, some not. Ex: the "why" someone doesn't like a data call could be resolved w/ a telephone call, do we want to define what parts are archived clearly DH - ties back with info req, whole info req, whatever means to comm the intent and fields, not necessarily data, this is the back and forth going thru the network JB - did discuss clarification, consent, all those things on chain, as long as chain maintained should be there KS - still nuance, hit things like "unliked it" without context, you dont hear chitchat b/w parties about why. Very different level of auditability DH - written comms thru network, whats archived, verbal = not JB - not trade desk recording for audio calls KS - whatever info captured on data call itself and events (consent, like, etc.) network activity |
Communication | C.3 | 6/1/22 | A non-response to a request for information will be considered a decline to participate | DH - dont want assumed participating KS - in order to say, require/decline you have to know who you expect to respond. Respond? IN JB - no response to request is NOT considered consent DH - dont have permission to do it JB cannot book as decline KS - already know ND, ND wants top ten to participate, ID 10 they want answers from, feels like req that the regulator, can put in there "I expect you Carrier X Y Z to respond". Is there a req to define who you expect to respond "we cant do this if you 10 dont respond". JB - could have equiv of consent list, not everybody on the network, req of type might be for participants listening to it, think if theres a mult set of people in the community make req to, req list (mailing list style). From consent protocol - non response is NOT considered consent. Assumption - need to ID who you are waiting on consent from. |
Communication | C.4 | 6/1/22 | Requests for information must come from an authorized representative of the requesting body | PA - define various roles in requesting bodies, some who have access to machine who wont be auth to make request, what are the roles JB - credentialling and validation of requests, consents the same DH - who at the insurance dept can ask for data or information JB credentialing, passed on along with data call made on behalf (intermediary w/ ANode) |
Communication | C.5 | 6/1/22 | Requests for information must state the regulatory authority for the information being sought | PA - statute for extract pattern DH - sometimes market conduct, need to und that (diff protocols in company) - not obligated to provide info just because someone asks for it - must be legal means for someone to ask, for internal audit need to und what that legal authority is PA - walk thru, auto coverage report, 50 states doing business writing auto in, 50 reports turned in, each state ind needs to give justification why each wants it? DH - stat reporting not right for this, but data calls (like Hurricane use case from ELowe) JB - req from auth commissioner, as long as authorized DH - get person and statute TODAY when they get data calls - PERSON AND STATUTE NEED REGULATORS INPUT ON HOW CALLS ARE PROCESSED DH - dont want to support fishing expeditions JB - if regulator has auth to ask under compliance requests, whats involved in the regulator specifying, input from AAIS would be helpful |
Communication | C.6 | 6/1/22 | Agreement to participate in a request for information is conditioned on OpenIDL providing the carrier the proportion of data that carrier is providing to the population of data | JB - 2 phase consent PA - more solution based discussion, not just a giant neverending "carrier 7 bailed" issues JB - cant move to processing until you get a quorum of carriers PA - not sure, lets say REG makes req, Dale calls them up, need Req - REQUESTOR CAN CANCEL A REQUEST BEFORE IT IS FINISHED DH - need that requirement |
Communication | C.7 | 6/1/22 | Final agreement to participate in a request for information is valid once received by the OpenIDL communications portal | DH - comms side of the requirement - at what point is it considered a valid consent? when received by the portal. Need date and time |
Communication | C.8 | 6/1/22 | Final agreement to participate may be rescinded up to an hour after the final agreement to affirm participation is received by the communications portal | DH - some facility to change your mind (stop the presses), mult reasons (error, etc.) JB - introduce the cutoff by which things would be in motion, biz process cutoff, DH - dont want someone starting on it, fat finger rule, undo JB nature of the request and how quickly acted on, received in the hour, take a week to start - whats the nature of the request, some timeframe, time to change mind after x time, depending on what type of call it is GW - rescission timeframe KS - odd requirement - most systems give you an "Are You Sure"? Not saying bad, but odd DH - "YES" and boss says you shouldnt have done that JB - req could be, ability to rescind as long as possible, depends on timeframes work would be done, specific to call, no generic 1 hour grace period, some calls quicker to process than others, window is not constant for all calls DH - hour is a placeholder (TBD discussion) JB - flash crash of May 2010 - cancel reqs didn't get through - we dont have those types of realtime probs GW - business process? JB - per carrier, per policy of carrier |
Information Requests | IR.23 | 6/28/22 | The requester can define what organizations should respond to a request. | KS - REGS can compel? DH - can compel, but not required to go thru openIDL, can go to state directly, JB - if they want to use openIDL, gen case all carriers KS - another requirement? |
Information Requests | IR.24 | 6/28 | Requester can terminate a Data Call prior to release of final report(s), at which point all data about that call would be deleted, while communications about that request would stay intact | |
Information Requests | IR.25 | 6/28 | Carriers do not have to respond to a request via openIDL. They can go direct to the state (out of band) | KS - do they need to log in openIDL that they went out of band (new Requirement under communications) DH - no response same as a "no" (see earlier reqs) KS - no need to log they have gone somewhere else |
Operating Infrastructure | OI.01 | 7/8 | openIDL.org (foundational network) includes ability to test a fully functional mock version in a non-production framework, in addition to running a production-oriented one | JB - Testnet: place by which people can either investigate - sandbox etc. - system testing can be done - sans impact on production, deployment of same resources and use of same code. Mainnet is the main openIDL network. DR: Is it always possible to meet this requirement? Feels like an implementation detail, not a requirement. JB: Inclusive, not exclusive KS: Danger of implying one network - as opposed to multiple JB: openIDL governance per se covers both testnet and mainnet JB: Simply stating that what we're trying to do here is part of openIDL organization PA: this is saying openIDL should be highly testable. (Capability to test in a non-production capacity). JB: openIDL predicated on Hyperledger Fabric as a means of communication. |
Operating Infrastructure | OI.02 | 7/8 | Mainnet is the live openIDL network and is the sum of the Nodes, Data, Data Calls, Extraction Patterns and Smart Contracts that make up openIDL | DH - mainnet and testnet - solutions rather than reqs. |
Operating Infrastructure | OI.03 | 7/8 | Entities that can operate Nodes on openIDL are Members, Associate Members, Infrastructure Partners and openIDL (the organization) | |
Operating Infrastructure | OI.04 | 7/8 | Only approved entities (Members, Associate Members) can request, access or process data from openIDL | refers to IO.05 |
Operating Infrastructure | OI.05 | 7/8 | All entities “on” an openIDL community must be approved by the GB and TSC after evaluation and due diligence by openIDL (Policies and Procedures to be developed) | KS - more than one network (collection of participants in a particular biz case) - currently stat reporting, only SR network - mult networks with mult gov structures - more generic than mainnet JB - TSC and GB calls (recent) - able to manage the communities - reqs for joining openIDL are still some level of validation, joining an app community depends on what those reqs might be - not a decentralized anon org - some need for openIDL.org to coord and orchestrate procedures - monitor and govern - using fund network infra, one org could be member of different communities, activities and roles specific to that application KS - each community may have its own GB and TSC JB - talking about openIDL's network (anyone can run the software). Communities can have own boards/procedures, not as if someone at GB is approving apps KS - setting context of openIDL single community - couching reqs in context of stat reporting JB - reqs is to recognize reqs and stakeholders, dont get hung up on progress |
Operating Infrastructure | OI.06 | 7/8 | <Network> will run the most recent, stable build of the openIDL codebase | |
Operating Infrastructure | OI.07 | 7/8 | All updates to openIDL (patches, critical vulnerabilities fixes, software upgrades, modules, features, capabilities, etc.) will be coordinated by openIDL and require subsequent approval by openIDL Maintainers and then openIDL TSC | KS - update mainnet, participants not controlled by openIDL, hosted or on-prem, has to stay in sync JB - not things you do in lab/rapid change dev, changes to data standards and network configs, few and far between, objectives to have the types of things take place amongst community of collaborators, max flexibility and timelines, specify these are not a way to maintain a single application, coordination KS - big non functional requirement JB - not the kind of thing, every other day notice from your browser "time to update" DR - territory - work, always avoid breaking integrations, never be forced to make an update w/o lots of lift, goes to arch, what lift do I get out of being connected 24/7? Needs to be convinced by connected 24/7 - how many nodes need to be up at a given time? JB - how much a node needs to be active to resp to traffic vs how much work to be done - different - DR - 5 carrier nodes, whats the consensus? Fabric - KS - we decide for ourselves what consensus is DR - 5 and 3 aren't online, cant make writes, not enough approvals KS - not making consent at ledger level, consent in the application, putting an event onto the network, not expecting all to run chaincode, etc. - consent needed to do a report, respond to a request JB - consent needed to write a block to a chain, DR - some extract pattern, sucked out of HDS, aggregated, put somewhere, some record written to blockchain - not majority of nodes avail, wont happen, not enough nodes, or say so trivial so few on, passes by default - if not 24/7 uptime whats the point? Stateful vs Stateless argument - requires ops team, on call, which a lot of integrations dont need - asking "why? where's the value prop" - looking "ops-y", someone at Travelers needs to not just send data out but someone to respond, patch, someone on call, no negligible hit - or pay someone to run node for them, not cheap, understand reqs, almost pre-supposing need for that exists KS - codify - shouldn't need that DR - doesn't think we should, hasn't seen whole solution, get all funct reqs, maybe need - hasn't seen it, not saying "we can't" KS - dont want to be up 24/7 JB - 2 diff levels of activity, listening, what it takes to maintain the network itself, communicate at system level, what the level of timeline request to get back information - asynch interaction, distinguish between both, not a trading system, business level, processing or responding can be asynch, with fabric you can designate what blocks can be written DR - then why? ordering BH - right place to have that conversation? JB - other reqs for network to function, may not be 24/7 it might be M-F 9-5, not doing "heartbeats" every second, DR - NFR avoid any need for on-call or pager duty JB - objective to minimize operational overhead, biz req for how freq req needs to be responded to vs network responsiveness DR - regional carriers and smaller players, not wanting unfunded mandate, low barriers to entry, minimal numbers of nodes required JB - solutions where service orgs can help with this, reduce the overhead or costs of that listening, und more what are the actual reqs for network integrity vs timeliness |
Operating Infrastructure | OI.08 | 7/8 | openIDL SLA TBD | |
Operating Infrastructure | OI.09 | 7/8 | Testnet is a secondary openIDL network used for evaluation and testing | |
Operating Infrastructure | OI.10 | 7/8 | Testnet is a subset and will include a smaller number of nodes than Network depending on the use case and testing | |
Operating Infrastructure | OI.11 | 7/8 | All code changes will be tested on Testnet and approved (Maintainers and TSC) before being deployed to Mainnet | |
Operating Infrastructure | OI.12 | 7/8 | openIDL Testnet may be “rolled back” at any time to a previous version | |
Operating Infrastructure | OI.13 | 7/8 | There is no SLA for Testnet. | |
Operating Infrastructure | OI.14 | 7/8 | Any downtime for Testnet will be communicated via TBD openIDL mailing list | |
Operating Infrastructure | OI.15 | 7/8 | Prospective members can use the testnet to “kick the tires” | |
Operating Infrastructure | OI.16 | 7/8 | Nodes are the infrastructure that makes up and powers the openIDL networks (mainnet or testnet) | |
Operating Infrastructure | OI.17 | 7/8 | All Nodes are activated via the openIDL Certificate Authority following approval by the openIDL GB (Business/Legal) and openIDL TSC (Technical/Operating) | |
Operating Infrastructure | OI.18 | 7/8 | All nodes must be maintained by Node Operators (by or for Node Owners), are continuously monitored by openIDL, and must remain in consensus at the approved TBD rate | |
Operating Infrastructure | OI.19 | 7/8 | All Nodes are based on the openIDL Fabric implementation | |
Operating Infrastructure | OI.20 | 7/8 | All Nodes can perform the following operations (integral to the node architecture): | |
Operating Infrastructure | OI.21 | 7/8 | | |
Operating Infrastructure | OI.22 | 7/8 | | |
Operating Roles | OR.01 | 7/8 | Node Operator | |
Operating Roles | OR.02 | 7/8 | Node Owner | |
Operating Roles | OR.03 | 7/8 | Node User | |
Operating Roles | OR.04 | 7/8 | Network Operator (openIDL) | |
Operating Roles | OR.05 | 7/8 | Network User | |
Security Policies | SP.01 | 7/8 | A Node Operator MUST maintain and follow IT security policies and practices that are integral to maintain protection of all services provided in association with the openIDL Node Agreement (“Node Services”). These policies MUST be mandatory for all employees of the Node Operator involved with providing the Node Services. | |
Security Policies | SP.02 | 7/8 | The Node Owner shall designate its CIO, CISO or another officer to provide executive oversight for such policies, including formal governance and revision management, employee education, and compliance enforcement. | |
Security Policies | SP.03 | 7/8 | Node Owner shall designate a Security Lead 1 and Security Lead 2 for day-to-day messaging and evaluation of security issues affecting nodes and the network. | |
Security Policies | SP.04 | 7/8 | A Node Owner MUST review its IT security policies at least annually and amend such policies as the Node Owner deems reasonable to maintain protection of its Node Owner Services. | |
Security Policies | SP.05 | 7/8 | Node Owner MUST maintain and follow its standard mandatory employment verification requirements for all new hires involved with providing its Node Services and will extend such requirements to wholly-owned subsidiaries involved with providing its Node Owner Services (Because Node administrators are a potential threat vector). | |
Security Policies | SP.06 | 7/8 | In accordance with the Node Owner's internal process and procedures, these requirements MUST be periodically reviewed and include, but may not be limited to, criminal background checks, proof of identity validation, and additional checks as deemed necessary by the Node Owner. | |
Security Policies | SP.07 | 7/8 | Each Node Owner company is responsible for implementing these requirements in its hiring process as applicable and permitted under local law. | |
Security Policies | SP.08 | 7/8 | Employees of a Node Owner involved with providing its Node Owner Services MUST complete security and privacy education annually and certify each year that they will comply with the Node Owner's ethical business conduct, confidentiality, security, privacy, and data protection policies. Additional policy and process training MUST be provided to persons granted administrative access to components that are specific to their role within the Node Owner's operation and support of its Node Owner Services. | |
Security Policies | SP.09 | 7/8 | If a Node Owner hosts its Node in its own data center, the Node Owner’s security policies MUST also adequately address physical security and entry control according to industry best practices. | |
Security Policies | SP.10 | 7/8 | If the Node Owner hosts its Node using a Node Operator (third-party Hosting Provider), the Node Owner MUST ensure that the security, privacy, and data protection policies of the Hosting Provider meet the requirements in this document. | |
Security Policies | SP.11 | 7/8 | A Node Owner MUST make available to openIDL, upon request evidence of stated compliance with these policies and any relevant accreditations held by the Node Owner, including certificates, attestations, or reports resulting from accredited third-party audits, such as ISO 27001, SSAE SOC 2, or other industry standards. | |
Security Policies | SP.12 | 7/8 | A Node Owner MUST maintain Node Owner keys on a separate machine from the machine that runs their node. This machine, called the “CLI (Command Line Interface) system”, uses Node Owner keys to authorize the Node to participate in the pool, and is thus the basis for trust for the node and the Node Owner’s identity on the network. The CLI system is not required to have high-end hardware, but in terms of IT best practices for security, it must meet or exceed the standards for the Node (see following items). (TBD config specs) | |
Security Policies | SP.13 | 7/8 | A Node Owner MUST provide certification that their Node runs in a locked datacenter with appropriate levels of security, including the specifications that they target (e.g., SSAE 16 type II compliance; other standards may also be acceptable). (TBD config specs) | |
Security Policies | SP.14 | 7/8 | A Node Owner MUST assert that their Node is isolated from internal systems of a Node Owner (TBD config specs) | |
Security Policies | SP.15 | 7/8 | A Node Owner MUST assert that their Node, and its underlying systems, uses state-of-the-art authentication for remote access (at least SSH with key plus password plus source IP firewall rule, and two-factor authentication wherever possible).(TBD config specs) | |
Security Policies | SP.16 | 7/8 | A Node Owner MUST NOT allow access (remote or local) to the Node or CLI systems by anyone other than assigned admins. | |
Security Policies | SP.17 | 7/8 | A Node Owner MUST apply the latest security patches approved by the TSC within one (1) week or less (24 hours or less is recommended). | |
Security Policies | SP.18 | 7/8 | A Node Owner MUST attest that the Node runs on a server protected by a firewall that, at minimum: | |
Security Policies | SP.19 | 7/8 | A Node Owner MUST run the Node Owner security check tool as requested, and MUST receive TSC approval of the results before the Node is authorized to participate in consensus. | |
Security Policies | SP.20 | 7/8 | A Node Owner MUST run the Node Owner security check tool from time to time as requested by the TSC and provide the test results report to the TSC within three (3) business days. | |
Security Policies | SP.21 | 7/8 | Node Owners MUST maintain and follow documented incident response policies consistent with NIST guidelines for computer security incident handling and will comply with data breach notification terms | |
Security Policies | SP.22 | 7/8 | Node Owners MUST investigate unauthorized access of which the Node Owner becomes aware (security incident), and the Node Owner will define and execute an appropriate response plan. | |
Security Policies | SP.23 | 7/8 | openIDL may notify the Transaction Endorser of a suspected vulnerability or incident by submitting a technical support request. | |
Security Policies | SP.24 | 7/8 | Node Owners MUST notify openIDL without undue delay upon confirmation of a security incident that is known or reasonably suspected | |
Security Policies | SP.25 | 7/8 | The Node Owner will provide openIDL with the reasonably requested information about such security incident and the status of any of the Node Owner remediation and restoration activities | |
Operating Policies | OP.01 | 7/8 | A Node Owner MUST run the most up to date release of the openIDL Open Source Code as approved and designated by the Technical Steering Committee | |
Operating Policies | OP.02 | 7/8 | A Node Owner MUST facilitate an upgrade to a new version of the openIDL Open Source Code within three (3) business days of a new release that has been recommended by the openIDL TSC | |
Operating Policies | OP.03 | 7/8 | A Node Owner MUST register all Node configuration data (TBD) required by openIDL in a timely manner, keeping information up to date within three (3) business days of changes. | |
Operating Policies | OP.04 | 7/8 | A Node Owner MUST have at least two (2) IT-qualified persons assigned to administer the node, and at least one other person that has adequate access and training to administer the Node in an emergency, such as the network being unable to reach consensus or being under attack. See the openIDL Crisis Management Plan (TBD) for details. | |
Operating Policies | OP.05 | 7/8 | A Node Owner MUST supply contact info for all administrators to openIDL, whose accuracy is tested at least quarterly (e.g., by sending an email and/or text that doesn’t bounce). | |
Operating Policies | OP.06 | 7/8 | A Node Owner MUST maintain a system backup or snapshot or image such that recovering the system from failure could be expected to take one hour or less. | |
Operating Policies | OP.07 | 7/8 | Node Owner MUST equip at least two (2) technical points of contact responsible for administering the Node Owner Node with an SMS-capable device for alerting. | |
Operating Policies | OP.08 | 7/8 | Node Owner SHOULD aim to achieve at least 99.9% (three nines) uptime for their Node (this amounts to about 1.4 minutes of downtime per day or 9 hours per year). | |
Operating Policies | OP.09 | 7/8 | A Node Owner SHOULD coordinate downtime with other Node Owners in advance via a mechanism as determined from time to time by agreement between the TSC and any other relevant openIDL Governing Body. | |
Technical Policies | TP.01 | 7/8 | Nodes on the openIDL Test Network (testnet) should be similar, but requirements may be downgraded from MUST to SHOULD. | |
Technical Policies | TP.02 | 7/8 | Nodes MUST run on robust server-class hardware. | |
Technical Policies | TP.03 | 7/8 | If a Node is run on a VM, the Node Owner: | |
Technical Policies | TP.04 | 7/8 | The Node MUST run in an OS that is dedicated to the openIDL network, i.e., a single-purpose (physical or virtual) machine that MUST run openIDL Open Source Code, MAY run other software approved by the TSC, and MUST NOT run any other software. | |
Technical Policies | TP.05 | 7/8 | Software required to support the node, such as monitoring, backup, and configuration management software, are approved as a general category. However, Node Owners should discuss with the TSC any software packages that transmit between the Node Owner Node and the outside. | |
Technical Policies | TP.06 | 7/8 | Nodes MUST run a server with compatible versions of the operating systems supported by the Hyperledger Fabric requirements as documented in the release notes. | |
Technical Policies | TP.07 | 7/8 | Nodes MUST have adequate compute power (TBD config specs). | |
Technical Policies | TP.08 | 7/8 | Nodes MUST have adequate RAM (TBD config specs). | |
Technical Policies | TP.09 | 7/8 | Nodes MUST have at least ((TBD config specs)) 1 TB, with the ability to grow to 2 TB, of reliable (e.g., RAIDed) disk space, with an adequately sized boot partition. | |
Technical Policies | TP.10 | 7/8 | Nodes MUST have a high-speed connection to the internet with highly available, redundant pipes (TBD config specs) | |
Technical Policies | TP.11 | 7/8 | Nodes MUST have at least one dedicated NIC for openIDL Node consensus traffic, and a different NIC to process external requests. Each NIC must have a stable, static, world-routable IP address. (TBD config specs) | |
Technical Policies | TP.12 | 7/8 | Nodes MUST have a system clock that is demonstrably in sync with well-known NTP servers. | |
Technical Policies | TP.13 | 7/8 | Nodes SHOULD have a power supply consistent with high availability systems. | |
Information Requests | IR.xx | 7/11/2022 | Support the notion of what use cases are supported by the data in the HDS. HDS data is not good for all purposes. When creating an extraction, one must know that the possible consenters are able to respond. | |
Non-Functional Requirements | NFR.01 | 7/18 | Operational costs should be minimized - minimize on-call requirements | |
Non-Functional Requirements | NFR.02 | 7/18 | Runtime should be minimized - don't require constantly running processes that cost a non-trivial amount | |
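The notes under IR.4 through IR.15 (and C.4/C.5) repeatedly circle around a request "term sheet" that states the purpose, attributes, accessors, path, form, and expiration of a data call. As a purely illustrative sketch (assumed TypeScript, hypothetical field names, not an approved openIDL schema), such a term sheet might be shaped like this:

```typescript
// Hypothetical shape of an information-request "term sheet". All names are
// illustrative; the approved openIDL request format may differ.
interface InformationRequest {
  id: string;                                   // unique request identifier
  requestor: {
    organization: string;                       // e.g., a state insurance department
    authorizedRepresentative: string;           // C.4: request must come from an authorized representative
    regulatoryAuthority: string;                // C.5: statute or authority for the information sought
  };
  purpose: string;                              // IR.9: specific, human-readable use of the information
  lineOfBusiness: string;                       // scope of the call
  dataTimeframe: { from: string; to: string };  // IR.5: period of data included in the aggregation
  attributes: string[];                         // IR.6: data model attributes used in aggregation
  extractionAndCalculationLogic: string;        // IR.7/IR.8: human-understandable description of the logic
  permittedAccessors: string[];                 // IR.10: roles permitted to access the information
  informationPath: string[];                    // IR.13: e.g., ["carrier HDS", "private analytics node", "regulator"]
  informationForms: string[];                   // IR.14: e.g., ["raw", "aggregated/anonymized", "reported"]
  consentExpiration: string;                    // IR.15: date/time by which consent is needed
  recurrence?: "one-time" | "monthly" | "quarterly" | "annual"; // IR.12 discussion of recurring calls
}
```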
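The IR.16 notes describe the extraction pattern being recorded on the ledger with the data call as a JSON document carrying map/reduce logic. Assuming that approach, a hypothetical payload (placeholder identifiers and logic, not the production openIDL format) could look like the following:

```typescript
// Hypothetical extraction-pattern record stored alongside a data call on the ledger.
// The map step runs against consented HDS records inside the carrier's perimeter;
// the reduce step aggregates per key so only summarized values leave the carrier node.
const extractionPattern = {
  dataCallId: "VA-AUTO-2022-001", // placeholder identifier
  version: "1.0",
  map: `function map(record) {
          if (record.line === "PersonalAuto" && record.state === "VA") {
            emit(record.zip, { writtenPremium: record.writtenPremium, count: 1 });
          }
        }`,
  reduce: `function reduce(zip, values) {
             return values.reduce((acc, v) => ({
               writtenPremium: acc.writtenPremium + v.writtenPremium,
               count: acc.count + v.count
             }), { writtenPremium: 0, count: 0 });
           }`,
};

// The consents registered against the data call would reference this record.
console.log(JSON.stringify(extractionPattern, null, 2));
```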
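IR.11, AS.14, and C.6 together describe a two-phase consent flow in which results are released only after contribution proportions are checked. A minimal sketch of that check (assumed TypeScript; the 15% ceiling, the participation floor, and the function names are placeholders subject to governance):

```typescript
interface CarrierContribution {
  carrierId: string;
  metricValue: number; // value of the metric the data call specified (premium, losses, exposures, ...)
}

// Returns true only if (a) no single carrier exceeds the maximum share of the call's
// stated metric (AS.14) and (b) enough of the estimated market total is represented,
// so data is not released below the agreed participation threshold (IR.11, C.6).
function contributionsAcceptable(
  contributions: CarrierContribution[],
  maxCarrierShare = 0.15,       // AS.14 placeholder ceiling per carrier
  minParticipationShare = 0.5,  // hypothetical participation floor
  estimatedMarketTotal?: number
): boolean {
  const total = contributions.reduce((sum, c) => sum + c.metricValue, 0);
  if (total === 0) return false;

  // (a) per-carrier share, also reported back to each contributor before final commitment (IR.11)
  const noCarrierDominates = contributions.every(
    (c) => c.metricValue / total <= maxCarrierShare
  );

  // (b) overall participation against an estimated market total, if one is known
  const enoughParticipation =
    estimatedMarketTotal === undefined ||
    total / estimatedMarketTotal >= minParticipationShare;

  return noCarrierDominates && enoughParticipation;
}
```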