2022-7-8 Meeting Agenda
Date: July 8, 2022
This is a weekly series for the Regulatory Reporting Data Model Working Group (RRDMWG). The RRDMWG is a collaborative group of insurers, regulators, and other insurance industry innovators dedicated to the development of data models that will support regulatory reporting through an openIDL node. The data models to be developed will reflect greater synchronization of data for insurer statistical and financial data and a consistent methodology that insurers and regulators can leverage to modernize the data reporting environment. The models developed will be reported to the Regulatory Reporting Steering Committee for approval for publication as open-source data models.
openIDL Community is inviting you to a scheduled Zoom meeting.
Join Zoom Meeting
https://zoom.us/j/98908804279?pwd=Q1FGcFhUQk5RMEpkaVlFTWtXb09jQT09
Meeting ID: 989 0880 4279
Passcode: 740215
One tap mobile
+16699006833,,98908804279# US (San Jose)
+12532158782,,98908804279# US (Tacoma)
Dial by your location
+1 669 900 6833 US (San Jose)
+1 253 215 8782 US (Tacoma)
+1 346 248 7799 US (Houston)
+1 929 205 6099 US (New York)
+1 301 715 8592 US (Washington DC)
+1 312 626 6799 US (Chicago)
888 788 0099 US Toll-free
877 853 5247 US Toll-free
Meeting ID: 989 0880 4279
Find your local number: https://zoom.us/u/aAqJFpt9B
Attendees
- Libby Crews
- James Madison
- Eric Lowe
- David Reale
- Andy Case
- Ash Naik
- Sean Bohan
- Peter Antley
- Mike Nurse
- Ken Sayers
- Dale Harris
- Reggie Scarpa
- Susan Chudwick
- Greg Williams
- Allen Thompson
- Matt Hinds-Aldrich
- Susan Young
- Kristin McDonald
- Nathan Southern
- Tahoe Blue
- Brian Hoffman
- Bourjali HI
Goals
Meeting Minutes
Peter Antley opened the meeting with the Linux Foundation Antitrust Statement.
Discussion:
I. Car years - math and related considerations - two definitions - led by Mr. Williams
A. One car year = 365 days of insured coverage for a single vehicle; equivalently, 12 exposure months of liability insurance, with each month counting as 1/12 of a car year. Exposure months are present in the files carriers give us. How do we get car years into the report?
- The report typically covers one year, so we need months covered before doing any additional calculations.
- What we have is month and year only, so we use the 15th of the month (mid-month average). This puts even a one-year policy into group 3, because a 12-month policy assumed to start on 1/15 ends on 1/15 of the next year, with 15 days crossing over into the next group (Mr. Antley will address this further).
- Math for months covered: Mr. Williams put the calculations into Excel. The coverage start date (CSD) is assumed to be the 15th. We have months covered as well.
- Coverage end date: divide 365.25 / 12 = 30.4375 days per month, multiply by months covered, and add to the start date to get the coverage end date.
- Reporting dates: 1/1 - 1/31.
- Pulled up a chart of groups 1-4; the calculations were explained for each.
- Car years = months covered x exposure x 0.08333 (i.e., 1/12).
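A minimal sketch of the arithmetic above, assuming the mid-month (15th) start convention and the 30.4375-day average month; the function names and the example policy are illustrative, not part of the stat plan.

```python
from datetime import date, timedelta

AVG_DAYS_PER_MONTH = 365.25 / 12  # 30.4375, as discussed above

def coverage_end_date(start_year: int, start_month: int, months_covered: int) -> date:
    """Assume coverage starts on the 15th and add months_covered average months."""
    start = date(start_year, start_month, 15)
    return start + timedelta(days=months_covered * AVG_DAYS_PER_MONTH)

def car_years(months_covered: int, exposure: float) -> float:
    """Months covered x exposure x 1/12 (0.08333)."""
    return months_covered * exposure / 12

# A 12-month policy reported as starting January 2022:
print(coverage_end_date(2022, 1, 12))  # 2023-01-15, i.e., 15 days cross into the next group
print(car_years(12, 1.0))              # 1.0 car year
```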
B. Walkthrough of the earned premium calculation, led by Mr. Antley
- When the stat plan was designed, space was limited, so the calculations assume everything happens on the 15th.
- The granularity of months covered is whole months only.
- A sample policy was pulled up that can't exist in the dataset as we have it today, because there is no way for a policy to start on January 1st. It will be coded as starting on the 15th and will only earn 11.5 months of premium (see the sketch after this list).
- Extraction patterns that attempt to grab everything within a year will grab many strange edge cases, given the half-month boundaries.
- When Peter designed the extraction pattern he wanted something more granular that could say "I want to look at months x and y".
- Current data is presented on a quarterly basis.
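A minimal sketch of the edge case above, under the 15th-of-month assumption; the function and its arguments are illustrative, not the working group's actual extraction logic.

```python
def earned_months_in_year(start_month: int, months_covered: int, report_months: int = 12) -> float:
    """Months of premium earned inside the reporting year when coverage is assumed
    to begin on the 15th (mid-month) of its start month."""
    start = start_month - 0.5            # January -> 0.5 months into the year
    end = start + months_covered
    return max(0.0, min(end, report_months) - start)

# A 12-month policy written January 1st is coded as starting January 15th,
# so only 11.5 of its 12 months of premium are earned inside the calendar year.
print(earned_months_in_year(start_month=1, months_covered=12))  # 11.5
```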
C. We need to determine a data modeling plan for this setup. Mr. Antley asked what questions we need to answer.
- For day 1 we'll be getting much newer data than we would if we waited for post-validation data.
- For quarterly reporting we need something more granular. Earned premium is less of a concern than written premium, etc.
- The more accurate we are, the better/stronger we'll be.
- Mr. Antley: we need to find the best way to solve the problem.
- In the beginning we had an object to which we began attaching coverages, which resulted in hybrids; this created issues.
- Mr. Naik: this is a design question requiring an executive decision on cadence. Working on a monthly basis, then rolling up to quarters, then rolling up to years, may be the best approach.
II. Earned premium and related implementations
- Mr. Lowe: earned premium is more actuarial, so the need may not exist in that arena to do it granularly; by month it is not needed and fairly useless, while quarterly is more suitable. Reminder that this isn't about rate adequacy. Quarterly is more useful and more realistic.
- Mr. Lowe: Day 2 will not change calculations of earned premium and earned exposure.
- Mr. Antley: there are two jobs. The first loads the data lake (this has mostly been turned into JavaScript right now). The second is the summary job, where we summarize everything quarterly. All reporting currently comes from this second job.
- Stat records are used as the ingestion format.
- Standardizing file structures will carry us away from proprietary formats.
- Mr. Antley: carriers give it to us coded; we decode it and then reprocess it (see the sketch after this list).
- Mr. Hamilton: it's inevitable that normalization will come into play, yes? If you have granular information you can put together many reports. Moving from one brittle structure to the next might not be scalable over time.
- Mr. Sayers: the goal is to get to a middle format that can serve these reports.
- Mr. Antley: there are many ways to do the report and still satisfy the handbook; there is no standardized format that is easily ingestible.
- Mr. Lowe: as regulators we need to ask what we need and whether it can be fulfilled with something we are not using. Granularity needs to be there that isn't present in the stat plans.
- Ms. Darby: the concern is that the information request from the Feds means we aren't collecting data in the way we need it.
- Mr. Lowe: by doing what we're doing we get insights into changes over time.
- Mr. Antley: Fact FRF summary - a breakdown of the fields was presented onscreen in tabular format (the Hue list).
- Mr. Madison asked how we get the policy identifier; Mr. Nurse clarified how this is typically done at The Hartford, which may differ at AAIS and ISO.
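A minimal sketch of the decode step described above, heading toward the middle format Mr. Sayers mentions; the code tables and field names here are invented for illustration and are not taken from any actual stat plan.

```python
# Hypothetical code tables for illustration only; real stat plan codes differ.
COVERAGE_CODES = {"01": "Bodily Injury", "02": "Property Damage", "03": "Collision"}
STATE_CODES = {"09": "Connecticut", "52": "Virginia"}

def decode_stat_record(raw: dict) -> dict:
    """Translate a coded stat record into descriptive values for a middle format."""
    return {
        "state": STATE_CODES.get(raw["state_code"], raw["state_code"]),
        "coverage": COVERAGE_CODES.get(raw["coverage_code"], raw["coverage_code"]),
        "months_covered": int(raw["months_covered"]),
        "written_premium": float(raw["written_premium"]),
    }

print(decode_stat_record({"state_code": "09", "coverage_code": "01",
                          "months_covered": "12", "written_premium": "604.00"}))
```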
III. Mr. Antley's question: once we have the data format (the first priority) figured out, we can then design the database. He posed the question of what makes the most sense for the Hue table.
- Mr. Naik agreed. How do we get from Mr. Williams's table to this one? That would close the knowledge/understanding gap for him: inputs, translations, storage, etc.
- Mr. Reale asked for clarification on the nature of the effort. Mr. Antley: we have stat records, i.e., transaction-level data. These are loaded to generate what we see in the Hue SQL-based dupe table. Joins and aggregations occur there, and reports are generated from it.
- Mr. Antley: the Hue SQL table has aggregate fields like earned premium that are required for producing the report. Mr. Reale: why are we concerned with the implementation itself? Mr. Antley: we are still trying to pinpoint the intermediary data model; it is absolutely necessary to do it in multiple steps. It's a multi-step process.
- Mr. Antley: do we want to replicate the SQL target table exactly? What will the intermediary data layer look like? This is the critical question.
- Mr. Reale: the primary concern is that this SQL table may lead to changing things downstream needlessly. We can use it to inform, but not as justification for changing things downstream. We can't make the report from transactional records alone; they are insufficient. The model is too flat, leading to constant data shredding. That is the issue, not missing data itself.
- Mr. Antley: we are currently taking all records and rolling them into quarters, but we lose granularity in the process. We need to decide how to go from raw data to business data, and where we do our aggregations.
- Mr. Antley will try to boil the Hue SQL table down, and we can revisit next week to examine what monthly/quarterly tables would look like (a rough sketch of such a rollup follows below).
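A minimal sketch of the quarterly rollup being discussed, assuming a flat list of decoded transaction records; the column names are illustrative and not the actual Hue table schema.

```python
from collections import defaultdict

def roll_up_to_quarters(records: list[dict]) -> dict:
    """Aggregate transaction-level records into quarterly totals.
    Each record is assumed to carry a month (1-12), written premium, and car years."""
    totals = defaultdict(lambda: {"written_premium": 0.0, "car_years": 0.0})
    for r in records:
        quarter = (r["month"] - 1) // 3 + 1
        totals[quarter]["written_premium"] += r["written_premium"]
        totals[quarter]["car_years"] += r["car_years"]
    return dict(totals)

records = [
    {"month": 1, "written_premium": 600.0, "car_years": 1.0},
    {"month": 2, "written_premium": 300.0, "car_years": 0.5},
    {"month": 4, "written_premium": 450.0, "car_years": 0.75},
]
print(roll_up_to_quarters(records))
# {1: {'written_premium': 900.0, 'car_years': 1.5}, 2: {'written_premium': 450.0, 'car_years': 0.75}}
```

The same structure could be rolled up again from quarters to years, which is the cadence Mr. Naik suggested.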
The group pointed to next Monday and Tuesday for resuming/continuing these discussions.
Discussion items
| Time | Item | Who | Notes |
| --- | --- | --- | --- |