The afternoon session is about monitoring healthcare reform. But first I must say how impressive the discussion was at the end of the morning session. At this conference the discussion part of a session is taken very seriously. It is a genuine dialogue among people who know their stuff. There was none of the usual eye-rolling that you sometimes get in the discussion time at other conferences.
But back to evaluating the Affordable Care Act (ACA). The essential problem is this: as the act is implemented, policymakers will be looking for data to understand its impacts, but we currently don't have good, consistent, and reliable measures to produce such data. An alternative title for this session might have been: Measurement Error in Major Health Surveys. We heard about a number of different federal surveys, including some of the flagships like the American Community Survey (ACS) and the Medical Expenditure Panel Survey (MEPS). They all seem to have their measurement problems, whether in question wording, sampling, respondents' understanding of the benefits maze of public and private health insurance programs, or good old-fashioned recall.

I was especially struck by the recall problem. For example, there is a significant disparity between what MEPS gets from self-reports of emergency room visits and prescription drug use and what it gets from administrative data and reports by respondents' healthcare providers. The differences are explainable but difficult to fix. I don't want to make too big a deal of this. In the data comparisons that people showed us, even the statistically significant differences are small in percentage terms, but at the scale these folks are working, a few percentage points can translate to millions of people.
I was reminded of a quote attributed to the statistician William Kruskal, who once said, "If you have one watch you always know exactly what time it is. If you have two you will never be completely sure."
I suppose the bottom line here is that most of these surveys were originally designed for some other purpose—modeling, monitoring trends—and repurposing them is difficult. But I could not help being struck by the degree to which people are so open, so completely willing to lay out the inconsistencies in their survey estimates. It's another example of how seriously this part of the industry takes what it does. It's hard to imagine a similar session at any MR conference. Maybe in MR we always get it exactly right.