An overview of the GenderSci Lab US State Covid-19 Report Card Project & The Final Report Card: The “State” of Affairs as of February 2021

By Mimi Tarrant

This post is the first of a three-part blog series. Links to the other two are below:

Blog 1: Introduction to the GenderSci Lab US State Covid-19 Report Card and The Final Report Card: The “State” of Affairs as of February 2021

Blog 2: What the GenderSci Lab learned from 8 months of tracking state reporting of socially relevant variables in COVID-19 outcomes

Blog 3: Introducing a new state “Report Card” and The First New Report Card: Results from April 2021

Introduction to the GenderSci Lab US State Covid-19 Report Card

In early July 2020, the GenderSci Lab initiated the US State COVID-19 Report Card, publishing our methodology and first round of results at the Health Affairs Blog. Since then, we’ve updated the Report Card 6 times, accompanied by analysis here on the GSL blog. The Report Card tracks the comprehensiveness of COVID-19 surveillance reporting of socially relevant variables in US states. It is one piece of the Lab’s broader COVID-19 Project examining gender/sex disparities during the pandemic.

The first step in creating the Report Card was developing a scoring scheme to evaluate the comprehensiveness of reporting of socially relevant, intersectional data across states. We scored each state on its reporting of COVID-19 cases and fatalities for 4 socially relevant variables, and on its reporting of any interaction between two of these variables.

The Report Card tracked whether a state reported data for gender/sex, age, race/ethnicity, and comorbidity for both cases and fatalities. A state earned one point for each of these 4 variables it reported for cases, and one point for each it reported for fatalities. In addition, a point could be earned for each of cases and fatalities if a state reported any interaction between these socially relevant variables, for example, by reporting the distribution of cases between different genders stratified by age. With this scoring system, a state could score between 0 and 10 points, and letter grades were assigned based on the number of points scored: F for a score of 5 or below, D for a score of 6, C for a score of 7, B for a score of 8, and A for a score of 9 or 10. It was our hope that this system would both be easy to interpret and serve as a mechanism for simple comparison between states. Visually, we represented states’ grades via different shades of color on a map, and created a Report Card listing the points scored and the grade for each state. In total, we collected data for all 50 states, as well as Washington, D.C.
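To make the scoring concrete, here is a minimal Python sketch of how points and letter grades could be computed under this scheme. The function names and data structure are illustrative assumptions for this post, not the Lab’s actual code.

```python
# Illustrative sketch of the Report Card scoring scheme (not the Lab's actual code).
# A state earns 1 point per socially relevant variable reported for cases and for
# fatalities (4 variables x 2 outcomes = 8 points), plus 1 point each for reporting
# any interaction between variables for cases and for fatalities (10 points total).

VARIABLES = ["gender_sex", "age", "race_ethnicity", "comorbidity"]

def score_state(reported: dict) -> int:
    """`reported` maps keys like 'age_cases' or 'interaction_fatalities' to 0 or 1."""
    points = 0
    for outcome in ("cases", "fatalities"):
        points += sum(reported.get(f"{var}_{outcome}", 0) for var in VARIABLES)
        points += reported.get(f"interaction_{outcome}", 0)
    return points

def letter_grade(points: int) -> str:
    """F for 5 or below, D for 6, C for 7, B for 8, A for 9 or 10."""
    if points <= 5:
        return "F"
    return {6: "D", 7: "C", 8: "B"}.get(points, "A")

# Example: a hypothetical state reporting 9 of the 10 possible items earns an A.
example = {f"{v}_{o}": 1 for v in VARIABLES for o in ("cases", "fatalities")}
example["interaction_cases"] = 1  # interaction reported for cases only
print(score_state(example), letter_grade(score_state(example)))  # -> 9 A
```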

To track changes in the data that each state was reporting for COVID-19 cases and fatalities, we accessed each state’s website individually. A list of the best websites for each state was originally taken from The COVID Tracking Project; however, throughout the 8-month period we adapted this working list of state websites based on where we successfully found data for each state. The process of recording data presence for each state involved one lab member first accessing the state website and recording whether each variable was being reported. If a variable was reported, it was recorded as a “1” in our spreadsheet; if it was not reported, it was recorded as a “0”. This process was repeated for all variables, across all states.
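As an illustration of this presence coding, a single round of collection could be written out as one row per state, with a 0/1 column for each variable–outcome pair. The column names, file name, and values below are assumptions for the example, not the Lab’s actual spreadsheet.

```python
import csv

# Hypothetical layout of a collection spreadsheet: one row per state, one column
# per variable/outcome pair, with 1 = reported and 0 = not reported.
COLUMNS = ["state",
           "gender_sex_cases", "age_cases", "race_ethnicity_cases", "comorbidity_cases",
           "gender_sex_fatalities", "age_fatalities", "race_ethnicity_fatalities",
           "comorbidity_fatalities", "interaction_cases", "interaction_fatalities"]

with open("collection_round_1.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    # Illustrative entry only; the values do not reflect any state's real reporting.
    writer.writerow({"state": "XX", "gender_sex_cases": 1, "age_cases": 1,
                     "race_ethnicity_cases": 1, "comorbidity_cases": 0,
                     "gender_sex_fatalities": 1, "age_fatalities": 1,
                     "race_ethnicity_fatalities": 0, "comorbidity_fatalities": 0,
                     "interaction_cases": 0, "interaction_fatalities": 0})
```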

A second lab member would then validate the information following this initial collection. The validator would independently review the state website to determine which variables were and were not being reported. Any discrepancies in data presence between these two collections were then examined to determine whether a variable was being reported. It is important to note that a collection of websites or hyperlinks sometimes had to be accessed for a single state in order to discover all the socially relevant variables being reported. Our data collection and validation team therefore had to work closely with one another to ensure that all relevant webpages had been investigated.
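A minimal sketch of this validation step, under the same assumed column layout as above, would compare the collector’s and validator’s files and flag any cell where they disagree for re-checking. The file names and helper functions here are hypothetical.

```python
import csv

def load(path: str) -> dict:
    """Load one collection round from CSV, keyed by state abbreviation."""
    with open(path, newline="") as f:
        return {row["state"]: row for row in csv.DictReader(f)}

def discrepancies(first_pass: dict, validation: dict):
    """Yield (state, column) pairs where the collector and validator disagree."""
    for state, row in first_pass.items():
        for column, value in row.items():
            if column != "state" and validation[state][column] != value:
                yield state, column

first = load("collection_round_1.csv")      # initial collection
second = load("validation_round_1.csv")     # independent validation
for state, column in discrepancies(first, second):
    print(f"Re-check {state}: collector and validator disagree on {column}")
```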

We developed the Report Card as a tool for accountability, motivating states to improve their reporting and to match the practices of the best-scoring states for reporting of socially relevant variables. We especially sought to highlight the underreporting of interactions between socially relevant variables. Without such interactions, we cannot know very basic things about any observed sex disparities, such as how they are distributed across those with comorbidities, in certain age groups, or within racial/ethnic groups.

All 50 states and the District of Columbia report gender/sex data for cases — an important milestone.

Our Report Card has tracked changes in reporting standards at the state level as the US experienced several waves of the COVID-19 pandemic. Our 6 Report Cards, covering a span of 8 months, provide both a visual and a quantitative picture of how the landscape of data reporting across US states has shifted over the past year. (View each of the Report Cards: June 2020, July 2020, August 2020, September 2020, December 2020; February 2021’s Report Card is below.)

Below, we summarize our findings as of February 2021 for reporting practices by state. In a subsequent blog post, we describe what we have learned over the 6 Report Cards we have issued and evaluate the Report Card’s strengths and weaknesses. In our final blog post, we introduce a new model for tracking state reporting of socially relevant variables going forward. Importantly, this new Report Card will include state gender/sex reporting for vaccinations and hospitalizations in addition to deaths, track reported interactions with more specificity, and evaluate how well a state accounts for nonbinary and third gender categories.

The Final Report Card: The “State” of Affairs as of February 2021

The Map and Report Card below show the comprehensiveness of state reporting of socially relevant variables for COVID-19 cases and fatalities as of February 2021. Since the previous Report Card in December 2020, the average state score increased; however, noticeable limitations remain in some states’ reporting practices.

 
Figure 1: Map current as of 3/5/2021

 

With an increase in the average state score from 6.78 in December 2020 to 6.84 in February 2021, the average grade for states remained at a failing D grade. However, some states saw significant improvements in their individual scores since December. Most notably, New Mexico’s score increased by 3 points, as it now reports interactional data for both cases and fatalities as well as comorbidity data for fatalities. New Mexico now reports 9 out of the 10 possible variables, giving it an A grade. However, it is important to note that New Mexico does not offer this information on its COVID-19 dashboard. Instead, it is provided as PDF downloads, through a link labelled ‘Epidemiology Reports’ on its homepage. With New Mexico’s grade increase, five states now score an A grade.

Missouri also saw its score increase by 2 points, as it now reports gender/sex data for both cases and fatalities. This moves Missouri’s grade up to a D, and it means that all 50 states and the District of Columbia now report gender/sex data for cases -- an important milestone. Similarly, Oregon increased its score by 1 point by reporting comorbidity data for fatalities, moving its grade from a D to a C.

 
Figure 2: Report Card current as of 3/5/2021

 

Three states saw their scores decrease by 1: Montana, North Dakota, and South Carolina. These states stopped reporting interactional data for cases (MT), age data for fatalities (ND), and comorbidity data for cases (SC), respectively. Notably, North Dakota is now the poorest-performing state in our Report Card, reporting only age, gender/sex, and race/ethnicity data for cases on its state dashboard. It is also the only state that scores a failing F grade. Additionally, by no longer reporting comorbidity data for cases, South Carolina’s grade dropped from an A in December 2020 to a B in our February Report Card. Reporting of comorbidity data for fatalities has improved since December, with 17 states now reporting this variable, up from 15 in our last Report Card. However, this February update highlights the need to continue to push for a variety of socially relevant variables to be reported and made easily accessible through state public health websites and dashboards. As the US enters a stage of the pandemic in which vaccination programs become widespread, it is essential that socially relevant variables and their interactions are analysed and understood by state public health departments, so that the interaction between vaccine uptake and group vulnerabilities can be studied.

Highlights from the February 2021 Report Card: 

  • From December to February, the average state score slightly increased from 6.78 (D grade) to 6.84 (D grade) on a scale of 0-10.

  • All 50 states plus the District of Columbia now report gender/sex for cases -- a milestone. The number of states that report gender/sex for fatalities increased by 1, to 49.  

  • New Mexico saw a score increase of 3, taking it to an A grade. New Mexico now reports comorbidity data for fatalities, as well as interactional data for both cases and fatalities.

  • Missouri and Oregon also saw their scores increase by 2 and 1 respectively. Missouri now reports gender/sex for both cases and fatalities, moving up to a D grade. Oregon now reports comorbidity data for fatalities, and has moved up to a C grade.

  • Montana, North Dakota and South Carolina saw their scores decrease by 1, as each stopped reporting a previously reported variable (interactional data for cases, age data for fatalities, and comorbidity data for cases, respectively).

  • The number of states reporting comorbidity data for fatalities has risen from 15 to 17. 


Recommended Citation

Tarrant, M. “An overview of the GenderSci Lab US State Covid-19 Report Card Project & The Final Report Card: The “State” of Affairs as of February 2021.” GenderSci Blog. 2021 June 21, genderscilab.org/blog/report-card-overview-state-of-affairs-feb-2021

Statement of Intellectual Labor

Tarrant drafted the initial blog post, led the writing process and contributed to data collection and validation. Capri D’Souza and Kai Jillson collected and validated data, and provided edits. Kelsey Ichikawa and Sarah Richardson provided feedback and edits.