Case study: Open Enrollment end-to-end research observation & synthesis direction

I served as a UX lead on a fully remote program of approximately 100 cross-functional, distributed senior consultants across Product, Engineering, and UX. Within this program, I oversaw a team of 15 UX designers, 7 of whom reported directly to me. Our collaboration with the Centers for Medicare & Medicaid Services (CMS) focused on a range of health care and Medicare-related initiatives under the .gov domain. During Open Enrollment (OE), CMS conducts end-to-end (E2E) user research of the enrollment experience, from account creation to application and health care plan selection. As a project team, we had the opportunity to observe some of these sessions.

These thoughts are mine alone; I do not speak on behalf of any organization.

Cover image: sticky notes on a notetaking board, with the text "Open Enrollment end-to-end research observation & synthesis direction."

The challenge

One significant challenge we faced in this initiative was our limited direct access to the research team. We needed to maximize this opportunity to see users interact with the products we were working on. 

Not only did this challenge impact our product work, it also affected the maturity of research thinking throughout the program. Many designers and practitioners had limited exposure to, and experience with, observing, notetaking, and operationalizing research at such a scale. When I joined the project, one of my priorities was to create a low-lift strategy for the cross-functional program to maximize this observation opportunity and learn from user interactions.

Scheduling also presented a challenge: sessions were often arranged last-minute, resulting in varying observer attendance. It was important to establish a designated space where all observers could take notes and holistically synthesize key observations.

Due to an NDA, I can only outline high-level details; additional information can be shared upon request.

Team:

A program of over 100 consultants, including experts in Accessibility, UX, Product, Engineering, and Program Management.

My role:

UX Research Lead/Manager

Timeline:

~ 7 months

Deliverables/artifacts:

  • Research plan
  • Budget
  • Product-specific notetaking boards
  • Communication plan
  • Internal and external workshops
  • Training guide
  • Research report & presentation

Discovery

I spoke with all the designers, as well as some Product and Engineering colleagues, who had previously been involved in observations, to gather insight into what worked and what didn't in past years and to gain a better understanding of the study's general logistics.

I discovered that the effort had historically placed a significant load on the observers involved, and the heavy lift of the process often discouraged potential observers from volunteering.

I also noticed that the strategy had focused mostly on verbatim notetaking, and because of this focus, many key insights around interaction, mental models, and environment were neglected. This had residual effects on the program's ability to synthesize observations and operationalize findings into meaningful product roadmaps.

Setup

I worked with the various product teams to set up notetaking boards related to their products, write goals, and form hypotheses. The notes were coded by participant and observation type. I also included a section for observers to capture "I wish, I worry, I wonder" observations that were not specifically related to the product, such as research methodology, participant recruitment and care, and overall service design.

Read more about the program-wide research upskilling initiative in Case study: Research Chat Series.

During Open Enrollment

I prepared a training guide and ran beta tests of the notetaking boards during the pilot studies. I gathered feedback and worked with the teams to update the boards. Throughout the research sessions (over 60 hours), I moderated backchannel discussion threads and ran debrief sessions. I coached practitioners to identify key findings and helped them navigate the large volume of insights.

Synthesis & Results

After the findings were coded, I facilitated program-wide synthesis workshops to identify key insights and research gaps that could benefit from further study. I worked with individual product teams and stakeholders to triangulate and validate our results, incorporate the findings into product roadmaps, and set agency-wide goals.

Overall, we saw a significant increase in observer participation, and during the debrief of this effort, there was unanimous support for the new observation methodology. We also gained valuable insights into how to enhance the process in the future, including improvements in pre-study preparation, collaboration with the research team, and joint efforts to operationalize the synthesis.