Measuring and Increasing CSAT | January 2022
Background
At the beginning of every year, I like to set personal outcomes for the products I oversee. I always communicate these early to leadership to ensure I have buy-in to use dev resources, or to squeeze additional items into our roadmap to be completed if time permits.
One of the outcomes I set for a reporting product was to increase its CSAT by 2% per quarter.
Historically, this product has had a very low happiness score because it is inherently complex.
Research
My research plan for accomplishing this outcome was as follows:
First, I created a basic happiness survey. I used Pendo because I wanted to capture users while they were in context. If you don’t have an in-software option, other survey tools such as Google Forms or SurveyMonkey will do; just be specific about the feedback you’re seeking. Explicitly state that the feedback you are looking for is for X product so the data doesn’t get skewed with feedback about other product areas.
Next, I determined how many responses I would need for a statistically representative score.
I used a sample size calculator to find that number. The only input you need going in is the monthly active users (MAU) for the respective product area; most calculators default the confidence level and margin of error to 95% and 5%.
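For reference, here’s a minimal sketch of the math most sample size calculators run under the hood, using Cochran’s formula with a finite population correction. The 10,000 MAU figure is hypothetical, and the 95% confidence / 5% margin-of-error values are the common defaults, not our actual settings:

```python
import math

def sample_size(population: int, z: float = 1.96,
                margin_of_error: float = 0.05, p: float = 0.5) -> int:
    """Cochran's formula with finite population correction.

    p = 0.5 assumes maximum variance, which yields the most
    conservative (largest) sample size.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Hypothetical example: a reporting product with 10,000 MAU
print(sample_size(10_000))  # -> 370 responses needed
```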
Synthesize Results
Once I received the number of responses I was looking for, it was time to synthesize the results.
I recommend synthesizing results in a platform where you can categorize responses to find patterns. I used Airtable because I could create tags to associate with each written response, then use those tags to filter the results into buckets of similar feedback.
I didn’t have specific tags going into this; I created them on the fly while reading the feedback, as I noticed common themes such as “permission issue,” “navigation,” and “change management.”
From there it was easy to spot patterns and note which themes were having the greatest negative impact on the product.
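If you’d rather do this bucketing in a script instead of (or alongside) Airtable, here’s a quick sketch of the same idea. The CSV file and its columns are hypothetical stand-ins for a survey export:

```python
import csv
from collections import Counter

# Hypothetical export of tagged survey responses; each response can
# carry multiple comma-separated tags (e.g. "navigation, permission issue")
theme_counts = Counter()
with open("survey_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        for tag in row["tags"].split(","):
            theme_counts[tag.strip()] += 1

# Rank themes by frequency to see which ones hurt the product most
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```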
I recognized that the most common theme was users having trouble locating reports.
To determine exactly what was preventing users from finding the reports they wanted, I used FullStory. I exported the user IDs from Pendo for the users who provided the navigation feedback and searched for their sessions in FullStory. I was able to view how they interacted with the search and filters on the reports page and identify usability issues that might be causing their frustration.
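Building that list of sessions to review was just a filter over the same tagged export; here’s a sketch of the step, again with hypothetical column names:

```python
import csv

# Pull the Pendo user IDs for everyone whose feedback was tagged "navigation"
navigation_user_ids = []
with open("survey_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        tags = {tag.strip() for tag in row["tags"].split(",")}
        if "navigation" in tags:
            navigation_user_ids.append(row["user_id"])

# These IDs can then be searched in FullStory to locate each user's sessions
print("\n".join(navigation_user_ids))
```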
From there I decided to watch a handful of other users' sessions to see if anyone else was having the same issues. Again, I was looking to find themes in my research.
To verify this was the problem worth solving not only for users but also from a business perspective, I ran a few calculations to check whether solving it would help me achieve my goal and therefore contribute to the business. I found that if every user who gave us a score of 1 and mentioned navigation increased their score to a 2 on the next survey, the happiness score would rise by 2%. If all of those users moved from a 1 to a 3, it would rise by 5%. I decided to build in a buffer: assuming only half of those users change their score, either because they still have other frustrations or because we can’t guarantee we’ll survey the exact same users, the score might only increase by 1% to 2.5% (under promise, over deliver!).
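To make the projection concrete, here’s a sketch of that math with made-up numbers. It assumes the happiness score is the mean response on a 1–5 scale (which is what the 1-to-2 and 1-to-3 arithmetic above implies); the distribution and counts are hypothetical:

```python
# Hypothetical response distribution: 40 ones, 60 twos, 80 threes, ...
responses = [1] * 40 + [2] * 60 + [3] * 80 + [4] * 120 + [5] * 70
baseline = sum(responses) / len(responses)

def project(new_score: int, movers: int) -> float:
    """Move `movers` of the 1-score respondents up to `new_score`."""
    shifted = responses.copy()
    for i in range(movers):          # the first 40 entries are the 1s
        shifted[i] = new_score
    return sum(shifted) / len(shifted)

nav_ones = 30  # hypothetical: 1-score respondents who mentioned navigation
scenarios = [
    ("all move 1 -> 2", 2, nav_ones),
    ("all move 1 -> 3", 3, nav_ones),
    ("half move 1 -> 2", 2, nav_ones // 2),  # the buffered estimate
]
for label, new_score, movers in scenarios:
    lift = (project(new_score, movers) / baseline - 1) * 100
    print(f"{label}: +{lift:.1f}%")  # roughly +2.4%, +4.9%, +1.2%
```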
Hypothesis
At this point, I had a hypothesis: our reports start as templates with industry-standard naming conventions. For users to save a report to their report stack, they must give the report a unique name to avoid duplicate reports. Because each saved report had a custom, unique name, a user who was not familiar with that name was unable to effectively search for it.
After determining the hypothesis, it was time to brainstorm solutions. At this point, I looped in my PM as well as a dev resource to make sure whatever solution we discussed would be feasible and not too time-consuming to implement.
Solutioning
We identified two small changes that could have a large impact on how our users locate their reports. The first is to include a tag, or some other metadata tied to each report, that contains the name of the parent report. For example, if a user took the parent report “revenue report” and, when adding it to their company report list, renamed it “[name] Monthly revenue,” the report would carry an identifier, whether in the UI or the metadata, tying it back to the parent “revenue report.”
The second change is to add a filter to the existing filter set that lets users filter by parent report. This lets them view all the reports that stem from the same parent and helps reduce redundant report creation.
Additionally, both changes would take very little time to develop: the underlying data was already being tracked; we just needed to surface it in the UI.
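Here’s a minimal sketch of what both changes amount to in code; the `Report` model, field names, and sample data are mine for illustration, not our actual schema:

```python
from dataclasses import dataclass

@dataclass
class Report:
    name: str           # the user's custom, unique report name
    parent_report: str  # the template it was created from (already tracked)

reports = [
    Report("Acme Monthly revenue", "revenue report"),
    Report("Q3 revenue deep dive", "revenue report"),
    Report("Churn by segment", "retention report"),
]

def filter_by_parent(items: list[Report], parent: str) -> list[Report]:
    """The proposed filter: surface every report saved from one parent."""
    return [r for r in items if r.parent_report == parent]

for r in filter_by_parent(reports, "revenue report"):
    # The parent name doubles as the UI tag tying a report to its template
    print(f"{r.name}  [{r.parent_report}]")
```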
Next steps: Design • Prototype • Test • Iterate • Implement
Conclusion
I was able to accomplish my goal for the quarter, which is a huge win! One thing I didn’t account for when surveying our users in Q3 was a way to determine whether the increase in the CSAT score was directly tied to the navigational changes or to the other updates we completed throughout the quarter, so that is something I’ll need to consider next quarter. However, I did see a decrease in responses mentioning trouble locating specific reports, and a decrease in responses describing a poor user experience.
I plan to continue this project using the same methodology for the remaining quarters and take a final CSAT measurement at the end of Q1 2023, giving our users a full quarter to adopt the updates we release at the end of Q4.
I will update as the results become available.
Results Post-Implementation
After implementing the updates at the end of Q2, we tracked progress post-release via FullStory and sent out another survey at the end of Q3 to see where our CSAT stood. You can view the written results here, or view the summary below: