Note: Some of the information in these results has been scrubbed for anonymity.
Project Example: Documentation UX Research and Product Design
Part 1: UX Research
Problem Discovery
The current UX of the getting started documentation is long and daunting. The purpose of this research is to find out whether this experience is hindering users from completing the getting started process. Furthermore, we would like to gauge users' expectations when they go to the getting started documentation. We will use the results of this research as preliminary insight into how to redesign this page.
Research Goal
Uncover pain points users encounter while setting up with the getting started documentation
See whether users' mental models of the documentation match what is displayed
Get a sense of users' emotional reactions to the page
Motivation
Determine what information is missing from the current UX (expectations vs. reality)
Help us understand what parts of the getting started doc are confusing or slow our users down
Look for indicators of a better information architecture and see if there is a natural grouping for the sections
See if the current content helps all user types coming to the page
Assumptions
There are two types of users coming to the doc:
people trying to set up a cluster
people coming to the page from marketing who just want to get the gist of how it works
When people go to the getting started page:
They are expecting help setting up
They are hoping to get detailed instructions on how to log in and what the different options mean, where to find things, how to set up a cluster, etc.
How they feel about the page:
Whether it did or didn't answer the questions they had
Whether it was easy to follow
Whether they succeeded in setting up
Methods
5 remote interviews via UserTesting.com using the current page and 5 in-person interviews using a redesigned prototype (comparable content)
Pre-survey on Expectations
Task Completion
Participants will follow the steps of the tutorial on the getting started page. As a participant completes each step, the researchers will note the parts of the journey where the participant has problems or is confused.
Post-survey on Experience
Participants will describe their experience completing the tasks.
Takeaways across in-person and remote interviews
Users blame themselves when things do not work as expected
Users expect documentation to include screenshots and code examples, to be searchable, and to have a highly organized information architecture
They like to see examples in documentation: code snippets that are easy to copy, paste, and adjust to fit their needs
Users from different operating systems had very different experiences
Users found certain steps vague and confusing
Retrospective
There are two things I would have done differently at this point in the study:
I made assumptions about problems that existed within the page, and in an attempt to solve them preemptively without research, I made a prototype addressing these issues and tested that prototype in the in-person interviews. While I don't think this significantly impacted my initial study, it would have been better to consistently test the same material for all parts of the study.
I made specific recommendations to the team pointing out bugs and errors I had uncovered in the first 10 tests. This is something a lot of designers do early in their careers. After sharing these recommendations with the team, I was taught a new framework for re-addressing the problems that exist in this experience. You can read about the framework here.
Part 2: Content Improvements
Problem Alignment
During the first phase of the research, we uncovered an overarching insight about the current page: the content was dated and was the primary cause of the trouble users had. The goal of this phase of the research is to use existing feedback we've received from users to identify content-oriented pain points on the page. We then develop solutions to those pain points and implement them on the page.
Summary of Phases
Establish current rate of self-serve outcomes
Deployed a survey on Qualtrics to capture a completion rate for each step of the tutorial (a sketch of how these rates could be tabulated follows this list)
Analyze and organize all user feedback
Collected all the feedback across 13 different studies (250) and led 3 card sorts of the feedback, uncovering common themes for the problems within the content
Collaborated with the PM on the project on a user story mapping exercise to create user stories for a new epic for the engineering team
Brainstorm solutions to content-oriented problems found in user feedback.
Getting Started Summary of Findings
Implement content-oriented solutions and measure rate of self-serve outcomes
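To make the completion-rate measurement concrete, below is a minimal sketch of how per-step rates could be tabulated from an exported response file. This is an illustration only: the file name, the step_1 through step_4 column names, and the "not completed" / "very difficult" response labels are assumptions for the sake of the example, not the actual Qualtrics export format.

    # Minimal sketch: tabulate per-step completion rates from an exported
    # survey CSV. Column names and response labels are hypothetical; adjust
    # them to match the real export.
    import csv
    from collections import Counter

    STEPS = ["step_1", "step_2", "step_3", "step_4"]  # assumed tutorial steps

    def completion_rates(path):
        counts = {step: Counter() for step in STEPS}
        total = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                total += 1
                for step in STEPS:
                    # missing answers are treated as "not completed"
                    counts[step][row.get(step, "not completed")] += 1
        if total == 0:
            return {}
        return {
            step: {
                "completed": 1 - counts[step]["not completed"] / total,
                "very difficult": counts[step]["very difficult"] / total,
            }
            for step in STEPS
        }

    if __name__ == "__main__":
        for step, rates in completion_rates("survey_export.csv").items():
            print(f"{step}: {rates['completed']:.0%} completed, "
                  f"{rates['very difficult']:.0%} rated very difficult")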
Benchmark Results
Survey results for the original version of the GSwA page (i.e., before any updates).
Many users completed only the first two steps of the tutorial. For many users, the last two steps were either not completed or rated "very difficult". (n = 57)
Initial updates
Updates were made based on user research and feedback. New steps were introduced, and we observed fewer instances of steps being marked as “very difficult” or “not completed”, but also fewer responses. (n = 20)
Final results with the latest version leveraging new in-product onboarding features
Only a few responses at the moment (currently n = 4, as this version went live April 23, 2019), but so far most users have completed all of the steps in the tutorial and have marked more steps as “very easy”.