
MongoDB - Dashboard Redesign


Project: UX Improvements for MongoDB’s Cloud Manager

Problem & Context

Business Goal

The goal is to improve the experience for people who have 20+ replica sets in their project and/or a few sharded clusters with many replica sets across them.

Specifically, we focused on improving the management of these projects (setup and configuration) and enabling customers to better troubleshoot them.

Unmet Need

Cloud Manager offers an enhanced experience for simpler configurations, but becomes more challenging to use as configurations grow more complex.

Context

This project took place over the summer of 2019. The team was made up of two product designers (myself, with some help from our intern), a product manager, a user researcher, and a team of engineers.

Process

This project was initiated by compiling a year of feedback and tickets sent to our Support team about the current state of our product. This was followed up by conversations with individuals in the Support organization and internal stakeholders who have a lot of experience with this specific customer profile. Using this information, I began identifying the main areas to focus research on:

  • deployments page (main dashboard)

  • modify page (where changing configurations occur)

  • servers page (second navigation page)

  • security (third navigation page)

In order to scope down the project, the research focused on the deployments page and the modify page.


This initial round of discovery uncovered the following points:

  1. The current layout takes up too much real estate

  2. The information displayed is hard to parse and sort through, making it time-consuming to find the relevant cluster

  3. The information displayed may not be optimal

    1. Charts appear to not be relevant or top priority

    2. Node specific information (status/indicator) would be helpful

  4. Actions feel hidden in the “…” overflow menu and are hard to discover

Research

Given the initial insights, a user researcher and I kicked off a round of in-depth user interviews to better understand:

  1. Is more granular information helpful at this level?

  2. How are charts being used, if at all, and how could that information be better surfaced?

  3. Is there a need for charts at all?

  4. What are better ways to provide users a quick state overview of their deployments?

  5. How are users filtering and parsing through information on this page?

  6. Which high-need actions should be raised to the top/front?

We ran ten hour-long user interviews over three weeks, and the research made clear that this was a larger project in scope, one that called for a redesign of the main dashboard page.

When looking at this cluster card (clustercard.png), we heard the following:

  • Most of the details on the left side aren’t overly useful

  • Most of the graphs on the right side aren’t showing useful information, and even the ones that could tend to be set at the wrong level of granularity (shown at the cluster level, or only for the primary, when it would be better for analysis to show all nodes).

  • The current layout forces users to tab over or drill deeper for most of the tasks they come to this area to accomplish.

  • Participants were frustrated by not having any context clues for why a cluster was unhealthy.

  • Participants responded well to (or actively workshopped) the ability to choose custom charts to view across cards, along with condensing which stats and charts were shown to free up visual real estate for node-level information.

  • Not seeing the status of each node from this view made it difficult to gauge the health of their clusters

When looking at the page as a whole:

  • Not having the ability to collapse/pin/filter/etc. made it difficult, if not impossible, for users with a large number of clusters to conduct basic tasks.

  • Participants responded well to (or actively workshopped) pinning clusters, having active filters, and having unhealthy clusters rise to the top of the list.

Design (designs available upon request)

After wrapping up research, I identified a few next steps:

  1. There was a need for a transition plan so that our changes and improvements could be piecemeal and more iterative in nature

  2. There was a need for better instrumentation on the page to get more quantitative information about usage rates

This project was then broken into a number of smaller parts:

  • A host of quick fixes

    • Improving discovery of a button

    • Making the pages responsive

    • Adding sticky columns in our spreadsheets

    • Better instrumentation of feature usage and tracking

    • Pinning as a way to organize content

  • Adding an Actionable Toolbar 

    • Adding links + error messaging

  • Redesigning the layout of the content on the dashboard

    • Making the toolbar across the top and charts spanning below

    • Animation of collapsible content

  • Investigating the Charts (in collaboration with an intern)

    • Customizable 

    • Individual chart improvements

  • At the second layer, determining what information should be addressed there

Pushback, More Research, More Design

In the initial rounds of design review, I faced a lot of pushback from stakeholders about the necessity of these designs and the direction I wanted to take them in. To keep the project's momentum, I designed a survey, A/B/C testing different design solutions to home in on the best one; it was sent to 100 users and received 39 responses. This helped substantiate the insights we had gained from discovery and qualitative research, and gave us better insight into which design solutions best addressed these issues.

Survey

Variables Tested:

  • At what level do our users want to see information about their deployment?

    • Basic info (Version, Data Used, Nodes, etc.)

  • What information do our users want to see about their deployment?

    • Is node level information important?

    • Is visual information (Charts) helpful at this level?

Quick Summary: 39 Responses, variety of job roles, all users who have used our tool in Cloud Manager at least once since May 2019

Current Implementation

The pros mentioned by participants centered primarily around this being a single screen view (in contrast with the original topology view that involved a lot of clicking to expand information) and a level of familiarity as it was the one those participants were currently using.

The cons mentioned by participants centered heavily around each card taking up too much visual real estate, having too much information, and the general health of clusters not being clear from this view.

Design A(pples)

Participants highlighted that this view felt less cluttered and more concise. More participants noticed the new features (like node shutdown and pinned clusters) and felt that the status + performance of the cluster were clear.

The cons centered around there being no clear at-a-glance summary, ineffectual information (graphs, BI connector, node details), and the extra click to expand.

Design B(ananas)

Participants liked that this design was concise and lightweight. However, participants mentioned there was not enough information, no graphs & metrics, and the cards were not diagnosable.

Design C(Herry)

Participants suggested that this design felt very similar to Apple. The pros they communicated were that it felt concise and clean, and seemed to have the right information.

The cons participants communicated centered around there being no information on collapsed clusters, poor glanceability, and having to drill down for more information.

Across all three:

  • Participants generally preferred Design Apple the most

    • However, diving into the qualitative feedback made it clear that users liked the visual layout of Cherry more, but preferred to have more information raised to a higher level, much like Apple does

  • Design Cherry was the most successful at addressing user needs when evaluated across a series of Likert scales, comparing averages + standard deviations

  • Participants liked an at-a-glance summary of their cluster (its configuration + health)

    • Configuration because it helps them differentiate between the clusters, sometimes a name is not enough

  • There is value in having node + shard-level information on the cluster card, however there is more exploration to be done on the best way to display it
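The Likert-scale comparison above can be sketched in a few lines. This is a hypothetical illustration only: the response values below are made up, not the actual survey data, and the design names are reused from this case study purely as labels.

```python
# Hypothetical sketch of the survey analysis: comparing designs by the
# mean and sample standard deviation of 1-5 Likert responses to a prompt
# like "this design addresses my needs". Values are illustrative only.
from statistics import mean, stdev

responses = {
    "Apple":  [4, 5, 3, 4, 4, 2, 5, 4],
    "Banana": [2, 3, 2, 1, 3, 2, 2, 3],
    "Cherry": [5, 4, 4, 5, 4, 4, 5, 4],
}

def summarize(scores):
    """Return (mean, sample standard deviation) for a list of Likert scores."""
    return mean(scores), stdev(scores)

for design, scores in responses.items():
    avg, sd = summarize(scores)
    print(f"{design}: mean={avg:.2f}, sd={sd:.2f}")
```

A higher mean with a lower standard deviation indicates a design that participants rated both well and consistently, which is how a "most successful at addressing user needs" call can be made from this kind of data.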

The Solution

From the results of the survey, I took the design insights into a new round of design:

  1. Explore designs that best emphasize → Glanceable, lightweight, concise 

  2. Include on future designs

    1. Charts → possible improvements 

    2. Node/shard health

  3. Explore better displays of the graphical content and the warning & monitoring material

The final version is live on cloud.mongodb.com. If you’d like a walk-through of the InVision prototype, please reach out.

Reflection

This project gave me my first real taste of what life as a Product Manager could be like. The strategy and stakeholder buy-in involved were pivotal for it to take off. While in the moment I felt frustrated at constantly having to take it back to research or redesign certain elements, I am glad I did, because it resulted in the best experience for our customers. It was a good reminder that design isn’t a one-and-done exercise but a process that takes time.