
Usability Test MVP

Context and Background

Sluff is an app that helps you organize activities with friends and family, giving you suggestions based on mutual interests and availability. This case study describes the work I did assessing the usability of their MVP and its relevance to their target market.

My Role

When I came on as a user research intern, I was tasked with designing a comprehensive usability test of the MVP, measuring and assessing usability, comprehension, expectations, sentiment, and feature and layout desires, along with ratings of quality, relevance, benefit versus drawback, and comfort with sharing.

I should state that I was the entire research department; the guidance I received came from my UX research professor, Danielle Green. I worked with a small but dedicated team of designers, a project manager, and a few developers. The PM, a designer, and a developer were present in each research session, facilitating group dynamics in the app and taking notes. Synthesis was mainly done by me, though I involved the PM and designers in putting the main themes together for the final deliverables.

Persona development had been completed before I arrived. Though I felt more work was needed in that domain, the MVP's release timeline dictated that we move ahead with constructing and executing the usability test.


The project took 7.5 weeks from kickoff to final report:

  • Getting familiar with app and discussing project: 1 week

  • Test design and recruitment: 2 weeks

  • Usability testing: 1.5 weeks

  • Analysis: 2 weeks

  • Report: 1 week

App Flow
(blurred for NDA)

Research Statement and Goals

We wanted to test for usability issues, understand any pain points along the way, and gather sentiment feedback on key app characteristics to improve the app's experience.


  1. Find and assess usability issues

  2. Discover pain points

  3. Get sentiment feedback for key app characteristics

Research Methodology

Remote moderated usability testing


Due to budget constraints and the ambitious scope of the research relative to the timeline, multiple methods were carefully combined to gather qualitative and quantitative feedback without overly disrupting the natural flow of the user's experience:


  • Quantitative usability metrics:

    • Task success rate

    • Time on task (when appropriate)

    • Error rate

    • Clicks and interactions

  • Think-aloud protocol

  • Concurrent probing
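The quantitative metrics above can be tallied per task across sessions. Here is a minimal sketch of that aggregation; the task names and attempt records are hypothetical, purely for illustration:

```python
# Sketch: aggregating per-task usability metrics across sessions.
# Task names and attempt records below are hypothetical examples.
from statistics import mean

# One record per participant attempt:
# (task, success, seconds_on_task, errors, clicks)
attempts = [
    ("create_event",  True,  42, 0,  9),
    ("create_event",  False, 95, 3, 21),
    ("invite_friend", True,  30, 1,  7),
    ("invite_friend", True,  28, 0,  6),
]

def task_metrics(task):
    """Summarize success rate, time on task, errors, and clicks for one task."""
    rows = [a for a in attempts if a[0] == task]
    return {
        "success_rate": sum(a[1] for a in rows) / len(rows),
        "avg_time_s":   mean(a[2] for a in rows),
        "avg_errors":   mean(a[3] for a in rows),
        "avg_clicks":   mean(a[4] for a in rows),
    }

print(task_metrics("create_event"))
```

Keeping the raw attempt log separate from the summaries made it easy to re-slice the data later, for example by participant segment instead of by task.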

Test Plan
(redacted for NDA)


Recruitment Criteria and Process

Due to timelines and business goals (gaining investors and launching a beta version), I couldn't construct a data-driven persona (though it was my strong recommendation to make that a priority). However, our provisional persona appeared to capture the essence of our target market:

  • Age: 18-24 years

  • Unmarried

  • No kids

  • Employment: part-time or unemployed

  • School: full- or part-time

  • Spends at least a half hour a day, on two or more days a week, outside the house doing an activity just for their enjoyment

  • Prefers outdoor activities with multiple other people that they know already (friends, family, and acquaintances)

We didn't specify any pain points in our screener as we didn't feel they were useful criteria affecting participant validity in a usability test.

Analysis and Synthesis Process

Directly after each session I held a 20-30 minute debrief with the team to discuss our findings, reflect, and capture the insights while they were still fresh in our minds. The next day I would watch the recording and map the responses and data points to an Excel spreadsheet, coding responses for pain points, expectations, etc., and highlighting impactful quotes. After we completed the 10 sessions, I had the stakeholders (PM, designer, and developer) each choose two sessions to watch and tag, followed by a huddle in which we affinity-mapped to find common trends. Afterwards, we used the RICE method (reach, impact, confidence, effort) to identify priority initiatives and assigned ownership of each to ensure the insights turned into actions.
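RICE prioritization boils down to one formula: score = (reach × impact × confidence) / effort. A minimal sketch of how we ranked initiatives follows; the initiative names and scores are hypothetical stand-ins, since the real ones are under NDA:

```python
# Sketch: RICE prioritization, score = reach * impact * confidence / effort.
# Initiative names and values below are hypothetical, for illustration only.

def rice_score(reach, impact, confidence, effort):
    """reach: users affected per period; impact: 0.25-3 scale;
    confidence: 0-1; effort: person-weeks. Higher score = higher priority."""
    return (reach * impact * confidence) / effort

initiatives = [
    # (name, reach, impact, confidence, effort)
    ("Clarify onboarding copy", 80, 2.0, 0.8, 1),
    ("Redesign invite flow",    60, 3.0, 0.5, 4),
    ("Fix calendar sync error", 40, 1.0, 1.0, 2),
]

ranked = sorted(initiatives, key=lambda i: rice_score(*i[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: {rice_score(*scores):.1f}")
```

Dividing by effort is what kept us honest: a flashy high-impact idea with a four-week build can still rank below a one-week copy fix.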

Sample Questions
(redacted for NDA)


Outputs and Deliverables

For each session I generated a "postcard from the field" to give a high-level synthesis of that session's key pain points and insights. This provided an easily digestible scaffolding of the similarities and variations between users that mapped to the overall research findings. After analysis was completed, I put together a research report identifying the key takeaways, action items, and ownership, with links to analysis documents like the affinity maps, postcards, spreadsheets, and usability sessions, so anyone in the company could start from a high-level overview of the insights and drill down to whatever level of detail, or related department, they needed. These results were also presented at the weekly company "mind-meld" to ensure engagement with the research and allow for lively discussion and clarifying questions. The final deliverable was a research repository in which pain points and insights could be accessed easily by app stage.


The research informed specific design decisions, both architectural and aesthetic, something I'm very proud of. The largest impact, however, was on the research culture of the organization. Until this research, design had been operating on its own assumptions and expertise, without knowing how users would actually experience the app; the study shifted the perspective of designers and developers toward a genuinely "user-first" approach. This isn't to say they weren't aware of the importance of designing for the user, but the research gave them tangible information to represent our users, invigorating the team by showing that the product we were creating had real value and giving them data to drive design decisions. Another benefit was the effect of including stakeholders in the research process, which scaffolded a more cross-functional workflow than had previously existed, with a greater level of curiosity about what questions research could answer rather than who had the "best idea."


Next Steps and Recommendations

App-specific recommendations are confidential, but a few aesthetic changes were recommended, and some usability issues called for smaller, rapid prototype tests in specific stages of the app.


Going backwards: conduct more foundational research on our target audience, learning who they are and how they currently find and plan things to do with friends, through interviews and surveys that feed a data-driven persona.


What went well:

  • The research was compelling enough to gain buy-in and affect app development positively

  • Involving stakeholders improved the research culture and created a more user-centric design philosophy

  • Large scope returned good generative insights for more specific follow-up research

What could be improved:


  • Recruiting without having a thoroughly developed understanding of our target audience probably diluted some of our findings

  • Ambitious research goals reduced the quality of results in specific domains
