An on-demand P2P college tutoring service.





Client: Tandlr (also referred to as Sage throughout this case study)




Tandlr is a web app that enables college students to book sessions with peer tutors who attend the same university and have passed a rigorous screening process. The two co-founders of Tandlr discovered, however, that the web app alone doesn’t satisfy their users’ needs. Their solution: develop a mobile application to become a stronger competitor in the emerging college tutoring market by improving Tandlr’s accessibility and feature set.


Tandlr approached my UX team to improve their mobile concept through user testing and interviews, competitive analysis, and a research-driven redesign of their existing mobile wireframes, all of which would culminate in a high-fidelity prototype. The wireframes Tandlr’s CTO gave us, however, didn’t properly communicate the co-founders’ vision for the application: Uber for college tutoring (on-demand 24/7 availability, surge pricing, cheaper than traditional avenues of tutoring, premium, etc.) with the addition of future booking functionality.

UX Team

Abhay Mistry

Tori Conner

Eric Cady


competitive analysis

heuristic evaluation

user personas




interaction design


This case study contains three phases: Phase 1: Understand, Phase 2: Create, and Phase 3: Validate.


During this phase of the project, my team and I employed several techniques to understand the competitive landscape and the overall sentiment among potential users. We also sought to further understand the wireframes by checking them against Nielsen’s 10 Heuristics.


Both Eric and I began this phase by evaluating five primary competitors based on six categories while Tori created the user flow and a basic prototype from the existing wireframes. The goal of this analysis was to find differentiation opportunities. Implementing such features could result in Tandlr attaining some of the benefits a first-mover might enjoy.


PRICING MODEL: $25/hour, $0.40/minute, 30-minute minimum

PLATFORMS: Desktop/Mobile (iOS/Android)

LOCATION POLICY: Request any location, must be approved by tutor.


TARGET AUDIENCE: Financially well off students who need flexibility with tutoring




PLATFORMS: Desktop/Mobile (iOS/Android)

LOCATION POLICY: Request any location, must be approved by tutor.


TARGET AUDIENCE: College students struggling with course material



PRICING MODEL: $20/30 minutes


LOCATION POLICY: Request any location, must be approved by tutor.


TARGET AUDIENCE: Financially well off students who need flexibility with tutoring 



PRICING MODEL: Cost is unknown

PLATFORMS: Desktop/Mobile (iOS/Android)



TARGET AUDIENCE: Financially well off students who need flexibility with tutoring 



PRICING MODEL: ~$15/hour, name your own price

PLATFORMS: Desktop/iOS/Android/Windows Mobile, smart watch support coming

LOCATION POLICY: Request a location

MEDIUM(S) FOR TUTORING: Face-to-face, online

TARGET AUDIENCE: College students who need assistance in a variety of areas including academics


With the exception of Sesh, which most closely resembles Tandlr’s vision, we found that all competitors were nearly identical in features and pricing models, most likely because every market player is still an early-stage startup. This would make differentiation much easier than anticipated, especially since Sesh’s feature set is relatively basic.
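To make the pricing comparison concrete, the per-minute model with a billed minimum (like the $0.40/minute, 30-minute-minimum pricing listed above) can be sketched in a few lines. The function name and defaults here are illustrative assumptions, not any competitor’s actual billing logic:

```python
def session_cost_cents(minutes: int, rate_cents_per_minute: int = 40,
                       minimum_minutes: int = 30) -> int:
    """Bill per minute in integer cents, with a minimum billed duration.

    Defaults mirror the $0.40/minute, 30-minute-minimum model above;
    both values are assumptions for illustration, not real billing code.
    """
    billed_minutes = max(minutes, minimum_minutes)
    return billed_minutes * rate_cents_per_minute
```

Under this model a 20-minute session still bills 30 minutes ($12.00), while a 75-minute session bills $30.00; working in integer cents avoids floating-point rounding errors in money math.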

Some ways Tandlr could stand out: 

  • Build credibility by creating relationships with universities  
  • Provide 24/7 service 
  • Split payment for group sessions
  • Highlight transparency on the tutor screening process to build trust with students

Note that the last three characteristics align with a few of Uber’s.  
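One of those differentiators, split payment for group sessions, reduces to dividing a session’s cost evenly in integer cents and distributing any leftover cents. This is a minimal sketch under a hypothetical policy (the first few payers absorb the remainder), not a spec from Tandlr:

```python
def split_payment(total_cents: int, num_students: int) -> list[int]:
    """Split a group-session cost evenly among students.

    The first `total_cents % num_students` students pay one extra
    cent so the shares always sum back to the exact total.
    """
    base, remainder = divmod(total_cents, num_students)
    return [base + 1 if i < remainder else base for i in range(num_students)]
```

For example, a $25.00 session split three ways yields shares of $8.34, $8.33, and $8.33, which sum back to exactly $25.00.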


All three of us completed a heuristic evaluation of the wireframes alongside the competitive analysis and wireframe/flow work. I’ll admit that performing the evaluation felt like an inconsequential task since we knew full well the wireframes needed to be completely redesigned anyway. After further consideration, I realized that following through would help us avoid repeating the mistakes Tandlr initially made. My observations of the student and tutor flows Tori assembled are noted below.

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

  • Transcript picture uploading lacks progress feedback, no success screen

The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms.

No issues found

Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

  • No way of going back during onboarding, ability to cancel transcript upload is unknown
  • No clear way for tutors to delete or undo class selection
  • Can’t undo scheduling

Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

  • The heading “Become a Sage” (Tandlr was previously called Sage) could be potentially misleading in the tutor onboarding screens
  • “Stop” is unclear on the session progress screen, should be “end session” instead to be consistent with “start session”
  • Headings should be added to most screens

Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

  • There’s no confirmation for ending a session

Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate. 

  • Notifications button should be in the nav bar rather than menu
  • Messages button should also be in the nav bar

Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

No issues found

Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

  • Modals should take up more of each screen so the main screen underneath is less distracting

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

No issues found

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.

No issues found


My team moved on to testing and interviewing potential users of the app after gaining an intimate understanding of the competitive landscape and the conceptual groundwork communicated by the wireframes, but not before addressing a critical problem. As alluded to earlier, we felt the wireframes weren’t in a presentable state, even for initial testing. It would have been like feeding people a cake that wasn’t fully baked and was missing ingredients, then wondering why no one liked it.

The conundrum: do we redesign the wireframes (which were made in PowerPoint, by the way) solely based on the competitive and heuristic analyses so the few users we test can have a slightly better experience, or do we save time and test the wireframes as they are, ignoring all the minutiae we’d already considered and directing most of our attention to the concept rather than usability? In other words, we had to choose between short- and long-term gains.

We ultimately decided to proceed with the latter approach since it would yield more useful data. It was at this point that my team and I formed questions, predicated on Tandlr’s vision, to ask the potential users. The questions were divided into three categories: high level, student, and tutor. Some of these questions are highlighted below.

  • How often do the users procrastinate studying or completing homework? 
  • What is the motivation for students to use a tutor?
  • What is the motivation for students to have repeat sessions with tutors? 
  • What is the motivation for a student to choose a specific tutor? (Location, rating, availability, testimonials) 
  • How are students incentivized to use this platform over another? (Quality, accessibility, price, variety, time)
  • Tell me about your most recent tutoring experience. 
    • Why did you use the tutor?
  • Tell me a time you chose to go back to the same tutor for a second time. 
  • What was your best tutoring experience like?
  • Your worst tutoring experience?
  • Do you normally go back to the same tutor?
    • Why or why not?

  • Why did you start tutoring?
  • If you weren’t paid, would you keep tutoring?
  • What keeps you tutoring?
  • Tell me about your most recent tutoring experience.
  • Would you prefer to use your computer or phone to schedule tutoring appointments?

One of three tutor tests is shown below. Tori guided tutors through several task flows so we could understand how the most intricate aspects of the concept were perceived by the users. My role consisted of note taking and setting up/monitoring the recording tech. 


After conducting several rounds of user interviews, we decoded all the data by affinity mapping. The patterns we uncovered were eventually translated into design principles and user personas. 


The decoding process began with affinity mapping every data point we collected during the tests and interviews. Our process consisted of condensing every main idea into a sticky note then finding patterns among all disparate data points. 


Key Insights

  • Main motivation is monetary
  • Interested in the idea of tutoring “on-demand”, given a monetary incentive and the ability to set their own hours
  • Tutors observe that students frequently procrastinate on projects, assignments, and studying for exams
  • Tutors feel that tutoring the same student more than once would typically benefit the student
    • Know where they left off with a student
    • Know what improvement looks like
    • Build rapport with student
  • Students tend to try to go back to the same tutor
    • Familiarity
    • If the tutor was good, they want to continue with that tutor
  • Willing to pay more for quality of tutor
  • Expressed potential interest in using “on-demand” tutor services – not a strong interest
  • Did not necessarily view themselves as frequent procrastinators
  • “Success” not always a grade improvement; success sometimes meant a better grasp of the material or more confidence in their ability to do well on an exam


Unearthing key insights from affinity mapping allowed us to create a set of design principles to serve as a beacon for our design solution. We also developed a problem statement, a statement encapsulating the ultimate problem the team should solve, which had a sizable influence on the design principles. To humanize the insights and help us understand the users’ goals, we further extrapolated the data into personas: one tutor and two types of students.


Students and tutors need an easy way to connect because coordinating a meeting with a professional qualified for a specific topic has inherent complexity.


1. Help the user understand how vital processes work

2. The user should always know where they are (i.e. in the app as a whole or within a specific process)

3. Give the user confidence in the quality of the tutor and tutoring experience (build credibility)

4. Support the user’s end-to-end tutor or student experience

5. Increase user confidence through clarity



The second phase consisted of ideating solutions based on our codified insights in the previous phase. My team and I started by sketching out ideas using the Crazy Eights technique to develop the conceptual foundation, then further developed the concepts with wireframes and a high-fidelity prototype.


To commence the ideation process, we adopted Crazy Eights, a method that is part of the well-known Design Sprint developed at Google Ventures. I conceptualized a payment flow while Tori and Eric conceptualized tutor/student dashboards and the onboarding process, respectively.

After conceptualizing a basic payment flow, I created more comprehensible, mid-fidelity wireframes based on the sketches, incorporating iOS design patterns. Since the founders of Tandlr envisioned their business effectively becoming ‘Uber for college tutoring’, I drew inspiration from both Uber’s and Lyft’s payment flows to create Tandlr’s.


While Tori and Eric wireframed the rest of the flows, I began prototyping the existing concepts produced during Crazy Eights. Their flows allowed a tutor to:

  • Accept, cancel, and trade appointments
  • View upcoming and past appointments
  • Set their availability
  • Start a tutoring session and rate the student
  • Check and respond to messages
  • Check earnings history and add a new checking account
  • Check notifications and change settings
  • Complete onboarding


I used my prototyping tool of choice, Proto.io, to recreate the wireframes Tori and Eric produced. This strategy allowed me to accomplish three things simultaneously.

1) Interactivity – I was able to make the prototype more interactive, and hence provide a more accurate depiction of how the final product should function, since recreating screens gives ultimate flexibility with each element. Simply importing screens from Sketch and linking them together, in my opinion, doesn’t fully satisfy the purpose of a working prototype.

2) Redundancy – Recreating the screens allowed me to fill any conceptual gaps or fix other errors my team members may have left.

3) Standardization – Having three different people wireframe using disparate design languages could result in the developers or UI designers misinterpreting our concepts; the ramifications of which are obvious. To prevent any potential misinterpretations, I standardized Tori’s and Eric’s wireframes by using an iOS 9 design library and adhering to iOS design principles.


In this final phase, we performed another round of user tests with our new high-fidelity prototype as well as an unanticipated paper prototype to validate our design solution.


Since I wasn’t able to finish the student side of the prototype at the level of detail I desired by the time our scheduled student test subjects arrived, Tori and Eric quickly created a paper prototype by simply printing out their wireframes. We tested the tutors with the high-fidelity prototype.

Some of our findings are highlighted below. 


  • The prototype was mostly well received – “…it has way more features than what I use for scheduling now”
  • Confirmed that people like:
    • Seeing the tutor’s experience / credibility (# of sessions, ratings)
    • Having a quick summary of the session from the tutor
    • Seeing appointment history on both sides
  • Students confused by tutor location feature – why they would need to see it, campuses aren’t that big, “it’s a bit creepy”
  • Tutors confused by how they would be paid, how often, logistics, etc.
  • Safety and security, especially for female tutors/students
    • On the student side, how are students being confirmed as actual students
      • Validated that checking student email is very frustrating
      • Option to use school log in to validate students?
  • People wanted the ability to schedule via both desktop and mobile, but did not seem turned off from using the app if it was only mobile
  • Students expect much of their school information to be pre-populated
  • Need clear cancellation policies / penalties on both sides
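The student-verification concern above (a school login instead of a frustrating email confirmation) can be approximated with a simple domain check. The allow-list and `.edu` fallback below are assumptions for illustration; a real product would rely on campus SSO or a verification service such as the SheerID option noted later in this study:

```python
def looks_like_school_email(email: str, allowed_domains: set[str]) -> bool:
    """Heuristic check that an address belongs to a school domain.

    `allowed_domains` is a hypothetical per-campus allow-list; the
    `.edu` fallback is a rough, US-centric heuristic, not real
    identity verification.
    """
    if "@" not in email:
        return False
    domain = email.rsplit("@", 1)[1].lower()
    return domain in allowed_domains or domain.endswith(".edu")
```

For example, `looks_like_school_email("ada@mit.edu", set())` passes the heuristic, while a `gmail.com` address does not.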
Sage Paper Prototype
Sage App on Phone
Paper Prototype Testing



At this point, we presented our work to the clients one last time and suggested a path forward, including considerations for post-beta release features and recommendations for expanding our work. Given that the team had only three weeks with the client, I also laid out a plan for implementing the feedback we received in the validation phase and adding the previously omitted key user flows after parting ways with the client.

Overall, the client was highly satisfied with our work.



Tutor End

  • User test the Ratings flow – validate the information we’re asking tutors to submit, is the rating process too long / too short, etc.
  • Create user flows, wireframes, and user test:
    • Rewards, No Show, and Student Account flows


Student End

  • Further user testing on the student Booking flow – validate what is important to students – proximity of tutor, availability of tutor, tutor rating, booking the same tutor, location, etc.
    • Includes booking as a group
  • Create user flows, wireframes, and user test:
    • Ratings, Packages, Appointment Info, Rewards, No Show, and Payment flows
    • Validate the criteria for rating a tutor, validate information included in appointment information, determine how payment accounts are set up, are processes too long / too short, etc.


  • Tutor-to-tutor messaging
  • Group sessions
    • With a tutor
    • With just students
  • LinkedIn integration
  • Incorporation of tutor review snippets
  • Ability for students to split payments
  • Ability for tutors to split deposit between accounts
  • Efficient student verification using SheerID

Tandlr can be downloaded from the App Store.