
User Experience Lead

Screenshot of ALX

I led the Design and User Experience team at ALX, an education company in Nairobi, defining approaches, methodologies, and processes from research to execution. We underwent a number of changes in strategy, business model, and structure during my time there, as well as the upheaval of rapidly retooling for fully remote working during the COVID-19 pandemic.

As part of continual business development we investigated the opportunity space in feedback, and piloted a product designed to facilitate and enhance feedback within organisations.

Xcelerator Programme

Photo taken at an Xcelerator mixer

Xcelerator was a 6-month, in-person leadership development programme offered by ALX and aimed at the B2B market. Most promotions to managerial positions are awarded for functional competency rather than leadership or management qualities. As well as taking on more responsibilities, a new manager has to learn how to be an effective leader, how to manage the interpersonal dynamics that come with leading a group of people, and how to manage up. Xcelerator addressed this gap: it was designed to develop highly effective managers through the lenses of personal, team, commercial, and strategic leadership.

A key component of the programme was 360 feedback sessions, giving participants a better understanding of how peers, direct reports, and their own managers perceived them, what they were good at, and what their key development areas were. There were two feedback sessions, one at the start and one at the end of the programme, which helped participants see how they had developed over its course.

The Problem

To run the 360 feedback sessions we used an existing software solution, which was unpopular with participants and which we found insufficient both for our needs and for the rapidly changing modern workplace. The problems we found with the software included:

  1. The forms for submitting feedback were excessively long and complicated, contributing to low engagement from feedback givers (or Raters).
  2. Much of the feedback received was not actionable by the Participant, being either too positive or critical with no pointers on how to improve.
  3. Raters used a number of different feedback styles, which the software didn't account for, making the feedback more difficult for the Participant to parse.
  4. Receiving feedback only once every 6 months or more does not keep it fresh in mind. Participants expressed a need for more regular feedback so they could actively target development areas in their day-to-day work.
  5. Giving feedback is a skill in itself, and Raters were given little support or guidance to help them give more effective, consistent feedback.
  6. The software's interface was noticeably dated and poorly designed, detracting from the overall experience. It was also nearly impossible to give feedback on smaller screens.
  7. Engaged Raters wanted to understand how their feedback was contributing to the Participant's development. The software provided no visibility of this, with Raters in effect sending their comments into the void.

Hypothesis

If we provide people with a structured feedback model to use at regular intervals, they will better understand their key development areas, gain a clearer picture of how they compare against peers and others at a similar stage of their careers, and become more comfortable requesting and giving feedback.

Research & Initial Concept

Working closely with the CEO, Product Manager, and an external strategy consultant, we defined major problems related to the quality and regularity of feedback in organisations. Our primary goals were reducing power distance and making feedback a more routine and rewarding aspect of work life for both the giver and receiver.

Lean UX Canvas

Following interviews with more than 40 professionals across Africa and a review of the academic literature on the subject, we drew on in-house expertise in pedagogy and in learning and development to build competency models for a feedback system based on level of tenure and seniority.

The 5 Competency Model pillars
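
As a rough illustration of the idea, a tenure-based competency model could be represented along these lines. This is a minimal sketch only: the pillar names, behaviours, and seniority bands below are placeholders, not the actual model we built.

```typescript
// Illustrative sketch only: the pillar names, behaviours, and seniority
// bands are placeholders, not the actual Competency Model built at ALX.
type Seniority = "early-career" | "mid-level" | "senior" | "executive";

interface Competency {
  pillar: string;       // one of the five Competency Model pillars
  behaviours: string[]; // observable behaviours Raters score against
}

// Each seniority band carries its own set of competencies, so
// questionnaires can vary with a Participant's tenure/seniority.
const modelBySeniority: Record<Seniority, Competency[]> = {
  "early-career": [
    { pillar: "Communication", behaviours: ["Shares progress proactively"] },
    // ...remaining pillars elided
  ],
  "mid-level": [],
  "senior": [],
  "executive": [],
};
```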

Competitive Analysis

Our competitive analysis covered not just similar technological solutions (Matter, CultureAmp, Torch, BetterUp) but also feedback as a whole, which could come from a number of different sources. We researched online peer groups on Facebook, Slack, etc., focused on particular industries or functions, which provided extra spaces to ask questions and gain insights.

As a result of our research, we wanted to ensure that in-person feedback was not just acknowledged but actively encouraged. This would add a further dimension to our competency model and allow the Participant to personalise their feedback journey.

Rater Engagement Cadence
Participant Engagement Cadence
User flows for the Rater and Participant, including in-person feedback

Personal Board of Directors

A key component of our approach to feedback was the concept of the small group of trusted advisors people have. In a work setting this group would be peers, mentors, and reports who have first-hand knowledge of, and regular contact with, the person they are giving feedback to. In theory this gives the receiver a well-rounded understanding of how they are perceived and what their development areas are.
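
As a hypothetical sketch of the concept (the type and field names are mine for illustration, not taken from the product), the board boils down to a small set of Raters, each tagged with their relationship to the Participant:

```typescript
// Hypothetical sketch of a Personal Board of Directors; type and
// field names are illustrative, not from the actual product.
type Relationship = "peer" | "mentor" | "report" | "manager";

interface BoardMember {
  name: string;
  relationship: Relationship; // the first-hand context behind their feedback
}

interface PersonalBoard {
  participant: string;
  members: BoardMember[]; // a deliberately small, trusted group
}
```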

Pilot

Our initial pilot consisted of a small cohort of 12 participants who created their own ‘Personal Board of Directors’. Using our competency model we created a series of questionnaires on Typeform, with different variants depending on work experience. Results were manually collated and emailed to each participant with commentary and suggestions.

Onboarding screens where the participant makes their Board of Directors

Following an assessment of the pilot, we began a second phase with a much larger group of roughly 80 participants. An external development house began building automation into the processes, along with some frontend components, although we still reviewed results ourselves to ensure quality control. We were due to assess progress on the product when the pandemic hit in early 2020, and the product was shelved.
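
For a sense of what that automation replaced, here is a minimal sketch of the kind of collation we had been doing by hand: averaging Rater scores per competency pillar for a Participant's report. The Response shape and the 1–5 scale are assumptions for this sketch, not the product's actual data model.

```typescript
// Illustrative only: a minimal version of collating Rater scores by
// competency pillar, as we did manually before automation began.
interface Response {
  rater: string;
  pillar: string; // the competency pillar the question maps to
  score: number;  // assumed 1-5 scale
}

function collateByPillar(responses: Response[]): Map<string, number> {
  const totals = new Map<string, { sum: number; count: number }>();
  for (const { pillar, score } of responses) {
    const t = totals.get(pillar) ?? { sum: 0, count: 0 };
    totals.set(pillar, { sum: t.sum + score, count: t.count + 1 });
  }
  // Average each pillar's scores for the Participant's report.
  const averages = new Map<string, number>();
  for (const [pillar, { sum, count }] of totals) {
    averages.set(pillar, sum / count);
  }
  return averages;
}
```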

Four screens of a prototype detailing questions posed to raters when giving feedback
Prototype screens showing parts of our Competency Model
Example monthly report sent to participants

Learnings & Outcome

Key learnings from the pilot included:

  • Bottlenecks created by some people being in high demand as members of multiple personal boards of directors
  • Attitudes to feedback varying widely by country, culture, organisation, and industry, showing the need for further personalisation or specialisation depending on the scenario
  • The value of giving feedback not being apparent to some board members, resulting in a rapid fall-off in engagement and in the quality of feedback received