Improving customer satisfaction scores by 140%
The problem
British Gas has a home cover range, called HomeCare, which covers everything from your boiler to your electrics. When customers wanted to use their cover, they needed to log in to their online account and book a repair appointment. Two different versions of the journey were live (one slightly newer than the other), which resulted in an inconsistent experience and technical debt.
My Kanban team were tasked with consolidating the existing journeys whilst also improving the overall experience. As the only UX designer in the team, I owned the end-to-end design process, covering everything from initial research to the final UI designs.
Outcomes
4% increase in booking conversion.
£15k saved per year in call reduction.
140% improvement in customer satisfaction scores, moving it from a negative score to a positive one.
Research
Analytics and journey mapping
Using Adobe Analytics, I pulled usage data for both journeys. The biggest drop-off point in both was the first step, where users had to select what their problem was. Before investigating this any further, I produced journey maps so that everyone had a single document we could refer to and have discussions around.
Understanding the business process
I asked the stakeholders responsible for coordinating engineer appointments to help me understand what information they needed to prioritise appointments and brief repair engineers. I also interviewed call centre agents to learn more about customer pain points and how we could improve the wording of our questions.
I found that we could reduce the number of questions we asked users by 26% and still provide enough information to the engineer coordination team.
Analysing customer feedback
Feedback from a customer satisfaction survey showed that many customers were struggling to find their problem. They were irritated by the amount of copy they had to read, especially as they were already distressed that something had gone wrong in their home. It was no surprise that the customer satisfaction score was negative.
Usability testing
I conducted usability testing to identify existing issues. The entire team were in the observation room so we could all build empathy with our users; it also helped everyone understand our design decisions later on, making the sign-off process easier.
The grouping of problems in the problem selection step was clearly an issue. Problems were grouped under the product they were covered by. This was problematic as most users couldn’t remember which product they had.
We also found that when users selected certain problems, other problems would disappear without any explanation. This caused a lot of confusion and uncertainty. Stakeholders told me this was done to stop users selecting multiple problems that couldn't all be fixed by the same engineer.
Design
Sketching
Based on everything I had learnt, I started sketching out various ideas, iterating until I landed on one I was happy with.
Keeping the usability testing findings in mind, I kept copy short and concise, given the distressed state users were often in. I also made sure users would be clear about what would happen on the day and what they would need to do to amend their appointment. Producing sketches allowed stakeholders to understand the overall experience without getting bogged down in the finer details.
Card sorting
Seeing as the problem selection step was one of the biggest issues we had identified, I wanted to know how users grouped problems. Using Xsort, I conducted closed card sorting sessions to find out. I tested the existing structure (which grouped problems by product) against a proposed structure which grouped problems by type, a variant I came up with after listening to user feedback during testing.
The results for the existing grouping structure bore out the issues we saw during usability testing, with around a quarter of the cards being placed in two or more groups. The results for the proposed structure were far better: we saw a 65% reduction in the number of problems placed in two or more groups, bringing it down to under a tenth of the cards.
Usability testing
Using Axure, I created a high-fidelity prototype that featured the new grouping structure and a few other changes to the journey. The changes had a clear impact: selecting a fault was much easier than before, and participants completed the rest of the journey with ease.
Final designs
Using Sketch, I created the final UI designs. I reused existing responsive web patterns from our CSS to keep the experience consistent, and adhered to brand guidelines when sourcing imagery.
Going live
Continual improvement
So we could gather learnings as soon as possible, we started by releasing a journey that could only be used by customers with entry-level products. We then incrementally built the journey to handle the more comprehensive cover products until the vast majority of users could use it. Between every release I monitored analytics and user feedback, and where anything negative came up I quickly made enhancements that went live soon after.
Outcomes
We saw a 4% increase in journey conversion compared to the previous journey, which resulted in a £15k reduction in annual call centre costs. We also saw a massive 140% improvement in the customer satisfaction scores given by users of the journey, moving it from a negative score to a positive one. Looking at the user feedback, it was clear that users appreciated how quickly they could find their problem and book an appointment. Negative feedback about the amount of copy had also stopped, which was great to see.