Improving search: how we made it easier for our members to find what they need

James Braddy
Search is one of the most important features on The Key for School Leaders: it's been used over half a million times this school year alone. Whether school leaders are looking for a “statutory policies list” or “interview questions”, searching is often the fastest way for our members to find the information and resources they need in a critical moment.

In the product development team at The Key, we are always looking to improve the products and services we offer – meeting new sector needs with content, but also making sure our sites are quick and easy to use. With this in mind, we recently took on the challenge of improving our search feature – taking an iterative, outcome-focused approach to make this happen.

Defining success measures

Our first step was to agree how to measure the performance of our search feature. What did success look like? And what metrics would help us monitor whether we were moving towards this?

We agreed that success looked like ‘the most relevant result(s), at the top of the search results list, on the member’s first search’.

A click on a search result seemed like a strong indicator of relevancy, so we made ‘% article conversion’ (that is, the percentage of members converting to an article from the search results page) a key metric.

Other metrics we monitored closely included: 

  • Distribution of clicks across search result rankings – we wanted to increase the percentage of clicks on results at the top of the list.
  • % relevant articles in top 5 results – tracking this required manual testing on a sample of terms, but gave us a more complete view of relevancy than we could achieve with click data alone.
  • % search refinements – we wanted more members to find what they needed on their first search, without having to refine their search and try again. 
  • % exit rate from search results page – we wanted to reduce the % of searchers exiting from the search results page, as this behaviour indicates they didn’t find what they needed.
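To make the metrics above concrete, here is a minimal sketch of how they could be computed from a log of search events. The event schema and field names ("next_action", "rank") are illustrative assumptions, not our real analytics pipeline.

```python
# Hypothetical search-event log: one record per search, with the member's
# next action recorded. Field names are assumptions for illustration only.
searches = [
    {"term": "statutory policies list", "next_action": "click",  "rank": 1},
    {"term": "interview questions",     "next_action": "click",  "rank": 3},
    {"term": "behaviour policy",        "next_action": "refine", "rank": None},
    {"term": "pupil premium",           "next_action": "exit",   "rank": None},
]

total = len(searches)
clicks = [s for s in searches if s["next_action"] == "click"]

# Key metric: % of searches that convert to an article click
article_conversion = len(clicks) / total

# Supporting metrics from the list above
refinement_rate = sum(s["next_action"] == "refine" for s in searches) / total
exit_rate = sum(s["next_action"] == "exit" for s in searches) / total

# Distribution of clicks across rankings: share landing in the top 5 results
top_5_click_share = sum(s["rank"] <= 5 for s in clicks) / len(clicks)

print(f"article conversion: {article_conversion:.0%}")   # 50%
print(f"refinement rate:    {refinement_rate:.0%}")      # 25%
print(f"exit rate:          {exit_rate:.0%}")            # 25%
print(f"top-5 click share:  {top_5_click_share:.0%}")    # 100%
```

In practice these figures would be aggregated over a much larger window of events, but the calculations stay the same.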

Discovering improvements

We continued our discovery process by addressing questions that helped us better understand the current problems we faced. What feedback had we heard from members? What issues were there with the design? How did our implementation compare to best practice? And what search terms were performing poorly against our key metrics?

Answering these questions helped us generate a long list of hypotheses for improvements we could make. We prioritised this list by considering both estimated impact on our key metrics and complexity.

Here are some of the changes that we made over time: 

  • Multiple updates to our Elasticsearch implementation to improve the relevancy of results. We now consider factors like result popularity, article publish date and article type when ranking results.
  • Multiple updates to the design of our search results page and snippets to make it easier to visually scan for information. 
  • Reducing the number of topics displayed at the top of the page to make sure articles are listed higher up the page. 
  • Updating the ‘recommended result’ listed for some of our most popular searches.
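Ranking signals like popularity, publish date and article type can be combined in Elasticsearch using a `function_score` query. The sketch below shows one way this might look; the field names (`popularity`, `published_at`, `article_type`) and weights are assumptions for illustration, not our actual mapping or tuning.

```python
# Sketch of an Elasticsearch function_score query combining full-text
# relevance with popularity, recency and article-type boosts.
# Field names and weights are hypothetical.
def build_search_query(term: str) -> dict:
    return {
        "query": {
            "function_score": {
                # Base full-text relevance against the article body
                "query": {"match": {"body": term}},
                "functions": [
                    # Boost articles that members engage with more often
                    {
                        "field_value_factor": {
                            "field": "popularity",
                            "modifier": "log1p",
                            "missing": 0,
                        }
                    },
                    # Prefer recently published articles, decaying over a year
                    {
                        "gauss": {
                            "published_at": {"origin": "now", "scale": "365d"}
                        }
                    },
                    # Weight a particular article type above others
                    {
                        "filter": {"term": {"article_type": "policy"}},
                        "weight": 1.5,
                    },
                ],
                "score_mode": "sum",      # combine the function scores
                "boost_mode": "multiply", # apply them to the base score
            }
        }
    }

query = build_search_query("statutory policies list")
```

The advantage of this approach is that each signal can be tuned (or removed) independently, which suits the iterative, metric-driven process described above.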

As we’ve made changes, we’ve seen a significant improvement in our key metrics and it’s been great to hear from some of our members that they have noticed the difference. 

Have you noticed the changes? Let us know, along with any other ideas on how we can improve.