Craig Burton
Technical Director at MyJobMatcher
Industry: Staffing and Recruitment
Location: United Kingdom
Company Size: 11-50 Employees


Challenges

  • Moving huge amounts of data (100-200 million records per day) quickly and with limited resources.
  • Connecting data from multiple sources.


Solution

  • By implementing Xpently’s ETL tool, the team was able to build data pipelines easily with minimal team resources.


Results

  • MyJobMatcher can process analytical data much faster and more easily with minimal team resources.
  • Because data processing has been simplified, MyJobMatcher can analyze data frequently and make near-real-time decisions.


Processing 200 million records per day goes from ‘daunting to delightful’


MyJobMatcher is a website that matches a person’s resume to the most relevant job openings. By parsing and reading a person’s CV, MyJobMatcher is able to serve up relevant, local jobs from an expansive network of employment sites.

Use Case

Move data to external databases outside of a closed network and upload data to Redshift for analysis

Before Xpently, processing our data was a daunting task. Now it’s a pleasure.
Craig Burton
Technical Director at MyJobMatcher

From Strenuous to Effortless

Before using Xpently, MyJobMatcher was able to get insights from their data, but only with a tremendous amount of work and resources. The labor-intensive process took too much time, meaning MyJobMatcher wasn’t able to use their data to update and optimize their algorithms as quickly as they wanted. When the company began using Xpently’s ETL tool, they were able to easily connect multiple data sources and send the combined data to Amazon Redshift for analysis. The speed and agility that MyJobMatcher now has allows them to process data faster, gain deeper insight into user activity from email campaigns and website visits, and make frequent data-driven decisions on how best to update their algorithms.
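The case study doesn’t show the pipeline itself, but a common pattern for loading volumes on the order of 100-200 million records per day into Amazon Redshift is to stage compressed files in S3 and bulk-load them with a COPY statement rather than row-by-row inserts. The sketch below builds such a statement; the table, bucket path, and IAM role names are hypothetical placeholders, not details from MyJobMatcher’s setup:

```python
def build_redshift_copy(table: str, s3_path: str, iam_role: str) -> str:
    """Build a Redshift COPY statement that bulk-loads gzipped CSV
    files staged in S3 -- the high-throughput alternative to
    issuing millions of individual INSERTs."""
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV GZIP;"
    )

# Hypothetical names for illustration only.
sql = build_redshift_copy(
    table="analytics.job_matches",
    s3_path="s3://example-bucket/matches/2024-01-01/",
    iam_role="arn:aws:iam::123456789012:role/RedshiftCopyRole",
)
print(sql)
```

In practice a statement like this would be executed against the cluster with any PostgreSQL-compatible driver; COPY from S3 lets Redshift parallelize the load across its compute nodes, which is what makes daily volumes at this scale tractable.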
