How to Use Amazon Redshift For a New Generation of Data Services

In 2014 Intuit’s then-CTO Tayloe Sainsbury went all in on the cloud and started migrating Intuit’s legacy on-premise IT infrastructure to Amazon AWS. By February 2018, Intuit had sold its largest data center and processed 100% of 2018 tax filings in the cloud.

But now Intuit had a different challenge – optimizing cloud spend and allocating that spend to products and users. AWS bills customers via a “Cost & Usage Report” (“CUR”). Because of the size of its cloud spend, the Intuit CUR comprises billions of rows, and it keeps growing by the day. Intuit switched from an on-premise data warehouse and now uses Amazon Redshift to process 4-5 billion rows of raw CUR data – each day.

In this post, I’m walking you through the approach that Jason Rhoades took to build Intuit’s data pipeline with Redshift. Jason is an Architect at Intuit; together with a small data team, he provides business-critical data to more than 8,000 Intuit employees.

Heads up – this is a long post with lots of detail!

Three major blocks:

  • An overview of Intuit’s business, and how the seasonality of the tax business made Intuit one of the earliest companies to migrate to the cloud.
  • How to use Amazon Redshift to build a new layer of data services that empowers the entire business with near real-time cost and usage data for the cloud services they consume.
  • The “new way” of working with data – data lakes, data science, etc. – making your entire company productive with data without any performance bottlenecks.

Let’s start with an overview of the business.

Digital Has Changed the Way Intuit Operates Its Business

Intuit builds financial management and compliance products and services for consumers and small businesses. Intuit also provides tax products to accounting professionals.

Products include QuickBooks, TurboTax, Mint and Turbo. These products help customers run their businesses, pay employees, send invoices, manage expenses, track their money, and file income taxes. Across these products, Intuit serves more than 50 million customers.

Intuit started in the 1980s and built the original version of its first product Quicken for the desktop, first MS-DOS and then Windows. With the Internet, that usage shifted to web and mobile. The impact of that change became clear as early as 2010.

Intuit Product Usage Forecast in 2014

Fast forward to today, and over 90% of Intuit’s customers file their taxes and manage their accounting online and via mobile apps.

The Cloud as a Catalyst to Handle Seasonality and Peak Demand

Consumption of Intuit products follows seasonal patterns.

  • Users file their taxes once a year.
  • Accountants close their books each quarter.
  • Businesses process payroll every two weeks.

Tax seasonality has the biggest impact on Intuit’s business. Each fiscal year, Intuit generates half of its annual revenue in the quarter ending on April 30th, with the US tax filing deadline on April 15th.

Seasonality also has a huge impact on the cost side of the equation. The shift to digital and online usage of Intuit’s products causes a dramatic usage spike for the IT infrastructure. Most users file their taxes online during the last two days of the tax season.

In the old world of on-premise infrastructure, Intuit had to size its data centers for peak capacity to handle the concurrent usage. After tax season, demand drops back down to average usage. The gap between peak demand and average usage is so large that 95% of Intuit’s infrastructure would sit idle for 95% of the year.

That’s why Intuit decided in 2014 to go all in with the cloud. With the cloud’s elasticity, Intuit is in a better position to accommodate spikes in customer usage during the tax season.

Shifting Priorities: From Migration Speed to Efficient Operations & Growth

By shifting to the cloud, Intuit reduced cost by a factor of six because it no longer maintained idle servers for an application only active during tax season. After the first success, Intuit moved more applications, services and enabling tools to the cloud. Today, over 80% of Intuit’s workloads are running in the cloud.

With AWS usage now growing, the priorities of the program shifted from migration speed to efficient operations and growth.

Intuit now spends $100s of Millions on prepaid AWS services (aka “reserved instances”, or “RIs” for short) alone, plus fees for on-demand usage during the peaks. Interest grew in understanding the use of different AWS services and spend by different business units and teams within Intuit.

The source for that information sits in the “Cost & Usage Report” (“CUR”), a bill that AWS delivers to every customer. The CUR includes line items for each unique combination of AWS product, usage type, and operation, along with the associated pricing. The CUR also contains information about credits, refunds and support fees.

Analyzing CUR data supports Intuit’s cloud program with two major use cases:

  1. Cost optimization. The goal is to understand opportunities to lower Intuit’s cloud spend. With $100s of Millions of spend on cloud infrastructure, the difference between on-demand usage and purchasing RIs can mean six-figure savings per day. While humans look at cost data to make purchase and modification decisions, Intuit also has automated routines that take action based on the data.
  2. Cost allocation. The goal is to forecast and distribute the cost of using cloud resources. Unlike in the old on-premise world, things are dynamic, and engineers run load tests and spin up new services all the time. They are trying to understand “how much is this costing me?”

To build these two use cases, Jason’s team needs to transform the raw CUR data into a format consumable by the business. The raw CUR data comes in a different format from what Intuit uses to charge internal parties, distribute shared costs, amortize RIs and record spend on the general ledger.

Traditionally, Jason’s team ran the analytics on the CUR data in an on-premise data warehouse.

The Next Bottleneck – the On-premise Data Warehouse

Unlike at most companies, Intuit’s CUR is very large. In 2017, it reached around 500M rows by the end of a month.

Amazon delivers the report to Intuit 3-4x per day and restates the rows with each delivery over the course of a month, meaning the report gets longer with each delivery. Coupled with a growing business, the amount of data the cluster has to process grows by the hour – literally.

You can see that trend play out in the chart above, with data from 2017. The grey area indicates the batch size for the CUR data. Each day, the batch size gets bigger as the number of rows in the CUR grows. At the end of the month, the CUR reaches about 500 million rows and resets on day one of the new month.

The number of rows the warehouse processes per minute stays constant at around 1 million. Therefore, the time it takes the warehouse to process each batch (the “batch duration”) goes up linearly. With 500M rows at the end of the month, it takes the warehouse 500 minutes to process the full report – 8 hours and 20 minutes.

Now extrapolate forward and calculate what that looks like in the future. With rising cloud spend, the data team realized that the CUR would start to blow up in size. In fact, today the CUR is larger by a factor of 10x with ~5 billion rows. Now we’re talking over 80 hours, almost four days.

3 Challenges for Data Teams: More Data, More Workflows, More People

Intuit’s situation is a common scenario we see our customers run into: “More data, more workflows, more people”.

  • More Data: As the business grows, so does the volume of data it generates. Data teams and developers also instrument each “event” for the business in new ways. That drives up the number of data points the infrastructure needs to collect for each event, and with it data volume. As Intuit rolls out new products built in the cloud, its CUR keeps growing with new line items. We typically see 10x growth over five years – in Intuit’s case it was 10x within a year!
  • More Workflows: Data transformations are a fundamental part of analysis. As with the CUR, raw data is insufficient for answering complex questions about the business. To meet these needs, the Intuit team needs to run transformations. In any data-driven enterprise, end-users become more sophisticated and curious about the different areas of their business. Workflows produce the answers. In response, the number of workflows data teams have to manage keeps growing.
  • More People: Today, the success of any organization depends on the ability to make fast, effective data-driven decisions. That means giving people access to the data they need. In Intuit’s case, Jason’s team distributes the processed CUR data to the entire organization. Intuit has over 8,000 employees, more than half of them developers. Next to the business, there’s also a growing pool of data scientists who build models and algorithms for machine learning and artificial intelligence applications.

For Intuit, it was clear that “keep on doing what we’re doing” was not an option. In a world where data is an asset, data and DevOps teams should focus on the value-creation part of pipelines.

With cloud usage and data volume going up, the old on-prem warehouse was already running into bottlenecks, and so the analytics team followed the business team into the cloud.

Building a Data Services Layer in the Cloud

The Intuit team followed their product team into the AWS cloud. The major goals included handling the explosion in data volume and adding value to the business.

With on-demand access to computing resources, access to usage data in near real-time is fundamental for Intuit’s business teams. Unlike in the old world, waiting for a report at the end of the month doesn’t work anymore. With the scale of Intuit’s cloud operations, a few hours of freshness have a substantial impact on the company.

Cloud analytics with Amazon Redshift

Jason migrated the entire stack from Oracle to Redshift, and deployed the same SQL and ETL processes.

Redshift handled the growth in data volume. Three major data points:

  1. The volume of total rows processed (grey area) goes up for each day of a given month as the size of the CUR grows, to about 4 billion rows per batch.
  2. The number of rows that Redshift processes every minute (yellow line) goes up as the size of the CUR grows, to about 100 million rows per minute.
  3. The batch duration (red line) to process a full CUR stays within 30-40 minutes.   

You can also see that the size of the grey area has a step change in April – tax season!  The change is due to new capabilities Intuit introduced, which tripled the number of rows of the bill (“more data”).

Despite tripling the number of rows, the batch duration stays within a narrow band and doesn’t spike. That’s because batch size and number of rows processed per minute grow at the same rate.  In other words, the cluster processes more data faster, i.e. performance goes up as workloads grow.

Let’s dive into how Jason’s team achieved that result.

Building A Data Architecture That Supports the Business

The cluster architecture and the data pipeline follow the best practices we recommend for setting up your Amazon Redshift cluster. In particular, pay attention to setting up your WLM to separate your different workloads from each other.

You can see the three major workloads in the architecture chart – stage, process and consume.

  • Stage: A place to put the raw data to load it into the cluster.
  • Process: Take the raw data into the platform, transform it by applying the business logic.  
  • Consume: Expose the transformed data to downstream users.
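
As a minimal sketch of the WLM separation mentioned above: assuming the cluster’s WLM configuration maps a query group named “etl” to its own queue (the group name here is hypothetical, not Intuit’s actual setup), the process workload can route its statements away from the consume workload like this:

```sql
-- Route this session's statements to the ETL queue.
-- Assumes the WLM configuration maps the query group 'etl' to a dedicated queue.
SET query_group TO 'etl';

-- ... staging and transformation statements run here ...

-- Return subsequent statements to the default queue.
RESET query_group;
```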

Among our customers, “ELT” is a standard pattern, i.e. the transformation of data happens in the cluster with SQL. Cloud warehouses like Redshift are both performant and scalable, to the point that data transformation use cases can be handled much better in-database vs. in an external processing layer. SQL is concise, declarative, and you can optimize it.

Intuit follows the “ELT” vs. “ETL” approach. With a lot of SQL knowledge on the team, they can build transformations in SQL and run them within the cluster. AWS drops the CUR into an S3 bucket, from which Intuit extracts the raw data (the “E”) into the staging area. Intuit leaves the raw data untouched and loads it into the cluster (the “L”), to then transform it (the “T”).
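
Here is a minimal sketch of what such an in-cluster transformation might look like – the table and column names are illustrative, not Intuit’s actual schema:

```sql
-- Rebuild the current billing period in the fact table from freshly loaded staging data.
BEGIN;

DELETE FROM cur_fact
WHERE billing_period = '2019-04';

INSERT INTO cur_fact (billing_period, usage_account_id, product_code, usage_type, unblended_cost)
SELECT billing_period,
       usage_account_id,
       product_code,
       usage_type,
       SUM(unblended_cost)
FROM cur_staging
WHERE billing_period = '2019-04'
GROUP BY 1, 2, 3, 4;

COMMIT;
```

Because the CUR restates the whole month with every delivery, a delete-and-reload of the affected billing period is one simple way to keep the fact table idempotent.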

Underneath the processes is an orchestration layer that coordinates workflows and manages dependencies. Some workflows need to execute on an hourly or daily basis, others on arrival of fresh data. Understanding the workflows and their execution is a crucial component for data integrity and meeting your SLAs.

When workflows and data pipelines fail –  and they will – you have to a) know about it as it happens and b) understand the root cause of the failure. Otherwise you will run into data integrity issues and miss your SLAs. In Intuit’s case, the key SLA is the near real-time character of the data.

In Integrate.io, you can see these workflows via our “Query Insights”.

You can double-click into each user to see the underlying query groups and dependencies. As the engineer in charge, that means you can track your workflows and understand which user, query and table are the cause of any issues.

End-to-end Data Flow, Toolchain and Business Services

Let’s go through the individual steps of the data flow and the technologies involved in orchestrating the workflows.

Stage

S3 is the demarcation point. AWS delivers the CUR into S3. With various data sources next to the CUR, it’s easy for people to put data into an S3 bucket. Loading data into Redshift from S3 is easy and efficient with the COPY command.
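
A hedged example of such a load – bucket, prefix, file format, and IAM role are placeholders, not Intuit’s actual configuration:

```sql
-- Load one CUR delivery from S3 into the staging table.
COPY cur_staging
FROM 's3://example-billing-bucket/cur/2019/04/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
GZIP
CSV
IGNOREHEADER 1;
```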

Process

Amazon Redshift is the data platform. The workers for ingestion and post-ingestion processing include Lambda and EC2. Intuit uses Lambda wherever possible, as they prefer to not have any persistent compute they need to monitor or care for (patching, restacking, etc.).

Lambda functions can now run for up to 15 minutes, and for any job that completes in under five minutes, the stack uses a Lambda function. For larger jobs, they can deploy the same code stack on EC2, e.g. for staging the big CUR.

Orchestrate

AWS Step Functions coordinate the Lambda jobs. SNS triggers new workflows as new data arrives, vs. CloudWatch for scheduling batch jobs. For example, when a new CUR arrives in an S3 bucket, processing needs to start right away vs. waiting for a specific time slot. RDS helps to maintain state.

Consume

Data consumption happens across three major categories.

  1. Generic downstream consumers, where the landing zone for the transformed data is Intuit’s data lake in S3. Moving data from Redshift into S3 is fast and efficient with the UNLOAD command.
  2. A growing contingent of data scientists who run machine learning and artificial intelligence algorithms, with SageMaker as their platform of choice. They can query data in Redshift, or call a growing set of APIs that run on Lambda with programmatic access to data.
  3. Business intelligence tools and dashboards to run the cost allocation programs, such as Tableau, Qlik, and QuickSight. This layer sees most of the consumption. Product managers have near real-time insights into the true allocated cost to make business choices.

Intuit supports new data use cases with Redshift, such as data APIs. Some of the use cases have a transactional character that may require many small writes.

Instead of trying to turn Redshift into an OLTP database, Intuit combines Redshift with PostgreSQL via Amazon RDS. By using dblink, you can have your PostgreSQL cake and eat it too. By linking Amazon Redshift with RDS PostgreSQL, the combined feature set can power a broader array of use cases and provide the best solution for each task.
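
The pattern looks roughly like this on the RDS PostgreSQL side – the server, endpoint, user, and table names are placeholders, not Intuit’s actual setup:

```sql
-- Run on the RDS PostgreSQL instance.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE EXTENSION IF NOT EXISTS dblink;

-- Point a foreign server at the Redshift cluster endpoint.
CREATE SERVER redshift_server
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'example-cluster.abc123.us-east-1.redshift.amazonaws.com', port '5439', dbname 'cur');

CREATE USER MAPPING FOR app_user
  SERVER redshift_server
  OPTIONS (user 'redshift_user', password 'redshift_password');

-- Query a Redshift table from PostgreSQL via dblink.
SELECT *
FROM dblink('redshift_server',
            'SELECT product_code, SUM(unblended_cost) FROM cur_fact GROUP BY 1')
     AS costs(product_code VARCHAR, total_cost NUMERIC);
```

Small, frequent writes can land in PostgreSQL, while heavy analytical queries stay in Redshift; dblink bridges the two when a single query needs both.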

Comparing “Old” vs “New” – Benefits & Lessons Learned

Unlike “all in one” data warehouses like Oracle or SQL Server, Redshift doesn’t offer system-native workflows. This may be intimidating at first.

Instead, AWS takes the approach of providing a broad collection of primitives for low-overhead compute, storage, and development services. Coupled with a rich tool ecosystem for Redshift, you can build a data platform that allows for higher-performing, more scalable, and lower-cost solutions than previously possible.

Overall, the migration ushered Intuit into a new era of data productivity. The platform:

  • transforms an ever-growing data volume into near real-time insights,
  • reduces the cost of running the CUR analytics, and
  • slashes the time it takes to develop new data services in half.

Meanwhile, the new data platform saves Intuit millions in cloud infrastructure spend and transforms the decision-making process for 8,000+ employees.

A New Way of Working with Data

With the new platform in place, Intuit is architecting a number of new use cases.

Data Lake Architecture

Long-term trends in the CUR data are interesting, but for cost optimization, analysts care most about the most recent data. It makes sense to unload data from the largest tables in Redshift into S3 in Parquet format. That saves cost and increases flexibility by separating storage and compute.
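
A minimal sketch of such an unload – the table, date column, bucket, and IAM role names are placeholders:

```sql
-- Unload older rows from a large fact table to the S3 data lake in Parquet format,
-- keeping roughly the most recent quarter inside the cluster.
UNLOAD ('SELECT * FROM cur_fact WHERE usage_date < DATEADD(month, -3, CURRENT_DATE)')
TO 's3://example-data-lake/cur_fact_history/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS PARQUET;
```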

Data Lifecycle Management

Once data is in S3, other (serverless) query engines like Athena or Redshift Spectrum can access it. The main fact tables in the Intuit cluster are based on date – the CUR is a bill. The date serves as the criterion for when to unload data. For example, you may only want to keep one quarter of data within the cluster. By keeping historic data in S3 and using Spectrum to query it, you scale data outside of Redshift but keep retrieval seamless and performant.
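
Assuming historic rows were unloaded to the Parquet location sketched above, a hedged example of exposing them through Spectrum – the schema, catalog database, and column list are placeholders (and simplified):

```sql
-- Register an external schema backed by the AWS Glue Data Catalog.
CREATE EXTERNAL SCHEMA cur_history
FROM DATA CATALOG
DATABASE 'cur_lake'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

-- Describe the historic Parquet files that were unloaded to S3.
CREATE EXTERNAL TABLE cur_history.cur_fact (
  usage_date     DATE,
  product_code   VARCHAR(64),
  unblended_cost DECIMAL(18, 6)
)
STORED AS PARQUET
LOCATION 's3://example-data-lake/cur_fact_history/';

-- Recent data comes from the cluster, older data from S3 – in one query.
SELECT usage_date, SUM(unblended_cost) AS cost
FROM (
  SELECT usage_date, unblended_cost FROM cur_fact
  UNION ALL
  SELECT usage_date, unblended_cost FROM cur_history.cur_fact
) AS combined
GROUP BY usage_date
ORDER BY usage_date;
```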

In Integrate.io, you can filter for Spectrum queries by row count and scan size. You can also track their execution time and queue wait time. In the screenshot below you see those metrics, including the uptick in Spectrum queries at the beginning of June.

Data Science

The cost optimization program has delivered massive benefits. Teams know and can predict computing costs in near real time. Deploying ML/AI capabilities against the CUR will enable even smarter decisions – even a 1% improvement pays huge dividends.

Intuit expects the number of data scientists to go up several-fold, and with it the query volume. These query patterns are more complex and less predictable. Concurrency Scaling offers an option to add more slots to a cluster to accommodate that incremental query volume, without adding nodes.

It’s a new way of working with data compared with the old, on-premise warehouse. Intuit is now in a position to embed a data services layer into all of Intuit’s products and services.

That’s all, folks!

That was a long post, and I hope it gave you a good peek behind the curtain on how Intuit built its data platform – and enough information to get started with your own.

Now, I’d love to learn from you! Is there anything you can share about your own experience building a data platform? And if you want faster queries for your cloud analytics, and spend less time on Ops and more time on Dev like Intuit, then go ahead and schedule a demo or start a trial for Integrate.io.