What is ETL vs ELT?

What is the Main Difference Between ETL and ELT?

ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) are two methods of moving data from sources to a target repository. 

ETL applies transformations before loading data to its destination. With ELT, transformations happen on demand, after the data has already been loaded into the target repository. This is an essential difference with significant implications for data storage and analytics.

How Does ETL Work?

The ETL process works in three stages:

  • Extract: The ETL acquires data from source systems, such as relational databases, NoSQL databases, API calls, or file imports. All of this data is structured.
  • Transform: The ETL engine applies a transformation schema to the incoming data. It does this in a staging area, where the engine copies values from the current table into the new table. This standardizes all incoming data while also performing some data quality tasks.
  • Load: Now that the data is clean, consistent, and in the correct format, the ETL transfers it to its destination. With ETL, this destination is generally an OLAP data warehouse.

During the transformation stage, the ETL process cleanses, harmonizes, and normalizes raw data. It can also enrich or mask data if required.

The data that arrives in the repository is quality assured and correctly formatted, making it ready for immediate uses such as analytics.
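To make the three stages concrete, here is a minimal sketch of an ETL pipeline in Python, using pandas for the staging transform and SQLite standing in for the source and warehouse. The database paths, table names, and column names are all hypothetical:

```python
import sqlite3

import pandas as pd

# Extract: pull structured rows from a source system (paths/tables are hypothetical)
source = sqlite3.connect("source_orders.db")
orders = pd.read_sql_query("SELECT order_id, amount, order_date FROM orders", source)

# Transform: cleanse and standardize in a staging DataFrame before anything is loaded
orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")  # normalize dates
orders = orders.dropna(subset=["order_date"])                                 # drop unparseable rows
orders["amount"] = orders["amount"].round(2)                                  # standardize precision

# Load: write the clean, consistently formatted table to the warehouse
warehouse = sqlite3.connect("warehouse.db")
orders.to_sql("fact_orders", warehouse, if_exists="append", index=False)
```

Note that the destination only ever sees rows that have already passed through the transformation step.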

How Does ELT Work?

ELT follows the same basic steps as above but implements them in a different sequence. This change of order makes a massive difference to the outcome.

  • Extract: The ELT process pulls data from the sources. This process can handle structured, semi-structured, or unstructured data.
  • Load: The process pushes data to the relevant data repository. When using ELT, the repository is generally a data lake built on a distributed file system such as Hadoop. The data doesn't go through a transformation at this point, which means that the process loads it to the repository in its original format.
  • Transform: Data transformation happens on-demand in ELT. If a user or application wants to retrieve data from the data lake, they will use a process such as MapReduce to sort through the available data.

ELT has the advantage of speed. The processing overhead between extraction and loading is minimal, as no transformation work takes place. ELT is therefore an efficient way to build a data pipeline that feeds a data lake. However, the lake can become a data swamp if it's not adequately tended.

As this data isn't structured and integrated, it would take a substantial amount of time to execute a traditional-style query. Big Data tools such as Hadoop and Spark can significantly reduce the execution time. 
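The reversed sequence looks something like the sketch below, again with hypothetical paths and field names. The key point is that files land in the lake untouched, and parsing only happens when a consumer asks a question:

```python
import json
import shutil
from pathlib import Path

lake = Path("data_lake/events")          # hypothetical lake directory
lake.mkdir(parents=True, exist_ok=True)

# Extract + Load: land source files in the lake untouched, in their native format
for src in Path("source_exports").glob("*.json"):
    shutil.copy(src, lake / src.name)    # no transformation before landing

# Transform (on demand): parse and filter only at query time
def high_value_events(min_amount):
    for path in lake.glob("*.json"):
        # assumes each file holds a JSON array of event objects
        for record in json.loads(path.read_text()):
            if record.get("amount", 0) >= min_amount:
                yield record

print(sum(1 for _ in high_value_events(100.0)))  # count of qualifying events
```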

Common ETL Use Cases

ETL is best suited to particular use cases, especially those involving certain kinds of data repositories.

When Creating a Data Warehouse or Data Mart

ETL deals with structured data, which is why it is ideal for use as a data pipeline to populate a repository structure like a data warehouse.

This also works for similar structures, such as data marts. A data mart is essentially a specialized warehouse that provides information for a particular business function. You can deploy a finance data mart, for example, that only holds data relevant to finance. ETL will provide fresh data for such a structure.

When Integrating Data Sources

ETL is ideal for data integration use cases. In this instance, the organization will have disparate data sources, such as production databases, cloud services, and information stored in relational database tables.

If the enterprise wishes to bring these together in a unified format, ETL will apply the required changes at the transformation stage. The result is a repository of standardized, consistent data that is ready for immediate use.

When Applying a Master Schema

ETL is excellent in scenarios where you want to combine disparate sources into a single relational database. For example, you may have customer data in your CMS, your billing system, and your e-commerce system. You can create a schema in the ETL interface, and the ETL engine will perform the data integration process.

The result is a data repository filled with neatly formatted customer information. This process works with any volume of data, as long as it all fits in your schema.
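As an illustration, the master-schema step might look like the following pandas sketch. The source systems, column names, and canonical schema are invented for the example:

```python
import pandas as pd

# Hypothetical extracts from two systems that describe customers differently
cms = pd.DataFrame({"cust_id": [1, 2], "full_name": ["Ada", "Grace"],
                    "email": ["ada@example.com", "grace@example.com"]})
billing = pd.DataFrame({"customer": [1, 2], "balance": [42.0, 0.0]})

# Master schema: rename each source's columns to one canonical set...
cms = cms.rename(columns={"cust_id": "customer_id", "full_name": "name"})
billing = billing.rename(columns={"customer": "customer_id"})

# ...then integrate the sources on the shared key
customers = cms.merge(billing, on="customer_id", how="left")
print(customers)
```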

When Storing Data for Future Production Use

ETL can be a two-way process if required. The data in a data warehouse is clean, integrated, and quality controlled. This means that you can potentially export data from the warehouse to another application.

For example, you may have an ETL process that creates a database of customer information. Once you have this database set up, you can export it into a Customer Relationship Management (CRM) system.
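A hedged sketch of that export step, with the warehouse table and CRM file format assumed for illustration:

```python
import csv
import sqlite3

# Query the warehouse for the clean, integrated customer table (names are hypothetical)
warehouse = sqlite3.connect("warehouse.db")
rows = warehouse.execute("SELECT customer_id, name, email FROM dim_customers").fetchall()

# Write a flat file in the shape a CRM's bulk-import tool typically accepts
with open("crm_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["customer_id", "name", "email"])  # header row for the import
    writer.writerows(rows)
```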

Common ELT Use Cases

ELT is useful in other use cases, such as:

When Creating a Data Lake

Compared to other repository structures, data lakes can seem to have no rules: the data pipeline deposits data in its native format without any kind of transformation.

This approach can work well if it's supported by the right tools. You'll also need excellent standards of data governance to ensure that you delete data when it expires. You must also ensure that you maintain adequate records, such as lineage metadata.
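One lightweight way to keep such records is to write a lineage "sidecar" file alongside every object that lands in the lake. This is only a sketch, not a standard; the field names and retention policy are assumptions:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def ingest_with_lineage(src: Path, lake_dir: Path) -> Path:
    """Land a file in the lake unchanged and record lineage metadata beside it."""
    lake_dir.mkdir(parents=True, exist_ok=True)
    data = src.read_bytes()
    target = lake_dir / src.name
    target.write_bytes(data)

    sidecar = {
        "source_path": str(src),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),  # detect later corruption or drift
        "retention_days": 365,                       # governance: when this data expires
    }
    (lake_dir / (src.name + ".lineage.json")).write_text(json.dumps(sidecar, indent=2))
    return target
```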

When Storing Different Types of Data Together

ELT can handle semi-structured data, such as CSV files, and unstructured data, like images and text files. ETL can't do this, as it passes all data through the transformation layer, which only works with structured data.

All of these data entities persist in their native format, so you can recall them in their original form. This frees up storage capacity elsewhere in the enterprise network.

When Data is Too Diverse to Apply a Schema

When dealing with Big Data, the incoming data may be too structurally diverse to fit a single schema. If this is the case, any attempt to force the data into one schema is likely to fail.

The ELT approach makes sense in this scenario. You can load all information in its native format to the data lake, and then rely on techniques such as MapReduce to find what you need, when you need it.
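In the spirit of MapReduce, though vastly simplified and with invented record shapes, a schema-on-read query can interpret whichever records it understands and skip the rest:

```python
from collections import defaultdict

# Records as they might sit in the lake: no shared schema across sources
records = [
    {"user": "a", "amount": 10},             # billing export
    {"customer_id": "a", "spend": 5},        # e-commerce export, different field names
    {"note": "free-form text, no amounts"},  # unstructured residue
]

def map_spend(rec):
    """Map step: emit (key, value) pairs only from records we know how to read."""
    if "amount" in rec:
        yield rec["user"], rec["amount"]
    elif "spend" in rec:
        yield rec["customer_id"], rec["spend"]
    # anything else is skipped entirely: schema-on-read, not schema-on-write

# Reduce step: sum the mapped values per key
totals = defaultdict(int)
for rec in records:
    for key, value in map_spend(rec):
        totals[key] += value

print(dict(totals))  # {'a': 15}
```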

When Archiving Data

Some data needs long-term storage, even if it won't be in regular use. For example, financial records must be stored for a minimum period, during which they may rarely, if ever, be read.

ELT is ideal for this purpose, as it transports data in its native format. It does this with minimal overhead, using cost-effective Big Data storage such as the Hadoop Distributed File System (HDFS), with processing engines like Spark available if the archive ever needs querying.

The ETLT Alternative

ETLT is an emerging framework that combines the best elements of ETL and ELT. In this model, there are four stages:

  • Extract: The ETLT service obtains data from sources.
  • Transform data: The service passes data through cleansing and validation.
  • Load: The service loads data into the data warehouse.
  • Transform relationships: Further SQL processing happens on the warehouse side to integrate incoming data sources.

ETLT is growing in popularity because it uses new technologies to improve overall process efficiency. ETL tools can handle most of the hard work of data replication, using dynamic approaches to handle different types of extraction. The ETLT framework manager can then schedule additional transformations at the optimal time, maximizing the available resources. The result is a much faster throughput of data, with less strain on enterprise systems.
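A minimal ETLT sketch might look like this, using pandas for the in-flight transform and SQLite standing in for the warehouse. The file names, table names, and deduplication logic are all assumptions for illustration:

```python
import sqlite3

import pandas as pd

# Extract + first Transform: light cleansing and validation while the data is in flight
orders = pd.read_csv("daily_orders.csv")                  # hypothetical source extract
orders = orders.dropna(subset=["order_id"])               # basic validation before landing
orders["amount"] = pd.to_numeric(orders["amount"], errors="coerce").fillna(0)

# Load: land the cleansed rows in a staging table inside the warehouse
warehouse = sqlite3.connect("warehouse.db")
orders.to_sql("stg_orders", warehouse, if_exists="replace", index=False)

# Second Transform: SQL runs inside the warehouse to integrate the new rows
warehouse.executescript("""
    CREATE TABLE IF NOT EXISTS fact_orders AS SELECT * FROM stg_orders WHERE 0;
    INSERT INTO fact_orders
    SELECT s.* FROM stg_orders s
    LEFT JOIN fact_orders f ON f.order_id = s.order_id
    WHERE f.order_id IS NULL;  -- keep only rows the warehouse hasn't already seen
""")
warehouse.commit()
```

The light pre-load cleanup keeps bad rows out of the warehouse, while the heavier integration work runs where the warehouse engine can optimize it.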
