Many Salesforce administrators use a free or low-cost data integration app from the AppExchange for simple integration tasks. As of this writing, the most popular app in the Integration category of the AppExchange is Dataloader.io. This simple but useful tool lets you export and import Salesforce data and schedule those exports and imports. Because it is inexpensive (free or low cost, depending on your usage) and easy to install, Dataloader.io is often the first tool an administrator reaches for when faced with a simple integration task.

Here at our company, we understand that everybody’s data integration needs are different, and Dataloader.io is a great entry-level platform for basic integration needs. If your integration needs remain basic, then you’ve found an inexpensive way to integrate Salesforce with your enterprise. Yahtzee! But what if your needs grow or change? Should you simply follow the upgrade path suggested by Dataloader.io -- MuleSoft -- or should you look for a different platform as your next step? Dataloader.io to MuleSoft is a significant leap. Before you go down that road, we invite you to consider our platform as a less expensive, less complex alternative.

For more information on our native Salesforce connector, visit our Integration page.

Our Platform vs. Dataloader.io

The Basics

Both tools allow the export and import of data, as well as the scheduling of those exports and imports. Both allow writing the data to a file system on a server using standard file transfer. The Salesforce exports in both systems support SOQL queries, so you can filter your exports down to just the data you want to examine, as long as you are comfortable writing SOQL statements. Both systems also allow you to insert or upsert data from a file into Salesforce.

The key difference between the tools can be summarized in two words: connections and logic.

  • Our platform connects to much more than simple file-based systems. Our tool can read and write data directly in all the major database systems, as well as in any system that supports the popular REST API standard.
  • A pipeline, our name for a set of transformations, is like a mini-program. Don’t be intimidated by the term “program” - as you’ll see in our example below, our programs are drag-and-drop connections between components. If you’ve done Salesforce administration, you’ll likely find pipelines an easy adjustment.

To understand how connections and logic can make a big difference in your integration project, let’s consider a simple Salesforce task: lead integration. Because leads often come from, and are shared with, other systems, they are a very common subject for an integration project. For accuracy and speed, and to conserve staff time, most organizations automate lead loading between systems. Both Dataloader.io and our platform can automate lead exchange. As long as the system you’re getting leads from, or sending leads to, can read a file, you can use Dataloader.io’s scheduler to send and/or receive leads automatically.

The Challenges

What if the system where you’re sending leads can’t accept a file? What if the staff running that system wants you to insert leads directly into a staging table? Or what if the system can accept a file but can’t check for duplicates? These are all very common challenges, but Dataloader.io can’t handle them directly. If you want to do anything more complex than what a SOQL query can accomplish, you need another strategy.
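To make the staging-table request concrete, here is roughly what that step looks like in code. This is an illustrative Python sketch that uses sqlite3 as a stand-in for the other team’s database; the table and column names are invented:

```python
import sqlite3

# In-memory SQLite stands in for the target database; the staging-table
# name and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lead_staging (email TEXT, first_name TEXT, last_name TEXT)"
)

# Leads as they might arrive from a Salesforce export.
leads = [
    ("pat@example.com", "Pat", "Jones"),
    ("sam@example.com", "Sam", "Smith"),
]

# Parameterized bulk insert directly into the staging table -- no
# intermediate file required.
conn.executemany("INSERT INTO lead_staging VALUES (?, ?, ?)", leads)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM lead_staging").fetchone()[0])  # 2
```

The point is not the few lines of SQL; it is that the tool must be able to open a direct database connection at all, which a file-only scheduler cannot do.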

With Dataloader.io, you have a couple of choices:

  1. Put a human in the middle. He or she downloads the emails of your current leads from the database, then performs a VLOOKUP in Excel to see whether new leads from Salesforce are already there.
  2. Buy an integration tool that allows you to script data transforms.  
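Whichever way you do it, the check in option 1 boils down to a set-membership test: is this Salesforce lead’s email already in the database? The VLOOKUP a human performs in Excel is the spreadsheet version of the following (illustrative Python with made-up data):

```python
# Emails already in the lead database (the list a human would download).
existing_emails = {"pat@example.com", "sam@example.com"}

# New leads exported from Salesforce.
salesforce_leads = [
    {"email": "pat@example.com", "name": "Pat Jones"},
    {"email": "lee@example.com", "name": "Lee Chen"},
]

# Keep only the leads whose email is not already in the database --
# exactly the decision the VLOOKUP-in-Excel step makes by hand.
new_leads = [l for l in salesforce_leads if l["email"] not in existing_emails]

print([l["name"] for l in new_leads])  # ['Lee Chen']
```

Three lines of logic, but when a human performs it by hand every week, it becomes a recurring cost and a recurring source of errors.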

If you take the second path, MuleSoft (the owners of Dataloader.io) would like you to buy their flagship product, the Anypoint Platform. Anypoint is a high-quality integration tool, but it requires a complex implementation and costs an order of magnitude more than Dataloader.io. Our platform, on the other hand, costs a fraction of the Anypoint Platform.

A Simple Example That’s Too Complicated for Dataloader.io

To help you understand why our platform might be the right choice for you, let’s look at a simple data pipeline that’s too complicated to implement with Dataloader.io. Here’s what we want to do:

  • Get all the leads entered in Salesforce in the last 7 days.
  • Check to see if those leads are in another database.
    • If the lead is already in the database, ignore it.
    • If the lead is not in the database, insert it into the database.

In the real world, we might also want to update the existing lead in the database if it differs from our Salesforce lead, or do some editing of the leads before inserting them into the database. Our data pipelines can do all of this, but let’s keep this example simple.
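The same steps can be expressed in conventional code. Here is a rough sketch in Python, using an in-memory SQLite database as a stand-in for the lead database (the table names and sample data are made up):

```python
import sqlite3

# In-memory SQLite stands in for the lead database; table names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sf_leads (email TEXT, name TEXT);  -- last 7 days from Salesforce
    CREATE TABLE db_leads (email TEXT);             -- the other database
    INSERT INTO sf_leads VALUES ('pat@example.com', 'Pat Jones'),
                                ('lee@example.com', 'Lee Chen');
    INSERT INTO db_leads VALUES ('pat@example.com');
""")

# Left join: every Salesforce lead, with NULL where no database match exists.
# Filter: keep only the rows where the join came up empty.
new_rows = conn.execute("""
    SELECT s.email
    FROM sf_leads s
    LEFT JOIN db_leads d ON d.email = s.email
    WHERE d.email IS NULL
""").fetchall()

# Insert: write only the genuinely new leads into the lead database.
conn.executemany("INSERT INTO db_leads (email) VALUES (?)", new_rows)
conn.commit()

print(sorted(r[0] for r in conn.execute("SELECT email FROM db_leads")))
# ['lee@example.com', 'pat@example.com']
```

In our tool, each of those stages -- source, left join, filter, insert -- is a drag-and-drop component rather than a query you have to write yourself.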

Here’s the overall pipeline:

[Screenshot: the overall pipeline]

The top two boxes (or "components") are our data sources. We’re pulling all the leads from Salesforce that were entered in the last 7 days, and all the leads in the lead database.  

The next component is a Left Join. In this pipeline, it acts as a lookup, returning every lead from Salesforce alongside any matching lead from the database.

[Screenshot: the Left Join component]

After the left join, the Filter component lets us choose the leads from Salesforce that don’t match a lead in the database.

[Screenshot: the Filter component]

Finally, the Database component inserts all of the leads that don’t already exist in the lead database into that database.

[Screenshot: the Database component]

Let’s look at one of these components in more detail.

[Screenshot: Salesforce data source configuration, query page]

Pictured above is a configuration page for the Salesforce data source component. Instead of writing code, you only need to enter information into a few configuration pages; this is true of each component in the pipeline. The first configuration page (not shown) specifies the connection. The second page, shown here, contains the query the component will use to get data from the Lead object. As you can see, we added a where clause to grab the last 7 days of leads.
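For reference, a where clause like that one can lean on a built-in SOQL date literal; the query looks something like this (field list abbreviated):

```sql
SELECT Id, FirstName, LastName, Email
FROM Lead
WHERE CreatedDate = LAST_N_DAYS:7
```

`LAST_N_DAYS` is one of SOQL’s standard date literals, so no date arithmetic is needed in the query itself.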

[Screenshot: Salesforce data source configuration, field selection page]

Above is the third configuration page for the Salesforce data source component, which specifies the fields we want to grab. As you can see, it’s another drag-and-drop interface. Our platform fetches the list of fields in the Lead object when you open this dialog; all you have to do is pick the fields you want to use. If you look closely, you can see that we defined an alias for the email field in the Lead object, so it would be clear where the email came from further down the pipeline.

[Screenshot: Join component configuration]

Finally, here is the configuration of the Join component. As you can see, the user-friendly aliases let us be crystal clear about what is being joined. Since this is a left join, every SalesforceEmail remains in the pipeline, and where a matching DatabaseEmail does not exist, the join supplies a null value.


As we've demonstrated, our platform is a powerful ETL tool, and its easily configured pipelines make it simple to use and easy to understand. Furthermore, even though the database in this example was behind a firewall, our platform can still reach it by using a reverse SSH tunnel between our cloud service and your firewalled on-premises or cloud servers. This example was built and tested in less than an hour, yet it will save hundreds of hours of analyst time and prevent a lot of errors. The same cannot be said for a Dataloader.io-plus-Excel solution.
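For readers curious about the tunnel mentioned above: a reverse SSH tunnel of that general shape can be opened with a single command (the hostnames, port, and user here are placeholders, not our actual setup):

```shell
# Illustrative only: run from inside the firewalled network.
# Forwards port 5432 on the cloud host back to the on-premises
# database server, so the cloud service can reach it.
ssh -N -R 5432:db.internal:5432 tunnel-user@cloud.example.com
```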

We believe our tool is the more logical next step for organizations that have outgrown Dataloader.io. If you want to find out for yourself, we’re happy to provide a demo, a seven-day free trial, and a free setup session with our implementation team. Or, if you’re strapped for time, one of our integration specialists can help you construct your data pipeline, or build it for you. Drop us a line or schedule a meeting.