
Connection Setup

To allow Integrate.io ETL access to Snowflake

To allow Integrate.io ETL access to your Snowflake account, log in to your Snowflake account using your favorite SQL client and execute the following commands to create a user with the proper permissions for Integrate.io ETL:
USE ROLE accountadmin; -- use this role to be able to grant permissions
CREATE ROLE integrate_io; -- create a role to assign to the integrate_io user
CREATE USER integrate_io PASSWORD='<password>' DEFAULT_ROLE = integrate_io; -- create a user with a password
GRANT ROLE integrate_io TO USER integrate_io; -- assign user to role

-- run the following statements for each database you'd like Integrate.io ETL to access:

GRANT USAGE ON DATABASE "<database_name>" TO ROLE integrate_io; -- grant database access to integrate_io
GRANT CREATE SCHEMA ON DATABASE "<database_name>" TO ROLE integrate_io; -- grant schema creation privilege to integrate_io
GRANT ALL ON SCHEMA "<schema_name>" TO ROLE integrate_io; -- grant integrate_io privileges to do anything within the defined schema
GRANT ALL ON SCHEMA public TO ROLE integrate_io;
GRANT ALL ON ALL TABLES IN SCHEMA "<database_name>"."<schema_name>" TO ROLE integrate_io; -- grant integrate_io privileges to all tables in the schema

-- run the following statements for each warehouse you'd like Integrate.io ETL to use:
GRANT ALL ON WAREHOUSE "<warehouse_name>" TO ROLE integrate_io; -- allow integrate_io to use a warehouse
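Once the grants are in place, you can verify them from the same ACCOUNTADMIN session, for example:

```sql
-- List the privileges granted to the role, and confirm the user's role assignment
SHOW GRANTS TO ROLE integrate_io;
SHOW GRANTS TO USER integrate_io;
```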

Authentication Options

You can authenticate to Snowflake using either key-pair authentication or password authentication. Key-pair authentication is recommended for improved security.

Option 1: Key-Pair Authentication (Recommended)

Follow these steps to set up key-pair authentication:
  1. Generate a Key Pair. Use one of the following methods to generate your key pair:
    • Method A - Standard Key Pair:
    openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
    openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
    
    • Method B - Encrypted Key Pair:
    openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out temp_rsa_key.p8
    openssl pkcs8 -topk8 -inform PEM -v1 PBE-SHA1-3DES -in temp_rsa_key.p8 -out rsa_key.p8 # rsa_key.p8 is the private key
    openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub # rsa_key.pub is the public key
    
    This will generate two files:
    • rsa_key.p8: Private key (keep this secure)
    • rsa_key.pub: Public key (to be assigned to your Snowflake user)
  2. Open the public key file (rsa_key.pub) and copy ONLY the content between -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines (not including these lines). For example, if your public key file contains:
    -----BEGIN PUBLIC KEY-----
    MIIBIjANB...AQAB
    -----END PUBLIC KEY-----
    
    You should only copy:
    MIIBIjANB...AQAB
    
  3. Assign the Public Key to the Snowflake User. Run the following command in Snowflake:
    ALTER USER integrate_io SET RSA_PUBLIC_KEY='<paste_public_key_here>';
    
    Replace <paste_public_key_here> with the content of your public key file. Ex: MIIBIjANB...AQAB
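The copy step above can also be scripted. A minimal sketch that strips the PEM header and footer, shown here against a shortened sample key (replace sample.pub with your rsa_key.pub; real key bodies are much longer):

```shell
# Write a shortened sample public key (stand-in for rsa_key.pub)
printf -- '-----BEGIN PUBLIC KEY-----\nMIIBIjANB...AQAB\n-----END PUBLIC KEY-----\n' > sample.pub
# Keep only the base64 body and join it onto one line
grep -v -- '-----' sample.pub | tr -d '\n'
```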

Option 2: Password Authentication (to be deprecated following Snowflake policy)

When creating the user, include a password as shown below:
CREATE USER integrate_io PASSWORD='<password>' DEFAULT_ROLE = integrate_io; -- create a user with a password

To create a Snowflake connection in Integrate.io ETL

1
Click the Connections icon (lightning bolt) on the top left menu.
2
Click New connection.
Connections page with New connection button
3
Choose Snowflake.
Selecting Snowflake from the connection type list
4
Type a name for the connection.
5
Enter your account name. How to get your Snowflake Account Name:
  1. Via the classic console: take it from the URL, e.g. https://xxxxxxx.us-east-1.snowflakecomputing.com, where xxxxxxx is the value to enter in the dashboard.
    Note: If the account URL shown on the dashboard does not contain the account's region as above, or the account is using Azure, use the "Account locator" value as the account name.
  2. Via Snowsight web interface: go to your Account > Copy account URL
    Snowsight interface showing Copy account URL option
6
Enter the user name you created for Integrate.io ETL to use.
7
For Authentication Method, choose one of the following options:
  • Option A - Password Authentication: Select “Password” and enter the password you created for the user (Note: to be deprecated following Snowflake policy)
    Password authentication option in Snowflake connection form
  • Option B - Key-Pair Authentication (Recommended): Select “Key-Pair Authentication”, then upload your private key file (rsa_key.p8). If you created an encrypted key, enter the passphrase in the Private key passphrase field. If your key is not encrypted, leave the passphrase field empty.
    Key-pair authentication with private key upload field
8
Enter the default database to use.
9
Enter the default warehouse to use.
10
Set the region to your Snowflake’s account’s region.
11
Click Test Connection. If the credentials are correct, a message that the connection test was successful appears.
12
Click Create connection.
Snowflake connection with database, warehouse, and region fields

Identifier names

Integrate.io ETL uses quoted identifiers, which means that by default, all identifiers (table and column names, for instance) are case sensitive, as opposed to non-quoted identifiers, which are case insensitive and are stored in upper case. You can change this behavior via the QUOTED_IDENTIFIERS_IGNORE_CASE setting, either for Integrate.io ETL’s user or for your entire account. Note that tables and columns that were created with case-sensitive names remain case sensitive, so it’s important to make this change before you create any tables. To switch a case-sensitive identifier name to a case-insensitive one after it’s been created, use the ALTER ... RENAME clauses in Snowflake. See Snowflake’s documentation on identifiers for more information.
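The effect of quoted identifiers can be seen in a short session (illustrative; the table name is hypothetical):

```sql
-- With the default QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE:
CREATE TABLE "Events" (id NUMBER); -- quoted: stored case-sensitively as "Events"
SELECT * FROM events;              -- fails: unquoted names resolve to EVENTS
SELECT * FROM "Events";            -- succeeds

-- Switch to a case-insensitive name after creation:
ALTER TABLE "Events" RENAME TO events; -- unquoted target: stored as EVENTS
SELECT * FROM events;                  -- now succeeds
```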
Snowflake destination component in the pipeline designer

Connection

Select an existing Snowflake connection or create a new one (for more information, see Allowing Integrate.io ETL access to my Snowflake account.)

Destination Properties

Snowflake destination properties with schema, table, and auto-create options
  • Target schema - the target table’s schema. If empty, the default schema is used.
  • Target table - the name of the target table in your Snowflake database. By default, if the table doesn’t exist, it will be created automatically.
  • Automatically create table if it doesn’t exist - if unchecked and the table doesn’t exist, the job fails.
  • Automatically add missing columns - when checked, the job will check if each of the specified columns exist in the table and if one does not exist, it will add it. Keep in mind that if there are a very large number of fields, this could impact job run time. Key columns can’t be automatically added to a table.

Operation type

Operation type dropdown with append, overwrite, and merge options
  • Append (Insert only) - the default behavior. Data is only appended to the target table.
  • Overwrite (Truncate and insert) - truncates the target table before the data is inserted.
  • Overwrite (Delete all rows on table and insert) - deletes all rows from the target table before the data flow executes. If a truncate statement can’t be executed on the target table due to permissions or other constraints, you can use this instead. This operation does not clear the schema.
  • Merge with existing data using delete and insert - incoming data is merged with existing data in the table by deleting target table data that exists in both the data sets and then inserting all the incoming data into the target table. Requires setting the merge keys correctly in field mapping. Merge is done in a single transaction:
    1. The dataflow’s output is copied into a temporary table with the same schema as the target table.
    2. Rows with keys that exist in the temporary table are deleted from the target table.
    3. All rows in the temporary table are inserted into the target table.
    4. Temporary table is dropped.
  • Merge with existing data using update and insert - incoming data is merged with existing data in the table by updating existing data and inserting new data. Requires setting the merge keys correctly in field mapping. Merge is done in the following manner:
    1. The dataflow’s output is copied into a temporary table with the same schema as the target table.
    2. Existing records (by key) in the target table are updated and new records are inserted using the MERGE statement.
    3. Temporary table is dropped.
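The delete-and-insert merge above roughly corresponds to the following SQL (a sketch only; target, tmp_target, and the key column id are hypothetical names):

```sql
BEGIN;
-- 1. Stage the dataflow's output in a temporary table with the target's schema
CREATE TEMPORARY TABLE tmp_target LIKE target;
-- (Integrate.io ETL copies the dataflow output into tmp_target here)
-- 2. Delete target rows whose keys appear in the incoming data
DELETE FROM target USING tmp_target WHERE target.id = tmp_target.id;
-- 3. Insert all incoming rows
INSERT INTO target SELECT * FROM tmp_target;
-- 4. Drop the staging table
DROP TABLE tmp_target;
COMMIT;
```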

Pre and post action SQL

  • Pre-action SQL - SQL code to execute before inserting the data into the target table. If a merge operation is selected, the SQL code is executed before the staging table is created.
  • Post-action SQL - SQL code to execute after inserting the data into the target table. If a merge operation is selected, the SQL code is executed after the staging table is merged into the target table.
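For example, a pre-action might clear out stale rows before the load (illustrative; the table and column names are hypothetical):

```sql
-- Remove rows older than seven days before loading fresh data
DELETE FROM staging_events WHERE loaded_at < DATEADD(day, -7, CURRENT_TIMESTAMP());
```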

Advanced options

Advanced options with max errors, truncate columns, and null handling
  • Maximum errors - If this number of errors occurs in Snowflake while loading data into the table, the job fails.
  • Truncate columns - Truncates string values in order for them to fit in the target column specification.
  • Load empty as null - inserts empty strings as null values.
  • Null string - String fields that match this value will be replaced with NULL.

Schema Mapping

Schema mapping with field names, types, and key column settings
Map the dataflow fields to the target table’s columns. Columns defined as keys are used as the sort key when Integrate.io ETL creates the table. If a merge operation is used, you must select one or more fields as keys; these are used to uniquely identify rows in the table for the merge operation. When the table is created automatically, the data types in Integrate.io ETL are mapped as follows. Note that since Integrate.io ETL doesn’t have a notion of maximum string length, string columns are created with the maximum length allowed in Snowflake.
Integrate.io ETL    Snowflake
String              VARCHAR
Integer             NUMBER
Long                NUMBER
Float               DOUBLE
Double              DOUBLE
DateTime            TIMESTAMP_TZ
Boolean             BOOLEAN


Last modified on April 20, 2026