Overview
The AI Transformation component applies GPT-powered transformations to fields in your Integrate.io ETL pipeline. It operates per record (one request per configured field/record), supports prompt interpolation, and emits typed outputs you define.
Configuration
1. Model Settings
- Model selection: Choose a Chat Completions model (e.g., gpt-4o, gpt-4o-mini).
  - Note: GPT-5 models with reasoning and verbosity capabilities take longer to respond; you can verify this manually with the Completions API.
- Temperature: Controls randomness (0.0–2.0). Param ref: https://platform.openai.com/docs/api-reference/chat/create
- Max tokens: Upper bound on response tokens. Param ref: https://platform.openai.com/docs/api-reference/chat/create
- API key: Generate/manage at https://platform.openai.com/api-keys
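To see how the model settings above map onto an API call, here is a minimal sketch of a Chat Completions request body. The function name and defaults are illustrative, not part of the component; only `model`, `temperature`, and `max_tokens` correspond to the settings described above.

```python
import json

def build_chat_request(model, prompt, temperature=0.0, max_tokens=256):
    """Assemble a Chat Completions request body using the settings above.

    Illustrative sketch only; the component builds this request for you.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # 0.0-2.0; lower = more deterministic
        "max_tokens": max_tokens,    # upper bound on response tokens
    }

body = build_chat_request("gpt-4o-mini", "Classify the sentiment of: great product!")
print(json.dumps(body, indent=2))
```

The request is sent with your API key in the `Authorization` header; the component handles authentication once the key is configured.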
2. Select Fields & Preview
- Select field name: Pick the upstream column in the field row.
- Set prompt: Write a concise instruction. Use #{field_name} to interpolate values. If you do not include any #{...} tokens, the component appends the selected field value automatically so GPT has context.
- Set alias (projected_name): Provide the output column name for this transformation; aliases must be unique.
- Preview: Click Preview to run on a small sample (limited to 5 rows by default). Review outputs and refine prompts/types as needed.
- Note: All AI prompt fields are returned as string datatype.
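The interpolation rule above can be sketched as follows. This is an illustrative approximation of the behavior, not the component's actual engine; the function and record names are assumptions.

```python
import re

def render_prompt(prompt, record, selected_field):
    """Mimic the component's interpolation rule (illustrative sketch):
    replace each #{field} token with the matching record value; if the
    prompt contains no tokens, append the selected field's value so GPT
    has context.
    """
    tokens = re.findall(r"#\{(\w+)\}", prompt)
    if not tokens:
        return f"{prompt} {record[selected_field]}"
    rendered = prompt
    for name in tokens:
        rendered = rendered.replace("#{%s}" % name, str(record[name]))
    return rendered

record = {"city": "Paris", "country": "France"}
render_prompt("Translate #{city} and #{country}", record, "city")
# -> "Translate Paris and France"
render_prompt("Summarize:", record, "city")
# -> "Summarize: Paris"
```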
Test (Preview)
You can test your transformation directly in the Package Designer:
- Connect AI Transformation after a source/transform component.
- Configure fields, prompts, and alias.
- Click Preview; a small number of rows (5 by default) will be processed.
- Inspect the outputs and iterate on prompts/types as needed.
The preview table mirrors the schema produced by the connected input component.
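The preview flow above can be sketched as a simple row-limited map. The function name, the stubbed transform, and the sample rows are all illustrative; only the default limit of 5 comes from the documentation.

```python
def run_preview(rows, transform, limit=5):
    """Sketch of Preview: apply the configured transformation to at most
    `limit` input rows (documented default: 5) and return the outputs
    for inspection.
    """
    return [transform(row) for row in rows[:limit]]

rows = [{"text": f"row {i}"} for i in range(10)]
# The lambda stands in for the per-record GPT call.
out = run_preview(rows, lambda r: {**r, "sentiment": "positive"})
len(out)  # 5: only the sampled rows are processed
```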
Best Practices
- Keep prompts short and explicit; specify the desired output format.
- Use interpolation to add context from multiple fields; create new output fields as needed.
- Use preview to validate behavior and manage token usage.
- You can ask ChatGPT to draft the prompt for you, e.g., "These are my inputs, this is what I want as my output; write the prompt I should use to achieve this."