We're excited to share our latest feature enhancement that improves reliability and control for outbound data delivery across the platform. This release introduces configurable request throttling on the REST API destination, giving data teams a native way to respect target API rate limits directly within their pipeline configuration.

REST API Destination Request Throttling

Stay within rate limits on target APIs without relying on external throttling proxies or custom middleware. The REST API destination now includes a configurable sleep interval between outbound requests, letting you control exactly how fast data is pushed to downstream services.

When syncing records to third-party APIs (CRMs, SaaS platforms, or internal microservices), exceeding rate limits can trigger 429 errors, dropped requests, and broken pipelines. The new sleep interval setting addresses that risk at the source by pacing requests right inside your pipeline configuration.

Define the pause duration (in milliseconds) between successive API calls, and the platform honors it automatically during execution. For example, if your target API allows 10 requests per second, setting the interval to 100 ms caps throughput at or below that limit. The interval is applied per request, so pacing scales predictably regardless of payload size or batch volume.
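To make the pacing concrete, here is a minimal sketch of per-request throttling in plain Python; the `SLEEP_INTERVAL_MS` constant, endpoint URL, and `push_records` helper are hypothetical names used for illustration, not the platform's actual configuration keys.

```python
import time
import requests

SLEEP_INTERVAL_MS = 100  # hypothetical setting: 100 ms between calls caps throughput near 10 req/s
TARGET_URL = "https://api.example.com/records"  # placeholder endpoint

def push_records(records):
    """Send records one at a time, pausing between calls to respect the target's rate limit."""
    for record in records:
        response = requests.post(TARGET_URL, json=record, timeout=30)
        response.raise_for_status()
        # The pause is applied per request, so pacing is independent of payload size.
        time.sleep(SLEEP_INTERVAL_MS / 1000.0)

if __name__ == "__main__":
    push_records([{"id": 1, "score": 0.92}, {"id": 2, "score": 0.41}])
```

Because each request also takes time to complete, a 100 ms pause yields slightly fewer than 10 requests per second in practice, keeping you under the quota rather than exactly at it.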

Key benefits:

  • No external tooling required: throttling is handled natively in the destination component, removing the need for queue-based architectures or middleware wrappers.

  • Predictable throughput: match your pipeline's request rate to any API's capacity with a single configuration value.

  • Fewer failed runs: avoid 429 responses, temporary bans, and manual restarts caused by rate-limit violations.

  • Full visibility: request pacing integrates with existing job logging, so you retain complete observability over delivery timing and outcomes.

This is especially useful when writing to APIs that enforce strict per-second or per-minute quotas, running large batch syncs that would otherwise require manual chunking, or delivering to third-party services where exceeding limits triggers cooldown periods.

How Request Throttling Strengthens Our Offering

Data integration doesn't end when data lands in the warehouse. Increasingly, the value is in activating that data, pushing enriched insights, computed scores, and unified profiles back into the tools where business teams operate. That's Reverse ETL, and it demands the same production-grade reliability that teams already expect from ingestion and transformation.

With configurable request throttling on the REST API destination, Integrate.io closes a critical gap in the outbound data delivery workflow. Until now, teams building Reverse ETL pipelines on any platform often had to bolt on external rate-limiting proxies, build custom retry middleware, or manually chunk batch syncs to avoid overwhelming target APIs. Each workaround added complexity, introduced failure points, and pulled engineering time away from higher-value work.

By bringing throttling natively into the destination component, Integrate.io keeps the entire data lifecycle (extraction, transformation, loading, and now controlled activation) within a single platform. That means fewer moving parts in your data stack, consistent logging and observability from source to activation, reduced engineering overhead for teams managing outbound API integrations, and a smoother path to scaling Reverse ETL across more destination endpoints without rearchitecting delivery logic each time.

For teams already using Integrate.io's low-code ETL pipelines, CDC replication, or file delivery workflows, request throttling carries the same philosophy of reliable, configurable, and observable delivery into the last mile. It's a natural evolution of the platform's commitment to giving data teams end-to-end control without requiring them to build and maintain infrastructure outside the pipeline.

Real-Time Package Validation

Package Validation now runs continuously inside the Package Designer, with a status section in the toolbar that reflects the current state of the package as you edit. Auto-validation triggers on component changes without requiring a save, giving you an immediate view of exactly where your pipeline stands at any point in time.

A new two-tier classification separates 'to configure' warnings from genuine errors, so users can see at a glance what is actually blocking a run versus what is still in progress. This distinction removes ambiguity during iterative pipeline development and reduces wasted runs caused by overlooked configuration gaps.

REST API Connections via SSH Tunnel

A new connection type routes REST API traffic through an SSH tunnel, letting users reach private APIs that sit behind a firewall without exposing them to the public internet. The connection is configured through the standard connection form and works with the existing REST API source, so no migration or rework is required.

This removes the need for custom networking workarounds when integrating with internal services, making it straightforward to securely connect to internally-hosted systems as part of any data pipeline.
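For a mental model of what the tunnel does, the sketch below forwards a local port to a private API over SSH using the third-party `sshtunnel` and `requests` packages; the bastion host, key path, and private address are placeholders, and this illustrates the pattern rather than how the platform implements the connection internally.

```python
import requests
from sshtunnel import SSHTunnelForwarder  # pip install sshtunnel

# Placeholder values: an SSH-reachable bastion host and a REST API on the private network.
with SSHTunnelForwarder(
    ("bastion.example.com", 22),
    ssh_username="deploy",
    ssh_pkey="/path/to/id_rsa",
    remote_bind_address=("10.0.12.34", 8080),  # private API, not exposed to the internet
) as tunnel:
    # Requests sent to the forwarded local port travel over the encrypted SSH tunnel.
    url = f"http://127.0.0.1:{tunnel.local_bind_port}/v1/records"
    print(requests.get(url, timeout=30).status_code)
```

Within Integrate.io the equivalent plumbing is handled by the connection form; the sketch only shows why the private API never needs a public endpoint.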

Refreshed Package Designer 

The Package Designer now ships dedicated components for each database and file storage connection type, replacing the generic Database and File Storage entries on the components list. Engineers can immediately identify and select the exact connector they need without navigating through generic entries, streamlining pipeline construction across a growing library of supported systems.

New Connectors

Seven new source connectors ship in this release, spanning finance, CRM, ITSM, project management, marketing, product analytics, and customer support.

  • Stripe Source Connector: Pull payments, subscriptions, customers, and invoices directly from Stripe into the warehouse, giving finance and revenue teams a single source for billing analytics, churn modeling, and reconciliation against the general ledger.
  • SugarCRM Source Connector: Bring accounts, contacts, opportunities, and custom modules from SugarCRM into downstream pipelines, so revenue teams can build pipeline forecasts, attribution models, and customer 360 views without exporting from the CRM by hand.
  • ServiceNow Source Connector: Move incidents, change requests, and service catalog data from ServiceNow into the warehouse, letting IT and operations teams report on SLA performance, ticket volume trends, and team workload alongside the rest of the business.
  • Kantata Source Connector: Pull project, resource, and financial data from Kantata into the warehouse, giving services and PMO teams the inputs they need to track utilization, project profitability, and forecast capacity.
  • Klaviyo Source Connector: Ingest campaign, flow, and subscriber engagement data from Klaviyo, so marketing teams can attribute revenue back to specific sends and join email behavior with web and purchase data already in the warehouse.
  • Mouseflow Source Connector: Bring session replay and heatmap analytics from Mouseflow into pipelines, letting product and growth teams join behavioral signals with conversion data to understand where users drop off and which experiments are working.
  • Crisp Source Connector: Ingest conversation, contact, and helpdesk data from Crisp, so support and product teams can analyze response times, surface recurring issues, and tie customer questions back to product usage.

New Features

  • Postgres JSON Column Support: Postgres destinations can now write directly into JSON columns, removing the need to serialize JSON payloads as text and parse them downstream (a brief sketch of the difference follows this list).
  • Postgres Staging Schema Configuration: Users can now configure staging tables to be created in the destination's target schema instead of the public schema, avoiding permission failures on locked-down databases.
  • REST API String Unwrapping: The REST API source now offers an option to unwrap extracted string values, returning plain strings instead of JSON-quoted strings for downstream consumption.
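As a rough illustration of the Postgres JSON column item above, the snippet below writes a Python dict straight into a `jsonb` column with `psycopg2`; the connection string and table are placeholders, and the destination component performs the equivalent mapping for you.

```python
import psycopg2
from psycopg2.extras import Json  # adapts Python dicts to json/jsonb parameters

# Placeholder connection details; the Postgres destination handles this mapping natively.
conn = psycopg2.connect("dbname=analytics user=etl password=secret host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS events (id serial PRIMARY KEY, payload jsonb)")
    payload = {"event": "signup", "plan": "pro", "traits": {"utm_source": "newsletter"}}
    # Stored as jsonb directly, so there is no text serialization to parse downstream.
    cur.execute("INSERT INTO events (payload) VALUES (%s)", [Json(payload)])
conn.close()
```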

Improvements

  • The AI Assistant has been migrated to a new engine, providing in-product help that is based on our live product documentation pages.
  • Source component field aliases now consistently replace spaces and special characters with underscores during schema import, producing predictable column names regardless of the upstream field naming convention (a sketch of the idea follows this list).
  • The cURL import parser in the REST API source and Expression Editor now recognizes the --url flag, so commands copied from a wider range of tools parse correctly without manual editing.
  • The Data Previewer now shows a human-readable timestamp tooltip when hovering over Unix datetime fields, removing the need to convert epoch values manually while inspecting data.
  • Test Connection error messages on connection forms now persist until explicitly dismissed instead of disappearing automatically after seven seconds, giving users time to read and act on the diagnostic.
  • Job error logs and component preview error output have been trimmed to remove stack traces and internal noise, surfacing the operator-relevant message instead of a wall of text.
  • File Source components now truncate oversized data previews to keep the application responsive when working with very large files.
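To show what the field alias normalization mentioned earlier amounts to, here is a hedged sketch of the general idea; the exact set of characters replaced, and whether casing is changed, are assumptions and may differ from the platform's actual rule.

```python
import re

def to_column_alias(field_name: str) -> str:
    """Collapse spaces and special characters into underscores to produce a predictable column name."""
    # Assumed rule: any run of non-alphanumeric characters becomes a single underscore.
    return re.sub(r"[^0-9A-Za-z]+", "_", field_name).strip("_").lower()

print(to_column_alias("Order Total (USD)"))    # order_total_usd
print(to_column_alias("customer.email-addr"))  # customer_email_addr
```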

All updates are rolled out. Connector availability may vary by plan. More detailed information on all of the features that make Integrate.io a leader in data pipeline automation is available on Integrate.io's Documentation Page.

Integrate.io: Delivering Speed to Data
Reduce time from source to ready data with automated pipelines, fixed-fee pricing, and white-glove support