
How to Migrate from Apideck Without Re-Authenticating End Users

A step-by-step technical guide to migrating from Apideck without re-authenticating users - covering OAuth token export, API key extraction, credential import, testing, rollback strategy, and security.

Roopendra Talekar · 22 min read

If you are evaluating an Apideck alternative, you are likely staring down the barrel of a migration cliff. You want to switch infrastructure, but the thought of asking hundreds of enterprise customers to re-authenticate their Salesforce, HubSpot, BambooHR, or QuickBooks accounts is terrifying. Forcing users to click "Reconnect" generates support tickets, introduces immediate churn risk, and burns social capital with your best accounts.

The good news is you do not have to do this. As we've demonstrated in our migration guides for Merge.dev and Finch, you can migrate away from Apideck without asking a single end user to reconnect. The process involves exporting OAuth tokens from Apideck Vault, importing them into your new platform's credential context, mapping the old unified schema to the new one, and flipping your traffic routing.

This guide breaks down the exact technical strategy to extract your credentials, handle rate limits post-migration, and use declarative mappings to mimic your old API responses so your frontend code does not have to change.

The Vendor Lock-In Trap: Why Teams Migrate from Apideck

Apideck is a well-built product. The docs are clean, the Vault connection UI is polished, and the real-time pass-through architecture that avoids caching customer data is a sound design decision. Integrations are a core revenue lever, and studies show that organizations use anywhere from 100 to more than 300 SaaS applications.

While Apideck helps teams get off the ground quickly, engineering leaders typically hit three specific scaling limits that force a migration conversation:

1. Virtual webhooks default to 24-hour polling. For providers without native webhook support, including BambooHR and many other HRIS platforms, Apideck monitors enabled resources on a polling interval that defaults to every 24 hours. If you are building an applicant tracking system (ATS) integration that needs immediate status updates, or a CRM sync that propagates employee terminations or deal stage changes, a 24-hour delay is a compliance incident or a stale-pipeline problem.

2. Custom field mapping is hidden behind enterprise paywalls. As your customers scale, they heavily customize their CRMs and HRIS platforms. Apideck restricts Custom Field Mapping to its Scale plan ($1,299/month) and above; the Launch plan at $599/month does not include it. The moment your first enterprise customer asks you to map their custom BambooHR employment type field or a custom Salesforce object, you are looking at more than doubling your monthly spend.

3. No auto-generated MCP server support. If your product team is building AI agents, those agents need secure, scoped access to third-party data. Agentic AI is becoming a standard pattern in B2B SaaS—nearly 33% of organizations with at least 1000 full-time employees have already deployed agentic AI. Apideck currently lacks native Model Context Protocol (MCP) server generation, forcing your engineers to manually build and maintain tool definitions for LLMs.

For a deeper dive into these architectural limits, see our technical breakdown in Truto vs Apideck: The Best Alternative for Enterprise SaaS Integrations.

The Migration Cliff: Why Re-Authenticating End Users is Not an Option

Unified API platforms abstract away the pain of dealing with terrible vendor API docs, inconsistent pagination, and undocumented edge cases. But the architecture of most platforms creates a dependency that is easy to miss during evaluation and incredibly painful to untangle later.

Apideck Vault acts as a centralized credential store. When your customer authenticates via the Apideck UI, the resulting OAuth access_token and refresh_token are held by Apideck. Your application only holds an Apideck consumer ID. If you simply switch integration vendors, those consumer IDs become useless. Forcing enterprise users to click "Reconnect" on their integrations is a nuclear option. The math is brutal:

  • Support ticket volume: Every reconnection generates at least one support ticket. At 200 linked accounts, that is 200+ tickets in a single week.
  • Churn risk: Enterprise buyers who have to re-link core systems of record will question whether your product is stable. Some will simply drop off.
  • Coordination overhead: Enterprise clients often require their IT team to approve OAuth grants. Scheduling that across dozens of accounts takes weeks.

The good news: OAuth tokens are just strings with an expiry date. If you can extract them from your current vendor's credential store and insert them into a new one, the end user never knows you switched infrastructure. The token keeps working until it expires, at which point the new platform refreshes it automatically.
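That refresh is just the standard OAuth 2.0 refresh_token grant (RFC 6749, Section 6). Here is a minimal sketch of what any platform holding your client_id and client_secret does on your behalf; the token endpoint URL is a placeholder for the provider's actual endpoint:

```typescript
// Standard OAuth 2.0 refresh_token grant. Any platform that holds your
// client_id/client_secret can execute this against the provider's
// token endpoint -- which is exactly why tokens are portable.
async function refreshAccessToken(
  tokenUrl: string, // e.g. the provider's /oauth2/token endpoint
  clientId: string,
  clientSecret: string,
  refreshToken: string
): Promise<{ access_token: string; refresh_token?: string; expires_in: number }> {
  const response = await fetch(tokenUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'refresh_token',
      refresh_token: refreshToken,
      client_id: clientId,
      client_secret: clientSecret,
    }),
  });
  if (!response.ok) {
    throw new Error(`Token refresh failed with status ${response.status}`);
  }
  return response.json();
}
```

The provider validates the client_id/client_secret pair against the OAuth app that originally issued the token, which is why app ownership is the deciding factor for the whole migration.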

When Zero-Reauth Migration Is Possible

Before you commit to a migration timeline, you need to answer one question: who owns the OAuth application that issued the tokens?

This single factor determines whether you can migrate without any re-authentication, or whether some subset of your connections will require users to reconnect.

Decision Matrix: Can You Migrate Without Re-Auth?

| Scenario | OAuth App Owner | Tokens Portable? | Re-Auth Required? |
| --- | --- | --- | --- |
| You brought your own client_id / client_secret to Apideck | You | Yes | No |
| You use Apideck's managed OAuth app (their client_id) | Apideck | No | Yes |
| API key / Basic auth connections (non-OAuth) | N/A | Yes, if you can extract values | No |
| OAuth 2.0 Client Credentials (service-to-service) | You (typically) | Yes, re-acquire with your credentials | No |

Apideck supports bringing your own OAuth clients. In their dashboard, when configuring a connector, you can select "Use your client credentials" and enter your own client_id and client_secret. By default, Apideck uses sandbox OAuth apps for quick setup, but their own docs recommend switching to your own OAuth apps before going to production.

How to check which OAuth app you are using:

  1. Log into the Apideck dashboard at https://platform.apideck.com/.
  2. Navigate to Configuration and select the Unified API category (e.g., CRM, HRIS).
  3. Click on the specific connector (e.g., Salesforce, HubSpot).
  4. Look for the OAuth credentials section. If it shows "Use your client credentials" with your own values filled in, you own the OAuth app. If it shows Apideck's defaults or no custom credentials, Apideck's shared OAuth app was used.
  5. Cross-reference with your provider's developer console. For example, check your Salesforce Connected App or HubSpot Developer App to see if the redirect URI is set to https://unify.apideck.com/vault/callback - and whether that app belongs to your organization.
Tip

Pre-Migration Action Item: Build a spreadsheet of every active connection. For each one, record the service_id, auth_type, consumer_id, and whether the OAuth app is yours or Apideck's. This inventory determines your migration path for each connection and gives you an accurate estimate of how many users (if any) need to re-authenticate.
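To make that inventory concrete, here is a hedged sketch of the row structure and the one derived field that matters most. The field names mirror Apideck's connection metadata; oauth_app_owner is your own annotation from the dashboard check above:

```typescript
// One row per active connection in the pre-migration inventory.
interface ConnectionInventoryRow {
  consumer_id: string;                        // your internal user/account ID
  service_id: string;                         // e.g. 'salesforce'
  unified_api: string;                        // e.g. 'crm'
  auth_type: 'oauth2' | 'apiKey' | 'basic';
  oauth_app_owner: 'you' | 'apideck' | 'n/a'; // from the dashboard check
}

// A connection forces re-auth only when it is OAuth AND Apideck owns the app.
// API key and Basic auth connections migrate by copying the raw values.
function requiresReauth(row: ConnectionInventoryRow): boolean {
  return row.auth_type === 'oauth2' && row.oauth_app_owner === 'apideck';
}
```

Summing `requiresReauth` over the inventory gives you the exact re-auth headcount before you commit to a timeline.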

If you own the OAuth app for a given provider, a zero-reauth migration is straightforward: the tokens were issued to your application, so any platform that holds your client_id and client_secret can refresh them. If Apideck owns the OAuth app, you have two choices: either negotiate a token export (more on this below) or plan a staggered re-auth for those specific connectors while migrating the rest silently.

Step 1: Exporting Credentials from Apideck Vault

The first technical hurdle is getting your data out. You need the raw OAuth tokens, the expiration timestamps, the scopes, and any provider-specific metadata (like subdomains or tenant IDs) stored in Apideck Vault.

Does Apideck Allow Token Export?

This is the question that trips up most teams. Here is the reality:

Apideck's Vault API has an import endpoint, but no corresponding export endpoint for raw tokens. Their migration guide documents POST /vault/connections/:unified_api/:service_id/import for importing tokens into Apideck, but there is no GET or export endpoint that returns raw access_token or refresh_token values. The standard GET /vault/connections endpoint returns connection metadata - service_id, state, auth_type, enabled status, and settings - but the actual OAuth tokens are stored server-side and injected into requests at runtime. They are never exposed in API responses.

This means you cannot programmatically self-serve a token export. Your extraction path depends on your setup:

Path A: You Own the OAuth App (Best Case)

If you brought your own OAuth app credentials when setting up integrations in Apideck, the tokens were issued to your OAuth application. This is the best-case scenario. You can:

  1. Contact Apideck support and request a secure, encrypted export of your Vault data. Be specific: you need the raw access_token, refresh_token, expires_at, and any per-connection metadata for each consumer_id.
  2. Negotiate the export as part of your offboarding. Most vendors will cooperate, especially if the OAuth app belongs to you. Frame it clearly: "These tokens were issued to our OAuth application. We need the raw credentials to continue serving our customers."
  3. If Apideck provides a data export mechanism (e.g., a support-assisted database export or a temporary API endpoint), request the data in a structured format like JSON or CSV with fields for consumer_id, service_id, unified_api, access_token, refresh_token, expires_at, and any connection settings.

Path B: Apideck Managed OAuth Credentials

If you used Apideck's shared OAuth app credentials (Apideck's client_id), the tokens belong to Apideck's OAuth application. Migrating these tokens to a new platform that uses a different OAuth application will not work - the provider (Salesforce, HubSpot, etc.) will reject refresh attempts from mismatched client credentials.

In this case, your options are:

  1. Re-register a new OAuth app with each provider and use the new platform's connection flow, but stagger this over time so it is not a sudden cliff.
  2. Use the migration to switch to your own OAuth app, which gives you token portability going forward.
  3. Prioritize by impact. Sort your connections by activity volume and business criticality. High-value enterprise accounts get white-glove migration support (a scheduled 5-minute reconnection call). Low-activity accounts get an in-app prompt that lets them reconnect at their convenience.
Warning

OAuth App Ownership Matters: If you do not own the OAuth app that issued the tokens, token migration is not possible without re-authentication. This is the single most important architectural decision for integration vendor portability. We wrote a deep dive on this in OAuth App Ownership: How to Avoid Vendor Lock-In.

Path C: Extracting API Keys and Non-OAuth Credentials

Not all Apideck connections use OAuth. Many integrations - particularly HRIS platforms like BambooHR, ticketing systems, and smaller SaaS tools - use API key or Basic authentication. These are simpler to migrate because there is no token refresh cycle or OAuth app dependency.

Apideck's Vault API does expose connection settings for non-OAuth integrations. You can retrieve these via the GET /vault/connections/:unified_api/:service_id endpoint, which returns the connection's settings object. For API key integrations, the settings typically contain the fields your user entered (API key, subdomain, etc.).

# List all connections for a specific consumer
curl -X GET 'https://unify.apideck.com/vault/connections' \
  -H 'Authorization: Bearer {APIDECK_API_KEY}' \
  -H 'x-apideck-app-id: {APP_ID}' \
  -H 'x-apideck-consumer-id: {CONSUMER_ID}'

The response includes connection metadata and settings. For API key connections in callable state, the settings object contains the user-provided fields:

{
  "id": "hris+bamboohr",
  "service_id": "bamboohr",
  "unified_api": "hris",
  "auth_type": "apiKey",
  "state": "callable",
  "settings": {
    "subdomain": "yourcompany",
    "api_key": "..."
  }
}
Warning

Verify API Key Visibility: Apideck may mask sensitive fields in API responses depending on your plan and configuration. Before building your export script, make a test request for a known API key connection and confirm the raw values are returned, not masked with asterisks. If they are masked, you will need to request a support-assisted export for these connections as well.

To bulk-export API key connections, iterate over all your consumers and pull their connection settings:

// Bulk export API key connections from Apideck
async function exportApiKeyConnections(
  apiKey: string,
  appId: string,
  consumerIds: string[]
) {
  const connections = [];
 
  for (const consumerId of consumerIds) {
    const response = await fetch(
      'https://unify.apideck.com/vault/connections',
      {
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'x-apideck-app-id': appId,
          'x-apideck-consumer-id': consumerId,
        },
      }
    );
    const { data } = await response.json();
 
    for (const conn of data) {
      if (conn.state === 'callable' && conn.auth_type === 'apiKey') {
        connections.push({
          consumer_id: consumerId,
          service_id: conn.service_id,
          unified_api: conn.unified_api,
          auth_type: conn.auth_type,
          settings: conn.settings,
        });
      }
    }
  }
 
  return connections;
}

The Target Export Payload

Whichever method you use, you are looking to build a dataset that maps your internal identifiers to the raw provider credentials. The structure you need to extract for each connection looks like this:

// Structure each exported connection for import
interface ExportedConnection {
  consumer_id: string;        // Your internal user/account ID
  service_id: string;         // e.g., 'salesforce', 'bamboohr'
  unified_api: string;        // e.g., 'crm', 'hris'
  auth_type: 'oauth2' | 'apiKey' | 'basic'; // Determines import path
  credentials?: {
    access_token: string;
    refresh_token: string;
    expires_at: string;       // ISO 8601 timestamp
    token_type: string;       // Usually 'Bearer'
    scope?: string;           // Original scopes granted
  };
  settings?: Record<string, any>; // API key, subdomain, etc.
  metadata: Record<string, any>; // Provider-specific: instance_url, realm_id, etc.
}
Warning

Beware of Token Rotation Race Conditions: OAuth 2.0 refresh tokens are often single-use. If Apideck's background workers refresh a token after you have exported the data but before you have switched your routing, the exported refresh token becomes invalid. You must coordinate a hard cutover or pause Apideck syncs during the migration window.

Step 2: Importing Credentials into Truto's Generic Context

Most unified APIs maintain separate code paths for each integration. Adding custom credentials requires mapping them to rigid, integration-specific database columns.

Truto takes a radically different approach. The entire platform contains zero integration-specific code. Integration behavior is defined entirely as declarative JSON configurations. This makes importing historical credentials incredibly straightforward.

In Truto, every connected account is an IntegratedAccount, and its credentials live in a flexible, provider-agnostic JSON context object:

{
  "context": {
    "oauth": {
      "token": {
        "access_token": "eyJhbGciOiJSUzI1NiIs...",
        "refresh_token": "dGhpcyBpcyBhIHJlZnJlc2g...",
        "expires_at": "2026-04-10T14:30:00.000Z",
        "token_type": "Bearer",
        "scope": "read write"
      },
      "scope": "read write"
    },
    "instance_url": "https://yourcompany.my.salesforce.com"
  }
}

Provider-specific metadata like Salesforce's instance_url or QuickBooks' realm_id goes into the context root. The platform resolves these using JSONata expressions in the integration configuration - no custom code needed.

Import via API: OAuth Connections

For each OAuth connection you exported from Apideck, create an integrated account in Truto by passing the raw tokens directly into the context:

const response = await fetch('https://api.truto.one/integrated-accounts', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${TRUTO_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    integration_id: 'salesforce',
    environment_id: 'env_prod_123',
    external_id: exportedConnection.consumer_id,
    authentication_method: 'oauth2',
    context: {
      oauth: {
        token: {
          access_token: exportedConnection.credentials.access_token,
          refresh_token: exportedConnection.credentials.refresh_token,
          expires_at: exportedConnection.credentials.expires_at,
          token_type: exportedConnection.credentials.token_type,
          scope: exportedConnection.credentials.scope
        }
      },
      // Provider-specific metadata
      instance_url: exportedConnection.metadata.instance_url
    }
  })
});

Import via API: API Key Connections

For non-OAuth connections (API key, Basic auth), the import is even simpler. The context stores the credentials exactly as the integration's configuration expects them:

// BambooHR (API Key + subdomain)
const response = await fetch('https://api.truto.one/integrated-accounts', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${TRUTO_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    integration_id: 'bamboohr',
    environment_id: 'env_prod_123',
    external_id: exportedConnection.consumer_id,
    authentication_method: 'api_key',
    context: {
      api_key: exportedConnection.settings.api_key,
      subdomain: exportedConnection.settings.subdomain
    }
  })
});

// QuickBooks (OAuth + realm_id metadata)
const response = await fetch('https://api.truto.one/integrated-accounts', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${TRUTO_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    integration_id: 'quickbooks',
    environment_id: 'env_prod_123',
    external_id: exportedConnection.consumer_id,
    authentication_method: 'oauth2',
    context: {
      oauth: {
        token: {
          access_token: exportedConnection.credentials.access_token,
          refresh_token: exportedConnection.credentials.refresh_token,
          expires_at: exportedConnection.credentials.expires_at,
          token_type: 'Bearer',
          scope: exportedConnection.credentials.scope
        }
      },
      realm_id: exportedConnection.settings.realm_id
    }
  })
});

When Truto receives this payload, it automatically encrypts sensitive fields (like access_token and refresh_token) at rest using AES-GCM encryption.

Proactive Token Refresh Architecture

One of the primary reasons integrations fail in production is reactive token refreshing - waiting until an API call returns a 401 Unauthorized before attempting to refresh the token. This creates latency spikes and often results in dropped webhooks if the refresh fails.

Truto eliminates this via proactive scheduling. The moment you import that OAuth token payload, Truto reads the expires_at timestamp and schedules a durable background task to fire 60 to 180 seconds before the token expires.

When the scheduled task fires, Truto uses your OAuth app's client_id and client_secret to execute the standard OAuth 2.0 refresh flow, updates the encrypted context with the new tokens, and schedules the next refresh.

If the refresh succeeds, the account stays active and your application never experiences a 401 Unauthorized. If it fails (e.g., invalid_grant because the refresh token was revoked by the user in Salesforce), the account is marked as needs_reauth and a webhook event is fired so you can prompt the specific user to reconnect.
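Your side of that contract is the webhook handler. Here is a sketch; the event name and payload shape are assumptions, so check Truto's webhook documentation for the exact contract:

```typescript
// Hypothetical shape of a needs_reauth webhook event -- field names are
// assumptions, not Truto's documented contract.
interface NeedsReauthEvent {
  type: string;
  data: {
    integrated_account_id: string;
    external_id: string;     // your internal user/account ID
    integration_id: string;  // e.g. 'salesforce'
  };
}

// Returns the user-facing action to take, or null if the event is not
// a re-auth signal. In production this would enqueue an in-app prompt
// or an email instead of returning a string.
function handleTrutoWebhook(event: NeedsReauthEvent): string | null {
  if (event.type !== 'integrated_account.needs_reauth') return null;
  return `Prompt user ${event.data.external_id} to reconnect ${event.data.integration_id}`;
}
```

The important property is that only the specific affected user is prompted, rather than forcing a blanket reconnect across your customer base.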

sequenceDiagram
    participant App as Your Application
    participant Truto as Truto Platform
    participant Provider as SaaS Provider (e.g., Salesforce)
    
    App->>Truto: Import Apideck OAuth Tokens
    Truto->>Truto: Encrypt tokens at rest
    Truto->>Truto: Schedule proactive refresh (T-minus 180s)
    Note over Truto: ... time passes ...
    Truto->>Provider: Execute Refresh Grant (Background)
    Provider-->>Truto: New Access & Refresh Tokens
    Truto->>Truto: Update encrypted context
    App->>Truto: API Request (e.g., GET Contacts)
    Truto->>Provider: Authenticated Request (Always Valid)
    Provider-->>Truto: 200 OK
    Truto-->>App: Normalized Response

Step 3: Handling Rate Limits Post-Migration (The Standardized Approach)

Here is where you need to be honest about what changes post-migration. When you were on Apideck, rate limit handling was opaque - Apideck managed the upstream API calls and handled (or didn't handle) rate limits internally. Many unified APIs attempt to "absorb" rate limits by holding requests in memory and applying exponential backoff.

This is a massive anti-pattern for enterprise engineering. Absorbing 429s hides backpressure from your application. Your workers stay open, waiting for HTTP responses that take 45 seconds to resolve, eventually causing cascading timeouts across your own infrastructure.

Truto does NOT retry, throttle, or apply backoff on rate limit errors.

When an upstream API returns a rate-limit error, Truto passes that 429 directly back to your caller. This keeps the platform transparent and gives you full control over your retry strategy.

However, dealing with 50 different rate limit headers across 50 different APIs is a nightmare. Some use X-HubSpot-RateLimit-Daily, others use Sforce-Limit-Info, and some just put it in the response body.

What Truto does do is normalize the rate limit information from every upstream provider into standardized response headers based on the IETF RateLimit specification:

| Header | Meaning |
| --- | --- |
| ratelimit-limit | Maximum requests allowed in the current window |
| ratelimit-remaining | Requests remaining in the current window |
| ratelimit-reset | Seconds until the rate limit window resets |

Building Your Backoff Logic

Because Truto normalizes the headers, your engineering team can build a single, unified retry queue on your side of the architecture. When your worker receives a 429, it simply reads the ratelimit-reset header, pauses the job, and safely retries.

async function callTrutoWithBackoff(
  url: string,
  options: RequestInit,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
 
    if (response.status === 429) {
      // Read the standardized header provided by Truto
      const resetSeconds = parseInt(
        response.headers.get('ratelimit-reset') || '60',
        10
      );
      // Add jitter to prevent thundering herd
      const waitMs = resetSeconds * 1000 + Math.random() * 1000;
      
      console.warn(
        `Rate limited. Waiting ${resetSeconds}s before retry ${attempt + 1}`
      );
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      continue;
    }
 
    // Proactively slow down when remaining quota is low
    const remaining = parseInt(
      response.headers.get('ratelimit-remaining') || '100',
      10
    );
    if (remaining < 10) {
      console.warn(`Low rate limit quota: ${remaining} remaining`);
    }
 
    return response;
  }
  throw new Error('Max retries exceeded for rate-limited request');
}

This is more work than having the platform absorb 429s for you. But it is also more honest. You know exactly when you are being rate limited, which provider is throttling you, and how long to wait. No black-box retry logic hiding upstream failures. For a comprehensive guide on architecting this, read our Best Practices for Handling API Rate Limits.

Step 4: Declarative Mappings to Mimic Apideck's Unified Schema

Migrating credentials is only half the battle. If you switch to a new unified API, the JSON response shapes will change. A Contact in Apideck looks different than a Contact in Truto.

Normally, this means rewriting your entire frontend UI and backend business logic to handle the new schema. With Truto, it does not: your frontend code can keep expecting Apideck's unified response format, and you do not have to touch it.

Truto relies on declarative JSONata expressions to map data between the provider's native format and the unified format. Because these mappings are exposed and editable, you can write a custom JSONata expression in Truto that outputs the exact JSON shape your application currently expects from Apideck.

Example: Mimicking Apideck's Contact Schema

Let's say your frontend expects Apideck's specific nested structure for contacts. Your JSONata mapping in Truto can reproduce that exact shape field-for-field:

{
  "id": $.id,
  "first_name": $.properties.firstname,
  "last_name": $.properties.lastname,
  "company_name": $.properties.company,
  "emails": $.properties.emails.{
    "email": value,
    "type": type
  },
  "phone_numbers": [
    {
      "number": $.properties.phone,
      "type": "primary"
    }
  ],
  "custom_mappings": {
    "apideck_legacy_id": $.properties.hs_object_id,
    "employee_band": $.properties.Employee_Band__c
  },
  "updated_at": $.properties.lastmodifieddate
}

This mapping is defined as configuration data, not code. You can adjust it per-provider, per-customer, or per-environment without deploying anything. If your biggest Salesforce customer has a custom Employee_Band__c field that needs to appear as custom_mappings.employee_band in the response, you add a single line to the JSONata expression for that customer's account.

By deploying this mapping at the Truto layer, your backend receives the exact payload it is used to. You achieve a complete infrastructure migration without a single breaking change to your application logic.

Testing, Smoke Tests, and Rollback Strategy

A migration of this scale requires strict operational discipline. Do not attempt a "big bang" cutover. Use a phased approach.

Pre-Migration Checklist

  • Verify OAuth app ownership: Confirm you own the client_id/client_secret for every provider. If not, plan your re-auth strategy for those providers.
  • Import tokens into a staging environment: Truto supports environment-level credential overrides, so you can test with production tokens in a sandboxed context.
  • Validate token refresh: For each provider, trigger a manual token refresh and confirm the new access_token works against the provider's API.
  • Shadow Reads: Configure your application to perform "shadow reads." Fetch data from Apideck (to serve the user) and simultaneously fetch it from Truto (in the background). Diff the JSON payloads to ensure your JSONata mappings are perfectly mimicking the Apideck schema.
  • Test rate limit header normalization: Make enough requests to see rate limit headers appear. Verify your backoff logic reads them correctly.
  • Verify webhook delivery: If you are using Apideck's virtual webhooks, set up equivalent webhook subscriptions in Truto and confirm events arrive.
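The shadow-read diff in the checklist above can be as simple as a field-by-field comparison of the two payloads. A sketch; your existing Apideck and Truto clients supply the two objects:

```typescript
// Compare an Apideck payload against the Truto payload for the same record
// and return the names of fields that differ. Deep values are compared via
// JSON serialization, which is good enough for a migration smoke test.
function diffPayloads(
  apideck: Record<string, unknown>,
  truto: Record<string, unknown>
): string[] {
  const keys = new Set([...Object.keys(apideck), ...Object.keys(truto)]);
  const mismatches: string[] = [];
  for (const key of keys) {
    if (JSON.stringify(apideck[key]) !== JSON.stringify(truto[key])) {
      mismatches.push(key);
    }
  }
  return mismatches;
}
```

An empty result for every shadow-read is your signal that the JSONata mappings faithfully reproduce the Apideck schema; any non-empty result names the exact field to fix in the mapping.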

Smoke Tests for Each Provider

After importing credentials, run these checks for every provider before routing any production traffic:

  1. Token validity check. Make a lightweight read request (e.g., GET /unified/crm/contacts?limit=1) for each imported account. A 200 response confirms the token is live. A 401 means the token was already rotated or revoked - flag that account for re-auth.
  2. Token refresh check. Use Truto's manual refresh endpoint (POST /integrated-account/refresh-credentials with the integrated account ID) to force a proactive refresh. Confirm the new access_token is different from the imported one and that subsequent API calls succeed.
  3. Write operation check (if applicable). If your application creates or updates records, test a write against a sandbox record. Some providers issue tokens with different write scopes than Apideck requested - verify that the scopes you need are present.
  4. Schema mapping check. Compare a Truto response against the equivalent Apideck response for the same record. Use a JSON diff tool. Every field your frontend reads must match exactly.
  5. Webhook delivery check. If you rely on webhooks, trigger a test event (e.g., update a contact in the provider) and confirm the webhook arrives at your endpoint with the expected payload shape.
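Check 1 above can be scripted as a sweep over every imported account. In this sketch the HTTP call is injected as a function so the sweep logic is testable; in production it would wrap a lightweight GET against Truto for the given account:

```typescript
// Sweep imported accounts: a 200 means the token is live, a 401 means it
// was rotated or revoked before cutover and the account needs re-auth.
// `fetchFn` stands in for your real Truto client call.
async function sweepTokenValidity(
  accountIds: string[],
  fetchFn: (accountId: string) => Promise<{ status: number }>
): Promise<{ live: string[]; needsReauth: string[] }> {
  const live: string[] = [];
  const needsReauth: string[] = [];
  for (const id of accountIds) {
    const res = await fetchFn(id);
    if (res.status === 200) live.push(id);
    else if (res.status === 401) needsReauth.push(id);
  }
  return { live, needsReauth };
}
```

Run the sweep before routing any traffic; the `needsReauth` list feeds directly into your staggered re-auth plan.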

Rollback Procedure

If something goes wrong during cutover, you need a clean way to revert. Plan this before you start routing traffic.

During the canary phase (traffic split via feature flag or API gateway):

  • Flip the feature flag back to route 100% of traffic to Apideck. This is instant and requires no code deployment.
  • Do NOT deprovision Apideck connections during the canary phase. Keep them active, even though that means paying for both platforms during the overlap.

After full cutover but before Apideck deprovisioning:

  • Keep your Apideck subscription active for at least 2 weeks after routing all traffic to Truto.
  • If you discover a provider-specific issue (e.g., a mapping bug for a specific Salesforce org), you can temporarily route just that provider back to Apideck while you fix the mapping.
  • Your feature flag should support per-provider and per-account granularity, not just global on/off.
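The per-provider and per-account granularity described above amounts to a three-level flag lookup, most specific match first. A sketch, with the flag structure as an assumption (your feature flag service will have its own shape):

```typescript
type Platform = 'apideck' | 'truto';

// Routing flags with three levels of precedence:
// per-account override > per-provider override > global default.
interface RoutingFlags {
  global: Platform;
  perProvider?: Record<string, Platform>;
  perAccount?: Record<string, Platform>;
}

function routeFor(
  flags: RoutingFlags,
  provider: string,
  accountId: string
): Platform {
  return (
    flags.perAccount?.[accountId] ??
    flags.perProvider?.[provider] ??
    flags.global
  );
}
```

With this structure, rolling one Salesforce org back to Apideck while everything else stays on Truto is a single per-account entry, not a redeploy.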

Point of no return:

  • Once Apideck refreshes a token after you've exported it, the old refresh token may be invalidated (depending on the provider's rotation policy). This is why the cutover window must be tight.
  • Once you deprovision Apideck connections, there is no going back without re-authentication. Do this only after Truto has successfully refreshed every token at least once.

Cutover Sequence

sequenceDiagram
    participant App as Your Application
    participant GW as API Gateway / Feature Flag
    participant Old as Apideck
    participant New as Truto

    App->>GW: API Request (integration call)
    alt Feature flag: canary group
        GW->>New: Route to Truto
        New-->>GW: Unified response
    else Feature flag: default
        GW->>Old: Route to Apideck
        Old-->>GW: Unified response
    end
    GW-->>App: Response (same shape)

  1. Canary a single low-risk integration (e.g., a file storage connector with low traffic). Route 5% of traffic through Truto.
  2. Monitor for 48 hours. Watch for token refresh failures, response shape mismatches, and rate limit behavior.
  3. Expand to 100% for that integration. Then repeat for the next provider.
  4. Deprovision Apideck connections only after the new platform has successfully refreshed each token at least once.

Security and Compliance Checklist for Token Transfer

Moving OAuth tokens between platforms means handling your customers' credentials in transit. Treat this with the same rigor as a database migration involving PII.

In Transit:

  • All API calls to Apideck's Vault API and Truto's API are over HTTPS/TLS 1.2+. Do not log raw token values to stdout, application logs, or error tracking services (Sentry, Datadog, etc.) during the migration.
  • If you write exported tokens to a file for batch processing, encrypt the file at rest. Use GPG or an equivalent tool. Delete the file immediately after import.
  • Never commit tokens to version control, even temporarily. Use environment variables or a secrets manager (e.g., HashiCorp Vault, AWS Secrets Manager) as the intermediary.
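If GPG is not available in your export pipeline, Node's built-in crypto module provides equivalent authenticated encryption (AES-256-GCM). A sketch, assuming the export is serialized JSON and the passphrase comes from a secrets manager, never from source control:

```typescript
import {
  createCipheriv,
  createDecipheriv,
  randomBytes,
  scryptSync,
} from 'node:crypto';

// Encrypt the export file contents with AES-256-GCM. The output packs
// salt (16B) + IV (12B) + auth tag (16B) + ciphertext into one base64 blob.
function encryptExport(plaintext: string, passphrase: string): string {
  const salt = randomBytes(16);
  const key = scryptSync(passphrase, salt, 32); // derive a 256-bit key
  const iv = randomBytes(12);                   // GCM-recommended 96-bit IV
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return Buffer.concat([salt, iv, cipher.getAuthTag(), ciphertext]).toString('base64');
}

// Decrypt and authenticate; throws if the payload was tampered with.
function decryptExport(payload: string, passphrase: string): string {
  const buf = Buffer.from(payload, 'base64');
  const salt = buf.subarray(0, 16);
  const iv = buf.subarray(16, 28);
  const tag = buf.subarray(28, 44);
  const ciphertext = buf.subarray(44);
  const key = scryptSync(passphrase, salt, 32);
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
```

GCM's auth tag means a truncated or modified export file fails loudly at decryption instead of silently importing corrupted credentials.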

At Rest in Truto:

  • Truto encrypts sensitive fields - including access_token, refresh_token, api_key, and client_secret - at rest using AES-GCM encryption. These fields are stored in a separate encrypted column and are masked when listing integrated accounts via the API.
  • Full token values are decrypted only in memory when making outbound API calls to the third-party provider. They are never returned in API responses to your application.

Audit Trail:

  • Record which tokens were exported, when, and by whom. Record when each token was imported into Truto and when the first successful refresh occurred.
  • After migration, confirm that the old platform (Apideck) no longer holds active refresh tokens for your connections. Revoking tokens from the provider side is the most thorough approach, but only do this after the new platform has its own valid tokens.
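A minimal audit ledger for the export/import events above might look like the sketch below. The entry shape is an assumption (adapt it to your own datastore); the key design point is storing a SHA-256 fingerprint so export and import records can be correlated without the ledger ever holding the secret itself.

```python
import hashlib
from datetime import datetime, timezone

audit_log: list[dict] = []  # persist to your own datastore in practice


def record_event(event: str, connection_id: str, token: str, actor: str) -> dict:
    """Append an audit entry. Stores a fingerprint of the token, never the token."""
    entry = {
        "event": event,  # e.g. "exported", "imported", "first_refresh_ok"
        "connection_id": connection_id,
        # fingerprint lets you prove the same credential moved end to end
        "token_sha256": hashlib.sha256(token.encode()).hexdigest(),
        "actor": actor,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry
```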

Provider-Specific Considerations:

  • Salesforce rotates refresh tokens by default. If both Apideck and Truto attempt to refresh the same token, the second call will fail with invalid_grant. Coordinate your cutover to avoid this.
  • Google refresh tokens do not expire unless the user revokes access, but they are subject to a limit of 50 outstanding tokens per user per OAuth app. Migrating to a new OAuth app consumes one of these slots.
  • HubSpot refresh tokens are single-use. The moment either platform refreshes the token, the old refresh token is dead. This makes the cutover timing especially important.
  • QuickBooks requires the realm_id (company ID) as part of every API request. Make sure this metadata is included in your export - without it, the tokens are useless.
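Because missing metadata like QuickBooks' realm_id renders a token useless, it is worth validating every exported record before import. A sketch under stated assumptions: the record and metadata field names reflect a generic export format, and `instance_url` as the Salesforce metadata key is an assumption to adjust to your actual export.

```python
# Provider-specific metadata each exported record must carry before import.
# Key names here are illustrative; match them to your actual export format.
REQUIRED_METADATA = {
    "quickbooks": ["realm_id"],
    "salesforce": ["instance_url"],
}


def validate_export(record: dict) -> list[str]:
    """Return the missing required fields for one exported connection."""
    missing = []
    for field in ("provider", "refresh_token"):
        if not record.get(field):
            missing.append(field)
    provider = record.get("provider", "")
    for key in REQUIRED_METADATA.get(provider, []):
        if not record.get("metadata", {}).get(key):
            missing.append(f"metadata.{key}")
    return missing
```

Running this across the full export before touching the import API turns silent post-import failures into an explicit pre-flight report.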

Troubleshooting Common Failure Modes

Even well-planned migrations hit edge cases. Here are the most common failure modes and how to resolve them.

invalid_grant on First Refresh

Cause: This is the most common failure. Either Apideck rotated the refresh token between export and import, or you are using a different client_id / client_secret pair than the one that issued the token.

Fix: Verify that the OAuth app credentials configured in Truto exactly match the ones used in Apideck. If the client_id doesn't match, the provider will reject the refresh. If the token was rotated, you need to either re-export from Apideck (if you haven't deprovisioned yet) or re-authenticate that specific user.
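The two causes call for different remediations, so it helps to triage them programmatically during bulk import. A hedged sketch: it assumes you recorded the issuing client_id alongside each exported token, and that the provider's error body follows the standard OAuth 2.0 shape (`{"error": "invalid_grant"}` per RFC 6749); real providers sometimes add vendor-specific fields.

```python
def triage_refresh_failure(error_body: dict, configured_client_id: str,
                           token_client_id: str) -> str:
    """Suggest a remediation for one failed token refresh.

    token_client_id is the client_id recorded with the exported token;
    configured_client_id is what the new platform is using.
    """
    if configured_client_id != token_client_id:
        # Wrong OAuth app: this refresh can never succeed, fix config first
        return "fix_oauth_app_credentials"
    if error_body.get("error") == "invalid_grant":
        # Same app but grant rejected: token was rotated or revoked
        return "re_export_or_reauthenticate"
    return "investigate_other_error"
```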

needs_reauth Status After Import

Truto marks an integrated account as needs_reauth when a token refresh fails. After bulk import, you may see some accounts immediately enter this state.

Fix: Check Truto's last_error field on the integrated account. Common causes:

  • Missing provider-specific metadata (e.g., realm_id for QuickBooks, instance_url for Salesforce)
  • Scope mismatch - the token was issued with scopes that the new OAuth app doesn't request
  • The user revoked access on the provider side

For each needs_reauth account, the resolution is almost always a targeted re-authentication for that specific user. Truto fires an integrated_account:authentication_error webhook event, so you can automate sending the user a reconnection prompt.
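A webhook receiver for that automation could look like the sketch below. The event type string comes from the text above; the payload field names (`data.integrated_account_id`, `data.end_user_email`) are assumptions to check against the actual webhook documentation.

```python
import json


def handle_truto_webhook(raw_body: str):
    """Queue a reconnect prompt when an authentication error event arrives.

    Returns the action to take, or None for event types we ignore.
    """
    event = json.loads(raw_body)
    if event.get("type") != "integrated_account:authentication_error":
        return None  # ignore sync, rate-limit, and other event types
    data = event.get("data", {})
    return {
        "action": "send_reconnect_email",
        "integrated_account_id": data.get("integrated_account_id"),
        "to": data.get("end_user_email"),
    }
```

This keeps re-authentication surgical: only the affected user gets a prompt, instead of a blanket "please reconnect" email to your whole customer base.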

API Key Connections Return 401

Cause: API keys extracted from Apideck may have been masked or truncated in the export.

Fix: Verify the full API key value is present in the Truto integrated account context. Make a test request via Truto's proxy endpoint. If the key is invalid, the user will need to regenerate it from the provider's settings and update the connection.
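Before firing live test requests, a cheap heuristic can flag keys that were obviously masked on export. The patterns below (asterisks, bullet characters, trailing ellipsis) are assumptions about common masking conventions, not a guarantee, so treat this as a pre-filter ahead of the proxy test request, never a replacement for it.

```python
def looks_masked(api_key: str) -> bool:
    """Heuristic check for API keys masked or truncated on export."""
    if not api_key:
        return True
    if "*" in api_key or "\u2022" in api_key:  # asterisk or bullet masking
        return True
    if api_key.endswith("...") or api_key.endswith("\u2026"):  # truncation
        return True
    return False
```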

Response Shape Mismatches

Cause: Your JSONata mappings don't perfectly match Apideck's response format for a specific provider.

Fix: Run shadow reads during the canary phase. For every Truto response, compare it field-by-field against the equivalent Apideck response. Common differences:

  • Date format variations (ISO 8601 vs. Unix timestamps)
  • Null handling (Apideck may return empty strings where Truto returns null, or vice versa)
  • Nested array structures for multi-value fields like emails and phone numbers
  • Pagination cursor format differences

Fix these in the JSONata mapping configuration - no code deployment needed.
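The shadow-read comparison itself can normalize away the cosmetic differences listed above so that only real mismatches surface. A flat-field sketch, not a full recursive JSON diff: it treats empty string and null as equivalent and canonicalizes ISO 8601 strings against Unix timestamps.

```python
from datetime import datetime


def normalize(value):
    """Normalize values so cosmetic differences don't count as mismatches."""
    if value == "":
        return None  # treat empty string and null as equivalent
    if isinstance(value, str):
        try:  # canonicalize ISO 8601 strings to a Unix timestamp
            return datetime.fromisoformat(value.replace("Z", "+00:00")).timestamp()
        except ValueError:
            pass
    return value


def diff_responses(old: dict, new: dict) -> list[str]:
    """Return the keys whose normalized values differ between the platforms."""
    mismatched = []
    for key in sorted(set(old) | set(new)):
        if normalize(old.get(key)) != normalize(new.get(key)):
            mismatched.append(key)
    return mismatched
```

Feed the mismatched keys into a counter per provider and endpoint; the hot spots tell you exactly which JSONata mappings still need adjustment.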

Rate Limit Spikes During Migration

Cause: During the shadow-read phase, you are making double the API calls (one to Apideck, one to Truto), which doubles your rate limit consumption against the upstream provider.

Fix: Run shadow reads at reduced traffic (e.g., 10% of requests). Use the ratelimit-remaining header from Truto to throttle your shadow-read traffic dynamically.
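That dynamic throttling can be a simple decision function over the response headers. A sketch with assumptions: the `ratelimit-remaining` header name comes from the text, while `ratelimit-limit` as its companion and the 20%/5% back-off thresholds are starting points to tune against your own providers.

```python
import random

SHADOW_SAMPLE_RATE = 0.10  # baseline: shadow-read 10% of requests


def should_shadow_read(headers: dict, sample_rate: float = SHADOW_SAMPLE_RATE) -> bool:
    """Decide whether to issue the duplicate (shadow) call for this request."""
    remaining = int(headers.get("ratelimit-remaining", 0))
    limit = int(headers.get("ratelimit-limit", 1)) or 1
    ratio = remaining / limit
    if ratio < 0.05:
        return False  # quota nearly exhausted: stop shadow reads entirely
    if ratio < 0.20:
        sample_rate /= 4  # quota under pressure: shadow-read far less
    return random.random() < sample_rate
```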

The Trade-Offs You Should Know About

Let's be direct about what this migration costs you:

  • Engineering time: Plan for 2 to 4 weeks of dedicated engineering effort for a team with 50+ linked accounts across 5+ providers. The token import is fast; writing and testing JSONata mappings to match Apideck's exact response shape is the slow part.
  • Rate limit responsibility shifts to you. On Apideck, rate limit handling was opaque. On Truto, you get standardized headers and must implement your own backoff. This is more control, but also more code.
  • Provider-specific quirks don't disappear. Salesforce's SOQL-based filtering, HubSpot's association API, QuickBooks' realm_id requirement - these all still exist regardless of which unified API sits in front of them. A new platform doesn't make bad vendor APIs better.

The upside: you get custom field mappings on every plan, proactive token refresh that catches failures before users do, and a declarative architecture where adding a new integration is a data operation, not a code deployment.

What Comes Next

If you are at the point where Apideck's constraints are blocking enterprise deals or causing compliance gaps, the migration is worth doing. The average company uses over 100 SaaS apps, and your customers expect every one of them to integrate with your product.

Check your OAuth app ownership first, export your tokens before deprovisioning anything, and use declarative schema mappings to preserve your frontend contract. By understanding the mechanics of OAuth token portability and taking control of your own rate limit queues, you can upgrade your integration infrastructure without ever asking a customer to hit "Reconnect."

FAQ

Can I migrate from Apideck to another unified API without re-authenticating my users?
Yes, if you brought your own OAuth app credentials (your own client_id and client_secret) when setting up integrations in Apideck. The tokens were issued to your OAuth application, so any platform holding those same credentials can refresh them. If you used Apideck's managed OAuth app, those specific connections will require re-authentication.
Does Apideck expose raw OAuth tokens through its Vault API?
No. Apideck's Vault API has an import endpoint for bringing tokens into Apideck, but no corresponding export endpoint that returns raw access_token or refresh_token values. The standard Get Connections endpoint returns connection metadata but not the actual tokens. You will need to contact Apideck support for a credential export.
How do I check if I own the OAuth app used in Apideck?
Log into the Apideck dashboard, navigate to Configuration, select your connector, and look for the OAuth credentials section. If it shows 'Use your client credentials' with your own values, you own the app. You can also check your provider's developer console to see if the OAuth app with redirect URI https://unify.apideck.com/vault/callback belongs to your organization.
What happens to API key connections during migration from Apideck?
API key connections are simpler to migrate than OAuth. You can retrieve connection settings through Apideck's Vault API GET endpoint, extract the API key and configuration values, and import them into Truto's generic context. There is no token refresh cycle or OAuth app dependency to worry about.
How does Truto handle token refresh for migrated OAuth connections?
Truto uses proactive token refresh. When you import an OAuth token, the platform reads the expires_at timestamp and schedules a background task to refresh the token 60-180 seconds before it expires. If the refresh fails, the account is marked as needs_reauth and a webhook event is fired so you can prompt only the affected user to reconnect.
What is the rollback strategy if the Apideck migration fails?
During canary testing, keep Apideck connections active and use a feature flag to route traffic. If issues arise, flip the flag to route back to Apideck instantly. Only deprovision Apideck connections after Truto has successfully refreshed every token at least once. Keep your Apideck subscription active for at least 2 weeks after full cutover.
