What Does Zero Data Retention Mean for SaaS Integrations?
Learn what zero data retention means for SaaS integrations, why sync-and-store APIs fail enterprise security reviews, and how a pass-through MCP server for Coupa procurement data unblocks deals.
Your enterprise deal just stalled in procurement. The buyer's InfoSec team reviewed your vendor risk assessment and flagged a massive problem: your integration middleware caches their sensitive HRIS records and CRM contacts on shared infrastructure. They classified it as an unmanaged sub-processor, refused to sign the Business Associate Agreement (BAA), and the deal is effectively dead.
If you sell B2B SaaS to enterprise clients, healthcare organizations, or financial institutions, this scenario isn't hypothetical—it happens every week. Integration compliance is a binary go/no-go for revenue. The middleware that helped you ship basic integrations quickly in the SMB market will actively disqualify you upmarket. To pass strict InfoSec reviews and unblock revenue, you need an architecture that processes data in transit without ever writing it to a database.
Zero Data Retention (ZDR) for SaaS integrations means that your integration middleware processes third-party API payloads entirely in memory and never writes customer data to persistent storage. The payload enters, gets transformed into a normalized format, gets delivered to your application, and is immediately discarded. No cache. No replica. No 30-day retention window.
This guide breaks down exactly what ZDR means for integrations, why traditional sync-and-store architectures fail enterprise security audits, and how to build a stateless pass-through proxy that keeps your compliance footprint small.
What is Zero Data Retention (ZDR) in SaaS Integrations?
When evaluating integration architectures, the keyword you'll hear from every security team is "data at rest."
The term "zero data retention" has been popularized by AI API providers, but the concept applies directly to any middleware that touches your customers' sensitive data. Anthropic defines ZDR for its Claude API as an arrangement where "customer data is not stored at rest after the API response is returned, except where needed to comply with law or combat misuse." OpenRouter puts it even more simply: "Zero Data Retention (ZDR) means that a provider will not store your data for any period of time."
Apply that same principle to integration middleware and the definition becomes concrete:
- API payloads (CRM contacts, HRIS employee records, financial transactions) are processed in transit
- No persistent storage — no database tables, no object storage buckets, no disk-based caches hold your customers' data
- Transformation happens in memory — field mapping, schema normalization, and pagination assembly all occur without writing intermediate state
- Credentials are encrypted at rest, but the data flowing through the pipe never touches persistent storage
Zero Data Retention (ZDR) Definition: A data processing standard where an integration layer processes information entirely in memory. The system routes, transforms, and delivers the payload to its final destination without ever writing the data to a hard drive, database, or persistent cache.
Consider a concrete example: your SaaS application pulls a list of employees from a customer's Workday instance. A ZDR integration proxy fetches that data, normalizes the JSON payload in memory, and hands it directly to your application backend. If the middleware provider's servers were physically seized five seconds later, there would be zero trace of that customer's employee data on the disks.
This is a fundamentally different architecture from the traditional "sync-and-store" model where integration platforms poll APIs on a schedule, dump results into a database, and serve cached records to your application.
Why Enterprise InfoSec Teams Demand Zero Data Retention
The financial math behind enterprise security scrutiny is straightforward. IBM's Cost of a Data Breach Report 2024 found the global average breach hit a record USD 4.88 million—a 10% increase from 2023 and the largest spike since the pandemic. For the 14th year in a row, healthcare saw the costliest breaches across industries, with average breach costs reaching $9.77 million.
These numbers explain why every system that stores customer data is a potential breach surface, and every breach surface gets scrutinized during procurement.
When a SaaS company uses an integration platform that stores customer data, that SaaS company inherits the security posture of the middleware provider. If the integration platform suffers a breach, the SaaS company's customers are compromised. Enterprise InfoSec teams understand this chain of liability intimately. They actively seek to minimize the number of sub-processors that handle their data at rest.
The moment your integration vendor becomes a sub-processor, you inherit their compliance obligations:
- You need a Data Processing Agreement (DPA) with them
- They appear on your sub-processor list, which your enterprise customers review
- Their SOC 2 report, penetration test results, and data residency policies all come under scrutiny
- If they store data in a region your customer's policy prohibits, the deal is dead
- If the data includes Protected Health Information (PHI), the integration platform must sign a BAA—and many developer tools flatly refuse to do this
A ZDR integration architecture eliminates this entire category of risk. If the middleware never stores customer data, it's not a sub-processor in the traditional sense—it's a conduit. The compliance conversation shrinks dramatically.
The SIG Core Questionnaire and the Sub-Processor Trap
When you move your SaaS integration strategy upmarket, procurement teams rely heavily on standardized risk assessments. The most common is the Standardized Information Gathering (SIG) questionnaire published by Shared Assessments.
The SIG Core Questionnaire is a comprehensive third-party risk assessment designed to evaluate vendors that store or maintain sensitive, regulated information. It covers 21 risk topics across hundreds of questions.
Domain 10—Third-Party Risk Management—is where integration deals go to die.
The Tripwire Question: Somewhere in the SIG Core assessment, you will be asked: "Does any third-party sub-processor store, cache, or replicate our data?"
If your application relies on a traditional integration platform to sync data between your SaaS and the customer's internal systems, the answer is yes. Here's how the trap plays out in practice:
- Your B2B SaaS product integrates with your customer's Salesforce, Workday, or QuickBooks
- You use an integration middleware vendor that syncs data on a schedule and caches records in their database
- The buyer's InfoSec team asks about your sub-processors
- You disclose that your integration vendor stores their CRM contacts and HRIS records on shared infrastructure
- InfoSec flags this as an unmanaged sub-processor with an unacceptable data footprint
- The deal stalls for weeks—or months—while your vendor scrambles to provide a DPA, BAA, and acceptable answers to follow-up questions
By using an integration tool with a pass-through architecture, you bypass this trap entirely. Because the middleware does not store the data, it's classified as a conduit rather than a data custodian. You answer the question differently: "Our integration layer processes data in transit. No customer data is stored, cached, or replicated by any sub-processor." The follow-up questions disappear, and you can pass enterprise security reviews in days instead of months.
Evaluating Unified APIs: Sync-and-Store vs. Real-Time Pass-Through
When engineering teams evaluate unified APIs, they often treat them as interchangeable. They aren't. The industry is split between two fundamentally different architectures, and the distinction determines your compliance posture.
The Sync-and-Store Model
Most first-generation unified APIs use a polling and caching model:
```mermaid
sequenceDiagram
    participant App as Your App
    participant MW as Integration<br>Middleware
    participant DB as Middleware<br>Database
    participant API as Third-Party<br>API (e.g. Salesforce)
    MW->>API: Poll for new/updated records (scheduled)
    API-->>MW: Return records
    MW->>DB: Write records to cache
    App->>MW: GET /contacts
    MW->>DB: Read from cache
    DB-->>MW: Return cached records
    MW-->>App: Serve cached response
```

The data sits in the middleware's database. It's typically retained for 30 to 60 days, replicated across availability zones, and backed up. When your application requests data, you aren't actually querying the third-party API—you're querying the unified API provider's database.
The problems compound quickly:
- Massive compliance footprint: They store full copies of your customers' data. This fails the SIG Core questionnaire data storage requirements outright.
- Stale data: Because data is synced on a schedule (every 5 to 60 minutes), your application is always reading stale data. If a user updates a record in Salesforce, it won't reflect in your app until the next sync cycle completes.
- Faked webhooks: Many of these platforms simulate webhooks by diffing their database against the source API during syncs, leading to delayed and sometimes missing events.
- Data residency headaches: If the cache is in one region and your customer demands storage in another, you have a problem you can't easily solve.
The trade-off: Sync-and-store gives you faster read latency and the ability to run complex queries across records. For some use cases—analytics dashboards, bulk data processing—this trade-off makes sense.
The Real-Time Pass-Through Model
```mermaid
sequenceDiagram
    participant App as Your App
    participant MW as Integration<br>Middleware
    participant API as Third-Party<br>API (e.g. Salesforce)
    App->>MW: GET /contacts
    MW->>API: Forward request (with auth, mapping)
    API-->>MW: Return raw response
    Note over MW: Transform in memory<br>(field mapping, normalization)
    MW-->>App: Return unified response
```

No data is stored. The middleware acts as a real-time proxy: it receives your request, translates it into the third-party's native format, makes the API call, transforms the response into a unified schema in memory, and returns it. The entire lifecycle happens in a single request/response cycle.
- True Zero Data Retention: Data is never written to disk. Third-party API payloads exist only in memory for the lifetime of the request.
- Real-time accuracy: You always interact with the live state of the third-party system. No sync delays.
- Enterprise ready: Procurement teams approve these architectures rapidly because there is no persistent shadow database to audit.
Be honest with yourself about which model you actually need. If your product requires running analytical queries across thousands of CRM records, a pass-through proxy alone won't cut it. But if you're reading and writing individual records or small lists—syncing records on user action, pulling employee data during onboarding, reading CRM context for an AI agent—real-time pass-through is almost always the better architecture for enterprise sales.
For a deeper comparison, see Tradeoffs Between Real-time and Cached Unified APIs.
How a Pass-Through Integration Architecture Actually Works
A ZDR pass-through architecture has three layers, each designed to avoid persisting customer data:
```mermaid
graph TD
    A[Your SaaS Application] -->|Unified Request| B[Integration Proxy Engine]
    B --> C[Load Auth Context & Config<br>From Metadata Store]
    B --> D[Fetch Live Data from<br>Third-Party API]
    D --> E[Provider Returns<br>Native JSON Response]
    E --> F[In-Memory JSONata<br>Transformation Engine]
    F --> G[Normalized Unified Payload]
    G -->|Direct Response| A
    G -.-> H[Garbage Collection<br>Payload Destroyed]
```
1. Credential Storage (Encrypted, Not Customer Data)
The middleware does store OAuth tokens, API keys, and connection metadata. This is necessary to authenticate against third-party APIs on your behalf. But credentials are not customer data—they're access tokens that get encrypted at rest and rotated automatically.
A well-designed system refreshes OAuth tokens proactively—shortly before they expire—so API calls never fail due to stale credentials. If a refresh fails, the account is flagged for re-authorization and a webhook event is fired to your application. No customer payload data is read from or written to the database at this layer.
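The proactive refresh described above can be sketched as follows. The function names and the five-minute skew are illustrative assumptions, not a specific vendor's implementation:

```javascript
// Refresh when the token is within `skewMs` of expiry so live API calls
// never race a dying credential.
function needsRefresh(expiresAtMs, nowMs, skewMs = 5 * 60 * 1000) {
  return nowMs >= expiresAtMs - skewMs;
}

// Hypothetical usage inside a request path: refresh before forwarding.
// `refreshFn` would hit the provider's token endpoint; only credential
// metadata changes, no customer payload is touched.
async function withFreshToken(conn, refreshFn, nowMs = Date.now()) {
  if (needsRefresh(conn.expiresAtMs, nowMs)) {
    const fresh = await refreshFn(conn);
    return { ...conn, ...fresh };
  }
  return conn; // token still comfortably valid
}
```

Keeping the refresh decision a pure function of timestamps makes it trivial to test and keeps the credential store as the only persistent state in this layer.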
2. Declarative Transformation via JSONata (In-Memory)
The hardest part of a unified API is mapping provider-specific fields to a common model. In a sync-and-store system, this transformation happens during the sync job and the result is written to a database. In a pass-through system, transformation happens in memory during the request lifecycle.
A declarative mapping language like JSONata evaluates expressions against the raw API response and produces the unified output—without ever writing intermediate results to disk:
```javascript
// A minimal, runnable sketch using the jsonata npm package
// (in jsonata v2+, evaluate() returns a Promise).
const jsonata = require("jsonata");

// Raw Salesforce response (in memory)
const raw = {
  FirstName: "Jane",
  LastName: "Doe",
  Email: "jane@acme.com",
  Account: { Name: "Acme Corp" }
};

// JSONata expression (stored as config, not customer data)
const mapping = `{
  "first_name": FirstName,
  "last_name": LastName,
  "email": Email,
  "company_name": Account.Name
}`;

// Evaluated in memory, returned directly to the caller;
// the raw response is garbage-collected after the request completes
jsonata(mapping).evaluate(raw).then((unified) => {
  // unified: { first_name: "Jane", last_name: "Doe",
  //            email: "jane@acme.com", company_name: "Acme Corp" }
});
```

The mapping configuration is stored—it's platform config, not customer data. The payload that flows through it is never persisted. Because JSONata is side-effect free and evaluates entirely within the execution context, the transformation completes without requiring temporary database tables or persistent caching.
3. Webhook Forwarding (Process and Discard)
Inbound webhooks from third-party platforms follow the same principle: the middleware receives the webhook payload, verifies its signature, transforms it into a unified event format, and forwards it to your registered endpoint. The raw payload is not written to a database.
The honest caveat: webhook delivery is harder to make reliable without some form of intermediate storage. If your endpoint is down when the webhook arrives, a pure ZDR system can't replay it from its own storage. The practical solution is to use a transient message queue with a very short TTL (seconds to minutes) for delivery retries, then discard the message. This is a design trade-off worth understanding before you commit to a fully ZDR architecture.
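One way to sketch that transient-retry pattern is an in-memory retry loop with a short TTL, after which the payload is simply dropped. Helper names here are hypothetical; a production system would likely use a message queue with a seconds-to-minutes TTL instead:

```javascript
// "Process and discard" forwarding: retry delivery briefly, then give up.
// Nothing is ever written to persistent storage.
async function forwardWebhook(deliver, event, { retries = 3, ttlMs = 30000, now = Date.now } = {}) {
  const deadline = now() + ttlMs;
  for (let attempt = 0; attempt <= retries; attempt++) {
    if (now() > deadline) break; // TTL expired: payload is dropped
    try {
      await deliver(event);      // POST to the customer's registered endpoint
      return true;               // delivered; nothing was persisted
    } catch {
      // brief exponential backoff between attempts
      await new Promise((r) => setTimeout(r, 2 ** attempt * 250));
    }
  }
  return false; // caller falls back to reconciliation polling
}
```

Returning `false` rather than throwing lets the caller record "reconciliation needed" without the middleware ever holding the event itself.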
What ZDR Does Not Cover (And Why That Matters)
Being precise about what ZDR means also requires being precise about what it doesn't mean:
| What ZDR Covers | What ZDR Does Not Cover |
|---|---|
| Third-party API payloads (contacts, employees, invoices) | OAuth tokens and connection credentials |
| Request/response bodies flowing through the middleware | API call logs and metadata (timestamps, status codes, latency) |
| Intermediate transformation state | Mapping configurations and integration definitions |
| Webhook payloads from third parties | Your own application's storage of integrated data |
Zero-data-retention does not automatically mean "no data ever exists." It refers specifically to storage practices after processing.
A ZDR integration vendor still stores your configuration—which integrations you've connected, what mappings you've defined, what OAuth apps you've registered. What it doesn't store is the actual CRM contacts, employee records, or financial transactions that flow through the pipe. That's the distinction that matters for procurement.
This approach minimizes your attack surface, reduces compliance scope, and virtually eliminates the risk of data breaches within the automation layer.
The Honest Trade-Offs of a Pass-Through Architecture
Being radically honest about architectural decisions is essential for engineering teams. While a zero-storage pass-through architecture solves your enterprise compliance blockers, it introduces specific engineering trade-offs you must design around.
1. Provider Rate Limits Are Real
Because you're not querying a middleware cache, every request hits the third-party provider's API directly. If you barrage a customer's HubSpot instance with 10,000 requests a minute, HubSpot will rate-limit you. You must build intelligent queuing and exponential backoff into your own application layer to respect provider limits.
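A sketch of the queuing-and-backoff layer your application would own (names are illustrative; production code should also add jitter and honor any Retry-After header the provider sends):

```javascript
// Exponential backoff delay, capped at a maximum.
function backoffDelayMs(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Retry a request while the provider answers 429 Too Many Requests.
async function callWithBackoff(doRequest, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await doRequest();
    if (res.status !== 429) return res; // success or a non-rate-limit error
    await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
  }
  throw new Error("rate limited after retries"); // surface to the caller's queue
}
```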
2. Network Latency Is Variable
A pass-through proxy adds a minor network hop. More importantly, the response time of your API call is entirely dependent on the speed of the third-party provider. If an older on-premise ERP takes 4 seconds to return a query, your application will wait 4 seconds. You cannot fall back on a sub-millisecond local cache. Engineers must design their systems asynchronously to handle variable third-party response times.
3. Search and Filtering Capabilities Are Constrained
When you query a cached database, you can use complex SQL joins and filters. In a pass-through model, your filtering capabilities are limited to what the third-party API natively supports. If the provider's API doesn't support filtering by a specific custom field, the proxy cannot magically invent that capability without pulling all records into memory—which defeats the purpose of an efficient API.
4. Webhook Replay Requires Your Own Storage
As noted above, if your endpoint is down when a webhook arrives, a ZDR middleware can't replay from its own logs. You need to handle missed events through reconciliation on your side—periodic polling to catch anything your webhook handler missed.
What to Ask Your Integration Vendor
Before you sign with any integration middleware provider, ask these five questions:
- "Do you cache or replicate third-party API responses on your infrastructure?" — If yes, for how long? In which regions? Under what retention policy?
- "Will you appear as a sub-processor on our customer's data processing agreements?" — If the vendor stores data, the answer is almost certainly yes.
- "Can you provide a SOC 2 Type II report that covers your data handling practices?" — SOC 2 without ZDR just proves they securely store data. SOC 2 with ZDR proves they don't store it at all.
- "What happens to a webhook payload if my endpoint is unreachable?" — This reveals whether they have intermediate storage for retries.
- "Can we get a BAA signed for HIPAA-covered data?" — If you're in healthtech, this is non-negotiable. A vendor with true ZDR makes the BAA conversation far simpler.
Appendix: MCP Server for Coupa Procurement API - A ZDR Implementation Guide
Procurement data is some of the most sensitive information flowing through enterprise integrations - purchase orders, invoices, supplier contracts, and payment details. When you integrate with Coupa's Business Spend Management platform, every record passing through your middleware is a potential audit finding. This appendix walks through how to build a stateless, zero-retention MCP server integration for the Coupa procurement API.
Coupa API Authentication: What You Need to Know
Before discussing data flows, you need to understand how Coupa authenticates API requests - because the authentication model directly affects your token management strategy.
Coupa uses OAuth 2.0 and OIDC to authenticate API requests. API Keys are no longer supported. The client credentials grant type is used when there is no user involved, such as in system-to-system integrations. You create an OAuth2/OIDC client in Coupa's admin console, define scopes (which follow a service.object.right pattern like core.purchasing.read or core.invoicing.write), and receive a Client ID and Client Secret.
Coupa access tokens only last for 24 hours, so Coupa's recommendation is to renew the token every 20 hours. This is a critical detail for a ZDR integration proxy. A well-designed pass-through system handles this automatically - requesting a fresh access token well before the 24-hour window closes so that API calls to Coupa never fail due to an expired credential. The token refresh is entirely metadata management (credential rotation), not customer data storage.
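A client credentials token request against Coupa might be sketched like this. The token endpoint path and scope string are assumptions to verify against your instance's OIDC discovery document:

```javascript
// Fetch a Coupa access token via the client credentials grant.
// `fetchImpl` is injectable for testing; defaults to the global fetch.
async function fetchCoupaToken(instance, clientId, clientSecret, scopes, fetchImpl = fetch) {
  const res = await fetchImpl(`https://${instance}.coupahost.com/oauth2/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: clientId,
      client_secret: clientSecret,
      scope: scopes.join(" "), // e.g. "core.purchasing.read"
    }),
  });
  if (!res.ok) throw new Error(`token request failed: ${res.status}`);
  const { access_token, expires_in } = await res.json();
  // Schedule renewal well inside the 24-hour lifetime (Coupa suggests ~20h),
  // i.e. 4 hours before the reported expiry.
  return { accessToken: access_token, renewAtMs: Date.now() + (expires_in - 4 * 3600) * 1000 };
}
```

The returned `renewAtMs` is credential metadata only; storing it does not change the zero-retention posture for payload data.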
Coupa Data Flow: Pass-Through vs. Sync-and-Cache
The Coupa Core API provides RESTful access to core resources including suppliers, purchase orders, invoices, and requisitions. These resources represent high-value procurement data that enterprise security teams scrutinize heavily.
Here's what the pass-through flow looks like for a typical Coupa integration:
```mermaid
sequenceDiagram
    participant Agent as AI Agent / Your App
    participant MCP as MCP Server<br>(Pass-Through)
    participant Coupa as Coupa API<br>(coupahost.com)
    Agent->>MCP: tools/call: list_all_coupa_purchase_orders
    MCP->>MCP: Load OAuth token from<br>encrypted credential store
    MCP->>Coupa: GET /api/purchase_orders?status=issued<br>Authorization: Bearer {token}
    Coupa-->>MCP: Return PO records (JSON)
    Note over MCP: Transform in memory<br>(normalize fields, apply schema)
    MCP-->>Agent: Return unified response
    Note over MCP: Payload garbage-collected<br>Zero bytes written to disk
```

Compare this to a sync-and-cache approach where a middleware polls Coupa's purchase order endpoint on a schedule, writes every PO to its own database, and serves cached results. Coupa's API returns a lot of data by default (for example: full objects for associations). A sync-and-cache system stores all of that - supplier details, line items, account codes, shipping addresses, custom fields - on shared infrastructure. That's exactly the kind of data footprint that triggers sub-processor classification during procurement reviews.
With a pass-through proxy, none of that data touches persistent storage. Your Coupa POs, invoices, and supplier records flow through the integration layer and arrive directly at your application. The middleware is a conduit, not a warehouse.
Coupa limits the number of requests made to the API to 25 requests per second and a burst query limit of 20 calls. A pass-through architecture means every request from your application translates to a live Coupa API call. You must respect these limits. A good integration proxy handles rate limiting automatically - queuing requests, applying exponential backoff on 429 responses, and distributing load evenly within Coupa's per-instance limits.
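The limiter half of that proxy behavior can be sketched as a token bucket sized to Coupa's published limits. Helper names are illustrative:

```javascript
// Token bucket matching Coupa's limits: 25 requests/second refill rate,
// burst capacity of 20 calls.
function makeBucket(ratePerSec = 25, burst = 20) {
  let tokens = burst;
  let last = Date.now();
  return {
    tryAcquire(now = Date.now()) {
      // Refill proportionally to elapsed time, capped at the burst size.
      tokens = Math.min(burst, tokens + ((now - last) / 1000) * ratePerSec);
      last = now;
      if (tokens >= 1) {
        tokens -= 1;
        return true;  // proceed with the live Coupa API call
      }
      return false;   // caller should queue and retry shortly
    },
  };
}
```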
Ephemeral Token and MCP Session Recommendations
When exposing Coupa procurement data through an MCP server, token lifetime management is a key security control. MCP servers support an expires_at field that automatically deactivates the server after a set time. For procurement operations, keep these lifetimes short.
Recommended ephemeral token lifetimes for Coupa MCP servers:
| Use Case | Recommended MCP Token Lifetime | Rationale |
|---|---|---|
| AI agent doing ad-hoc PO lookups | 1-4 hours | Short session for interactive queries. Limits blast radius if URL is leaked. |
| Contractor reviewing supplier data | 24-72 hours | Matches a typical review engagement. Auto-revokes when the task window closes. |
| Automated approval workflow | 1-7 days | Covers the typical PO approval cycle. Set config.methods: ["read"] to prevent writes. |
| Scheduled reporting job | 8-12 hours | Just long enough for the job to complete. Regenerate a fresh token for each run. |
| One-time audit data pull | 2-4 hours | Tight window for the audit extraction. Destroy immediately after. |
Combine ephemeral tokens with method restrictions for defense in depth. An MCP server configured with config.methods: ["read"] and a 4-hour expires_at can only list and fetch Coupa records - it cannot create purchase orders, update invoices, or delete suppliers - and it self-destructs after the session window.
For higher-security environments, enable the require_api_token_auth flag on the MCP server. This requires the caller to present both the MCP server URL and a valid Truto API token, so even if the MCP URL appears in logs or configuration files, it cannot be used without a second credential.
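Putting these controls together, a server configuration might look like the following. The field names mirror the options discussed above; the exact request shape depends on the Truto API version you target, so treat this as an assumed payload:

```javascript
// Illustrative MCP server configuration combining short-lived access,
// read-only methods, scoped tools, and dual authentication.
const fourHoursMs = 4 * 60 * 60 * 1000;

const mcpServerConfig = {
  expires_at: new Date(Date.now() + fourHoursMs).toISOString(), // auto-deactivate
  require_api_token_auth: true,   // URL alone is not enough to call the server
  config: {
    methods: ["read"],            // list/get only: no create, update, delete
    tags: ["procurement"],        // expose only procurement-scoped tools
  },
};
```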
Example Stateless MCP Tool Implementation for Coupa
When Truto generates MCP tools for a Coupa integration, each tool maps to a specific Coupa API resource and method. The tools are generated dynamically from the integration's resource definitions and documentation - no Coupa data is stored to produce them.
Here's what the generated tool set for a typical Coupa MCP server looks like:
| Tool Name | Coupa API Endpoint | Method |
|---|---|---|
| list_all_coupa_purchase_orders | GET /api/purchase_orders | list |
| get_single_coupa_purchase_order_by_id | GET /api/purchase_orders/{id} | get |
| create_a_coupa_requisition | POST /api/requisitions | create |
| list_all_coupa_invoices | GET /api/invoices | list |
| get_single_coupa_invoice_by_id | GET /api/invoices/{id} | get |
| list_all_coupa_suppliers | GET /api/suppliers | list |
| get_single_coupa_supplier_by_id | GET /api/suppliers/{id} | get |
| update_a_coupa_purchase_order_by_id | PUT /api/purchase_orders/{id} | update |
Each tool call follows the same stateless lifecycle:
- The MCP client sends a tools/call JSON-RPC request with the tool name and arguments
- The MCP server looks up the tool and splits arguments into query and body parameters using JSON Schema definitions
- The server loads the Coupa OAuth token from the encrypted credential store
- The request is forwarded to the Coupa API at https://{instance}.coupahost.com/api/{resource}
- Coupa returns the response (JSON or XML)
- The response is transformed in memory and returned to the caller
- The raw Coupa payload is garbage-collected - nothing is written to disk
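The lifecycle above can be sketched as a single stateless function. All helpers here are stand-ins, not a real MCP SDK:

```javascript
// One tool call, end to end, with no persistent state of its own.
// `deps` carries the injected infrastructure: credential store and HTTP client.
async function handleToolCall(tool, args, deps) {
  const { query, body } = tool.splitArgs(args);         // via JSON Schema definitions
  const token = await deps.loadToken(tool.connection);  // encrypted credential store
  const res = await deps.http(tool.method, tool.url(query), { token, body });
  const unified = tool.transform(res);                  // in-memory normalization
  return unified; // raw payload is now unreachable and will be garbage-collected
}
```

Because every dependency is passed in and nothing is written out, the function is trivially testable with fakes, which is itself a useful property when auditors ask how the "zero bytes to disk" claim is verified.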
You can scope an MCP server to specific procurement workflows using tags. For example, creating an MCP server with config.tags: ["procurement"] would expose only purchase order, requisition, and supplier tools - excluding invoice or expense tools that belong to a different functional area.
By default, Coupa fetches 50 records per call, but the limit can be set to fetch a specific number of records as required. The MCP tool's query schema automatically includes limit and next_cursor parameters, letting AI agents paginate through large Coupa result sets without the integration layer buffering the entire dataset in memory.
Webhook Handling and Event Retention Guidance
Coupa's event notification model differs from platforms like Salesforce or HubSpot that push webhook payloads to registered endpoints in real time. Coupa uses callouts - objects configured to send messages externally for remote approval, tax assessment, process actions, or event notifications (webhooks). These callouts can be triggered on specific resource events (e.g., when a purchase order status changes to "issued").
For a ZDR integration, Coupa callout handling follows the standard webhook forwarding pattern:
- Coupa sends the callout payload to the integration middleware's webhook endpoint
- The middleware verifies the request, transforms the payload into a normalized event format, and forwards it to your application's registered URL
- The raw Coupa payload is discarded immediately after forwarding
The honest trade-off for Coupa specifically: Coupa's callout retry behavior is limited. If your endpoint is unreachable when Coupa fires a callout, the event may be lost. A ZDR system cannot replay it from its own storage because it didn't persist the payload.
The practical mitigation: use periodic polling against Coupa's API as a reconciliation mechanism. Query resources with updated_at filters (e.g., GET /api/purchase_orders?updated_at[gt]=2026-04-17T00:00:00Z) to catch any events your callout handler missed. This polling is itself stateless - each query hits the Coupa API directly and returns results in memory. Your application is responsible for tracking which records it has already processed.
Coupa-specific tip: Use Coupa's exported flag as a coordination mechanism. When you successfully process a record, mark it as exported via PUT /api/purchase_orders/{id} with exported: true. Then filter future queries with exported=false to retrieve only unprocessed records. This gives you at-least-once delivery semantics without storing any Coupa data in the integration layer.
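The reconciliation loop built on the exported flag can be sketched as follows. The filter parameters mirror the examples in this section, but verify exact names against your Coupa instance; all three callbacks are hypothetical stand-ins for your own HTTP calls:

```javascript
// One reconciliation pass: pull unprocessed records, hand each to the
// application, then mark it exported in Coupa. Stateless in the integration
// layer - only Coupa's exported flag tracks progress.
async function reconcile(fetchPage, markExported, processRecord) {
  const records = await fetchPage({ exported: false, limit: 50 });
  for (const record of records) {
    await processRecord(record);   // hand off to your application
    await markExported(record.id); // PUT .../{id} with exported: true
  }
  return records.length; // 0 means we're caught up
}
```

Marking exported only after processing succeeds is what gives you the at-least-once semantics: a crash between the two steps means the record is simply fetched again on the next pass.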
Compliance Checklist for Enterprise Coupa Integrations
Use this checklist when your enterprise buyer's InfoSec team evaluates your Coupa integration architecture:
| Compliance Requirement | ZDR Pass-Through Answer |
|---|---|
| Does any sub-processor store Coupa procurement data? | No. Coupa data is processed in memory and delivered directly to your application. The integration layer stores only encrypted OAuth credentials. |
| Where is Coupa data cached? | Nowhere. No cache exists. Every request hits the live Coupa instance. |
| What is the data retention period for procurement payloads? | Zero. Payloads exist only for the duration of the HTTP request/response cycle (typically milliseconds to seconds). |
| Can the integration vendor access our Coupa records? | No. The pass-through proxy transforms data in memory. No human at the vendor has access to customer procurement data because it is never stored. |
| How are Coupa OAuth credentials protected? | Client ID and Client Secret are encrypted at rest. Access tokens are refreshed automatically before expiry (within the 24-hour window). Credentials are never exposed in API responses or logs. |
| Does the integration support data residency requirements? | Yes. Because no Coupa data is persisted by the middleware, there is no data-at-rest residency concern within the integration layer. Data flows directly from Coupa's infrastructure to yours. |
| Will the integration vendor appear on our sub-processor list? | A ZDR vendor that never stores customer data operates as a conduit, not a data custodian, simplifying the sub-processor classification. |
| Can MCP server access be time-limited? | Yes. Ephemeral tokens with expires_at auto-revoke after a configured period. Combine with read-only method restrictions and dual authentication for maximum control. |
This checklist turns a multi-week security review into a short conversation. When the answer to every data storage question is "we don't store it," the follow-up questions largely disappear.
Unblock Enterprise Revenue with Zero Storage
ZDR is not a feature checkbox—it's an architectural decision that ripples through your compliance posture, your enterprise sales velocity, and your engineering team's ability to ship integrations without a six-week security review.
If you're evaluating integration vendors today, start by mapping your data flows. Identify every point where customer data could be persisted by a third party. Then ask whether each persistence point is necessary or just an artifact of a sync-and-store architecture that made sense five years ago but creates compliance drag today. This architectural shift is exactly why Truto is the best zero-storage unified API for compliance-strict SaaS.
If you need an integration tool that doesn't store customer data, look past the marketing pages of legacy API aggregators and examine their underlying architecture. Enterprise procurement teams will not compromise on data security. By adopting a true zero data retention architecture, you eliminate the friction of sub-processor audits, protect your customers from expanded attack surfaces, and empower your sales team to close upmarket deals without InfoSec blockers.
Honest trade-off: If you need your integration platform to run scheduled sync jobs that write data to your data store, Truto supports that too. In that model, data flows through Truto to your infrastructure—Truto still doesn't retain it, but the sync process does involve buffering records in transit. The ZDR guarantee applies to Truto's infrastructure, not to the destination you configure.
FAQ
- What is Zero Data Retention (ZDR) in SaaS integrations?
- ZDR means your integration middleware processes third-party API payloads entirely in memory and never writes customer data to persistent storage. The data enters, gets transformed, gets delivered to your application, and is immediately discarded.
- How does a pass-through MCP server work with the Coupa procurement API?
- An MCP server for Coupa acts as a stateless proxy. When an AI agent or application calls a tool like list_all_coupa_purchase_orders, the server loads the encrypted OAuth token, forwards the request to Coupa's API, transforms the response in memory, and returns it directly. No Coupa procurement data is written to disk.
- What authentication does the Coupa API use?
- Coupa uses OAuth 2.0 with OpenID Connect (OIDC). API keys are deprecated. You create an OAuth2/OIDC client with a client credentials grant type and define scopes. Access tokens last 24 hours, and Coupa recommends renewal every 20 hours.
- How long should an MCP server token last for Coupa procurement operations?
- It depends on the use case. For ad-hoc AI agent queries, 1-4 hours. For contractor reviews, 24-72 hours. For automated approval workflows, 1-7 days. Always combine short-lived tokens with method restrictions (e.g., read-only) for defense in depth.
- How do you handle Coupa webhooks without storing data?
- Coupa uses callouts to send event notifications. A ZDR middleware receives the callout payload, transforms it, forwards it to your endpoint, and discards the raw payload immediately. For missed events, use periodic polling with updated_at filters and Coupa's exported flag as a reconciliation mechanism.