Caching Flow

The Caching Flow is designed to enhance API performance and reliability by caching responses to frequently accessed API requests. This reduces the load on API providers and improves response times for consumers.

Default Caching Flow Processors

The caching flow consists of two main processors:

  • Cache Read: Retrieves responses from the cache for requests that match an existing cache key.
  • Cache Write: Saves responses into the cache for future use when requests miss the cache.

Key Features

  • URL-Based Caching: Caches responses based on the request URL and query parameters.
  • Header-Based Caching: Supports caching based on both the URL and specific headers.
  • Cache Management: Differentiates between cache hits and misses, with status indicators in the response headers.
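One way to realize these features is to derive the cache key from the URL path, the (order-normalized) query parameters, and any headers chosen for header-based caching. The sketch below is an assumption about how such a key could be built, not the product's actual key format; `make_cache_key` and `vary_headers` are illustrative names.

```python
# Hypothetical cache-key construction: path + sorted query params + selected
# headers. The real key format used by the Caching Flow may differ.
from urllib.parse import parse_qsl, urlsplit

def make_cache_key(url: str, headers: dict[str, str],
                   vary_headers: tuple[str, ...] = ()) -> str:
    """Build a cache key from URL, query parameters, and chosen headers."""
    parts = urlsplit(url)
    # Sort query parameters so ?a=1&b=2 and ?b=2&a=1 share one cache entry.
    query = "&".join(f"{k}={v}" for k, v in sorted(parse_qsl(parts.query)))
    # Header-based caching: include only the headers named in vary_headers.
    header_part = "|".join(
        f"{h.lower()}={headers.get(h, '')}" for h in vary_headers
    )
    return f"{parts.path}?{query}#{header_part}"
```

With an empty `vary_headers`, two requests that differ only in header values map to the same key (URL-based caching); adding a header such as a subscription-tier header gives each tier its own cache entry. A hit/miss status indicator can then be attached to the response, for example as an `x-cache: hit` or `x-cache: miss` header (header name assumed for illustration).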

Use Cases

  • Reduced Latency: Serve responses faster by avoiding redundant API calls for frequently accessed data.
  • Cost Optimization: Reduce API call costs by minimizing requests to third-party providers.
  • Failover Support: Provide cached data when APIs are unavailable or experience downtime.
  • Header-Based Differentiation: Enable cache rules based on headers for advanced use cases like user-specific data or subscription tiers.