Caching Flow
The Caching Flow is designed to enhance API performance and reliability by caching responses to frequently accessed API requests. This reduces the load on API providers and improves response times for consumers. Caching based on the URL, query parameters, and specific headers is supported.
Scenarios
- Reduced Latency: Serve responses faster by avoiding redundant API calls for frequently accessed data.
- Cost Optimization: Reduce API call costs by minimizing requests to third-party providers.
- Failover Support: Provide cached data when APIs are unavailable or experience downtime.
- Header-Based Differentiation: Enable cache rules based on headers for advanced use cases like user-specific data or subscription tiers.
Flow Components
Flow Example
name: CachingFlow

filter:
  url: httpbin.com/*

processors:
  ReadCache:
    processor: ReadCache
    parameters:
      - key: caching_key_parts
        value:
          - $.request.headers.api_key
          - $.request.query_param.resource_id

  WriteCache:
    processor: WriteCache
    parameters:
      - key: ttl_seconds
        value: 600
      - key: record_max_size_bytes
        value: 8192
      - key: max_cache_size_mb
        value: 200
      - key: caching_key_parts
        value:
          - $.request.headers.api_key
          - $.request.query_param.resource_id

flow:
  request:
    - from:
        stream:
          name: globalStream
          at: start
      to:
        processor:
          name: ReadCache

    - from:
        processor:
          name: ReadCache
          condition: cache_miss
      to:
        stream:
          name: globalStream
          at: end

  response:
    - from:
        stream:
          name: globalStream
          at: start
      to:
        processor:
          name: WriteCache

    - from:
        processor:
          name: WriteCache
      to:
        stream:
          name: globalStream
          at: end

    - from:
        processor:
          name: ReadCache
          condition: cache_hit
      to:
        stream:
          name: globalStream
          at: end
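To make the flow's semantics concrete, here is a minimal sketch in Python of what the ReadCache/WriteCache pair does conceptually: a key is built from the configured caching_key_parts, reads return a cached body only while the TTL has not expired, and writes skip records larger than record_max_size_bytes. The class and helper names are illustrative, not part of the product's API.

```python
import time


class ResponseCache:
    """Toy in-memory cache approximating the ReadCache/WriteCache pair:
    entries expire after ttl_seconds; oversized records are not stored."""

    def __init__(self, ttl_seconds=600, record_max_size_bytes=8192):
        self.ttl = ttl_seconds
        self.max_record = record_max_size_bytes
        self.store = {}  # key -> (expires_at, body)

    @staticmethod
    def build_key(request, key_parts):
        # key_parts mirror caching_key_parts, e.g. a header and a query param
        return "|".join(str(request.get(part, "")) for part in key_parts)

    def read(self, key):
        entry = self.store.get(key)
        if entry is None or entry[0] < time.monotonic():
            return None  # cache_miss: the request continues upstream
        return entry[1]  # cache_hit: respond directly from the cache

    def write(self, key, body):
        if len(body.encode()) > self.max_record:
            return  # record exceeds record_max_size_bytes; do not cache
        self.store[key] = (time.monotonic() + self.ttl, body)


# Mirrors the two caching_key_parts from the example flow.
KEY_PARTS = ["headers.api_key", "query_param.resource_id"]
cache = ResponseCache(ttl_seconds=600, record_max_size_bytes=8192)
req = {"headers.api_key": "abc123", "query_param.resource_id": "42"}
key = ResponseCache.build_key(req, KEY_PARTS)

assert cache.read(key) is None                 # first request: cache_miss
cache.write(key, '{"resource": 42}')           # response path: WriteCache
assert cache.read(key) == '{"resource": 42}'   # repeat request: cache_hit
```

Because the key includes the api_key header, two consumers requesting the same resource_id with different keys get separate cache entries, which is how header-based differentiation works.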
Troubleshooting
Ensure that caching_key_parts is identical in the ReadCache and WriteCache processors. If the key parts differ, responses are written under one key and looked up under another, so every lookup results in a cache miss and the cache is never used.
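The failure mode can be demonstrated with a short sketch (the build_key helper is illustrative, not part of the product): when the read side omits one of the key parts the write side used, the computed keys never match.

```python
def build_key(request, key_parts):
    # Illustrative key builder mirroring caching_key_parts.
    return "|".join(str(request.get(part, "")) for part in key_parts)


write_parts = ["headers.api_key", "query_param.resource_id"]
read_parts = ["headers.api_key"]  # mismatched: resource_id is missing

request = {"headers.api_key": "abc123", "query_param.resource_id": "42"}

# WriteCache stores the response under the two-part key...
store = {build_key(request, write_parts): "cached body"}

# ...but ReadCache looks up a one-part key, so it always misses.
assert build_key(request, read_parts) not in store
```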