Custom Metrics Collection Flow
The Custom Metrics Collector Flow tracks user-defined metrics for API requests and responses, such as request size or remaining rate limits. It helps monitor API performance and usage patterns in real time, with the flexibility of histograms, gauges, and other metric types.
Once the flow is running, your metrics are written to Prometheus, which can be queried directly at localhost:9090 or visualized in Grafana at localhost:3000, under the metric names you configure (for example, rate_limit_remaining).
Scenarios
- Track API Call Sizes: Monitor API payload sizes to identify performance-impacting requests.
- Monitor Rate Limit Usage: Track remaining rate limits from responses to manage API consumption.
- Analyze Request Trends: Use custom labels to analyze traffic based on HTTP methods, URLs, or headers.
- Performance Metrics: Measure the performance of specific API endpoints to optimize response times and reduce overhead (see the filter sketch after this list).
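For instance, the filter section of the flow example below captures all URLs; to focus the collector on a single API, the url pattern can be narrowed. The host in this sketch, api.example.com, is a placeholder, not part of the shipped example:

filter:
  url: "api.example.com/*"   # Placeholder host: collect metrics only for this API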
Flow Components
This flow relies on the UserDefinedMetrics processor, attached to the global request and response streams (globalStream), to record the configured metrics.
Flow Example
In this configuration:
- API Request Metrics: The flow captures the size of each API call (api_call_size) and categorizes the data into histogram buckets. Metrics are labeled with http_method, url, and consumer_tag to allow detailed tracking (a sketch of this request-side processor follows the configuration example below).
- API Response Metrics: The flow collects the remaining rate limit from API responses using the X-Ratelimit-Remaining header, stored as a gauge metric with the url as a label.
/etc/lunar-proxy/flows/flow.yaml
name: CustomMetricsCollector

filter:
  url: "*"   # Capture metrics for all URLs

processors:
  ResponseMetrics:   # Collect custom metrics for API responses
    processor: UserDefinedMetrics
    parameters:
      - key: metric_name
        value: "rate_limit_remaining"   # Metric to track remaining rate limits
      - key: metric_type
        value: "gauge"   # Metric type (e.g., gauge)
      - key: metric_value
        value: "$[\"headers\"][\"X-Ratelimit-Remaining\"]"   # JSON path for the rate limit header
      - key: labels
        value:
          - "url"   # Label for URL
      - key: custom_metric_labels
        value:
          "x-lunar-context": "$[\"headers\"][\"x-lunar-temp-context\"]"
          "x-lunar-special-context": "$[\"headers\"][\"x-lunar-special-context\"]"

flow:
  request:
    - from:
        stream:
          name: globalStream
          at: start
      to:
        stream:
          name: globalStream
          at: end

  response:
    - from:
        stream:
          name: globalStream
          at: start
      to:
        processor:
          name: ResponseMetrics

    - from:
        processor:
          name: ResponseMetrics
      to:
        stream:
          name: globalStream
          at: end
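The example above only wires in the response-side processor. A request-side counterpart for the api_call_size histogram described earlier could follow the same parameter pattern; this is a sketch only, and the metric_value path (here the Content-Length request header) is an assumption to be adjusted to whichever field carries your payload size:

processors:
  RequestMetrics:   # Sketch only: request-side counterpart, not part of the example above
    processor: UserDefinedMetrics
    parameters:
      - key: metric_name
        value: "api_call_size"   # Histogram of API call sizes
      - key: metric_type
        value: "histogram"   # Metric type
      - key: metric_value
        value: "$[\"headers\"][\"Content-Length\"]"   # Assumed source for the request size
      - key: labels
        value:
          - "http_method"
          - "url"
          - "consumer_tag"

To take effect, such a processor would also need to be placed on the request side of the flow section, between globalStream at start and globalStream at end, mirroring how ResponseMetrics is wired on the response side.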