RPC Proxy
The RPC Proxy routes JSON-RPC requests to the appropriate backend node (write-node for transactions, read-node for queries) and provides efficient WebSocket fan-out through a broadcast hub architecture.
Features
- Request Routing: Routes write methods to write-node, read methods to read-node
- WebSocket Broadcast Hub: Single upstream WebSocket connection with fan-out to multiple clients
- Efficient Message Routing: Messages are routed only to clients with matching subscriptions
- Automatic Reconnection: Exponential backoff reconnection for upstream failures
- Prometheus Metrics: Full observability of HTTP, WebSocket, and broadcast hub activity
Architecture
```
              ┌──────────┐
              │read-node │
              │    WS    │
              └────┬─────┘
                   │
                   ▼
          ┌──────────────────┐
          │  Broadcast Hub   │
          │ - Single upstream│
          │ - Subscription   │
          │   routing        │
          │ - Auto-reconnect │
          └────────┬─────────┘
                   │ broadcast channel
    ┌──────────────┼──────────────┐
    ▼              ▼              ▼
┌────────┐     ┌────────┐     ┌────────┐
│Client 1│     │Client 2│     │Client N│
└────────┘     └────────┘     └────────┘
```
Previous Architecture (1:1 Proxy)
Before the broadcast hub, each client had a dedicated upstream connection:
- N clients = N upstream connections to read-node
- Same subscription data duplicated N times
- Read-node bore the full connection load
Current Architecture (Broadcast Hub)
With the broadcast hub:
- Single upstream WebSocket connection - Drastically reduces connection overhead on read-node
- Efficient message routing - Messages are routed only to clients with matching subscriptions
- Efficient fan-out - A `tokio::sync::broadcast` channel delivers messages to all interested clients (see the sketch below)
- Client isolation - Each client only receives messages for their subscriptions
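As a minimal sketch of that fan-out primitive (illustrative, not the proxy's actual source): the hub owns a single `broadcast::Sender`, every client task calls `subscribe()` for its own receiver, and one `send` from the upstream reader reaches all of them.

```rust
use std::time::Duration;
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    // The hub owns one sender; the capacity mirrors --broadcast-capacity.
    let (tx, _) = broadcast::channel::<String>(4096);

    // Each connected client gets an independent receiver.
    for id in 0..3 {
        let mut rx = tx.subscribe();
        tokio::spawn(async move {
            while let Ok(msg) = rx.recv().await {
                println!("client {id} got: {msg}");
            }
        });
    }

    // One message from the upstream reader fans out to every receiver.
    tx.send(r#"{"method":"eth_subscription"}"#.to_string()).unwrap();
    tokio::time::sleep(Duration::from_millis(50)).await; // let clients drain
}
```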
Configuration
```bash
# CLI flags
proxy \
  --port 8546 \
  --write-node-url http://localhost:8545 \
  --read-node-url http://localhost:8547 \
  --read-node-ws-url ws://localhost:8547/ws \
  --broadcast-capacity 4096 \
  --cors-origins "*" \
  --trust-proxy true
```
```bash
# Environment variables
PORT=8546
WRITE_NODE_URL=http://localhost:8545
READ_NODE_URL=http://localhost:8547
READ_NODE_WS_URL=ws://localhost:8547/ws
BROADCAST_CAPACITY=4096
CORS_ORIGINS=*
TRUST_PROXY=true
```
Configuration Options
| Option | Environment Variable | Default | Description |
|---|---|---|---|
| `--port` | `PORT` | `8546` | Port to listen on |
| `--write-node-url` | `WRITE_NODE_URL` | `http://localhost:8545` | Write node URL for transactions |
| `--read-node-url` | `READ_NODE_URL` | `http://localhost:8547` | Read node URL for queries |
| `--read-node-ws-url` | `READ_NODE_WS_URL` | Auto-derived | Read node WebSocket URL |
| `--broadcast-capacity` | `BROADCAST_CAPACITY` | `4096` | Broadcast channel buffer size |
| `--cors-origins` | `CORS_ORIGINS` | `*` | Allowed CORS origins (comma-separated) |
| `--trust-proxy` | `TRUST_PROXY` | `true` | Trust X-Forwarded-For headers |
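For orientation, these options could be declared with clap's derive API roughly as follows. This is a hedged sketch, not the proxy's actual code: the struct name and field layout are assumptions, and clap's `env` attribute requires its `env` cargo feature.

```rust
use clap::Parser;

/// Hypothetical CLI definition mirroring the table above.
#[derive(Parser, Debug)]
struct ProxyArgs {
    #[arg(long, env = "PORT", default_value_t = 8546)]
    port: u16,

    #[arg(long, env = "WRITE_NODE_URL", default_value = "http://localhost:8545")]
    write_node_url: String,

    #[arg(long, env = "READ_NODE_URL", default_value = "http://localhost:8547")]
    read_node_url: String,

    /// Auto-derived from the read node URL when unset.
    #[arg(long, env = "READ_NODE_WS_URL")]
    read_node_ws_url: Option<String>,

    #[arg(long, env = "BROADCAST_CAPACITY", default_value_t = 4096)]
    broadcast_capacity: usize,

    #[arg(long, env = "CORS_ORIGINS", default_value = "*")]
    cors_origins: String,

    // `--trust-proxy true` takes an explicit value, so use ArgAction::Set.
    #[arg(long, env = "TRUST_PROXY", default_value_t = true, action = clap::ArgAction::Set)]
    trust_proxy: bool,
}

fn main() {
    println!("{:?}", ProxyArgs::parse());
}
```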
Broadcast Capacity Tuning
The `--broadcast-capacity` setting determines how many messages the broadcast channel can buffer; a receiver that falls further behind than this starts losing messages (see the lag-handling sketch below):
- Too low: Slow clients may lag and miss messages
- Too high: Higher memory usage
Recommended values:
- Development: 1024
- Production: 4096-8192
- High-traffic: 16384
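When a receiver does fall behind by more than the capacity, `tokio::sync::broadcast` reports the gap as `RecvError::Lagged`. A hedged sketch of how a per-client loop might handle that (illustrative; `client_loop` is a hypothetical name):

```rust
use tokio::sync::broadcast::{self, error::RecvError};

/// Drain one client's receiver, noting messages dropped on lag.
async fn client_loop(mut rx: broadcast::Receiver<String>) {
    loop {
        match rx.recv().await {
            // Forward to the client's WebSocket here.
            Ok(msg) => println!("deliver: {msg}"),
            // The buffer wrapped: `skipped` messages were dropped for this
            // client. This is what the "lagged" message direction counts.
            Err(RecvError::Lagged(skipped)) => {
                eprintln!("client lagged, skipped {skipped} messages");
            }
            // Sender dropped: shut the client down.
            Err(RecvError::Closed) => break,
        }
    }
}

#[tokio::main]
async fn main() {
    // Deliberately tiny capacity (2) to force a lag.
    let (tx, rx) = broadcast::channel(2);
    let client = tokio::spawn(client_loop(rx));
    for i in 0..8 {
        tx.send(format!("msg {i}")).unwrap();
    }
    drop(tx); // close the channel so the loop exits
    client.await.unwrap();
}
```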
Request Routing
| Method Type | Target | Examples |
|---|---|---|
| Write | write-node | `kaizen_sendTransaction` |
| Read | read-node | `kaizen_getAccount`, `kaizen_simulateTransaction` |
| WebSocket | read-node (via hub) | `eth_subscribe`, all subscriptions |
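The routing decision reduces to a match on the method name. A hedged sketch (the `route` function and the exact method sets are illustrative; the real proxy may keep explicit allowlists):

```rust
/// Which upstream a JSON-RPC call should be forwarded to.
#[derive(Debug, PartialEq)]
enum Target {
    WriteNode,
    ReadNode,
}

/// Mutating methods go to the write node; everything else, including
/// queries and simulations, goes to the read node.
fn route(method: &str) -> Target {
    match method {
        "kaizen_sendTransaction" => Target::WriteNode,
        _ => Target::ReadNode,
    }
}

fn main() {
    assert_eq!(route("kaizen_sendTransaction"), Target::WriteNode);
    assert_eq!(route("kaizen_getAccount"), Target::ReadNode);
}
```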
Metrics
HTTP Metrics
| Metric | Description |
|---|---|
| `proxy_http_requests_total` | Total HTTP requests by method, target, status |
| `proxy_http_request_duration_seconds` | Request duration histogram |
| `proxy_rpc_methods_total` | Total RPC methods by method name |
| `proxy_upstream_latency_seconds` | Latency to upstream nodes |
| `proxy_upstream_errors_total` | Upstream connection errors |
WebSocket Metrics
| Metric | Description |
|---|---|
| `proxy_ws_connections_active` | Current active WebSocket connections |
| `proxy_ws_messages_total` | Total messages by direction (upstream/downstream/lagged) |
Broadcast Hub Metrics
| Metric | Description |
|---|---|
| `proxy_broadcast_upstream_connected` | 1.0 if connected to upstream, 0.0 otherwise |
| `proxy_broadcast_messages_received` | Total messages received from upstream |
| `proxy_broadcast_messages_sent` | Total messages broadcast to clients |
| `proxy_broadcast_reconnect_attempts` | Total reconnection attempts |
| `proxy_broadcast_active_subscriptions` | Number of active subscriptions in hub |
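For illustration, here is how two of the hub metrics could be defined with the `prometheus` crate. Assuming that crate (this doc does not name the metrics library), with names taken from the tables above:

```rust
use prometheus::{register_gauge, register_int_counter, Gauge, IntCounter, TextEncoder};

fn main() {
    // Gauge that flips between 0.0 and 1.0 with upstream connectivity.
    let connected: Gauge = register_gauge!(
        "proxy_broadcast_upstream_connected",
        "1.0 if connected to upstream, 0.0 otherwise"
    )
    .unwrap();

    // Monotonic counter for messages received from upstream.
    let received: IntCounter = register_int_counter!(
        "proxy_broadcast_messages_received",
        "Total messages received from upstream"
    )
    .unwrap();

    connected.set(1.0);
    received.inc();

    // GET /metrics would serve the default registry in this format.
    let body = TextEncoder::new().encode_to_string(&prometheus::gather()).unwrap();
    print!("{body}");
}
```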
Subscription Handling
How It Works
- Subscribe Request: Client sends `eth_subscribe` → Hub forwards to upstream
- Subscription Response: Upstream returns `subscription_id` → Hub tracks the `subscription_id → client_id` mapping (sketched below)
- Notifications: Upstream sends a notification → Hub looks up clients for that subscription → Broadcasts to all subscribed clients
- Unsubscribe: Client sends `eth_unsubscribe` → Hub removes the client from the subscription
- Client Disconnect: Hub automatically unsubscribes orphaned subscriptions from upstream
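A hedged sketch of the tracking state (the type and method names are hypothetical; the real hub's data structures will differ):

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical hub state: which clients care about which upstream subscription.
#[derive(Default)]
struct SubscriptionTable {
    /// subscription_id -> set of interested client_ids
    by_subscription: HashMap<String, HashSet<u64>>,
}

impl SubscriptionTable {
    /// Record the mapping once upstream confirms a subscription.
    fn track(&mut self, subscription_id: String, client_id: u64) {
        self.by_subscription
            .entry(subscription_id)
            .or_default()
            .insert(client_id);
    }

    /// Find the clients a notification should be delivered to.
    fn clients_for(&self, subscription_id: &str) -> Option<&HashSet<u64>> {
        self.by_subscription.get(subscription_id)
    }
}

fn main() {
    let mut table = SubscriptionTable::default();
    table.track("0xabc".to_string(), 1);
    table.track("0xabc".to_string(), 2);
    assert_eq!(table.clients_for("0xabc").map(HashSet::len), Some(2));
}
```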
Connection Efficiency
The key benefit is the single upstream WebSocket connection:
- All client subscriptions go through one upstream connection (not N connections for N clients)
- Each subscription still creates its own upstream subscription ID
- Unsubscribe requests are only sent to upstream when the last client using that `subscription_id` either explicitly unsubscribes or disconnects (see the sketch below)
This dramatically reduces WebSocket connection overhead on read-node.
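That last rule is effectively reference counting on the subscription. A hedged, self-contained sketch (`remove_client` is a hypothetical helper):

```rust
use std::collections::{HashMap, HashSet};

/// subscription_id -> interested client_ids (illustrative state).
type Table = HashMap<String, HashSet<u64>>;

/// Remove one client from a subscription. Returns true when the last
/// client is gone, i.e. when the hub should send eth_unsubscribe upstream.
fn remove_client(table: &mut Table, subscription_id: &str, client_id: u64) -> bool {
    if let Some(clients) = table.get_mut(subscription_id) {
        clients.remove(&client_id);
        if clients.is_empty() {
            table.remove(subscription_id);
            return true; // last subscriber: unsubscribe upstream
        }
    }
    false
}

fn main() {
    let mut table = Table::new();
    table.entry("0xabc".into()).or_default().extend([1u64, 2]);

    assert!(!remove_client(&mut table, "0xabc", 1)); // client 2 still listening
    assert!(remove_client(&mut table, "0xabc", 2)); // now tell upstream
}
```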
Health & Endpoints
| Endpoint | Description |
|---|---|
| `POST /` | JSON-RPC endpoint |
| `GET /health` | Health check (returns "OK") |
| `GET /metrics` | Prometheus metrics |
| `GET /ws` | WebSocket endpoint |
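A sketch of that endpoint layout, assuming an axum-style router (the framework choice and the stub handlers are assumptions, not something this doc confirms):

```rust
use axum::{
    routing::{get, post},
    Router,
};

#[tokio::main]
async fn main() {
    // Stub handlers; the real ones proxy JSON-RPC, render metrics,
    // and upgrade /ws connections into the broadcast hub.
    let app = Router::new()
        .route("/", post(|| async { "JSON-RPC response" }))
        .route("/health", get(|| async { "OK" }))
        .route("/metrics", get(|| async { "# prometheus exposition" }))
        .route("/ws", get(|| async { "WebSocket upgrade happens here" }));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8546").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```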
Running
Development
```bash
cargo run -p kaizen-proxy -- \
  --port 8546 \
  --write-node-url http://localhost:8545 \
  --read-node-url http://localhost:8547
```
Docker
```bash
docker compose up -d proxy
```
Production
```bash
# Behind nginx (recommended)
proxy \
  --port 8546 \
  --write-node-url http://write-node:8545 \
  --read-node-url http://read-node:8547 \
  --broadcast-capacity 8192 \
  --cors-origins "https://app.example.com" \
  --trust-proxy true \
  --json-logs true
```
Troubleshooting
WebSocket Connections Not Working
- Check upstream connectivity:

  ```bash
  curl http://localhost:8547/health
  wscat -c ws://localhost:8547/ws
  ```

- Check broadcast hub status:

  ```bash
  curl -s localhost:8546/metrics | grep proxy_broadcast_upstream_connected
  # Should show 1.0
  ```
Clients Missing Messages
Check if clients are lagging:
```bash
curl -s localhost:8546/metrics | grep lagged
# proxy_ws_messages_total{direction="lagged"} should be low
```
If high, increase `--broadcast-capacity`.
High Memory Usage
Reduce broadcast capacity or check for slow clients:
```bash
curl -s localhost:8546/metrics | grep proxy_ws_connections_active
```
Many idle connections may indicate clients that are not closing their WebSocket sessions properly.
