RPC Proxy

The RPC Proxy routes JSON-RPC requests to the appropriate backend node (write-node for transactions, read-node for queries) and provides efficient WebSocket fan-out through a broadcast hub architecture.

Features

  • Request Routing: Routes write methods to write-node, read methods to read-node
  • WebSocket Broadcast Hub: Single upstream WebSocket connection with fan-out to multiple clients
  • Efficient Message Routing: Messages are routed only to clients with matching subscriptions
  • Automatic Reconnection: Exponential backoff reconnection for upstream failures
  • Prometheus Metrics: Full observability for monitoring

Architecture

                          ┌──────────┐
                          │read-node │
                          │   WS     │
                          └────┬─────┘
                               │
                               ▼
                    ┌──────────────────┐
                    │  Broadcast Hub   │
                    │ - Single upstream│
                    │ - Subscription   │
                    │   routing        │
                    │ - Auto-reconnect │
                    └────────┬─────────┘
                             │ broadcast channel
              ┌──────────────┼──────────────┐
              ▼              ▼              ▼
         ┌────────┐    ┌────────┐    ┌────────┐
         │Client 1│    │Client 2│    │Client N│
         └────────┘    └────────┘    └────────┘

Previous Architecture (1:1 Proxy)

Before the broadcast hub, each client had a dedicated upstream connection:

  • N clients = N upstream connections to read-node
  • Same subscription data duplicated N times
  • Read-node bore the full connection load

Current Architecture (Broadcast Hub)

With the broadcast hub:

  • Single upstream WebSocket connection - Drastically reduces connection overhead on read-node
  • Efficient message routing - Messages are routed only to clients with matching subscriptions
  • Efficient fan-out - tokio::sync::broadcast channel delivers messages to all interested clients
  • Client isolation - Each client only receives messages for their subscriptions
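The fan-out pattern is plain tokio::sync::broadcast usage: the hub's upstream reader owns the single sender, and every client task holds its own receiver. The sketch below is illustrative rather than the proxy's actual code; the UpstreamMessage type, the hard-coded subscription ID, and the capacity are assumptions.

use tokio::sync::broadcast;

// Illustrative payload type; the real hub's message type may differ.
#[derive(Clone, Debug)]
struct UpstreamMessage {
    subscription_id: String,
    payload: String,
}

#[tokio::main]
async fn main() {
    // One sender, owned by the hub; the capacity is the broadcast buffer size.
    let (tx, _rx) = broadcast::channel::<UpstreamMessage>(4096);

    // Each client task gets its own receiver on the same channel and
    // filters for the subscriptions it owns.
    let mut client_rx = tx.subscribe();
    let client_task = tokio::spawn(async move {
        // The loop ends when the sender is dropped (or the client lags badly).
        while let Ok(msg) = client_rx.recv().await {
            if msg.subscription_id == "0xabc" {
                println!("deliver to client: {}", msg.payload);
            }
        }
    });

    // The single upstream reader publishes each notification once; the
    // channel hands a clone to every live receiver.
    tx.send(UpstreamMessage {
        subscription_id: "0xabc".into(),
        payload: r#"{"result":"new head"}"#.into(),
    })
    .unwrap();

    drop(tx); // closing the only sender shuts down every client loop
    client_task.await.unwrap();
}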

Configuration

# CLI flags
proxy \
  --port 8546 \
  --write-node-url http://localhost:8545 \
  --read-node-url http://localhost:8547 \
  --read-node-ws-url ws://localhost:8547/ws \
  --broadcast-capacity 4096 \
  --cors-origins "*" \
  --trust-proxy true
 
# Environment variables
PORT=8546
WRITE_NODE_URL=http://localhost:8545
READ_NODE_URL=http://localhost:8547
READ_NODE_WS_URL=ws://localhost:8547/ws
BROADCAST_CAPACITY=4096
CORS_ORIGINS=*
TRUST_PROXY=true

Configuration Options

| Option | Environment Variable | Default | Description |
|---|---|---|---|
| --port | PORT | 8546 | Port to listen on |
| --write-node-url | WRITE_NODE_URL | http://localhost:8545 | Write node URL for transactions |
| --read-node-url | READ_NODE_URL | http://localhost:8547 | Read node URL for queries |
| --read-node-ws-url | READ_NODE_WS_URL | Auto-derived | Read node WebSocket URL |
| --broadcast-capacity | BROADCAST_CAPACITY | 4096 | Broadcast channel buffer size |
| --cors-origins | CORS_ORIGINS | * | Allowed CORS origins (comma-separated) |
| --trust-proxy | TRUST_PROXY | true | Trust X-Forwarded-For headers |

Broadcast Capacity Tuning

The --broadcast-capacity setting determines how many messages the broadcast channel can buffer:

  • Too low: Slow clients may lag and miss messages
  • Too high: Higher memory usage

Recommended values:

  • Development: 1024
  • Production: 4096 - 8192
  • High-traffic: 16384
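For intuition on what lagging means, the sketch below (assuming the buffer being sized is a tokio::sync::broadcast channel) uses a deliberately tiny capacity so a receiver falls behind; the Lagged error it gets back is roughly what the lagged direction of proxy_ws_messages_total counts.

use tokio::sync::broadcast::{self, error::RecvError};

#[tokio::main]
async fn main() {
    // Capacity of 2: only the two most recent messages are retained.
    let (tx, mut rx) = broadcast::channel::<u64>(2);

    // Publish more messages than the buffer holds before the receiver reads.
    for i in 0..5 {
        tx.send(i).unwrap();
    }
    drop(tx); // no more senders, so the receiver can drain and finish

    loop {
        match rx.recv().await {
            Ok(n) => println!("received {n}"),
            // The receiver skipped `n` messages because the buffer wrapped;
            // raising the capacity makes this less likely.
            Err(RecvError::Lagged(n)) => println!("lagged, missed {n} messages"),
            Err(RecvError::Closed) => break,
        }
    }
}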

Request Routing

| Method Type | Target | Examples |
|---|---|---|
| Write | write-node | kaizen_sendTransaction |
| Read | read-node | kaizen_getAccount, kaizen_simulateTransaction |
| WebSocket | read-node (via hub) | eth_subscribe, all subscriptions |
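The routing rule can be pictured as a match on the JSON-RPC method name. The sketch below only illustrates the table above; the enum, the function, and any classification beyond the listed methods are assumptions, not the proxy's actual types.

/// Where a JSON-RPC request is forwarded.
#[derive(Debug, PartialEq)]
enum Target {
    WriteNode,
    ReadNode,
}

/// Classify a request by its `method` field.
fn route(method: &str) -> Target {
    match method {
        // State-changing calls go to the write node.
        "kaizen_sendTransaction" => Target::WriteNode,
        // Queries, simulations, and subscriptions are served by the read node.
        _ => Target::ReadNode,
    }
}

fn main() {
    assert_eq!(route("kaizen_sendTransaction"), Target::WriteNode);
    assert_eq!(route("kaizen_getAccount"), Target::ReadNode);
    assert_eq!(route("kaizen_simulateTransaction"), Target::ReadNode);
    println!("routing checks passed");
}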

Metrics

HTTP Metrics

| Metric | Description |
|---|---|
| proxy_http_requests_total | Total HTTP requests by method, target, status |
| proxy_http_request_duration_seconds | Request duration histogram |
| proxy_rpc_methods_total | Total RPC methods by method name |
| proxy_upstream_latency_seconds | Latency to upstream nodes |
| proxy_upstream_errors_total | Upstream connection errors |

WebSocket Metrics

| Metric | Description |
|---|---|
| proxy_ws_connections_active | Current active WebSocket connections |
| proxy_ws_messages_total | Total messages by direction (upstream/downstream/lagged) |

Broadcast Hub Metrics

| Metric | Description |
|---|---|
| proxy_broadcast_upstream_connected | 1.0 if connected to upstream, 0.0 otherwise |
| proxy_broadcast_messages_received | Total messages received from upstream |
| proxy_broadcast_messages_sent | Total messages broadcast to clients |
| proxy_broadcast_reconnect_attempts | Total reconnection attempts |
| proxy_broadcast_active_subscriptions | Number of active subscriptions in hub |

Subscription Handling

How It Works

  1. Subscribe Request: Client sends eth_subscribe → Hub forwards to upstream
  2. Subscription Response: Upstream returns subscription_id → Hub tracks subscription_id → client_id mapping
  3. Notifications: Upstream sends notification → Hub looks up clients for that subscription → Broadcasts to all subscribed clients
  4. Unsubscribe: Client sends eth_unsubscribe → Hub removes client from subscription
  5. Client Disconnect: Hub automatically unsubscribes orphaned subscriptions from upstream
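In wire terms, and assuming the read-node follows the standard Ethereum pub/sub message shapes, the steps above correspond to four messages. The sketch below builds them with serde_json; the request IDs and the subscription ID are made up, and the node's exact payloads may differ.

use serde_json::json;

fn main() {
    // Step 1: client -> proxy -> upstream subscribe request.
    let subscribe = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_subscribe",
        "params": ["newHeads"]
    });

    // Step 2: upstream response; the hub records this subscription_id
    // against the requesting client_id.
    let response = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "result": "0x9cef478923ff08bf67fde6c64013158d"
    });

    // Step 3: upstream notification, tagged with the subscription_id the
    // hub uses to pick the matching clients.
    let notification = json!({
        "jsonrpc": "2.0",
        "method": "eth_subscription",
        "params": {
            "subscription": "0x9cef478923ff08bf67fde6c64013158d",
            "result": { "number": "0x1b4" }
        }
    });

    // Step 4: client unsubscribe for the same subscription_id.
    let unsubscribe = json!({
        "jsonrpc": "2.0",
        "id": 2,
        "method": "eth_unsubscribe",
        "params": ["0x9cef478923ff08bf67fde6c64013158d"]
    });

    for msg in [subscribe, response, notification, unsubscribe] {
        println!("{msg}");
    }
}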

Connection Efficiency

The key benefit is the single upstream WebSocket connection:

  • All client subscriptions go through one upstream connection (not N connections for N clients)
  • Each subscription still creates its own upstream subscription ID
  • Unsubscribe requests are only sent to upstream when the last client using that subscription_id either explicitly unsubscribes or disconnects

This dramatically reduces WebSocket connection overhead on read-node.
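A minimal sketch of the bookkeeping this implies, assuming an in-memory map from subscription ID to the set of interested clients (all names are illustrative): an upstream unsubscribe is warranted only when a subscription's client set becomes empty.

use std::collections::{HashMap, HashSet};

type SubscriptionId = String;
type ClientId = u64;

/// Tracks which clients are interested in which upstream subscription.
#[derive(Default)]
struct SubscriptionTable {
    clients_by_subscription: HashMap<SubscriptionId, HashSet<ClientId>>,
}

impl SubscriptionTable {
    /// Record a client against an upstream subscription_id.
    fn subscribe(&mut self, sub: SubscriptionId, client: ClientId) {
        self.clients_by_subscription.entry(sub).or_default().insert(client);
    }

    /// Clients a notification for `sub` should be fanned out to.
    fn clients_for(&self, sub: &str) -> Vec<ClientId> {
        self.clients_by_subscription
            .get(sub)
            .map(|clients| clients.iter().copied().collect())
            .unwrap_or_default()
    }

    /// Remove a client (explicit unsubscribe or disconnect). Returns true
    /// when the subscription is orphaned and should be unsubscribed upstream.
    fn unsubscribe(&mut self, sub: &str, client: ClientId) -> bool {
        if let Some(clients) = self.clients_by_subscription.get_mut(sub) {
            clients.remove(&client);
            if clients.is_empty() {
                self.clients_by_subscription.remove(sub);
                return true;
            }
        }
        false
    }
}

fn main() {
    let mut table = SubscriptionTable::default();
    table.subscribe("0xabc".into(), 1);
    table.subscribe("0xabc".into(), 2);

    println!("deliver to clients {:?}", table.clients_for("0xabc"));

    assert!(!table.unsubscribe("0xabc", 1)); // client 2 still listening
    assert!(table.unsubscribe("0xabc", 2));  // last client gone: unsubscribe upstream
}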

Health & Endpoints

| Endpoint | Description |
|---|---|
| POST / | JSON-RPC endpoint |
| GET /health | Health check (returns "OK") |
| GET /metrics | Prometheus metrics |
| GET /ws | WebSocket endpoint |

Running

Development

cargo run -p kaizen-proxy -- \
  --port 8546 \
  --write-node-url http://localhost:8545 \
  --read-node-url http://localhost:8547

Docker

docker compose up -d proxy

Production

# Behind nginx (recommended)
proxy \
  --port 8546 \
  --write-node-url http://write-node:8545 \
  --read-node-url http://read-node:8547 \
  --broadcast-capacity 8192 \
  --cors-origins "https://app.example.com" \
  --trust-proxy true \
  --json-logs true

Troubleshooting

WebSocket Connections Not Working

  1. Check upstream connectivity:

    curl http://localhost:8547/health
    wscat -c ws://localhost:8547/ws
  2. Check broadcast hub status:

    curl -s localhost:8546/metrics | grep proxy_broadcast_upstream_connected
    # Should show 1.0

Clients Missing Messages

Check if clients are lagging:

curl -s localhost:8546/metrics | grep lagged
# proxy_ws_messages_total{direction="lagged"} should be low

If high, increase --broadcast-capacity.

High Memory Usage

Reduce broadcast capacity or check for slow clients:

curl -s localhost:8546/metrics | grep proxy_ws_connections_active

Many idle connections may indicate clients that are not closing their WebSocket connections properly.