Chain Monitoring

How ChainRaven monitors smart contracts across multiple EVM chains and processes alerts

Architecture Overview

  Blockchain (Ethereum / Base)
       │
       │  eth_getLogs (topic-filtered RPC)
       ▼
  ┌──────────────────────┐
  │   Chain Worker       │  One per chain, runs in a loop
  │   (chainWorker.ts)   │  Cursor → Safe Head → Shard → Fetch → Persist
  └──────────┬───────────┘
             │  Inserts into contract_events
             │  Enqueues BullMQ jobs
             ▼
  ┌───────────────────────────────────────────────┐
  │                BullMQ (Redis)                 │
  │                                               │
  │  ┌──────────────┐  ┌───────────┐  ┌─────────┐ │
  │  │ verification │  │   alert   │  │ backstop│ │
  │  │    queue     │  │   queue   │  │  queue  │ │
  │  └──────┬───────┘  └─────┬─────┘  └────┬────┘ │
  └─────────┼────────────────┼─────────────┼──────┘
            │                │             │
            ▼                ▼             ▼
   Reads on-chain     Sends alerts   Periodic state
   storage slots      to users via   checks for
   to verify events   4 channels     silent changes

Monitored Control Events (6)

| Event | Solidity Signature | Topic Hash | Default Severity |
| --- | --- | --- | --- |
| Proxy Upgrade | Upgraded(address) | 0xbc7cd75a... | Critical |
| Beacon Upgrade | BeaconUpgraded(address) | 0x1cf3b03a... | Critical |
| Ownership Transfer | OwnershipTransferred(address,address) | 0x8be0079c... | High |
| Admin Change | AdminChanged(address,address) | 0x7e644d79... | High |
| Role Granted | RoleGranted(bytes32,address,address) | 0x2f878811... | High |
| Role Revoked | RoleRevoked(bytes32,address,address) | 0xf6391f5c... | High |

Full topic hashes are defined in CONTROL_TOPICS in worker/config.ts.

Supported Chains

| Chain | Confirmation Depth | Poll Interval | Default Shard | Max Shard |
| --- | --- | --- | --- | --- |
| Ethereum | 6 blocks | 12s | 2,000 blocks | 10,000 |
| Base | 3 blocks | 2s | 2,000 blocks | 20,000 |

Configuration: CHAIN_CONFIGS in worker/config.ts.


Worker Components

Entry Point — worker/index.ts

Startup sequence:

  1. Load environment (dotenv/config)
  2. Test Redis connection (ping)
  3. Load chains from chain table
  4. Create chain_cursors rows for any new chains
  5. Start 3 BullMQ workers (verification, alert, backstop)
  6. Schedule backstop scans for high-risk contracts
  7. Start a ChainWorker event loop per chain
  8. Start HTTP health server on port 3001
  9. Register SIGTERM/SIGINT graceful shutdown handlers

Chain Worker — worker/chainWorker.ts

Each chain gets its own ChainWorker instance running an async loop:

loop:
  1. Load cursor (last processed block from chain_cursors)
  2. Get current chain head from Alchemy
  3. Calculate safe head = current - confirmationDepth
  4. If caught up → sleep(pollIntervalMs) → retry
  5. Check for reorg (compare stored block hash)
  6. Plan next shard of blocks (adaptive sizing)
  7. Fetch logs via Alchemy getLogs (topic-filtered)
  8. On RPC failure → shrink shard, retry
  9. Normalize logs → filter by monitored addresses
  10. Persist events to contract_events (dedup by txHash+logIndex+eventType)
  11. Enqueue verification + alert jobs
  12. Update cursor with new block position
  13. Adjust shard planner based on density + latency

Log Fetcher — worker/logFetcher.ts

Calls Alchemy's getLogs with server-side topic filtering. Only the 6 control topic hashes are sent in the RPC request, so the node does the filtering before returning results. This is far more efficient than fetching all logs and filtering locally.
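A minimal sketch of the JSON-RPC body such a fetcher might build. The request shape follows the standard eth_getLogs interface; the topic hashes passed in are placeholders here, not the real values from worker/config.ts:

```typescript
// Filter shape for a standard eth_getLogs request.
interface GetLogsFilter {
  fromBlock: string;
  toBlock: string;
  topics: (string | string[] | null)[];
}

// Build a topic-filtered getLogs request for one shard of blocks.
function buildGetLogsRequest(fromBlock: number, toBlock: number, controlTopics: string[]) {
  const filter: GetLogsFilter = {
    fromBlock: "0x" + fromBlock.toString(16),
    toBlock: "0x" + toBlock.toString(16),
    // An array in topic position 0 is an OR-match, so one request covers
    // all six control events and the node filters server-side.
    topics: [controlTopics],
  };
  return { jsonrpc: "2.0", id: 1, method: "eth_getLogs", params: [filter] };
}
```
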

Address Filter — worker/addressFilter.ts

In-memory Map<address, contractId> for O(1) lookup of monitored contracts. Built from contracts with monitor_control=true that have active watchlists. Auto-refreshes every 60 seconds so newly added contracts are picked up without restarting the worker.
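A sketch of the filter's core, assuming addresses are normalized to lowercase for case-insensitive matching; class and field names here are illustrative, not the actual exports of addressFilter.ts:

```typescript
// In-memory lookup of monitored contract addresses.
class AddressFilter {
  private map = new Map<string, number>();

  // Rebuild from the current set of monitored contracts (called every 60s).
  rebuild(rows: { address: string; contractId: number }[]): void {
    const next = new Map<string, number>();
    for (const row of rows) next.set(row.address.toLowerCase(), row.contractId);
    this.map = next; // atomic swap: lookups never see a half-built map
  }

  // O(1) lookup; returns undefined for unmonitored addresses.
  lookup(address: string): number | undefined {
    return this.map.get(address.toLowerCase());
  }
}
```
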

Event Normalizer — worker/eventNormalizer.ts

Converts raw Alchemy Log objects into typed ControlEvent objects:

  • Maps topic hash → event type
  • Parses indexed/non-indexed parameters from topics and data
  • Attaches contractId, chainId, severity
  • Filters out events for non-monitored addresses
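The steps above can be sketched as a single pure function. The topic keys below are placeholders (the real hashes live in CONTROL_TOPICS in worker/config.ts), and the ControlEvent shape is an assumption based on the fields listed:

```typescript
// Minimal raw log and normalized event shapes (illustrative).
interface RawLog { address: string; topics: string[]; data: string; }
interface ControlEvent { eventType: string; contractId: number; address: string; }

// Placeholder topic-hash → event-type mapping.
const TOPIC_TO_EVENT_TYPE: Record<string, string> = {
  "0xtopic_upgraded": "PROXY_UPGRADE",
  "0xtopic_ownership": "OWNERSHIP_TRANSFER",
};

// Returns null for non-control events and non-monitored addresses.
function normalizeLog(
  log: RawLog,
  monitored: Map<string, number>, // address → contractId
): ControlEvent | null {
  const eventType = TOPIC_TO_EVENT_TYPE[log.topics[0]];
  if (!eventType) return null;                         // unknown topic
  const contractId = monitored.get(log.address.toLowerCase());
  if (contractId === undefined) return null;           // not monitored
  return { eventType, contractId, address: log.address };
}
```
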

Reorg Detector — worker/reorgDetector.ts

Stores lastBlockHash after processing each range. On the next loop iteration, fetches the block at lastProcessedBlock and compares its hash. If the hash changed, the chain was reorganized — rewinds the cursor 20 blocks and reprocesses. Dedup index on contract_events prevents duplicate event insertion.
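The resume decision can be sketched as a pure function. The 20-block rewind mirrors the text; the cursor shape and function name are illustrative:

```typescript
const REORG_REWIND_BLOCKS = 20;

interface Cursor { lastProcessedBlock: number; lastBlockHash: string | null; }

// Decide where the next loop iteration should resume: if the stored hash
// still matches the canonical chain, continue forward; otherwise a reorg
// happened, so rewind and reprocess (the dedup index absorbs duplicates).
function resumeBlock(cursor: Cursor, canonicalHashAtCursor: string): number {
  if (cursor.lastBlockHash === null || cursor.lastBlockHash === canonicalHashAtCursor) {
    return cursor.lastProcessedBlock + 1;
  }
  return Math.max(0, cursor.lastProcessedBlock - REORG_REWIND_BLOCKS);
}
```
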

Shard Planner — worker/shardPlanner.ts

Adaptive block range sizing inspired by TCP congestion control:

  • Shrinks fast: On RPC failure, halves span immediately
  • Grows slow: On success, grows 1.25x if log density is low
  • Dense mode: Halves span if >50 logs/block or RPC latency >5s
  • State persisted to chain_cursors for recovery across restarts
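The adjustment rules above can be sketched in one function. The thresholds (halve on failure, 1.25x growth, 50 logs/block, 5s latency) come from the text; the clamp bounds and parameter names are assumptions:

```typescript
// Compute the next shard span from the outcome of the last fetch.
function nextShardSpan(opts: {
  span: number;
  failed: boolean;
  logsPerBlock: number;
  rpcLatencyMs: number;
  maxSpan: number;
}): number {
  const { span, failed, logsPerBlock, rpcLatencyMs, maxSpan } = opts;
  if (failed) return Math.max(1, Math.floor(span / 2));     // shrink fast
  if (logsPerBlock > 50 || rpcLatencyMs > 5000) {
    return Math.max(1, Math.floor(span / 2));               // dense mode
  }
  return Math.min(maxSpan, Math.ceil(span * 1.25));         // grow slow
}
```
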

BullMQ Queues

All queues share a single Redis connection via worker/queues/index.ts.

Verification Queue

Purpose: Post-event verification by reading on-chain storage slots.

| Setting | Value |
| --- | --- |
| Concurrency | 5 |
| Rate Limit | 10 jobs/second |
| Retries | 3 (exponential backoff, 60s initial) |

Per event type:

| Event Type | Verification Method |
| --- | --- |
| PROXY_UPGRADE | Read EIP-1967 implementation slot (0x3608...) |
| BEACON_UPGRADE | Read EIP-1967 beacon slot (0xa3f0...) |
| OWNERSHIP_TRANSFER | Call owner() or admin() |
| ADMIN_CHANGE | Read EIP-1967 admin slot (0xb531...) |
| ROLE_GRANTED/REVOKED | Verified by log presence (no extra RPC call) |

Updates contract_events.verified, verifiedAt, and verificationData.

File: worker/queues/verificationWorker.ts

Alert Queue

Purpose: Send notifications to users watching affected contracts.

| Setting | Value |
| --- | --- |
| Concurrency | 10 |
| Retries | 3 (exponential backoff, 60s initial) |

Processing flow:

  1. Find all users with active watchlists for the contract
  2. For each user, load their user_alert_preferences for this event type
  3. Skip if isActive=false or no channels enabled
  4. Check cooldown: query alert_jobs for recent sends within cooldownSeconds
  5. Calculate user-specific severity from severityConfig
  6. Load user integrations (Telegram chat ID, Discord webhook URL)
  7. Send to each enabled channel (email, telegram, discord, webhook)
  8. Record delivery in alert_jobs table

If one channel fails, other channels still attempt delivery.

File: worker/queues/alertWorker.ts

Backstop Queue

Purpose: Detect silent state changes not captured by events.

| Setting | Value |
| --- | --- |
| Concurrency | 3 |
| Schedule | Every 6 hours per high-risk contract |

Reads current on-chain implementation address, owner, and admin. Compares to the last known values from event logs. If a mismatch is found, creates a synthetic event with source: 'backstop' and enqueues an alert.
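The comparison step might look like this sketch; the state shape and function name are illustrative assumptions:

```typescript
// Last known values from event logs vs. freshly read on-chain values.
interface ControlState { implementation: string; owner: string; admin: string; }

// Return the fields that silently diverged; each entry would become a
// synthetic event with source: 'backstop' and trigger an alert.
function detectSilentChanges(known: ControlState, onchain: ControlState): string[] {
  const changed: string[] = [];
  if (known.implementation !== onchain.implementation) changed.push("implementation");
  if (known.owner !== onchain.owner) changed.push("owner");
  if (known.admin !== onchain.admin) changed.push("admin");
  return changed;
}
```
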

Scheduled at startup via scheduleBackstopScans() for contracts with risk_tier='high'.

File: worker/queues/backstopWorker.ts


Alert Pipeline

Event → Alert Flow

chainWorker detects event
  → Inserts into contract_events
  → Enqueues { contractEventId, contractId, eventType, eventData, severity, chainId }
    to the alert queue

alertWorker picks up job
  → Queries watchlists for the contract (status='active')
  → For each watcher:
      → Loads user_alert_preferences for this event type
      → Checks cooldown against alert_jobs
      → Calculates severity (user override or system default)
      → Sends to enabled channels
      → Records in alert_jobs

Notification Channels

Email (Resend)

  • Client: lib/email/client.ts
  • Templates: lib/email/templates.ts
  • Sends HTML + plain text with event details, severity badge, and dashboard link
  • Env: RESEND_API_KEY, EMAIL_FROM

Telegram (Bot API)

  • Client: lib/telegram/client.ts
  • Templates: lib/telegram/templates.ts
  • Sends MarkdownV2 formatted message with event details
  • Env: TELEGRAM_BOT_TOKEN
  • User's telegramChatId stored in user_integrations

Discord (Webhooks)

  • Client: lib/discord/client.ts
  • Templates: lib/discord/templates.ts
  • Sends rich embed with colored sidebar (severity), fields, and links
  • No env vars — webhook URL stored per-user in user_integrations

Custom Webhook (HMAC)

  • Client: lib/webhook/client.ts
  • Templates: lib/webhook/templates.ts
  • Sends JSON payload with HMAC-SHA256 signature in X-ChainRaven-Signature header
  • SSRF protection: blocks private IPs, cloud metadata endpoints
  • 10-second timeout per request
  • Webhook URL and secret stored per-user in user_alert_preferences
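The signature scheme described above can be sketched with Node's crypto module, assuming the header value is a hex-encoded HMAC-SHA256 over the raw JSON body (the exact payload encoding is an assumption):

```typescript
import { createHmac } from "node:crypto";

// Compute the signature sent in the X-ChainRaven-Signature header.
function signPayload(secret: string, rawBody: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}
```

A receiver recomputes the HMAC over the exact bytes it received and compares it to the header value, ideally with a constant-time comparison.
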

Cooldown

Prevents alert spam for the same event type. Before sending, the alert worker queries:

```sql
SELECT id FROM alert_jobs
WHERE user_id = ? AND alert_type = ? AND status = 'sent'
  AND sent_at >= NOW() - interval '? seconds'
LIMIT 1
```

If a recent alert exists within the user's cooldownSeconds threshold, the alert is skipped.
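The skip decision reduces to a simple time comparison; this sketch uses illustrative names, not the worker's actual API:

```typescript
// True if a previous alert is still within the user's cooldown window.
function inCooldown(lastSentAt: Date | null, cooldownSeconds: number, now: Date): boolean {
  if (lastSentAt === null) return false; // no prior alert → send
  return now.getTime() - lastSentAt.getTime() < cooldownSeconds * 1000;
}
```
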

Severity

Two-level system:

  1. System default per event type (defined in EVENT_TYPE_TO_DEFAULT_SEVERITY in worker/config.ts)
  2. User override via severityConfig JSONB on user_alert_preferences (e.g., { "severity": "critical" })

The user's severity feeds into risk score calculations and is displayed on the dashboard. Users can override via the Alert Preferences form.

Severity calculation: calculateSeverityForUser() in lib/alchemy/severity.ts
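The two-level resolution amounts to "user override wins, otherwise system default"; a minimal sketch, with the severity levels assumed from the table of default severities above:

```typescript
type Severity = "low" | "medium" | "high" | "critical";

// Resolve the severity for one user and event type.
function resolveSeverity(
  systemDefault: Severity,
  severityConfig: { severity?: Severity } | null, // user's JSONB override
): Severity {
  return severityConfig?.severity ?? systemDefault;
}
```
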


Key Database Tables

All tables live in the scmon schema (Supabase).

| Table | Purpose |
| --- | --- |
| chain | Supported blockchains (name, chainId, lastBlock) |
| chain_cursors | Per-chain cursor tracking (lastProcessedBlock, lastBlockHash, shardSpan) |
| contract | Monitored contracts (address, chainId, monitor_control, risk_tier) |
| contract_events | Detected events. Dedup index on (contractId, txHash, logIndex, eventType) |
| watchlists | User subscriptions to contracts (userId, contractId, status) |
| user_alert_preferences | Per-user, per-event-type: channels, severity, cooldown |
| user_integrations | User credentials: telegramChatId, discordWebhookUrl |
| alert_jobs | Delivery audit trail. Cooldown index on (userId, alertType, status, sentAt) |
| profiles | User accounts (email, name) |

Development Setup

Prerequisites

  • Node.js 20+
  • Redis — local instance or cloud (e.g., Upstash, Railway Redis)
  • PostgreSQL — Supabase project (or local Postgres with scmon schema)
  • Alchemy account — API keys for Ethereum and Base networks

Steps

```bash
# 1. Install dependencies
npm install

# 2. Configure environment
cp .env.example .env.local
# Edit .env.local and fill in all values (see Environment Variables below)

# 3. Run database migrations
npm run db:migrate

# 4. Ensure chains exist in the database
# The worker reads from the chain table at startup. If no chains exist,
# no ChainWorkers will be created. Insert them manually or via seed script:
#
#   INSERT INTO scmon.chain (name, chain_id) VALUES ('ethereum', 1);
#   INSERT INTO scmon.chain (name, chain_id) VALUES ('base', 8453);

# 5. Start the Next.js app (separate terminal)
npm run dev

# 6. Start the worker (separate terminal)
npm run worker:dev
# This uses tsx --watch and auto-restarts on file changes

# 7. Verify the worker is running
curl http://localhost:3001/health
# Should return JSON with status: "ok" and per-chain metrics
```

Dev Tips

  • The worker and Next.js app share the same database and .env.local
  • worker:dev uses tsx --watch — saves trigger auto-restart
  • The address filter refreshes every 60s, so newly added contracts are picked up automatically
  • To test alerts without waiting for real events, use the admin test endpoints:
    • POST /api/admin/testing/events — Create a test event
    • POST /api/admin/testing/alerts — Send a test alert to a channel
    • POST /api/admin/testing/monitoring — Simulate a monitoring scenario

Production Deployment

Architecture

Two separate services sharing the same codebase:

| Service | Start Command | Port |
| --- | --- | --- |
| Next.js web app | npm run start | 3000 |
| Chain monitor worker | npm run worker:start | 3001 (health only) |

Railway Setup

Web Service:

  • Build: npm install && npm run build
  • Start: npm run start
  • Port: 3000

Worker Service:

  • Build: npm install
  • Start: npm run worker:start
  • Health check: GET /health on port 3001
  • No Dockerfile needed — Railway runs from package.json

Both services need the same DATABASE_URL and app-level env vars. The worker additionally needs REDIS_URL and ALCHEMY_API_KEY_*.

Graceful Shutdown

The worker listens for SIGTERM and SIGINT (Railway sends SIGTERM during deploys):

  1. Stops all chain worker loops
  2. Closes BullMQ workers (finishes in-flight jobs)
  3. Closes queue connections and Redis
  4. Closes health server
  5. Exits cleanly

Environment Variables

Required for Both (Web + Worker)

| Variable | Description |
| --- | --- |
| DATABASE_URL | PostgreSQL connection string |
| NEXT_PUBLIC_SUPABASE_URL | Supabase project URL |
| NEXT_PUBLIC_SUPABASE_ANON_KEY | Supabase anonymous key |
| SUPABASE_SERVICE_ROLE_KEY | Supabase service role key |
| NEXT_PUBLIC_APP_URL | Application URL (used in alert links) |

Worker Only

| Variable | Description | Default |
| --- | --- | --- |
| REDIS_URL | Redis connection string | redis://localhost:6379 |
| ALCHEMY_API_KEY_ETHEREUM | Alchemy API key for Ethereum mainnet | |
| ALCHEMY_API_KEY_BASE | Alchemy API key for Base mainnet | |
| WORKER_HEALTH_PORT | Port for health HTTP endpoint | 3001 |

Alert Channels

| Variable | Description | Required |
| --- | --- | --- |
| RESEND_API_KEY | Resend API key for email delivery | Yes (for email alerts) |
| EMAIL_FROM | Sender address for alert emails | No (default: ChainRaven <alerts@chainraven.io>) |
| TELEGRAM_BOT_TOKEN | Telegram bot token (botId:token) | Only for Telegram alerts |

Discord and custom webhooks don't need env vars — they use per-user URLs stored in the database.

Security

| Variable | Description |
| --- | --- |
| CRON_SECRET | Auth token for cron endpoints |
| ADMIN_SECRET | Auth token for admin testing endpoints |

Troubleshooting

Worker starts but no events are detected

  • Check that chains exist in the chain table
  • Check that contracts have monitor_control=true and active watchlists
  • Check the health endpoint — are chain workers showing progress (block numbers advancing)?
  • Check Alchemy API key is valid for the correct network

Events detected but no alerts sent

  • Check that users have active watchlists for the contract
  • Check user_alert_preferences — is isActive=true and at least one channel enabled?
  • Check cooldown — a recent alert for the same type may be blocking
  • Check alert channel setup: email needs RESEND_API_KEY, Telegram needs TELEGRAM_BOT_TOKEN and telegramChatId

Worker crashes on startup

  • Redis connection failed — check REDIS_URL
  • Database connection failed — check DATABASE_URL
  • Missing Alchemy key — check ALCHEMY_API_KEY_ETHEREUM / ALCHEMY_API_KEY_BASE

Shard size keeps shrinking

  • High log density on that chain range — normal, the shard planner auto-adjusts
  • RPC latency too high — check Alchemy dashboard for rate limits
  • Persistent failures — check worker logs for RPC error details