How ChainRaven monitors smart contracts across multiple EVM chains and processes alerts
```
Blockchain (Ethereum / Base)
         │
         │  eth_getLogs (topic-filtered RPC)
         ▼
┌─────────────────────┐
│   Chain Worker      │  One per chain, runs in a loop
│  (chainWorker.ts)   │  Cursor → Safe Head → Shard → Fetch → Persist
└────────┬────────────┘
         │  Inserts into contract_events
         │  Enqueues BullMQ jobs
         ▼
┌──────────────────────────────────────────────┐
│                BullMQ (Redis)                │
│                                              │
│  ┌──────────────┐  ┌───────────┐  ┌────────┐ │
│  │ verification │  │   alert   │  │backstop│ │
│  │    queue     │  │   queue   │  │ queue  │ │
│  └──────┬───────┘  └─────┬─────┘  └───┬────┘ │
└─────────┼────────────────┼────────────┼──────┘
          │                │            │
          ▼                ▼            ▼
   Reads on-chain    Sends alerts    Periodic state
   storage slots     to users via    checks for
   to verify events  4 channels      silent changes
```
The worker watches six control-plane event signatures:

| Event | Solidity Signature | Topic Hash (truncated) | Default Severity |
|---|---|---|---|
| Proxy Upgrade | Upgraded(address) | 0xbc7cd75a... | Critical |
| Beacon Upgrade | BeaconUpgraded(address) | 0x1cf3b03a... | Critical |
| Ownership Transfer | OwnershipTransferred(address,address) | 0x8be0079c... | High |
| Admin Change | AdminChanged(address,address) | 0x7e644d79... | High |
| Role Granted | RoleGranted(bytes32,address,address) | 0x2f878811... | High |
| Role Revoked | RoleRevoked(bytes32,address,address) | 0xf6391f5c... | High |
Full topic hashes are defined in worker/config.ts → CONTROL_TOPICS.
Per-chain scan parameters:

| Chain | Confirmation Depth | Poll Interval | Default Shard | Max Shard |
|---|---|---|---|---|
| Ethereum | 6 blocks | 12s | 2,000 blocks | 10,000 |
| Base | 3 blocks | 2s | 2,000 blocks | 20,000 |
Configuration: worker/config.ts → CHAIN_CONFIGS.
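The table above maps onto a config shape roughly like the sketch below. The field names here are assumptions, not the actual `CHAIN_CONFIGS` definition; only the numbers come from the table.

```typescript
// Hypothetical shape for CHAIN_CONFIGS; field names are illustrative.
interface ChainConfig {
  chainId: number;
  confirmationDepth: number; // blocks behind head considered final
  pollIntervalMs: number;    // sleep when caught up
  defaultShardSpan: number;  // blocks per getLogs request
  maxShardSpan: number;      // adaptive planner's upper bound
}

const CHAIN_CONFIGS: Record<string, ChainConfig> = {
  ethereum: {
    chainId: 1,
    confirmationDepth: 6,
    pollIntervalMs: 12_000,
    defaultShardSpan: 2_000,
    maxShardSpan: 10_000,
  },
  base: {
    chainId: 8453,
    confirmationDepth: 3,
    pollIntervalMs: 2_000,
    defaultShardSpan: 2_000,
    maxShardSpan: 20_000,
  },
};
```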
`worker/index.ts`

Startup sequence:

1. Load environment (`dotenv/config`)
2. Verify the Redis connection (ping)
3. Load supported chains from the `chain` table
4. Create `chain_cursors` rows for any new chains
5. Start a ChainWorker event loop per chain
6. Register SIGTERM/SIGINT graceful shutdown handlers

`worker/chainWorker.ts`

Each chain gets its own ChainWorker instance running an async loop:
```
loop:
  1. Load cursor (last processed block from chain_cursors)
  2. Get current chain head from Alchemy
  3. Calculate safe head = current - confirmationDepth
  4. If caught up → sleep(pollIntervalMs) → retry
  5. Check for reorg (compare stored block hash)
  6. Plan next shard of blocks (adaptive sizing)
  7. Fetch logs via Alchemy getLogs (topic-filtered)
  8. On RPC failure → shrink shard, retry
  9. Normalize logs → filter by monitored addresses
  10. Persist events to contract_events (dedup by txHash+logIndex+eventType)
  11. Enqueue verification + alert jobs
  12. Update cursor with new block position
  13. Adjust shard planner based on density + latency
```
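Steps 1-4 and 6 of the loop reduce to simple cursor arithmetic. A sketch with illustrative names (not the actual chainWorker.ts API):

```typescript
// Plan one loop iteration from the cursor, chain head, and shard span.
interface IterationPlan {
  safeHead: number;   // newest block considered final
  caughtUp: boolean;  // true → sleep(pollIntervalMs) and retry
  fromBlock: number;  // next shard start
  toBlock: number;    // next shard end, capped at safeHead
}

function planIteration(
  lastProcessedBlock: number,
  chainHead: number,
  confirmationDepth: number,
  shardSpan: number,
): IterationPlan {
  const safeHead = chainHead - confirmationDepth;
  const fromBlock = lastProcessedBlock + 1;
  return {
    safeHead,
    caughtUp: fromBlock > safeHead,
    fromBlock,
    toBlock: Math.min(fromBlock + shardSpan - 1, safeHead),
  };
}
```

For example, a cursor at block 100 with head 120, depth 6, and span 10 yields a safe head of 114 and a next shard of blocks 101-110.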
`worker/logFetcher.ts`

Calls Alchemy's getLogs with server-side topic filtering. Only the six control topic hashes are sent in the RPC request, so the node does the filtering before returning results. This is far more efficient than fetching all logs and filtering locally.
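A sketch of what the topic-filtered request could look like. `CONTROL_TOPICS` stands in for the real list in worker/config.ts, and the truncated hash is a placeholder, not a full value:

```typescript
// Placeholder for worker/config.ts → CONTROL_TOPICS (full hashes live there).
const CONTROL_TOPICS: string[] = [
  "0xbc7cd75a...", // Upgraded(address) — truncated placeholder
];

// Build an eth_getLogs JSON-RPC request body for one shard of blocks.
function buildGetLogsRequest(fromBlock: number, toBlock: number) {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "eth_getLogs",
    params: [
      {
        fromBlock: "0x" + fromBlock.toString(16),
        toBlock: "0x" + toBlock.toString(16),
        // topics[0] as an array means "match any of these signatures",
        // so the node filters server-side before returning results.
        topics: [CONTROL_TOPICS],
      },
    ],
  };
}
```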
`worker/addressFilter.ts`

In-memory Map<address, contractId> for O(1) lookup of monitored contracts. Built from contracts with monitor_control=true that have active watchlists. Auto-refreshes every 60 seconds so newly added contracts are picked up without restarting the worker.
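A minimal sketch of that lookup structure (the class shape is illustrative; the real module also refreshes itself from the database every 60 seconds):

```typescript
// In-memory filter: lowercased address → contractId for O(1) lookups.
class AddressFilter {
  private map = new Map<string, number>();

  // Rebuild from the monitored-contract rows, then swap atomically so
  // concurrent lookups never observe a half-built map.
  rebuild(rows: { address: string; contractId: number }[]): void {
    const next = new Map<string, number>();
    for (const r of rows) next.set(r.address.toLowerCase(), r.contractId);
    this.map = next;
  }

  // Case-insensitive lookup: EVM addresses differ only by checksum casing.
  lookup(address: string): number | undefined {
    return this.map.get(address.toLowerCase());
  }
}
```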
`worker/eventNormalizer.ts`

Converts raw Alchemy Log objects into typed ControlEvent objects.
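A hedged sketch of what a normalized ControlEvent could carry. The field set is inferred from the dedup key (txHash + logIndex + eventType) and the alert job payload described later; the exact type lives in the codebase:

```typescript
// The six monitored control event types from the table above.
type ControlEventType =
  | "PROXY_UPGRADE"
  | "BEACON_UPGRADE"
  | "OWNERSHIP_TRANSFER"
  | "ADMIN_CHANGE"
  | "ROLE_GRANTED"
  | "ROLE_REVOKED";

// Illustrative normalized event; `data` holds decoded topic params
// (e.g. the new implementation address for a PROXY_UPGRADE).
interface ControlEvent {
  chainId: number;
  contractId: number;
  eventType: ControlEventType;
  txHash: string;
  logIndex: number;
  blockNumber: number;
  data: Record<string, string>;
}

const example: ControlEvent = {
  chainId: 1,
  contractId: 42,
  eventType: "PROXY_UPGRADE",
  txHash: "0xabc",
  logIndex: 0,
  blockNumber: 19_000_000,
  data: { implementation: "0xdef" },
};
```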
`worker/reorgDetector.ts`

Stores lastBlockHash after processing each range. On the next loop iteration, fetches the block at lastProcessedBlock and compares its hash. If the hash changed, the chain was reorganized: the worker rewinds the cursor 20 blocks and reprocesses. The dedup index on contract_events prevents duplicate event insertion.
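The hash comparison and rewind could be sketched as follows (function and field names are illustrative):

```typescript
// Rewind distance on reorg, per the description above.
const REORG_REWIND_BLOCKS = 20;

// Compare the stored hash of the last processed block against what the
// chain reports for that height now; rewind the cursor if they differ.
function nextCursor(
  lastProcessedBlock: number,
  storedHash: string,
  currentHashAtThatHeight: string,
): { block: number; reorged: boolean } {
  if (storedHash !== currentHashAtThatHeight) {
    // Re-inserting already-seen events is a no-op thanks to the
    // dedup index on contract_events.
    return {
      block: Math.max(0, lastProcessedBlock - REORG_REWIND_BLOCKS),
      reorged: true,
    };
  }
  return { block: lastProcessedBlock, reorged: false };
}
```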
`worker/shardPlanner.ts`

Adaptive block range sizing inspired by TCP congestion control: grow the shard while fetches succeed, shrink it on RPC failure, and persist the current span in chain_cursors for recovery across restarts.

All queues share a single Redis connection via `worker/queues/index.ts`.
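The shard planner's additive-increase/multiplicative-decrease policy can be sketched in a few lines. The function name and constants here are illustrative, not the real planner's tuning:

```typescript
// AIMD shard sizing: grow additively on success, halve on RPC failure,
// clamped between a floor and the per-chain max shard.
function nextShardSpan(
  current: number,
  ok: boolean,
  opts = { grow: 500, min: 100, max: 10_000 },
): number {
  const next = ok ? current + opts.grow : Math.floor(current / 2);
  return Math.min(opts.max, Math.max(opts.min, next));
}
```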
Purpose: Post-event verification by reading on-chain storage slots.
| Setting | Value |
|---|---|
| Concurrency | 5 |
| Rate Limit | 10 jobs/second |
| Retries | 3 (exponential backoff, 60s initial) |
Per event type:
| Event Type | Verification Method |
|---|---|
| PROXY_UPGRADE | Read EIP-1967 implementation slot (0x3608...) |
| BEACON_UPGRADE | Read EIP-1967 beacon slot (0xa3f0...) |
| OWNERSHIP_TRANSFER | Call owner() or admin() |
| ADMIN_CHANGE | Read EIP-1967 admin slot (0xb531...) |
| ROLE_GRANTED/REVOKED | Verified by log presence (no extra RPC call) |
Updates contract_events.verified, verifiedAt, and verificationData.
File: worker/queues/verificationWorker.ts
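For PROXY_UPGRADE, the verification read could look like the sketch below. The slot constant is the standard EIP-1967 implementation slot (the table above truncates it to 0x3608...); the `rpc` helper and function names are assumptions, not the worker's actual API:

```typescript
// EIP-1967 implementation slot: keccak256("eip1967.proxy.implementation") - 1.
const IMPLEMENTATION_SLOT =
  "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc";

// eth_getStorageAt returns a 32-byte word; an address is its low 20 bytes.
function addressFromStorageWord(word: string): string {
  return "0x" + word.slice(-40).toLowerCase();
}

// Read the implementation slot and compare it to the address the
// Upgraded(address) event claimed.
async function verifyProxyUpgrade(
  rpc: (method: string, params: unknown[]) => Promise<string>,
  proxy: string,
  expectedImpl: string,
): Promise<boolean> {
  const word = await rpc("eth_getStorageAt", [proxy, IMPLEMENTATION_SLOT, "latest"]);
  return addressFromStorageWord(word) === expectedImpl.toLowerCase();
}
```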
Purpose: Send notifications to users watching affected contracts.
| Setting | Value |
|---|---|
| Concurrency | 10 |
| Retries | 3 (exponential backoff, 60s initial) |
Processing flow:

1. Load user_alert_preferences for this event type
2. Skip if isActive=false or no channels are enabled
3. Check alert_jobs for recent sends within cooldownSeconds
4. Calculate severity from severityConfig (user override or system default)
5. Send to each enabled channel and record the result in the alert_jobs table

If one channel fails, the other channels still attempt delivery.
File: worker/queues/alertWorker.ts
Purpose: Detect silent state changes not captured by events.
| Setting | Value |
|---|---|
| Concurrency | 3 |
| Schedule | Every 6 hours per high-risk contract |
Reads current on-chain implementation address, owner, and admin. Compares to the last known values from event logs. If a mismatch is found, creates a synthetic event with source: 'backstop' and enqueues an alert.
Scheduled at startup via scheduleBackstopScans() for contracts with risk_tier='high'.
File: worker/queues/backstopWorker.ts
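The backstop's comparison step could be sketched as below; the field and function names are illustrative, not the worker's real types:

```typescript
// Control-plane state read on-chain vs. derived from past event logs.
interface ControlState {
  implementation?: string;
  owner?: string;
  admin?: string;
}

// Any mismatch between current on-chain state and the last known state
// would yield a synthetic event with source: 'backstop'.
function diffState(current: ControlState, lastKnown: ControlState): string[] {
  const changed: string[] = [];
  for (const key of ["implementation", "owner", "admin"] as const) {
    const a = current[key]?.toLowerCase();
    const b = lastKnown[key]?.toLowerCase();
    if (a !== undefined && b !== undefined && a !== b) changed.push(key);
  }
  return changed;
}
```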
```
chainWorker detects event
  → Inserts into contract_events
  → Enqueues { contractEventId, contractId, eventType, eventData, severity, chainId }
    to the alert queue

alertWorker picks up job
  → Queries watchlists for the contract (status='active')
  → For each watcher:
      → Loads user_alert_preferences for this event type
      → Checks cooldown against alert_jobs
      → Calculates severity (user override or system default)
      → Sends to enabled channels
      → Records in alert_jobs
```
Email (Resend)
- Client: lib/email/client.ts
- Templates: lib/email/templates.ts
- Env: RESEND_API_KEY, EMAIL_FROM

Telegram (Bot API)
- Client: lib/telegram/client.ts
- Templates: lib/telegram/templates.ts
- Env: TELEGRAM_BOT_TOKEN
- Per-user telegramChatId stored in user_integrations

Discord (Webhooks)
- Client: lib/discord/client.ts
- Templates: lib/discord/templates.ts
- Per-user webhook URL stored in user_integrations

Custom Webhook (HMAC)
- Client: lib/webhook/client.ts
- Templates: lib/webhook/templates.ts
- Payload signed via the X-ChainRaven-Signature header
- Endpoint configuration stored in user_alert_preferences

Prevents alert spam for the same event type. Before sending, the alert worker queries:
```sql
SELECT id FROM alert_jobs
WHERE user_id = ? AND alert_type = ? AND status = 'sent'
  AND sent_at >= NOW() - interval '? seconds'
LIMIT 1
```
If a recent alert exists within the user's cooldownSeconds threshold, the alert is skipped.
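For the custom webhook channel, signing with the X-ChainRaven-Signature header could look like this sketch. Hex-encoded HMAC-SHA256 over the raw body is an assumption here; lib/webhook/client.ts defines the real scheme:

```typescript
import { createHmac } from "node:crypto";

// Sign the raw request body with the user's shared secret.
function signPayload(secret: string, body: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// A receiver recomputes the signature over the raw body and compares it
// to the X-ChainRaven-Signature header value.
function verifySignature(secret: string, body: string, signature: string): boolean {
  return signPayload(secret, body) === signature;
}
```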
Two-level system:

1. System default per event type (worker/config.ts → EVENT_TYPE_TO_DEFAULT_SEVERITY)
2. Per-user override via the severityConfig JSONB on user_alert_preferences (e.g., { "severity": "critical" })

The user's severity feeds into risk score calculations and is displayed on the dashboard. Users can override via the Alert Preferences form.
Severity calculation: lib/alchemy/severity.ts → calculateSeverityForUser()
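A sketch of that two-level resolution. The default map is abbreviated to two entries from the events table, and the medium fallback for unknown event types is an assumption:

```typescript
type Severity = "low" | "medium" | "high" | "critical";

// Abbreviated stand-in for worker/config.ts → EVENT_TYPE_TO_DEFAULT_SEVERITY.
const EVENT_TYPE_TO_DEFAULT_SEVERITY: Record<string, Severity> = {
  PROXY_UPGRADE: "critical",
  OWNERSHIP_TRANSFER: "high",
};

// User severityConfig override wins; otherwise fall back to the system
// default for the event type, then to a generic "medium".
function resolveSeverity(
  eventType: string,
  severityConfig?: { severity?: Severity } | null,
): Severity {
  return (
    severityConfig?.severity ??
    EVENT_TYPE_TO_DEFAULT_SEVERITY[eventType] ??
    "medium"
  );
}
```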
All tables live in the scmon schema (Supabase).
| Table | Purpose |
|---|---|
| chain | Supported blockchains (name, chainId, lastBlock) |
| chain_cursors | Per-chain cursor tracking (lastProcessedBlock, lastBlockHash, shardSpan) |
| contract | Monitored contracts (address, chainId, monitor_control, risk_tier) |
| contract_events | Detected events; dedup index on (contractId, txHash, logIndex, eventType) |
| watchlists | User subscriptions to contracts (userId, contractId, status) |
| user_alert_preferences | Per-user, per-event-type: channels, severity, cooldown |
| user_integrations | User credentials: telegramChatId, discordWebhookUrl |
| alert_jobs | Delivery audit trail; cooldown index on (userId, alertType, status, sentAt) |
| profiles | User accounts (email, name) |
Local setup (the database must already contain the scmon schema):

```bash
# 1. Install dependencies
npm install

# 2. Configure environment
cp .env.example .env.local
# Edit .env.local and fill in all values (see Environment Variables below)

# 3. Run database migrations
npm run db:migrate

# 4. Ensure chains exist in the database
# The worker reads from the chain table at startup. If no chains exist,
# no ChainWorkers will be created. Insert them manually or via seed script:
#
#   INSERT INTO scmon.chain (name, chain_id) VALUES ('ethereum', 1);
#   INSERT INTO scmon.chain (name, chain_id) VALUES ('base', 8453);

# 5. Start the Next.js app (separate terminal)
npm run dev

# 6. Start the worker (separate terminal)
npm run worker:dev
# This uses tsx --watch and auto-restarts on file changes

# 7. Verify the worker is running
curl http://localhost:3001/health
# Should return JSON with status: "ok" and per-chain metrics
```
Development notes:
- Environment lives in .env.local
- worker:dev uses tsx --watch, so saves trigger an auto-restart

Admin testing endpoints:
- POST /api/admin/testing/events — Create a test event
- POST /api/admin/testing/alerts — Send a test alert to a channel
- POST /api/admin/testing/monitoring — Simulate a monitoring scenario

Deployment runs two separate services sharing the same codebase:
| Service | Start Command | Port |
|---|---|---|
| Next.js web app | npm run start | 3000 |
| Chain monitor worker | npm run worker:start | 3001 (health only) |
Web Service:
- Build: npm install && npm run build
- Start: npm run start

Worker Service:
- Build: npm install
- Start: npm run worker:start
- Health check: GET /health on port 3001
- Scripts defined in package.json

Both services need the same DATABASE_URL and app-level env vars. The worker additionally needs REDIS_URL and ALCHEMY_API_KEY_*.
The worker listens for SIGTERM and SIGINT and shuts down gracefully (Railway sends SIGTERM during deploys).
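A hedged sketch of the shutdown wiring; `makeShutdownHandler` and `stopAll` are illustrative names, where `stopAll` stands in for closing the ChainWorker loops, BullMQ workers, and DB/Redis connections:

```typescript
// Build a signal handler that drains once, ignoring repeated signals.
// `exit` is injectable so the behavior can be exercised without killing
// the process.
function makeShutdownHandler(
  stopAll: () => Promise<void>,
  exit: (code: number) => void = process.exit,
) {
  let shuttingDown = false;
  return async (signal: string) => {
    if (shuttingDown) return; // a second SIGTERM/SIGINT is a no-op
    shuttingDown = true;
    console.log(`received ${signal}, draining...`);
    await stopAll();
    exit(0);
  };
}

// Wiring in worker/index.ts would look roughly like:
// const onSignal = makeShutdownHandler(stopAll);
// process.on("SIGTERM", () => void onSignal("SIGTERM"));
// process.on("SIGINT", () => void onSignal("SIGINT"));
```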
Shared (web app + worker):

| Variable | Description |
|---|---|
| DATABASE_URL | PostgreSQL connection string |
| NEXT_PUBLIC_SUPABASE_URL | Supabase project URL |
| NEXT_PUBLIC_SUPABASE_ANON_KEY | Supabase anonymous key |
| SUPABASE_SERVICE_ROLE_KEY | Supabase service role key |
| NEXT_PUBLIC_APP_URL | Application URL (used in alert links) |
Worker-specific:

| Variable | Description | Default |
|---|---|---|
| REDIS_URL | Redis connection string | redis://localhost:6379 |
| ALCHEMY_API_KEY_ETHEREUM | Alchemy API key for Ethereum mainnet | — |
| ALCHEMY_API_KEY_BASE | Alchemy API key for Base mainnet | — |
| WORKER_HEALTH_PORT | Port for health HTTP endpoint | 3001 |
Alert channels:

| Variable | Description | Required |
|---|---|---|
| RESEND_API_KEY | Resend API key for email delivery | Yes (for email alerts) |
| EMAIL_FROM | Sender address for alert emails | Default: ChainRaven <alerts@chainraven.io> |
| TELEGRAM_BOT_TOKEN | Telegram bot token (botId:token) | Only for Telegram alerts |
Discord and custom webhooks don't need env vars — they use per-user URLs stored in the database.
Auth secrets:

| Variable | Description |
|---|---|
| CRON_SECRET | Auth token for cron endpoints |
| ADMIN_SECRET | Auth token for admin testing endpoints |
Worker starts but no events are detected
- Verify chains exist in the chain table
- Verify contracts have monitor_control=true and active watchlists

Events detected but no alerts sent
- Check user_alert_preferences — is isActive=true and at least one channel enabled?
- Email needs RESEND_API_KEY; Telegram needs TELEGRAM_BOT_TOKEN and telegramChatId

Worker crashes on startup
- Check REDIS_URL
- Check DATABASE_URL
- Check ALCHEMY_API_KEY_ETHEREUM / ALCHEMY_API_KEY_BASE

Shard size keeps shrinking