Analytics

2 petabytes of validated blockchain data, one API

Query full-history data from genesis across 225+ networks without running a single node. Backfill years of chain data in hours, stream new blocks in milliseconds, and pipe everything into the warehouse you already use.

2 PB+ Data indexed
225+ Networks
48.5ms P90 latency
0% Error rate
Capabilities

Built for analytics at scale

Single endpoint, 225+ networks

One POST to portal.sqd.dev returns decoded events, transactions, traces, and state diffs from any supported chain. No per-chain RPC setup, no endpoint management, no data reconciliation across providers.
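
For a sense of the shape, here is a minimal TypeScript sketch of that single POST. The dataset path, filter fields, and field selection below are illustrative assumptions rather than the exact Portal schema; consult the SQD Portal docs for the real request format.

    // Sketch only: the dataset path and query fields are assumptions for
    // illustration; check the Portal docs for the exact request schema.
    const PORTAL_URL = 'https://portal.sqd.dev/datasets/ethereum-mainnet/stream'

    async function queryTransfers(fromBlock: number, toBlock: number): Promise<string> {
      const res = await fetch(PORTAL_URL, {
        method: 'POST',
        headers: {'content-type': 'application/json'},
        body: JSON.stringify({
          fromBlock,
          toBlock,
          // ERC-20 Transfer event signature hash
          logs: [{topic0: ['0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef']}],
          fields: {
            block: {number: true, timestamp: true},
            log: {address: true, topics: true, data: true},
          },
        }),
      })
      if (!res.ok) throw new Error(`Portal query failed: ${res.status}`)
      return res.text() // response body is NDJSON: one JSON object per line
    }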

Full archival depth from genesis

Every block, every transaction, every event log from block 0 to head — on every chain. Backfill a year of Ethereum data in hours, not the 3-5 days other indexers take on high-throughput chains like Monad.

Real-time NDJSON streaming

Stream new blocks, events, and state changes as NDJSON over HTTP. No WebSocket complexity, no polling intervals. Your dashboards update in real time with the same API you use for historical queries.
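
Handling that stream takes nothing beyond the standard Web Streams API. The sketch below parses an NDJSON response incrementally as blocks arrive; the response object would come from a POST like the one shown earlier, and only the one-JSON-object-per-line format is assumed.

    // Parse an NDJSON HTTP response incrementally: split on newlines as chunks
    // arrive and yield each decoded object (e.g. a block, event, or state diff).
    async function* ndjsonObjects(res: Response): AsyncGenerator<unknown> {
      const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader()
      let buf = ''
      for (;;) {
        const {value, done} = await reader.read()
        if (done) break
        buf += value
        let nl: number
        while ((nl = buf.indexOf('\n')) >= 0) {
          const line = buf.slice(0, nl).trim()
          buf = buf.slice(nl + 1)
          if (line) yield JSON.parse(line)
        }
      }
      if (buf.trim()) yield JSON.parse(buf.trim())
    }

Each yielded object can feed a dashboard update or a warehouse insert directly.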

Direct warehouse integration

Pipe raw or transformed data directly into ClickHouse, BigQuery, Snowflake, or Postgres. The Pipes SDK outputs to Parquet/S3 for data lake architectures. No intermediate storage layer required.
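
As one deliberately simplified example, rows parsed from that NDJSON stream can be batched straight into ClickHouse over its standard HTTP interface; the host and table name below are placeholders, not part of any SQD API.

    // Insert a batch of parsed rows into ClickHouse via its HTTP interface
    // using the JSONEachRow format. Host and table name are placeholders.
    async function insertBatch(rows: Record<string, unknown>[]): Promise<void> {
      if (rows.length === 0) return
      const url = 'http://localhost:8123/?query=' +
        encodeURIComponent('INSERT INTO transfers FORMAT JSONEachRow')
      const res = await fetch(url, {
        method: 'POST',
        body: rows.map(r => JSON.stringify(r)).join('\n'),
      })
      if (!res.ok) throw new Error(`ClickHouse insert failed: ${await res.text()}`)
    }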

Use cases

What teams build with SQD

Ecosystem health dashboards with cross-chain comparisons
Whale & smart money tracking across DEXs and lending protocols
Gas price analytics and fee market modeling
Cross-chain activity benchmarking for L2 comparisons
Institutional research with validated historical datasets
Internal tooling powered by real-time on-chain feeds
The architecture difference

Why running your own nodes is a losing game

A full Ethereum archive node requires 15+ TB of storage and takes weeks to sync. Now multiply that by the 225+ networks your analytics platform needs to cover. Most teams give up after chain #3 and start making compromises — sampling data, caching stale results, or just ignoring chains that are "too hard."

SQD's network of 2,800+ worker nodes has already done the hard work: 2 petabytes of validated data stored in columnar Parquet format, optimized for the exact query patterns analytics platforms need. You get 30x faster data retrieval than direct RPC calls, with none of the infrastructure burden.

                     Without SQD    With SQD
Backfill time        3-5 days       Minutes
RPC calls/day        ~1M            ~270K
Chain setup          Per-chain      One endpoint
Data validation      None           6-step pipeline
Historical depth     Varies         Full genesis
Two paths, same data

Choose the SDK that matches your architecture

Squid SDK

Correctness-first ETL

Batch-oriented processing with automatic reorg handling, TypeORM schema migrations, and optional auto-generated GraphQL APIs. Built for dApp backends where data correctness is non-negotiable.

  • PostgreSQL, BigQuery, S3 outputs
  • Auto-generated GraphQL endpoint
  • Reorg-aware by default
  • Deploy to SQD Cloud or self-host
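
A minimal Squid SDK processor looks roughly like the sketch below. The @subsquid/evm-processor and @subsquid/typeorm-store packages are the public ones, but the gateway URL, contract address, and topic are placeholders, and method names can vary between SDK versions; treat this as a shape, not a template.

    import {EvmBatchProcessor} from '@subsquid/evm-processor'
    import {TypeormDatabase} from '@subsquid/typeorm-store'

    const processor = new EvmBatchProcessor()
      .setGateway('https://v2.archive.subsquid.io/network/ethereum-mainnet') // placeholder gateway
      .setFinalityConfirmation(75)
      .addLog({
        address: ['0x0000000000000000000000000000000000000000'], // contract to index (placeholder)
        topic0: ['0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'], // ERC-20 Transfer
      })

    processor.run(new TypeormDatabase({supportHotBlocks: true}), async ctx => {
      // The processor delivers batches and rolls back reorged blocks for you;
      // this handler only decodes logs and writes entities via ctx.store.
      for (const block of ctx.blocks) {
        for (const log of block.logs) {
          // decode `log` with a generated ABI module and upsert entities here
        }
      }
    })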
Pipes SDK

Throughput-first streaming

Composable streaming pipelines with materialized views and no ORM lock-in. Portal-native from day one, designed for data lake architectures and high-throughput analytics workloads.

  • ClickHouse, S3/Parquet, any sink
  • Materialized views built-in
  • No ORM, flexible schema
  • Lowest latency path to data
Get started

Your blockchain data infrastructure, handled.

Private Portal. Dedicated. Validated. Managed. Tell us what you're building — we'll show you what it looks like on SQD.