SDKs

Build indexers your way

Two TypeScript SDKs for blockchain data. One batteries-included framework, one composable library — both powered by Portal.

Powered by Portal →

Squid SDK vs Pipes SDK

Two paths from the same data source. One optimized for correctness, the other for throughput.

Portal (streaming)
Rust engine, 200+ chains
.setPortal() · portalSource()
Squid SDK
Indexer framework
processor.run(db, handler)
Batch-oriented ETL loop
Reorg handling
Unfinalized blocks, rollbacks
Data sinks
Postgres, BigQuery, S3, CSV
GraphQL API
Auto-gen from schema.graphql
SQD Cloud or self-host
Managed hosting, CLI deploy, or your own infra
Correctness-first
dApp backends, real-time APIs
TypeORM, migrations, schema
Portal retrofitted
EVM + Substrate processors
Status
GA · Production
Battle-tested by the biggest DeFi projects. Actively developed. Deployable to SQD Cloud or self-hosted.
Pipes SDK Beta
Composable data pipeline
source().pipe().pipe()
Web Streams, composable
Parallel named pipes
swaps, metaplex, pumpfun…
ClickHouse
S3 / Parquet
Any DB
Materialized views
Pre-aggregated, lowest latency
Your analytics / app
Dashboards, AI agents, APIs
Throughput-first
Analytics at scale, data lakes
No ORM, no schema lock-in
Portal-native from day one
Land anywhere, own your infra
Status
Beta → GA
In development since H2 2025 and now transitioning to GA. Used internally for 6+ months and adopted by major DeFi projects. Self-host only.
Same data source · different paradigm · vertical app stack vs horizontal data infrastructure

Each SDK has a sweet spot

See where each SDK excels — and where the other is the better choice.

Squid SDK generates a full GraphQL API from your schema — zero extra setup.

Squid SDK · Best fit · schema.graphql + src/main.ts

Define your schema, get a production GraphQL API automatically.

// 1. Define your schema — Squid generates the API
// schema.graphql:
// type Transfer @entity {
//   id: ID!
//   from: String!
//   to: String!
//   value: BigInt!
//   block: Int!
// }

import { EvmBatchProcessor } from '@subsquid/evm-processor'
import { TypeormDatabase } from '@subsquid/typeorm-store'
import { Transfer } from './model'  // auto-generated from schema
import * as erc20 from './abi/erc20'

const processor = new EvmBatchProcessor()
  .setPortal('https://portal.sqd.dev/datasets/ethereum-mainnet')
  .addLog({
    address: ['0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48'], // USDC
    topic0: [erc20.events.Transfer.topic],
  })

processor.run(new TypeormDatabase(), async (ctx) => {
  const transfers: Transfer[] = []
  for (const block of ctx.blocks) {
    for (const log of block.logs) {
      const { from, to, value } = erc20.events.Transfer.decode(log)
      transfers.push(new Transfer({
        id: log.id, from, to, value,
        block: block.header.height,
      }))
    }
  }
  // Insert the whole batch at once instead of one query per row
  await ctx.store.insert(transfers)
})
// GraphQL API is live at localhost:4350/graphql
// Query: { transfers(where: { from_eq: "0x..." }) { to, value } }

Compare the SDKs

Both are production-grade TypeScript tools. The right choice depends on your architecture and deployment needs.

Aspect
Squid SDK
Pipes SDK Beta
Philosophy
Batteries-included framework
Composable streaming library
Database
PostgreSQL (built-in TypeORM)
Any — PG, ClickHouse, Mongo, SQLite…
API Layer
Auto-generated GraphQL
Manual / flexible
Deployment
SQD Cloud (sqd deploy)
Self-host — Railway, Docker, any infra
Codegen
Required (sqd typegen)
No codegen — defineAbi() from JSON
Fork Handling
Automatic rollback
Automatic rollback
VM Support
EVM, Solana, Substrate, Fuel
EVM, Solana (Substrate planned)
Monitoring
Cloud dashboard
Prometheus + Pipes UI
Maturity
Production-ready
Approaching GA (Beta)

Which SDK should you use?

Answer a few questions to get a recommendation.

Do you need an auto-generated GraphQL API?
Will you use ClickHouse, MongoDB, or a custom database?
Do you want managed cloud deployment?
Do you prefer a zero-codegen setup?
Squid SDK

Batteries-included framework

Production-grade indexing with auto-generated GraphQL, built-in PostgreSQL, and managed cloud deployment.

Powered by Portal

Stream data from 200+ chains via SQD Portal with automatic real-time + historical coverage.

EvmBatchProcessor

High-performance batch processing engine. Backfill historical data at tens of thousands of blocks per second.

Auto-generated GraphQL

Define your schema, get a production-ready GraphQL API served automatically. No extra setup.

TypeORM + Migrations

Built-in PostgreSQL persistence with TypeORM entities and automatic schema migrations.

Multi-VM Support

Index data from EVM, Solana, Substrate, and Fuel virtual machines with dedicated processors.

Deploy to SQD Cloud

One-command deployment with sqd deploy. Managed infrastructure, scaling, and monitoring.

CLI Tooling

Configure, scaffold, deploy, and manage indexers from the SQD CLI.

Open-source

Fully open-source framework. Modify and extend as needed.

Pipes SDK Beta

Composable streaming library

Lightweight, flexible data pipelines with any database target and zero codegen overhead.

Powered by Portal

Same blazing-fast Portal data source. Consistent, validated data across 200+ networks.

Composable .pipe() Pattern

Chain transforms with .pipe() and write to any target with .pipeTo(). Functional, readable, testable.
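To see the paradigm without any SDK code, here is a minimal sketch using the standard Web Streams API that Pipes SDK builds on. This is plain Node 18+ with no Pipes SDK imports; the SDK's own source() and .pipe() wrap these same primitives, so treat the stage names below as illustrative stand-ins, not the real API.

```typescript
// Sketch of the .pipe()/.pipeTo() pattern with bare Web Streams (Node 18+).
async function main(): Promise<number[]> {
  // Stand-in "source": a stream of raw values (in the SDK, decoded block data)
  const source = new ReadableStream<number>({
    start(controller) {
      for (const v of [1, 2, 3, 4]) controller.enqueue(v)
      controller.close()
    },
  })

  // Each transform is a small, independently testable stage
  const decode = new TransformStream<number, { value: number }>({
    transform(v, controller) { controller.enqueue({ value: v * 10 }) },
  })
  const keepMultiplesOf20 = new TransformStream<{ value: number }, { value: number }>({
    transform(row, controller) { if (row.value % 20 === 0) controller.enqueue(row) },
  })

  // Chain the stages, then land the result in a sink (any DB in the SDK)
  const landed: number[] = []
  await source
    .pipeThrough(decode)
    .pipeThrough(keepMultiplesOf20)
    .pipeTo(new WritableStream({ write(row) { landed.push(row.value) } }))

  return landed // [20, 40]
}
```

Because every stage is a self-contained stream, stages can be swapped, reordered, and unit-tested in isolation; that is what makes the pipeline composable.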

Any Database Target

PostgreSQL, ClickHouse, MongoDB, SQLite, Parquet, or build your own custom target.

No Codegen

Use defineAbi() to get typed events directly from JSON ABIs. Zero build steps, instant type safety.
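The underlying TypeScript trick can be sketched without the SDK: an ABI declared `as const` carries enough literal type information for the compiler to derive event argument types on its own. The helpers below (SolToTs, EventArgs) are hypothetical illustrations, not defineAbi()'s actual implementation.

```typescript
// A JSON ABI fragment kept as a literal type via `as const`
const transferAbi = {
  type: 'event',
  name: 'Transfer',
  inputs: [
    { name: 'from', type: 'address' },
    { name: 'to', type: 'address' },
    { name: 'value', type: 'uint256' },
  ],
} as const

// Map a Solidity type name to a TypeScript type, at the type level only
type SolToTs<T extends string> =
  T extends 'address' ? string :
  T extends 'uint256' ? bigint :
  unknown

// Turn the ABI's inputs array into a typed args object via key remapping
type EventArgs<A extends readonly { name: string; type: string }[]> = {
  [P in A[number] as P['name']]: SolToTs<P['type']>
}

type TransferArgs = EventArgs<typeof transferAbi.inputs>
// TransferArgs = { from: string; to: string; value: bigint }, checked at compile time

const decoded: TransferArgs = { from: '0xabc', to: '0xdef', value: 1000n }
```

No generated files are involved: the types exist only in the compiler, which is why a zero-codegen setup can still be fully type-safe.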

Factory Pattern

Track dynamically created contracts (DEX pools, tokens) with built-in factory support.

Prometheus + OpenTelemetry

Production-grade observability. Export metrics to Grafana, Datadog, or any monitoring stack.

Pipes UI

Local development dashboard for monitoring pipeline progress, metrics, and debugging.

Scaffolding CLI

npx pipes init — scaffold a project with templates, custom contracts, and ready-to-run code.

Data Source

Two SDKs, one data layer

Both SDKs consume data from SQD Portal — the fastest way to access validated blockchain data across 200+ networks. No archive nodes, no rate limits.

Squid SDK ← Portal → Pipes SDK
Learn about Portal →

Get started in minutes

Pick an SDK and get a working indexer up and running in minutes.

Squid SDK
$ npx sqd init my-indexer
$ cd my-indexer && npm i
$ npx sqd deploy
Squid SDK Docs →
Pipes SDK Beta
$ npx pipes init my-pipeline
$ cd my-pipeline && npm i
$ npm run dev
Pipes SDK Docs →
Get started

Your blockchain data infrastructure, handled.

Private Portal. Dedicated. Validated. Managed. Tell us what you're building — we'll show you what it looks like on SQD.