Data Feeds Powered by Continuum
Queryable event streaming for blockchain data
Event sourcing for data that rewrites itself
Blockchain history can be rewritten. Continuum is the streaming layer that handles it — server-side Arrow conversion, Parquet storage, 18 normalized topic types, and atomic reorg handling before any downstream system sees stale data.
How Continuum works
Any protocol in. Any protocol out. Arrow in the middle.
The difference is where conversion happens. Regardless of ingestion protocol or input schema, Continuum converts server-side to Apache Arrow and stores as Parquet.
Any Protocol In
Any Schema
Protobuf · Avro · Apache Arrow · JSON
Server-side conversion
Apache Arrow
Stored as
Parquet + ZSTD · S3
18 normalized topics
Pre-decoded · Consistent schema · Every chain
Any Protocol Out
Any Query Engine
ClickHouse · DuckDB · Spark · Snowflake · BigQuery · Pandas · dbt
Data published via any protocol is immediately readable via all others. No ETL. No connectors. No secondary copies.
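A minimal sketch of what that means in practice: the partitions Continuum writes are readable directly with pyarrow, with no export step in between. The bucket path, topic layout, and column names below are illustrative assumptions, not the published schema.

```python
# Sketch: reading Continuum's Parquet partitions directly with pyarrow.
# Bucket path, layout, and column names are illustrative assumptions.
import pyarrow.dataset as ds

dataset = ds.dataset(
    "s3://your-bucket/continuum/erc20_transfers/",  # hypothetical path
    format="parquet",
)

# Column projection and predicate pushdown happen at the file level,
# so only the requested columns and row groups are fetched from S3.
table = dataset.to_table(
    columns=["block_number", "from_address", "to_address", "value"],
    filter=ds.field("block_number") >= 18_600_000,
)
print(table.num_rows)
```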
Why we built it
Three problems general-purpose streaming can’t solve
Every chain we added to Moralis’s infrastructure hit the same three walls. Continuum is the architecture we built to climb them.
Reorgs are not an edge case
Standard streaming tools assume history is immutable. Blockchains are not.
The wall you hit
- On Ethereum, short reorgs happen every few days. On faster chains, more frequently.
- Transactions that appeared confirmed can shift block position or disappear entirely.
- The usual workaround is to wait N blocks before trusting data. That costs latency, and the right N differs per chain and per risk tolerance.
- For custody platforms and real-time risk feeds, delayed or corrupted position data is unacceptable.
What we built instead
Continuum validates every block against its parent hash before making it readable. Blocks can be written in any order but only become visible once the unbroken hash chain reaches them. When a reorg is detected, consumers receive two feeds: the data to clean up, and the data to replace it with. No silent state corruption. No polling. No safe delay required.
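A rough sketch of the consumer side, assuming the Kafka-compatible protocol. The topic names and the block_hash field are illustrative assumptions; the actual names live in the Continuum docs.

```python
# Hedged sketch of consuming the two reorg feeds. Topic names and the
# "block_hash" field are illustrative assumptions, not the real schema.
from kafka import KafkaConsumer
import json

def remove_block(event):
    print("clean up", event["block_hash"])   # placeholder for your cleanup logic

def apply_block(event):
    print("apply", event["block_hash"])      # placeholder for your upsert logic

consumer = KafkaConsumer(
    "eth.reorg.removed",    # feed 1: the data to clean up
    "eth.reorg.replaced",   # feed 2: the data to replace it with
    bootstrap_servers="continuum:9092",
    value_deserializer=lambda v: json.loads(v),
)

for msg in consumer:
    if msg.topic == "eth.reorg.removed":
        remove_block(msg.value)
    else:
        apply_block(msg.value)
```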
Ordering across partitions is a lie
Partition-based parallelism breaks the block as the natural unit of ordering.
The wall you hit
- Partition-based brokers give ordering guarantees per partition only — not globally.
- For blockchain data, the natural unit of ordering is the block. Everything in block N happened before block N+1.
- If partitioning splits events from the same block, you must reconstruct order downstream.
- Across 40+ chains with different block times and finality models, the complexity compounds fast.
What we built instead
Continuum maintains a single globally-ordered stream per topic. One sequence of positions, with a watermark that only advances when consecutive sessions complete. Parallelism is handled via session allocation — distributing position ranges across workers without breaking the ordering guarantee. Multiple writers ingest in any order. Readers always see a gapless, ordered stream.
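The watermark rule is simple enough to sketch. This is an illustration of the invariant described above, not Continuum's implementation: completions can arrive in any order, but visibility only advances across an unbroken run of positions.

```python
# Illustrative model of the watermark invariant, not the actual engine.
import heapq

class Watermark:
    def __init__(self):
        self.position = 0   # highest position visible to readers
        self._done = []     # min-heap of completed positions beyond the watermark

    def complete(self, pos: int) -> int:
        heapq.heappush(self._done, pos)
        # Advance only while completions are consecutive; a gap blocks visibility
        while self._done and self._done[0] == self.position + 1:
            self.position = heapq.heappop(self._done)
        return self.position

wm = Watermark()
print(wm.complete(2))   # 0 -- out-of-order write, readers see nothing new
print(wm.complete(1))   # 2 -- gap filled, both positions become visible at once
```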
Decoded data is not someone else’s problem
Without structured data at the broker layer, decoding scatters across every consumer team.
The wall you hit
- Each consumer team maintains its own ABI decoders, chain-specific parsing logic, and schema management.
- At scale, decoding logic is scattered across the codebase, duplicated across teams, inconsistently maintained.
- A token transfer on Ethereum looks nothing like one on Solana until someone makes it so.
- That someone is always you — and it needs redoing every time you add a chain, protocol, or consumer team.
What we built instead
Regardless of ingestion protocol or input schema, Continuum converts server-side to Apache Arrow and stores as Parquet. 18 normalized topic types — pre-decoded, consistent schema, every chain. A token transfer on Ethereum arrives in the same schema as one on Solana. Teams consume structured data. No ABI decoders to maintain.
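To make the guarantee concrete, here is what a shared schema looks like in Arrow terms. The field names and types below are assumptions for illustration; the published 18-topic schemas are in the documentation.

```python
# Hypothetical shape of one normalized topic. Field names and types are
# illustrative assumptions, not the published Continuum schema.
import pyarrow as pa

token_transfer = pa.schema([
    ("chain",         pa.string()),          # "eth", "solana", "bitcoin", ...
    ("block_number",  pa.int64()),
    ("tx_hash",       pa.string()),
    ("from_address",  pa.string()),
    ("to_address",    pa.string()),
    ("token_address", pa.string()),
    ("value",         pa.decimal128(38, 0)), # raw integer amount
])

print(token_transfer)  # one schema, every chain, no per-chain branches
```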
Normalized data model
18 topic types. One schema per event. Every chain.
Pre-decoded. Consistent schema regardless of chain. No ABI decoders to maintain. No chain-specific parsing logic. Structured data, ready to use.
All 18 types normalized across EVM, Bitcoin, Solana, and custom chains. Same field names. Same types. Same guarantees.
See full schema documentation
Chain coverage
One engine. Every chain type.
The same session-based engine, the same Arrow/Parquet storage model, the same five wire protocols, the same 18 topic types — regardless of chain. A new chain is a configuration change, not a project.
EVM chains
Ethereum, BNB, Polygon, Arbitrum, Optimism, Base + 17 more
Bitcoin
Mainnet. UTXO model. Same normalized schema.
Solana
Account model. Same five wire protocols.
Custom chains
Any chain via the session API. Bring your own RPC.
Stellar
Payments, issued assets, balances, and DEX activity. Ready to query.
Hyperliquid
Orders, trades, balances, and account activity in one normalized schema.
Protocol adapters
One data store. Any protocol.
Change one configuration line. Your existing producers and consumers connect immediately. No code changes. No migration.
Migration example — Kafka
Before

```properties
bootstrap.servers=kafka-broker:9092
group.id=my-consumer-group
auto.offset.reset=earliest
```

After

```properties
bootstrap.servers=continuum:9092
group.id=my-consumer-group
auto.offset.reset=earliest
```
Same producer API, same consumer groups, same SASL/TLS config, same Schema Registry compatibility.
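The same holds on the consumer side: existing client code runs unmodified. A sketch with kafka-python, where only the bootstrap server differs from a stock Kafka setup (the topic name is illustrative):

```python
# Sketch: an unmodified kafka-python consumer pointed at Continuum.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "eth.logs",                          # illustrative topic name
    bootstrap_servers="continuum:9092",  # the only line that changed
    group_id="my-consumer-group",
    auto_offset_reset="earliest",
)

for msg in consumer:
    # Offsets map to global Continuum positions
    print(msg.offset, msg.value)
```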
Kafka: Producer API, Consumer Groups, SASL/PLAIN, TLS, Schema Registry. Offsets map to global Continuum positions.
AMQP: pika, lapin, amqplib, bunny, Spring AMQP. Dead letter exchanges, publisher confirms, prefetch.
SQS: boto3, @aws-sdk/client-sqs, Go and Rust SDKs. FIFO queues, DLQ, visibility timeout.
REST: Any HTTP client. Topic management, range reads, column projection, SQL-over-partitions. No client library required.
Arrow Flight: Columnar gRPC streaming. pyarrow and Pandas consume RecordBatches natively — no deserialization, no schema translation.
Cross-protocol access
Data published via any protocol is readable via all others. Team A publishes via AMQP. Team B consumes via Kafka. Analytics queries via Arrow Flight into Pandas. Same data. No ETL.
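The Arrow Flight path, sketched with pyarrow. The endpoint, port, and ticket contents are illustrative assumptions:

```python
# Sketch: consuming RecordBatches over Arrow Flight into Pandas.
# Endpoint and ticket contents are illustrative assumptions.
import pyarrow.flight as flight

client = flight.connect("grpc://continuum:8815")
reader = client.do_get(flight.Ticket(b"eth.token_transfers"))

table = reader.read_all()   # a pyarrow.Table, no deserialization step
df = table.to_pandas()      # straight into Pandas, no schema translation
print(df.head())
```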
Head-to-head
Moralis vs the competition
The Graph, Goldsky, Envio, and Ponder operate at the application layer. Continuum is the storage and streaming layer beneath — with a normalized data model none of them provide.
The Graph: Decentralized query market + SubGraph ecosystem
Strengths
- Largest SubGraph ecosystem. Thousands of production subgraphs already deployed.
- Decentralized indexing with crypto-economic guarantees for censorship resistance.
- Battle-tested on DeFi protocols at scale since 2020.
Where Moralis wins
- Schema change in SubGraph DSL triggers a full reindex — measured in days at historical scale. Continuum stores raw positions: add a derived read topic, consume from any point, no RPC re-fetch.
- graph-node persists to PostgreSQL. Postgres on EVM-scale event data compresses 3-5x. Parquet hits 17x — a permanent structural gap that compounds every month.
- SubGraph data is accessible only through their query layer. Continuum Parquet partitions are standard files every analytics engine speaks: ClickHouse, DuckDB, Spark, BigQuery, Athena, Pandas.
- Reorg handling in graph-node is a handler concern. In Continuum, resetPosition is atomic at the storage layer: consumers lock, positions rewrite, consumers unlock. One call — before downstream sees stale data.
Goldsky: Managed subgraph hosting + Mirror streaming pipelines
Strengths
- Best-in-class managed SubGraph hosting with minimal DevOps overhead.
- Mirror: real-time pipelines into Postgres, ClickHouse, and S3 with a visual builder.
- Fast setup, good documentation, responsive support team.
Where Moralis wins
- Your data lives on Goldsky infrastructure. Leaving requires an export job. Continuum writes to your own S3 bucket from day one — no vendor custody, no migration risk.
- Mirror reorg handling is best-effort eventual consistency. Continuum's reorg is atomic: resetPosition locks consumers, rewrites positions, unlocks. No stale reads, no consistency window.
- Commodity S3 pricing after 17x Parquet compression is structurally cheaper than any managed pipeline tier — at every volume, from 1TB to petabytes.
- Goldsky Mirror does not expose raw Parquet. Continuum partitions are standard Arrow files your analytics stack already speaks.
Envio: EVM indexing framework with HyperSync backfill
Strengths
- HyperSync delivers fast EVM historical backfill — measurably faster than standard JSON-RPC polling.
- TypeScript handler model is approachable for teams from Node.js or frontend backgrounds.
- Publishes an open-source indexer benchmark — a transparency gesture worth acknowledging.
Where Moralis wins
| Indexer | Events / sec | vs Envio |
|---|---|---|
| Moralis | 106,662 / s | 10.6x faster |
| Envio | 9,984 / s | baseline |
| SQD | 1,627 / s | 5.9x slower |
| Ponder | 44 / s | 153.7x slower |
| SubQuery | 22 / s | 341.3x slower |
Rocket Pool ERC-20 Transfer Events — Ethereum Mainnet, block 18,600,000 to latest. Moralis processed every block in the range.
- Envio runs an open-source benchmark. Moralis submitted: 106,662 events/second — 10.6x faster than Envio's 9,984/s on their own test, their own hardware. Envio then published a 'Best Blockchain Indexers 2026' ranking that did not mention Moralis.
- HyperSync covers EVM chains exclusively. Continuum indexes Bitcoin, Solana, and any custom chain through the same engine, the same storage model, the same 18 topic types. No second indexer.
- Envio is an opinionated framework: TypeScript handlers, EVM-only. Continuum is a storage and streaming layer — bring your own handler language, deployment model, and query engine.
- Schema changes in Envio trigger a full reindex from RPC. Continuum stores raw positions: add a derived read topic, consume from any historical point — no RPC re-fetch.
- Reorgs in Envio require handler-level logic in application code. In Continuum, resetPosition is atomic at the storage layer — before any downstream system sees stale data.
Ponder: TypeScript-native EVM indexer for developers
Strengths
- Outstanding developer experience: TypeScript, local hot reload, type-safe schema, direct SQL access.
- Minimal boilerplate. Excellent for teams where speed to first query is the priority.
- Good choice for small-to-medium EVM projects.
Where Moralis wins
- Ponder runs in a single Node.js process. When it falls behind, you restart on a larger instance. Continuum scales horizontally by adding session workers — no downtime, no instance ceiling.
- Ponder persists to Postgres. Postgres on EVM event data compresses 3-5x. Continuum Parquet hits 17x — a permanent, compounding difference on your infrastructure bill every month.
- Ponder is EVM-only. Continuum indexes EVM, Bitcoin, Solana, and custom chains through the same session-based engine with the same normalized schema.
- Ponder schema changes require a full reindex from RPC. Continuum stores raw positions: add a derived read topic, consume from any historical position — no RPC re-fetch.
Reorgs. Ordering. Decoding.
The three problems general-purpose streaming can’t solve for blockchain data — and exactly how we solved them. Five years. 40+ chains. One architecture.
When Continuum is not the right choice
Kafka Streams / ksqlDB
Continuum speaks the Kafka wire protocol but does not run streaming topology or ksqlDB. Keep native Kafka if those are core to your stack.
Exactly-once semantics
Continuum's delivery contract is at-least-once. End-to-end exactly-once across producers, consumers, and sinks is not supported.
Managed indexing without infrastructure ownership
If you want managed subgraph hosting without owning any infrastructure, The Graph or Goldsky are purpose-built. Continuum is for teams who own their stack.
Sub-second latency requirements
If you need sub-100ms consumer latency for HFT-style workloads, Continuum's S3-native model has different characteristics than in-memory brokers. We will tell you upfront.
What problem are you solving?
Continuum is available as a managed service — bring your own S3 bucket, your own VPC, your own query engine.
Running high-throughput data pipelines?
Your pipeline works. But everything downstream needs another ETL step to make data queryable. Continuum stores as Parquet natively — ClickHouse, DuckDB, Spark, and Pandas without an export step.
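A sketch of that query path with DuckDB, assuming S3 credentials are already configured and an illustrative bucket layout:

```python
# Sketch: querying Continuum's Parquet partitions in place with DuckDB.
# Bucket path and layout are illustrative assumptions.
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")   # enables s3:// reads
con.execute("LOAD httpfs")      # credentials come from your environment

df = con.execute("""
    SELECT block_number, count(*) AS transfers
    FROM read_parquet('s3://your-bucket/continuum/erc20_transfers/*.parquet')
    GROUP BY block_number
    ORDER BY block_number DESC
    LIMIT 10
""").df()
print(df)
```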
Get a workload model
Dealing with chain reorgs or ordering?
Transactions that appeared confirmed are shifting. Materialized views are wrong. Continuum's atomic resetPosition handles reorgs at the storage layer before downstream ever sees stale data.
See how resetPosition works
Drowning in ABI decoder maintenance?
18 normalized topic types. Pre-decoded. Consistent schema across every chain. Stop maintaining ABI logic in every consumer and synchronizing it across teams every time you add a chain.
See the 18 topic types
Three protocols, three pipelines, three ETL jobs?
Team A runs Kafka. Team B uses SQS. Analytics queries Parquet on S3. One Continuum deployment serves all of them from the same data. No connectors. No secondary copies. No sync jobs.
Talk to Engineering
AI-ready context
Evaluate Continuum with AI
Drop these files into Claude, ChatGPT, or any LLM. Ask technical questions, run competitive comparisons, or explore integration scenarios — all in one conversation.
Continuum Intelligence Brief
Sales-grade LLM reference · ~24 KB · Markdown
Written specifically for LLM consumption. Covers the complete competitive landscape, performance benchmarks, objection handling scripts, ICP profiles, qualification signals, and the full 18-topic data model. Paste it into any AI and ask your evaluation questions.
What’s inside
Paste into Claude, ChatGPT, Gemini, or any LLM with a large context window.
Moralis Docs
docs.moralis.com/llms.txt
Full Moralis product documentation in LLM-ready format. All APIs, Streams, DataShare, and platform features — structured for AI consumption.
Moralis Docs (Full)
docs.moralis.com/llms-full.txt
Extended documentation including code examples, SDK references, and integration guides. Ideal for technical evaluation and implementation planning.
How to use
1. Copy the Intelligence Brief above
2. Open Claude, ChatGPT, or Gemini
3. Paste and ask your question
Example: “How does Continuum compare to Goldsky for a team running ClickHouse?”
Book a demo
See Continuum in action.
15 minutes with Moralis engineering. We walk through your specific use case and show you exactly how Continuum handles it.
- Live walkthrough of reorg handling and atomic resetPosition
- See your protocol (Kafka, AMQP, SQS, REST, Flight) connected in real time
- Realistic migration timeline for your current stack
- Storage and throughput model for your data volumes
Not ready for a demo?
Read the technical story, browse the comparison pages, or ask an AI using our LLM brief.