TiCDC background and positioning (Legacy)
Why TiCDC exists (vs TiDB Binlog), what problems it solves, and where it fits in the TiDB ecosystem.
TiCDC is TiDB’s change data capture (CDC) component. It streams row-level change events from TiKV to downstream systems (MySQL / Kafka / storage / etc.) and is commonly used for:
- Disaster recovery (DR) / replica clusters
- Data integration (streaming changes into search, analytics, and real-time pipelines)

1. Why TiCDC (and not TiDB Binlog)
Historically, TiDB shipped TiDB Binlog as its replication toolchain. TiCDC was introduced to address major limitations commonly seen in TiDB Binlog deployments, such as:
- Limited scalability (e.g. single-node bottlenecks)
- Throughput constraints in common MQ setups
- Protocol/ecosystem constraints (harder to integrate with standard CDC consumers)
- Operational fragility and limited self-healing behavior
In short: TiCDC is designed as a more scalable, more maintainable CDC pipeline for modern integration/DR use cases.

2. Where TiCDC sits in the TiDB ecosystem
You can think of TiCDC as the “egress” layer between TiDB and downstream systems. In practice, downstream targets include:
- MySQL-compatible databases (TiDB / MySQL / RDS-like services)
- Message queues (Kafka, etc.) for stream processing (Flink / Spark / custom consumers)
- Custom sinks via supported protocols (Open Protocol / Canal / Maxwell / Avro, etc., depending on version/support level)
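The output protocol is chosen per changefeed via the `protocol` parameter in the sink URI. A minimal sketch, assuming a TiCDC server on `127.0.0.1:8300` and a Kafka broker on `127.0.0.1:9092` (hostnames, topic name, and parameter values here are placeholders; the `--server` flag is used in newer releases, older ones take `--pd` instead):

```shell
# Sketch: stream changes to Kafka in Canal-JSON format.
# Swap protocol=canal-json for open-protocol / avro / maxwell as your
# consumers require (availability depends on the TiCDC version).
cdc cli changefeed create \
  --server=http://127.0.0.1:8300 \
  --changefeed-id="kafka-canal-json-demo" \
  --sink-uri="kafka://127.0.0.1:9092/ticdc-demo-topic?protocol=canal-json&partition-num=3&replication-factor=1"
```

The protocol choice is consumer-driven: Canal-JSON and Maxwell suit existing MySQL-binlog-style consumers, while Open Protocol and Avro target custom or schema-registry-based pipelines.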

3. Typical pipelines
Common production patterns:
- TiDB → TiCDC → Kafka → Flink → (Warehouse / Serving / TiDB)
- TiDB → TiCDC → MySQL (for a replica / validation / cutover)
- TiDB → TiCDC → MQ / sink for search, recommendation, and real-time analytics
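The second pattern (a MySQL-compatible replica) is the simplest to stand up. A sketch of the changefeed creation, assuming a TiCDC server on `127.0.0.1:8300` and a downstream MySQL on `127.0.0.1:3306` (all endpoints and credentials are placeholders):

```shell
# Sketch: replicate into a MySQL-compatible downstream for a
# replica / validation / cutover target.
cdc cli changefeed create \
  --server=http://127.0.0.1:8300 \
  --changefeed-id="mysql-replica-demo" \
  --sink-uri="mysql://root:password@127.0.0.1:3306/"
```

TiCDC then replays upstream DML (and supported DDL) against the downstream, tracking progress as a checkpoint TSO per changefeed.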

4. Getting started (recommended path)
This page is a legacy note. For the most accurate and up-to-date configuration, use the official docs:
- TiCDC overview: https://docs.pingcap.com/tidb/stable/ticdc-overview
- Scaling TiCDC using TiUP: https://docs.pingcap.com/tidb/stable/scale-tidb-using-tiup#scale-out-ticdc-nodes
If you want a quick sanity check, validate a minimal changefeed to:
- Kafka (validate topic/partitioning and message format)
- MySQL (validate DML/DDL propagation and lag)
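For the lag part of that sanity check, the changefeed's checkpoint can be inspected from the CLI. A sketch, again assuming a TiCDC server on `127.0.0.1:8300` and the placeholder changefeed ID from above:

```shell
# List all changefeeds and their state (normal / stopped / failed).
cdc cli changefeed list --server=http://127.0.0.1:8300

# Query one changefeed in detail; the JSON output includes the
# checkpoint TSO/time, whose distance from "now" approximates replication lag.
cdc cli changefeed query \
  --server=http://127.0.0.1:8300 \
  --changefeed-id="mysql-replica-demo"
```

If the checkpoint time keeps advancing while you write to the upstream, the pipeline is healthy; a stalled checkpoint usually points at a sink error or downstream bottleneck.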

References
- Infra Meetup: Why and how we build TiCDC: https://www.bilibili.com/video/BV1HT4y1A7mh
- Infra Meetup: TiCDC ecosystem and community: https://www.bilibili.com/video/BV1bD4y1o7qU
- TiDB Binlog overview: https://docs.pingcap.com/tidb/stable/tidb-binlog-overview
- TiCDC protocol list (design doc): https://github.com/pingcap/tiflow/blob/master/docs/design/2020-11-04-ticdc-protocol-list.md