Kafka Connectors
You can use self-managed Apache Kafka® connectors to move data into and out of Kafka. These self-managed connectors are for use with Confluent Platform. For information about fully managed connectors, see Confluent Cloud.
JDBC Source and Sink
The Kafka Connect JDBC Source connector imports data from any relational database with a JDBC driver into a Kafka topic. The Kafka Connect JDBC Sink connector exports data from Kafka topics to any relational database with a JDBC driver.
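As a minimal sketch, the following Python snippet registers a JDBC Source connector through the Kafka Connect REST API. The worker address, database URL, credentials, and column names are placeholder assumptions; substitute values for your environment.

    import requests

    # Register a JDBC Source connector with the Connect REST API.
    # Assumes a Connect worker at localhost:8083 and a placeholder
    # PostgreSQL database; adjust the connection details for your setup.
    jdbc_source = {
        "name": "jdbc-source-example",
        "config": {
            "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
            "connection.url": "jdbc:postgresql://db.example.com:5432/inventory",
            "connection.user": "connect",
            "connection.password": "connect-secret",
            "mode": "incrementing",           # poll for new rows by a strictly increasing column
            "incrementing.column.name": "id",
            "topic.prefix": "jdbc-",          # each table is written to topic "jdbc-<table>"
        },
    }

    response = requests.post("http://localhost:8083/connectors", json=jdbc_source)
    response.raise_for_status()
    print(response.json())

The JDBC Sink connector is registered the same way, using io.confluent.connect.jdbc.JdbcSinkConnector as the connector class and a topics setting naming the topics to export.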
JMS Source
The Kafka Connect JMS Source connector is used to move messages from any JMS-compliant broker into Kafka.
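An illustrative configuration sketch, assuming an ActiveMQ broker at a placeholder address; it is submitted to the Connect REST API in the same way as the JDBC example above (licensing and converter settings omitted).

    # Placeholder JMS Source configuration; adjust the JNDI factory,
    # broker URL, and destination for the JMS provider you use.
    jms_source = {
        "name": "jms-source-example",
        "config": {
            "connector.class": "io.confluent.connect.jms.JmsSourceConnector",
            "java.naming.factory.initial": "org.apache.activemq.jndi.ActiveMQInitialContextFactory",
            "java.naming.provider.url": "tcp://activemq.example.com:61616",
            "jms.destination.name": "orders",  # source JMS queue or topic
            "jms.destination.type": "queue",
            "kafka.topic": "orders",           # destination Kafka topic
        },
    }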
Elasticsearch Service Sink
The Kafka Connect Elasticsearch Service Sink connector moves data from Kafka to Elasticsearch, writing data from a Kafka topic to an Elasticsearch index.
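A sketch of the corresponding configuration, assuming an Elasticsearch endpoint at a placeholder URL; submit it to the Connect REST API as shown in the JDBC example.

    # Placeholder Elasticsearch Service Sink configuration.
    es_sink = {
        "name": "elasticsearch-sink-example",
        "config": {
            "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
            "connection.url": "http://elastic.example.com:9200",
            "topics": "page-views",   # each topic is written to an index of the same name
            "key.ignore": "true",     # derive document IDs from topic+partition+offset
        },
    }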
Amazon S3 Sink
The Kafka Connect Amazon S3 Sink connector exports data from Kafka topics to S3 objects in Avro, JSON, or Bytes format.
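For illustration, a configuration sketch that writes Avro objects to a hypothetical bucket, assuming the worker can reach S3 through its normal AWS credential chain; submit it as in the JDBC example.

    # Placeholder Amazon S3 Sink configuration writing Avro objects.
    s3_sink = {
        "name": "s3-sink-example",
        "config": {
            "connector.class": "io.confluent.connect.s3.S3SinkConnector",
            "topics": "page-views",
            "s3.bucket.name": "example-kafka-archive",  # hypothetical bucket name
            "s3.region": "us-east-1",
            "storage.class": "io.confluent.connect.s3.storage.S3Storage",
            "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
            "flush.size": "1000",                       # records per S3 object
        },
    }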
HDFS 2 Sink
The Kafka Connect HDFS 2 Sink connector allows you to export data from Apache Kafka topics to HDFS 2.x files in a variety of formats. The connector integrates with Hive to make data immediately available for querying with HiveQL.
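A sketch of a configuration with Hive integration enabled, assuming placeholder namenode and Hive metastore addresses; submit it as in the JDBC example.

    # Placeholder HDFS 2 Sink configuration with Hive integration.
    hdfs_sink = {
        "name": "hdfs2-sink-example",
        "config": {
            "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
            "topics": "page-views",
            "hdfs.url": "hdfs://namenode.example.com:8020",
            "flush.size": "1000",        # records per HDFS file
            "hive.integration": "true",  # create and update Hive tables for written data
            "hive.metastore.uris": "thrift://hive.example.com:9083",
            "schema.compatibility": "BACKWARD",  # must not be NONE when Hive integration is enabled
        },
    }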
Replicator
Replicator allows you to easily and reliably replicate topics from one Kafka cluster to another.
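As a final sketch, a Replicator configuration that copies topics matching a regex from a placeholder source cluster to the destination cluster the Connect worker runs against; submit it as in the JDBC example.

    # Placeholder Replicator configuration; broker addresses are assumptions.
    replicator = {
        "name": "replicator-example",
        "config": {
            "connector.class": "io.confluent.connect.replicator.ReplicatorSourceConnector",
            "src.kafka.bootstrap.servers": "src-broker1.example.com:9092",
            "dest.kafka.bootstrap.servers": "dest-broker1.example.com:9092",
            "topic.regex": "orders.*",  # topics to replicate
            "key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
            "value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
        },
    }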