Kafka Cluster Types in Confluent Cloud

Confluent offers different types of Kafka clusters in Confluent Cloud. The cluster type you choose determines the features, capabilities, and price of the cluster. Use the information in this topic to find the cluster with the features and capabilities that best meets your needs. Use Basic clusters for experimentation and early development. For a production cluster, choose from Standard, Enterprise, or Dedicated.

The table below offers a high-level comparison of features across Kafka cluster types.

Category Feature Basic Standard Enterprise Dedicated
Kafka Kafka ACLs Yes Yes Yes Yes
  Exactly Once Semantics Yes Yes Yes Yes
  Fully-managed replica placement Yes Yes Yes Yes
  Key based compacted storage Yes Yes Yes Yes
  User interface to manage consumer lag Yes Yes Yes Yes
  Topic management Yes Yes Yes Yes
Connect Fully-Managed Connectors Yes Yes Yes Yes
  Custom Connectors Yes Yes Yes Yes
  View and consume Connect logs Yes Yes Yes Yes
Stream Processing Flink Yes Yes Yes Yes
  ksqlDB Yes Yes No Yes
Governance Stream Governance Yes Yes Yes Yes
  Stream Catalog Yes Yes Yes Yes
  Stream Lineage Yes Yes Yes Yes
Networking Public networking Yes Yes No Yes
  Private networking No No Yes Yes
Security Encryption-at-rest Yes Yes Yes Yes
  Encryption-in-transit Yes Yes Yes Yes
  Role-based Access Control (RBAC) Yes * Yes Yes Yes
  OAuth No Yes Yes Yes
  Audit logs No Yes Yes Yes
  Self-managed encryption keys No No No Yes
Other Automatic Elastic scaling Yes Yes Yes No
  Uptime Service Level Agreement (SLA) Yes Yes Yes Yes
  Stream Sharing Yes Yes No Yes
  Client Quotas No No No Yes
  Cluster Linking Yes ** Yes ** Yes †† Yes

* RBAC roles for resources within the Kafka cluster (DeveloperRead, DeveloperWrite, DeveloperManage, and ResourceOwner) are not available on Basic clusters.

Stream Sharing does not support every private networking option.

** Source only

†† Source only; depends on network type

Important

The capabilities provided in this topic are for planning purposes, and are not a guarantee of performance, which varies depending on each unique configuration.

Cluster limit comparison

Use the table below to compare cluster limits across cluster types.

Dimension Basic Standard Enterprise Dedicated
Ingress (MBps) * 250 250 600 9,120
Egress (MBps) * 750 750 1800 27,360
Partitions (pre-replication) * 4096 4096 30,000 100,000
Number of partitions you can compact * 4096 4096 3,600 100,000
Total client connections * 1000 1000 45,000 2,736,000
Connection attempts (per second) * 80 80 2500 76,000
Requests (per second) * 15,000 15,000 75,000 2,280,000
Message size (MB) 8 8 20 20
Client version (minimum) 0.11.0 0.11.0 0.11.0 0.11.0
Request size (MB) 100 100 100 100
Fetch bytes (MB) 55 55 55 55
API keys 50 100 500 2,000
Partition creation and deletion (per five minute period) 250 500 500 5,000
Connector tasks per Kafka cluster 250 250 250 250
ACLs 1,000 1,000 4,000 10,000
Kafka REST Produce v3 - Max throughput (MBps) 10 10 10 7,600
Kafka REST Produce v3 - Max connection requests (per second) 25 25 25 45,600
Kafka REST Produce v3 - Max streamed requests (per second) 1,000 1,000 1,000 456,000
Kafka REST Produce v3 - Max message size for Kafka REST Produce API (MB) 8 8 8 20
Kafka REST Admin v3 - Max connection requests (per second) 25 25 25 45,600

* Limit based on Elastic Confluent Unit for Kafka (eCKU). You only pay for the capacity you use up to the limit. For more information, see Elastic Confluent Unit for Kafka.

† Limit based on a Dedicated Kafka cluster with 152 CKU. For more information, see CKU limits per cluster and Confluent Unit for Kafka.

eCKU/CKU comparison

The table below compares limits for a single billing unit for each cluster type.

Basic, Standard, and Enterprise clusters are elastic, shrinking and expanding automatically based on load. You don’t resize these clusters (unlike Dedicated clusters). When you need more capacity, your cluster expands up to the fixed ceiling. If you’re not using capacity above the minimum, you’re not paying for it. If you’re at zero capacity, you don’t pay for anything.

Dimension Basic eCKU Standard eCKU Enterprise eCKU Dedicated CKU
Ingress (MBps) 5 25 60 60
Egress (MBps) 15 75 180 180
Partitions (pre-replication) 30 250 3,000 4,500
Number of partitions that you can compact (pre-replication) 30 250 360 4,500
Total client connections 20 1000 4,500 18,000
Connection attempts (per second) 5 50 250 500
Requests (per second) 100 1,500 7,500 15,000
Kafka REST Produce v3 - Max throughput (MBps) N/A N/A N/A 50
Kafka REST Produce v3 - Max connection requests (per second) N/A N/A N/A 300
Kafka REST Produce v3 - Max streamed requests (per second) N/A N/A N/A 3,000
Kafka REST Admin v3 - Max connection requests (per second) N/A N/A N/A 300
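As an illustrative sketch (this is a simplified model, not Confluent's published billing formula), you can reason about an elastic cluster's hourly usage as the ceiling of its most-utilized dimension, using the Basic per-eCKU capacities from the table above:

```python
import math

# Per-eCKU capacities for a Basic cluster, taken from the comparison table above.
ECKU_CAPACITY = {
    "ingress_mbps": 5,
    "egress_mbps": 15,
    "partitions": 30,
    "connections": 20,
    "connection_attempts_per_sec": 5,
    "requests_per_sec": 100,
}

def eckus_consumed(usage: dict) -> int:
    """Model hourly eCKU usage as the ceiling of the highest per-dimension
    utilization. Zero usage across all dimensions costs nothing."""
    ratios = [usage.get(dim, 0) / cap for dim, cap in ECKU_CAPACITY.items()]
    peak = max(ratios)
    return 0 if peak == 0 else math.ceil(peak)

# A workload pushing 12 MBps in while holding 400 partitions is sized by
# partitions (400 / 30 -> 14 eCKUs), not by ingress (12 / 5 -> 3 eCKUs).
demand = {"ingress_mbps": 12, "partitions": 400}
```

Under this model, the binding dimension determines capacity, which is why reducing partition count or connection churn can matter as much as reducing throughput.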

Pricing changes for Basic/Standard clusters

Beginning April 16, 2024, the pricing model for Basic and Standard clusters uses Elastic CKUs (eCKUs) instead of Base & Partitions. These changes apply only to Confluent Cloud organizations created on or after April 16, 2024. Organizations created before this date are not affected and continue to use their existing cluster pricing model and limits.

If you have any questions, please contact us by creating a Support request via the Confluent Cloud Support Portal or by reaching out to your account team.

Basic clusters

Basic Kafka clusters are designed for experimentation, early development, and basic use cases.

You can view connector events in Confluent Cloud Console with a Basic cluster, but you can’t consume events from a topic using Confluent CLI, Java, or C/C++. For more information, see View Connector Events.

Confluent uses Elastic Confluent Unit for Kafka (eCKU) to provision and bill for Basic Kafka clusters.


eCKU limits per Basic cluster

Basic clusters are elastic, shrinking and expanding automatically based on load. You don’t resize your cluster. When you need more capacity, your Basic cluster expands up to the fixed ceiling. If you’re not using capacity above the minimum, you’re not paying for it.

Basic cluster capacity:

  • Minimum: 1 eCKU *
  • Fixed ceiling: 50 eCKU

* If consumption in a given hour is zero across all billable dimensions, you pay nothing. For more information, see Elastic Confluent Unit for Kafka.

eCKU capacity guidance

The dimensions in the following table describe the capacity of a single eCKU. For more information about eCKU, see Elastic Confluent Unit for Kafka and Compare Billing Units for Kafka clusters.

Dimension eCKU capacity
Ingress 5 megabytes per second (MBps)
Egress 15 megabytes per second (MBps)
Partitions (pre-replication) 30 partitions
Total client connections 20 connections
Connection attempts 5 connection attempts per second
Requests 100 requests per second

Basic limits per cluster

Dimension Capability Additional details
Ingress Max 250 MBps

Number of bytes that can be produced to the cluster in one second.

Available in the Metrics API as received_bytes (convert from bytes to MB).

If you are self-managing Kafka, you can look at the producer outgoing-byte-rate metrics and broker kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec metrics to understand your throughput.

To reduce usage on this dimension, you can compress your messages. lz4 is recommended for compression. gzip is not recommended because it incurs high overhead on the cluster.

To achieve maximum throughput, you must locate clients in the same region as the Kafka cluster.
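To make the compression guidance concrete, here is a minimal producer configuration sketch in the style of confluent-kafka/librdkafka, with lz4 enabled. The bootstrap server and credentials are placeholders, not real endpoints:

```python
# Sketch of a producer configuration for a Confluent Cloud cluster.
# All endpoint and credential values below are hypothetical placeholders.
producer_config = {
    "bootstrap.servers": "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",       # placeholder
    "sasl.password": "<API_SECRET>",    # placeholder
    # lz4 is recommended; gzip is discouraged because it adds cluster overhead.
    "compression.type": "lz4",
}
```

With confluent-kafka installed, this dict could be passed to `Producer(producer_config)`.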

Egress Max 750 MBps

Number of bytes that can be consumed from the cluster in one second.

Available in the Metrics API as sent_bytes (convert from bytes to MB).

If you are self-managing Kafka, you can look at the consumer incoming-byte-rate metrics and broker kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec to understand your throughput.

To reduce usage on this dimension, you can compress your messages and ensure each consumer is only consuming from the topics it requires. lz4 is recommended for compression. gzip is not recommended because it incurs high overhead on the cluster.

To achieve maximum throughput, you must locate clients in the same region as the Kafka cluster.

Storage (pre-replication) Max 5 TB

Number of bytes retained on the cluster, pre-replication.

Available in the Metrics API as retained_bytes (convert from bytes to TB). The returned value is pre-replication.

You can configure policy settings retention.bytes and retention.ms at a topic level so you can control exactly how much and how long to retain data in a way that makes sense for your applications and helps control your costs.

If you are self-managing Kafka, you can look at how much disk space your cluster is using to understand your storage needs.

To reduce usage on this dimension, you can compress your messages and reduce your retention settings. lz4 is recommended for compression. gzip is not recommended because it incurs high overhead on the cluster.
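As a rough worked example of balancing retention settings against the 5 TB cap (the throughput figure is illustrative):

```python
# How many days of retention fit under the Basic cluster's 5 TB
# (pre-replication) storage cap at a steady ingress rate?
BASIC_STORAGE_TB = 5

def retention_days_that_fit(ingress_mbps: float,
                            storage_tb: float = BASIC_STORAGE_TB) -> float:
    bytes_per_day = ingress_mbps * 1_000_000 * 86_400
    return storage_tb * 1_000_000_000_000 / bytes_per_day

# At 10 MBps sustained ingress, 5 TB holds a bit under 6 days of data, so
# retention.ms would need to target roughly 5 days or less to stay under the cap.
days = retention_days_that_fit(10)
```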

Partitions (pre-replication) Max 4096

Maximum number of partitions that can exist on the cluster at one time, before replication. All topics that you create, as well as internal topics that are automatically created by Confluent Platform components (such as ksqlDB, Kafka Streams, Connect, and Control Center), count towards the cluster partition limit. The automatically created topics are prefixed with an underscore (_). Topics that are internal to Kafka itself (for example, consumer offsets) are not visible in the Cloud Console and do not count against partition limits or toward partition billing.

Available in the Metrics API as partition_count.

Attempts to create additional partitions beyond this limit will fail with an error message.

If you are self-managing Kafka, you can look at the kafka.controller:type=KafkaController,name=GlobalPartitionCount metric to understand your partition usage. Find details in the Broker section.

To reduce usage on this dimension, you can delete unused topics and create new topics with fewer partitions. You can use the Kafka Admin interface to increase the partition count of an existing topic if the initial partition count is too low.

You can compact any number of partitions up to the limit for the cluster.
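A simple way to sanity-check topic creation against the partition ceiling before the cluster rejects it (a hypothetical helper, not a Confluent API):

```python
# Pre-replication partition budget check for a Basic cluster.
BASIC_PARTITION_LIMIT = 4096

def topic_fits(current_partitions: int, new_topic_partitions: int,
               limit: int = BASIC_PARTITION_LIMIT) -> bool:
    """True if a new topic's partitions fit under the cluster limit."""
    return current_partitions + new_topic_partitions <= limit

# With 3,900 partitions in use (e.g., from the Metrics API partition_count),
# a 100-partition topic fits but a 300-partition topic would be rejected.
```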

Total client connections Max 1000

Number of TCP connections to the cluster that can be open at one time. Available in the Metrics API as active_connection_count. Filter by principal to understand how many connections each application is creating.

If you are self-managing Kafka, you can look at the broker kafka.server:type=socket-server-metrics,listener={listener_name},networkProcessor={#},name=connection-count metrics to understand how many connections you are using. This value may not have a 1:1 ratio to connections in Confluent Cloud, depending on the number of brokers, partitions, and applications in your self-managed cluster.

How many connections a cluster supports can vary widely based on several factors, including number of producer clients, number of consumer clients, partition keying strategy, produce patterns per client, and consume patterns per client.

Connection attempts Max 80 per second

Maximum number of new TCP connections to the cluster that can be created in one second. This includes both successful and unsuccessful authentication attempts.

Available in the Metrics API as successful_authentication_count (only includes successful authentications, not unsuccessful authentication attempts).

If you are self-managing Kafka, you can look at the rate of change for the kafka.server:type=socket-server-metrics,listener={listener_name},networkProcessor={#},name=connection-count metric and the Consumer connection-creation-rate metric to understand how many new connections you are creating. For details, see Broker Metrics and Global Connection Metrics.

To reduce usage on this dimension, you can use longer-lived connections to the cluster.
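To see why connection lifetime matters, a small back-of-the-envelope calculation (numbers are illustrative):

```python
# Basic clusters admit at most 80 new TCP connections per second, so a fleet
# that churns connections can hit this ceiling well before the 1000
# open-connection limit.
BASIC_CONNECTION_ATTEMPT_LIMIT = 80  # per second

def reconnect_rate(clients: int, avg_connection_lifetime_sec: float) -> float:
    """New connections per second if each client reconnects once per lifetime."""
    return clients / avg_connection_lifetime_sec

# 500 clients reconnecting every 5 seconds -> 100 attempts/sec: over the limit.
# The same 500 clients holding connections for 60 seconds -> ~8.3/sec: fine.
```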

Requests Max ~ 15,000 per second

Number of client requests to the cluster in one second.

Available in the Metrics API as request_count.

If you are self-managing Kafka, you can look at the broker kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce FetchConsumer FetchFollower} metrics and client request-rate metrics to understand your request volume.

To reduce usage on this dimension, you can adjust producer batching configurations, consumer client batching configurations, and shut down otherwise inactive clients.

Message size Max 8 MB For message size defaults at the topic level see Configuration Reference for Topics in Confluent Cloud.
Client version Minimum 0.11.0 None
Request size Max 100 MB None
Fetch bytes Max 55 MB None
API keys Default limit 50 You can request an increase. For more information, see Service Quotas for Confluent Cloud.
Partition creation and deletion Max 250 per 5 minute period

The following occurs when partition creation and deletion rate limit is reached:

  • For clients < Kafka 2.7:

    The cluster always accepts and processes all the partition creates and deletes within a request, and then throttles the connection of the client until the rate of changes is below the quota.

  • For clients >= Kafka 2.7:

    The cluster only accepts and processes partition creates and deletes up to the quota. All other partition creates and deletes in the request are rejected with a THROTTLING_QUOTA_EXCEEDED error. By default, the admin client will automatically retry on that error until default.api.timeout.ms is reached. When the automatic retry is disabled by the client, the THROTTLING_QUOTA_EXCEEDED error is immediately returned to the client.
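The quota behavior for newer clients can be sketched as follows; the single fixed window here is a simplification of the real quota accounting:

```python
# Sketch of the >= Kafka 2.7 behavior described above: within one request,
# partition creates/deletes are accepted only up to the remaining quota and
# the rest are rejected with THROTTLING_QUOTA_EXCEEDED (which the admin
# client retries by default until default.api.timeout.ms).
QUOTA_PER_WINDOW = 250  # Basic: 250 partition creates/deletes per 5 minutes

def apply_partition_changes(requested: int, used_in_window: int,
                            quota: int = QUOTA_PER_WINDOW):
    """Return (accepted, rejected) counts for one request."""
    remaining = max(0, quota - used_in_window)
    accepted = min(requested, remaining)
    rejected = requested - accepted  # rejected with THROTTLING_QUOTA_EXCEEDED
    return accepted, rejected

# 200 changes already used this window; a request for 100 more:
# 50 are accepted, 50 are rejected and left for the client to retry.
```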

Connector tasks per Kafka cluster Max 250 1 task per connector
ACLs Default limit 1,000 None
Kafka REST Produce v3

Max throughput: 10 MBps

Max connection requests: 25 connection requests per second

Max streamed requests: 1000 requests per second

Max message size for Kafka REST Produce API: 8 MB

Client applications can connect over the REST API to Produce records directly to the Confluent Cloud cluster.

To learn more, see the Kafka REST concepts overview.

The max connection requests limit is shared between Produce and Admin v3. For example, if Produce is running at 20 connection requests per second, Admin can run at five connection requests per second maximum.

To learn more about the Kafka REST Produce API streaming mode, see the examples and explanation of streaming mode in the concept docs, and the Produce example in the quick start.

A streamed request is a single record to be produced to Kafka. Concatenate the records to produce multiple records in the same request. Delivery reports are concatenated in the same order as the records are sent. For more information, see Produce Records.
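A minimal sketch of assembling a streaming-mode request body follows. The record shape mirrors the v3 produce record format described here, but confirm the exact schema against the REST API reference:

```python
import json

def build_streamed_body(values) -> str:
    """Concatenate one JSON produce record per value into a single body.
    Concatenation order determines the order of the delivery reports."""
    records = [{"value": {"type": "JSON", "data": v}} for v in values]
    return "\n".join(json.dumps(r) for r in records)

# Three records in one streamed request; this counts as three streamed
# requests against the 1,000-per-second limit.
body = build_streamed_body(["a", "b", "c"])
```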

Kafka REST Admin v3 Max connection requests: up to 25 connection requests per second This limit is shared between Produce and Admin v3. For example, if Produce is running at 20 connection requests per second, Admin can run at approximately five connection requests per second maximum.
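A tiny helper (hypothetical, for illustration only) makes the shared-budget arithmetic from the example above explicit:

```python
# Produce v3 and Admin v3 share one connection-request budget on Basic clusters.
SHARED_CONNECTION_REQUEST_LIMIT = 25  # requests per second

def admin_headroom(produce_rps: float,
                   limit: float = SHARED_CONNECTION_REQUEST_LIMIT) -> float:
    """Connection requests per second left for Admin v3 given Produce usage."""
    return max(0.0, limit - produce_rps)

# Produce running at 20 requests/sec leaves 5 requests/sec for Admin.
```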

Basic limits per partition

The partition capabilities that follow are based on benchmarking and intended as practical guidelines for planning purposes. Performance per partition will vary depending on your individual configuration, and these benchmarks do not guarantee performance.

Dimension Capability
Ingress per Partition ~5 MBps
Egress per Partition ~15 MBps
Storage per Partition Unlimited
Storage per Partition for Compacted Topics Unlimited

Cluster Linking capabilities

  • A Basic cluster can be a source cluster of a cluster link (can send data).
  • A Basic cluster cannot be a destination cluster (cannot receive data).

To learn more, see Supported cluster types in the Cluster Linking documentation.

Standard clusters

Standard clusters are designed for production-ready features and functionality.

Confluent uses Elastic Confluent Unit for Kafka (eCKU) to provision and bill for Standard Kafka clusters.


eCKU limits per Standard cluster

Standard clusters are elastic, shrinking and expanding automatically based on load. You don’t resize your cluster. When you need more capacity, your Standard cluster expands up to the fixed ceiling. If you’re not using capacity above the minimum, you’re not paying for it.

Standard cluster capacity:

  • Minimum: Depends on SLA requirements *
    • 99.9% uptime SLA: 1 eCKU minimum
    • 99.99% uptime SLA: 2 eCKU minimum
  • Fixed ceiling: 10 eCKU

Your cluster scales to meet demand or save costs, but your SLA does not change. You can upgrade a Standard cluster from a 99.9% to a 99.99% uptime SLA. Upgrading the SLA changes the minimum cluster capacity from 1 eCKU to 2 eCKUs.

* If consumption in a given hour is zero across all billable dimensions, you pay nothing. For more information, see Elastic Confluent Unit for Kafka.

eCKU capacity guidance

The dimensions in the following table describe the capacity of a single eCKU. For more information about eCKU, see Elastic Confluent Unit for Kafka and Compare Billing Units for Kafka clusters.

Dimension eCKU capacity
Ingress 25 megabytes per second (MBps)
Egress 75 megabytes per second (MBps)
Partitions (pre-replication) 250 partitions
Total client connections 1000 connections
Connection attempts 50 connection attempts per second
Requests 1,500 requests per second

Standard limits per cluster

Dimension Capability Additional details
Ingress Max 250 MBps

Number of bytes that can be produced to the cluster in one second.

Available in the Metrics API as received_bytes (convert from bytes to MB).

If you are self-managing Kafka, you can look at the producer outgoing-byte-rate metrics and broker kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec metrics to understand your throughput.

To reduce usage on this dimension, you can compress your messages. lz4 is recommended for compression. gzip is not recommended because it incurs high overhead on the cluster.

To achieve maximum throughput, you must locate clients in the same region as the Kafka cluster.

Egress Max 750 MBps

Number of bytes that can be consumed from the cluster in one second.

Available in the Metrics API as sent_bytes (convert from bytes to MB).

If you are self-managing Kafka, you can look at the consumer incoming-byte-rate metrics and broker kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec to understand your throughput.

To reduce usage on this dimension, you can compress your messages and ensure each consumer is only consuming from the topics it requires. lz4 is recommended for compression. gzip is not recommended because it incurs high overhead on the cluster.

To achieve maximum throughput, you must locate clients in the same region as the Kafka cluster.

Storage (pre-replication) Infinite

Number of bytes retained on the cluster, pre-replication. Standard Confluent Cloud clusters support Infinite Storage. This means there is no maximum size limit for the amount of data that can be stored on the cluster.

Available in the Metrics API as retained_bytes (convert from bytes to TB). The value returned is pre-replication.

You can configure policy settings retention.bytes and retention.ms at a topic level so you can control exactly how much and how long to retain data in a way that makes sense for your applications and helps control your costs. If you exceed configured maximum retention values, producers will be throttled to prevent additional writes, which will register as non-zero values for the producer client produce-throttle-time-max and produce-throttle-time-avg metrics.

If you are self-managing Kafka, you can look at how much disk space your cluster is using to understand your storage needs.

To reduce usage on this dimension, you can compress your messages and reduce your retention settings. lz4 is recommended for compression. gzip is not recommended because it incurs high overhead on the cluster.

Partitions (pre-replication) Max 4096

Maximum number of partitions that can exist on the cluster at one time, before replication. All topics that you create, as well as internal topics that are automatically created by Confluent Platform components (such as ksqlDB, Kafka Streams, Connect, and Control Center), count towards the cluster partition limit. The automatically created topics are prefixed with an underscore (_). Topics that are internal to Kafka itself (for example, consumer offsets) are not visible in the Cloud Console and do not count against partition limits or toward partition billing.

Available in the Metrics API as partition_count.

Attempts to create additional partitions beyond this limit will fail with an error message.

If you are self-managing Kafka, you can look at the kafka.controller:type=KafkaController,name=GlobalPartitionCount metric to understand your partition usage. Find details in the Broker section.

To reduce usage on this dimension, you can delete unused topics and create new topics with fewer partitions. You can use the Kafka Admin interface to increase the partition count of an existing topic if the initial partition count is too low.

You can compact any number of partitions up to the limit for the cluster.

Total client connections Max 1000

Number of TCP connections to the cluster that can be open at one time. Available in the Metrics API as active_connection_count. Filter by principal to understand how many connections each application is creating.

If you are self-managing Kafka, you can look at the broker kafka.server:type=socket-server-metrics,listener={listener_name},networkProcessor={#},name=connection-count metrics to understand how many connections you are using. This value may not have a 1:1 ratio to connections in Confluent Cloud, depending on the number of brokers, partitions, and applications in your self-managed cluster.

How many connections a cluster supports can vary widely based on several factors, including number of producer clients, number of consumer clients, partition keying strategy, produce patterns per client, and consume patterns per client.

Connection attempts Max 80 per second

Maximum number of new TCP connections to the cluster that can be created in one second. This includes both successful and unsuccessful authentication attempts.

Available in the Metrics API as successful_authentication_count (only includes successful authentications, not unsuccessful authentication attempts).

If you are self-managing Kafka, you can look at the rate of change for the kafka.server:type=socket-server-metrics,listener={listener_name},networkProcessor={#},name=connection-count metric and the Consumer connection-creation-rate metric to understand how many new connections you are creating. For details, see Broker Metrics and Global Connection Metrics.

To reduce usage on this dimension, you can use longer-lived connections to the cluster.

Requests Max ~ 15,000 per second

Number of client requests to the cluster in one second.

Available in the Metrics API as request_count.

If you are self-managing Kafka, you can look at the broker kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce FetchConsumer FetchFollower} metrics and client request-rate metrics to understand your request volume.

To reduce usage on this dimension, you can adjust producer batching configurations, consumer client batching configurations, and shut down otherwise inactive clients.

Message size Max 8 MB For message size defaults at the topic level, see Configuration Reference for Topics in Confluent Cloud.
Client version Minimum 0.11.0 None
Request size Max 100 MB None
Fetch bytes Max 55 MB None
API keys Default limit 100 You can request an increase. For more information, see Service Quotas for Confluent Cloud.
Partition creation and deletion Max 500 per 5 minute period

The following occurs when partition creation and deletion rate limit is reached:

  • For clients < Kafka 2.7:

    The cluster always accepts and processes all the partition creates and deletes within a request, and then throttles the connection of the client until the rate of changes is below the quota.

  • For clients >= Kafka 2.7:

    The cluster only accepts and processes partition creates and deletes up to the quota. All other partition creates and deletes in the request are rejected with a THROTTLING_QUOTA_EXCEEDED error. By default, the admin client will automatically retry on that error until default.api.timeout.ms is reached. When the automatic retry is disabled by the client, the THROTTLING_QUOTA_EXCEEDED error is immediately returned to the client.

Connector tasks per Kafka cluster Max 250 None
ACLs Default limit 1,000 None
Kafka REST Produce v3

Max throughput: 10 MBps

Max connection requests: 25 connection requests per second

Max streamed requests: 1000 requests per second

Max message size for Kafka REST Produce API: 8 MB

Client applications can connect over the REST API to Produce records directly to the Confluent Cloud cluster.

To learn more, see the Kafka REST concepts overview.

The max connection requests limit is shared between Produce and Admin v3. For example, if Produce is running at 20 connection requests per second, Admin can run at five connection requests per second maximum.

To learn more about the Kafka REST Produce API streaming mode, see the examples and explanation of streaming mode in the concept docs, and the Produce example in the quick start.

A streamed request is a single record to be produced to Kafka. Concatenate the records to produce multiple records in the same request. Delivery reports are concatenated in the same order as the records are sent. For more information, see Produce Records.

Kafka REST Admin v3 Max connection requests: up to 25 connection requests per second This limit is shared between Produce and Admin v3. For example, if Produce is running at 20 connection requests per second, Admin can run at approximately five connection requests per second maximum.

Standard limits per partition

The partition capabilities that follow are based on benchmarking and intended as practical guidelines for planning purposes. Performance per partition will vary depending on your individual configuration, and these benchmarks do not guarantee performance.

Dimension Capability
Ingress per Partition ~5 MBps
Egress per Partition ~15 MBps
Storage per Partition Unlimited
Storage per Partition for Compacted Topics Unlimited

Cluster Linking capabilities

  • A Standard cluster can be a source cluster of a cluster link (can send data).
  • A Standard cluster cannot be a destination cluster (cannot receive data).

To learn more, see Supported cluster types in the Cluster Linking documentation.

Enterprise clusters

Enterprise clusters are designed for production-ready functionality that requires private endpoint networking capabilities.

Confluent uses Elastic Confluent Unit for Kafka (eCKU) to provision and bill for Enterprise Kafka clusters.

eCKU limits per Enterprise cluster

Enterprise clusters are elastic, shrinking and expanding automatically based on load. You don’t resize your cluster. When you need more capacity, your Enterprise cluster expands up to the fixed ceiling. If you’re not using capacity above the minimum, you’re not paying for it.

Enterprise cluster capacity:

  • Minimum: Depends on SLA requirements *
    • 99.9% uptime SLA: 1 eCKU minimum
    • 99.99% uptime SLA: 2 eCKUs minimum
  • Fixed ceiling: 10 eCKU

Your cluster scales to meet demand or save costs, but your SLA does not change. You can upgrade an Enterprise cluster from a 99.9% to a 99.99% uptime SLA. Upgrading the SLA changes the minimum cluster capacity from 1 eCKU to 2 eCKUs.

* If consumption in a given hour is zero across all billable dimensions, you pay nothing. For more information, see Elastic Confluent Unit for Kafka.

eCKU capacity guidance

The dimensions in the following table describe the capacity of a single eCKU. For more information about eCKU, see Elastic Confluent Unit for Kafka and Compare Billing Units for Kafka clusters.

Dimension eCKU capacity
Ingress 60 megabytes per second (MBps)
Egress 180 megabytes per second (MBps)
Partitions (pre-replication) 3,000
Number of partitions that you can compact (pre-replication) 360
Total client connections 4,500 connections
Connection attempts 250 connection attempts per second
Requests 7,500 requests per second

Enterprise limits per cluster

Enterprise clusters have a maximum capacity of 10 eCKU. For any Confluent Cloud cluster, the expected performance for any given workload is dependent on a variety of dimensions, such as message size and number of partitions.

Use the information in the following tables to determine if a given workload fits within the fixed ceiling, how to monitor dimensions, and suggestions to reduce your use of a particular dimension.

Dimension Limits per cluster
Ingress * 600 megabytes per second (MBps)
Egress * 1800 megabytes per second (MBps)
Storage (pre-replication) * Infinite
Partitions (pre-replication) * 30,000 partitions
Number of partitions you can compact (pre-replication) * 3,600
Total client connections * 45,000 connections
Connection attempts * 2500 connection attempts per second
Requests * 75,000 requests per second
Message size Max 20 MB
Client version Minimum 0.11.0
Request size Max 100 MB
Fetch bytes Max 55 MB
API keys Default limit 500 (You can request an increase. For more information, see Service Quotas for Confluent Cloud.)
Partition creation and deletion Max 500 per 5 minute period
Connector tasks per Kafka cluster Max 250
ACLs Default limit 4,000
Kafka REST Produce v3

Max throughput: 10 MBps

Max connection requests: 25 connection requests per second

Max streamed requests: 1000 requests per second

Max message size for Kafka REST Produce API: 8 MB

* Dimension is per eCKU. For more information, see eCKU limits per Enterprise cluster.

Enterprise limits per partition

The partition capabilities that follow are based on benchmarking and intended as practical guidelines for planning purposes. Performance per partition will vary depending on your individual configuration, and these benchmarks do not guarantee performance.

Dimension Capability
Ingress per Partition ~6 MBps
Egress per Partition ~18 MBps
Storage per Partition Unlimited
Storage per Partition for Compacted Topics Unlimited

Cluster Linking capabilities

Enterprise clusters can be a source of a cluster link, depending on the networking type and the other cluster involved. To learn more, see Supported cluster types in the Cluster Linking documentation.

Dedicated clusters

Dedicated clusters are designed for critical production workloads with high traffic or private networking requirements. Dedicated clusters support the following:

  • Single-tenant deployments with a 99.95% uptime SLA for single-zone clusters and 99.99% for multi-zone clusters.
  • Private networking options, including VPC peering, AWS Transit Gateway, AWS PrivateLink, and Azure PrivateLink.
  • Self-managed encryption keys when AWS, Azure, or Google Cloud is the cloud service provider.
  • Optional multi-zone high availability. A multi-zone cluster is spread across three availability zones for better resiliency.
  • Scaling to gigabytes per second of ingress.
  • Simple scaling in terms of CKUs.
  • Cluster expansion and shrinking.
  • Cluster Linking for fully-managed replication (multiregion, multicloud, hybrid cloud, and inter-organization).

Dedicated clusters are provisioned and billed in terms of Confluent Unit for Kafka (CKU). CKUs are a unit of horizontal scalability in Confluent Cloud that provide a pre-allocated amount of resources. How much you can ingest and stream per CKU depends on a variety of factors including client application design and partitioning strategy. For more information, see Monitor Dedicated Clusters in Confluent Cloud and Dedicated Cluster Performance and Expansion in Confluent Cloud.

CKU limits per cluster

Dedicated clusters can be purchased in any whole number of CKUs up to a limit.

  • For organizations with credit card billing, the upper limit is 4 CKUs per Dedicated cluster. Clusters up to 152 * CKUs are available by request.
  • For organizations with integrated cloud provider billing or payment using an invoice, the upper limit is 24 CKUs per Dedicated cluster. Clusters up to 152 * CKUs are available by request.

For clusters that can scale to 152 * CKU, contact Confluent Support to discuss the onboarding process and product considerations.

Single-zone clusters can have 1 or more CKUs, whereas multi-zone clusters, which are spread across three availability zones, require a minimum of 2 CKUs. Zone availability cannot be changed after the cluster is created.
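The sizing rules above can be expressed as one small check. This sketch encodes the zone minimums, the self-service ceilings by billing type, and the by-request ceilings per cloud provider; the function name and parameters are illustrative, not a Confluent API.

```python
# Sketch of the Dedicated CKU sizing rules described above.
# Illustrative only; not a Confluent API.

def validate_cku_request(ckus: int, multi_zone: bool,
                         credit_card_billing: bool,
                         by_request: bool = False,
                         cloud: str = "aws") -> bool:
    """Return True if the requested CKU count is allowed."""
    if ckus < (2 if multi_zone else 1):
        return False  # multi-zone clusters require at least 2 CKUs
    if by_request:
        # By-request ceiling: 152 CKUs on AWS and Google Cloud, 100 on Azure.
        return ckus <= (100 if cloud == "azure" else 152)
    # Self-service ceiling depends on billing type.
    return ckus <= (4 if credit_card_billing else 24)

validate_cku_request(24, multi_zone=True, credit_card_billing=False)  # allowed
validate_cku_request(1, multi_zone=True, credit_card_billing=False)   # too small
```

Remember that zone availability is fixed at creation time, so the multi-zone decision has to be made before sizing.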

* AWS and Google Cloud support Kafka clusters to 152 CKUs. Azure supports Kafka clusters to 100 CKUs.

Limits per CKU

CKUs determine the capacity of your cluster. For a Confluent Cloud cluster, the expected performance for any given workload is dependent on a variety of dimensions, such as message size and number of partitions.

There are two categories of CKU dimensions:

  • Dimensions with a fixed limit that cannot be exceeded.
  • Dimensions with a more flexible guideline that may be exceeded depending on the overall cluster load.

The recommended guideline for a dimension is calculated for a workload optimized across all dimensions, enabling high CKU utilization as measured by the cluster load metric. You can usually exceed the recommended guideline for a dimension, and achieve higher performance on it, only if your usage of the other dimensions is below their recommended guidelines or fixed limits.

Also note that usage patterns across all dimensions affect the workload, and you may not achieve the suggested guideline for a particular dimension. For example, if you reach the partition limit, you are unlikely to reach the maximum CKU throughput guideline.

You should monitor the cluster load metric for your cluster to see how your usage pattern correlates with cluster utilization.

When a cluster’s load metric is high, the cluster may delay new connections or throttle clients to keep the cluster available. This throttling registers as non-zero values for the producer client produce-throttle-time-max and produce-throttle-time-avg metrics and the consumer client fetch-throttle-time-max and fetch-throttle-time-avg metrics.
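Watching for throttling can be automated on the client side. Java clients expose the produce-throttle-time and fetch-throttle-time metrics named above directly; librdkafka-based clients (such as confluent-kafka-python) instead report a per-broker "throttle" window in the JSON passed to the statistics callback. The field names below follow librdkafka's statistics format, which is an assumption to verify against your client's documentation.

```python
import json

# Sketch: detect broker-side throttling from a client statistics payload.
# Field names assume librdkafka's statistics JSON ("brokers" -> "throttle").

def max_throttle_ms(stats_json: str) -> int:
    """Return the highest average throttle time (ms) across brokers."""
    stats = json.loads(stats_json)
    return max((broker.get("throttle", {}).get("avg", 0)
                for broker in stats.get("brokers", {}).values()),
               default=0)

# A sustained non-zero result means the cluster is throttling this client,
# which often correlates with a high cluster load metric.
sample = '{"brokers": {"b0": {"throttle": {"avg": 12, "max": 40}}}}'
```

A function like this can be wired into the client's `stats_cb` to alert when throttling appears.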

Use the information in the following tables to determine the minimum number of CKUs to use for a given workload, how to monitor each dimension, and how to reduce your use of a particular dimension.

Dimensions with fixed limits

The following table lists dimensions that have a fixed maximum limit that cannot be exceeded.

Dimension Maximum per CKU Details
Storage (pre-replication) Infinite

Number of bytes retained on the cluster, pre-replication. Dedicated clusters have Infinite Storage, which means there is no maximum size limit for the amount of data that can be stored on the cluster.

Available in the Metrics API as retained_bytes (convert from bytes to TB). The returned value is pre-replication.

You can configure the retention.bytes and retention.ms settings at the topic level to control exactly how much data to retain and for how long, in a way that makes sense for your applications and helps control your costs. If configured retention values are exceeded, producers are throttled to prevent additional writes, which registers as non-zero values for the producer client produce-throttle-time-max and produce-throttle-time-avg metrics.

If you are self-managing Kafka, you can look at how much disk space your cluster is using to understand your storage needs.

To reduce usage on this dimension, you can compress your messages and reduce your retention settings. lz4 is recommended for compression. gzip is not recommended because it incurs high overhead on the cluster.

Partitions (pre-replication) 4,500 partitions (100,000 partitions maximum across all CKUs)

Maximum number of partitions that can exist on the cluster at one time, before replication. All topics that the customer creates, as well as internal topics that are automatically created by Confluent Platform components (such as ksqlDB, Kafka Streams, Connect, and Control Center), count toward the cluster partition limit. The automatically created topics are prefixed with an underscore (_). Topics that are internal to Kafka itself (for example, consumer offsets) are not visible in the Cloud Console, and do not count against partition limits or toward partition billing.

Available in the Metrics API as partition_count.

Attempts to create additional partitions beyond this limit will fail with an error message.

If you are self-managing Kafka, you can look at the kafka.controller:type=KafkaController,name=GlobalPartitionCount metric to understand your partition usage. Find details in the Broker section.

To reduce usage on this dimension, you can delete unused topics and create new topics with fewer partitions. You can use the Kafka Admin interface to increase the partition count of an existing topic if the initial partition count is too low.

You can compact any number of partitions up to the limit for the cluster.
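The partition ceiling above scales with cluster size: 4,500 partitions per CKU, capped at 100,000 per cluster (pre-replication). A minimal sketch of that arithmetic, with an illustrative function name:

```python
# Sketch of the Dedicated partition ceiling: 4,500 partitions per CKU,
# capped at 100,000 per cluster (pre-replication). Illustrative only.

def max_partitions(ckus: int) -> int:
    return min(4_500 * ckus, 100_000)

max_partitions(2)    # small cluster: 9,000 partitions
max_partitions(152)  # large cluster: capped at 100,000
```

Note that the 100,000 cap, not the per-CKU value, governs clusters above roughly 22 CKUs, so adding CKUs past that point buys throughput but no additional partitions.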

Connection attempts 500 connection attempts per second

Maximum number of new TCP connections to the cluster that can be created in one second. This includes both successful and unsuccessful authentication attempts.

If you exceed the maximum, connection attempts may be refused.

If you are self-managing Kafka, you can look at the rate of change for the kafka.server:type=socket-server-metrics,listener={listener_name},networkProcessor={#},name=connection-count metric and the Consumer connection-creation-rate metric to understand how many new connections you are creating. For details, see Broker Metrics and Global Connection Metrics.

To reduce usage on this dimension, you can use longer-lived connections to the cluster.

Kafka REST Produce v3

Max throughput: 50 MBps

Max connection requests: up to 300 connection requests per second

Max streamed requests: 3000 requests per second

Client applications can connect over the REST API to Produce records directly to the Confluent Cloud cluster.

To learn more, see the Kafka REST concepts overview.

The max connection requests limit is shared between Produce and Admin v3. For example, if Produce is running at 100 connection requests per second, Admin can run at approximately 200 connection requests per second maximum.

To learn more about the Kafka REST Produce API streaming mode, see the examples and explanation of streaming mode in the concept docs, and the Produce example in the quick start.

A streamed request is a single record to be produced to Kafka. Concatenate the records to produce multiple records in the same request. Delivery reports are concatenated in the same order as the records are sent. For more information, see Produce Records.

Kafka REST Admin v3 Max connection requests: up to 300 connection requests per second This limit is shared between Produce and Admin v3. For example, if Produce is running at 100 connection requests per second, Admin can run at approximately 200 connection requests per second maximum.
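In streaming mode, each record is its own JSON object and multiple records are concatenated into one HTTP request body, with delivery reports returned in the same order. The sketch below builds such a body with the standard library; the endpoint path and record shape follow the Produce v3 API as described here, but should be checked against the current Kafka REST documentation before use.

```python
import json

# Sketch: build a request body for Kafka REST Produce v3 streaming mode,
# where records are concatenated JSON objects rather than a JSON array.
# The record shape ({"value": {"type": "JSON", "data": ...}}) is an
# assumption to verify against the Produce v3 API reference.

def build_streaming_body(records: list[dict]) -> str:
    """Concatenate one JSON object per record, newline-separated."""
    return "\n".join(
        json.dumps({"value": {"type": "JSON", "data": rec}})
        for rec in records
    )

body = build_streaming_body([{"id": 1}, {"id": 2}])
# POST this body to
#   https://<rest-endpoint>/kafka/v3/clusters/<cluster_id>/topics/<topic>/records
# (placeholders are illustrative). One streamed request is counted per record,
# so this two-record body consumes two units of the streamed-request quota.
```

Batching records this way reduces connection-request usage, since one HTTP request can carry many streamed records.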

Dedicated limits per cluster

Dedicated clusters use CKUs to govern some dimensions for cluster limits. Other dimensions are simple limits and do not change as you increase the number of CKUs. In the following table, per CKU means you must multiply the CKU limit for that dimension by the number of CKUs you purchased to determine the limit for your cluster. For more information, see Limits per CKU.

For more information about quotas and limits, see Service Quotas for Confluent Cloud and Configuration Reference for Topics in Confluent Cloud.

Dimension Capability Additional details
Ingress * 9,120 MBps See Dimensions with a recommended guideline.
Egress * 27,360 MBps See Dimensions with a recommended guideline.
Storage * Infinite See Dimensions with fixed limits.
Partitions * 100,000 partitions Depends on number of CKUs; absolute max 100,000. See Dimensions with fixed limits.
Total client connections * 2,736,000 connections See Dimensions with a recommended guideline.
Connection attempts * 76,000 per second See Dimensions with fixed limits.
Requests * 2,280,000 per second See Dimensions with a recommended guideline.
Message size Max 20 MB For message size defaults at the topic level, see Configuration Reference for Topics in Confluent Cloud.
Client version Minimum 0.11.0 None
Request size Max 100 MB None
Fetch bytes Max 55 MB None
API keys Default limit 2,000 You can request an increase. For more information, see Service Quotas for Confluent Cloud.
Partition creation and deletion Max 5,000 per 5-minute period

The following occurs when partition creation and deletion rate limit is reached:

  • For clients < Kafka 2.7:

    The cluster always accepts and processes all the partition creates and deletes within a request, and then throttles the connection of the client until the rate of changes is below the quota.

  • For clients >= Kafka 2.7:

    The cluster only accepts and processes partition creates and deletes up to the quota. All other partition creates and deletes in the request are rejected with a THROTTLING_QUOTA_EXCEEDED error. By default, the admin client will automatically retry on that error until default.api.timeout.ms is reached. When the automatic retry is disabled by the client, the THROTTLING_QUOTA_EXCEEDED error is immediately returned to the client.

Connector tasks per Kafka cluster Max 250 None
ACLs Default limit 10,000 You can request an increase. For more information, see Service Quotas for Confluent Cloud.
Kafka REST Produce v3 Max message size: 20 MB For Kafka REST Produce v3, only maximum message size is a cluster limit. All other limits for Kafka REST Produce v3 (throughput, connection requests, and streamed requests) are per CKU. See Dimensions with fixed limits.

* Limit based on a Dedicated Kafka cluster with 152 CKU. For more information, see CKU limits per cluster and Confluent Unit for Kafka.
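Because the asterisked dimensions scale with CKUs, the per-cluster limit is the per-CKU value multiplied by the CKU count. The per-CKU values below are either stated earlier (500 connection attempts per second) or derived by dividing the 152-CKU figures in this table by 152; treat them as planning numbers, not guarantees.

```python
# Sketch: derive the per-cluster limit for a CKU-governed dimension.
# Per-CKU values are taken from, or derived from, the 152-CKU table above.

PER_CKU = {
    "connection_attempts_per_sec": 500,   # fixed limit per CKU
    "ingress_mbps": 60,                   # 9,120 / 152
    "egress_mbps": 180,                   # 27,360 / 152
    "requests_per_sec": 15_000,           # 2,280,000 / 152
    "client_connections": 18_000,         # 2,736,000 / 152
}

def cluster_limit(dimension: str, ckus: int) -> int:
    """Scale a per-CKU value to the whole cluster."""
    return PER_CKU[dimension] * ckus

# Consistency check against the table: 500 x 152 = 76,000 connection
# attempts per second at the 152-CKU maximum.
cluster_limit("connection_attempts_per_sec", 152)
```

Partitions are the exception: they scale per CKU only up to the 100,000-per-cluster cap noted in the table.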

Dedicated limits per partition

You can add CKUs to a Dedicated cluster to meet the capacity for your high traffic workloads. However, the limits shown in this table will not change as you increase the number of CKUs.

The partition capabilities that follow are based on benchmarking and intended as practical guidelines for planning purposes. Performance per partition will vary depending on your individual configuration, and these benchmarks do not guarantee performance.

Dimension Capability
Ingress per Partition Max 12 MBps (aggregate producer throughput)
Egress per Partition Max 36 MBps (aggregate consumer throughput)
Storage per Partition Unlimited
Storage per Partition for Compacted Topics Unlimited
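The per-partition guidelines above give a floor on partition count for a throughput target: a topic needs enough partitions that no single partition exceeds ~12 MBps ingress or ~36 MBps egress. A minimal sketch of that sizing calculation; since these are benchmarks rather than guarantees, real workloads usually need headroom beyond this floor.

```python
import math

# Sketch: minimum partition count to hit a throughput target on a
# Dedicated cluster, given the per-partition guidelines above
# (~12 MBps ingress, ~36 MBps egress). Illustrative only.

def min_partitions_for(ingress_mbps: float, egress_mbps: float) -> int:
    return max(math.ceil(ingress_mbps / 12),
               math.ceil(egress_mbps / 36))

min_partitions_for(600, 1800)  # 600/12 = 50 and 1800/36 = 50, so 50 partitions
```

Any partition count chosen this way must still fit under the cluster partition limit discussed earlier.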

Dedicated provisioning time

Most clusters are provisioned in less than two hours. You receive an email when provisioning is complete.

Sometimes, due to cloud provider-specific constraints, provisioning can take longer. Check the Confluent status page for any ongoing service disruptions, and contact Confluent Support if provisioning takes longer than six hours.

Note that provisioning time is excluded from the Confluent SLA.

Dedicated resizing time

Resizing a cluster takes, on average, 30 to 60 minutes per CKU. After you request a Kafka cluster resize, you cannot request another change until the original request completes.

When a cluster is under heavy load or has a high number of partitions to move, resizing can take longer than expected.

During a resize operation, your applications may see leader elections, but performance will not otherwise suffer. Supported Kafka clients handle these changes gracefully. You will receive an email when the resize operation is complete.

Cluster Linking capabilities

Dedicated clusters can be a source or destination of a cluster link, dependent on the networking type and the other cluster involved. To learn more, see Supported cluster types in the Cluster Linking documentation.