Kafka cluster with 3 nodes on a single machine.

ABHISHEK KUMAR
2 min read · Jun 26, 2020

Kafka is a streaming platform that has three key capabilities:

  • Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
  • Store streams of records in a fault-tolerant durable way.
  • Process streams of records as they occur.

Kafka is generally used for two broad classes of applications:

  • Building real-time streaming data pipelines that reliably get data between systems or applications
  • Building real-time streaming applications that transform or react to the streams of data

Let’s create a three-node cluster.

Step 1: Download Zookeeper

1. Go to https://www.apache.org/dyn/closer.cgi/zookeeper/ and select a mirror site for your download, e.g. http://apachemirror.wuchna.com/zookeeper/
2. wget https://downloads.apache.org/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz (see the extraction and startup sketch below)
3. In Kafka's zookeeper.connect configuration, set the ZooKeeper address. If ZooKeeper is running as a cluster, give the addresses as a comma-separated list, i.e. localhost:2181,localhost:2182.
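
Before the brokers can start, ZooKeeper itself has to be running. A minimal startup sketch, assuming the default standalone config shipped as zoo_sample.cfg:

tar -xzf zookeeper-3.4.14.tar.gz
cd zookeeper-3.4.14
cp conf/zoo_sample.cfg conf/zoo.cfg   # default standalone config, listens on port 2181
./bin/zkServer.sh start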

Step 2: Download Kafka

This is for one node.
1. Go to https://kafka.apache.org/downloads and select the binary for any Scala version, e.g.:
2. wget https://downloads.apache.org/kafka/2.5.0/kafka_2.12-2.5.0.tgz
3. Extract it and go to the Kafka folder. There you will see a config folder; open the server.properties file.
4. Check broker.id; it must be different in every broker config.
5. Remove the # from listeners=PLAINTEXT://:9092 (this is the port clients will connect to in Step 5).
6. Create a log folder and change the path in log.dirs to point at it (an example snippet follows this list).
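
Putting that together, the relevant lines in config/server.properties for the first broker might look like this (the id, port, and path are illustrative; point log.dirs at a directory that exists on your machine):

broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs-0
zookeeper.connect=localhost:2181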

Step 3: For multiple nodes

1. Go to the Kafka folder that you have already downloaded. In the config folder, copy server.properties to server1.properties and server2.properties.
2. Now change broker.id in each copy. This is the id of the broker in a cluster and must be unique for each broker.
3. Remove the # from the listeners=PLAINTEXT://: line and give each broker a different port (e.g. 9093 and 9094 for the two copies).
4. Create a different log folder for each broker and change the path in log.dirs accordingly (see the sketch after this list).
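
Putting Steps 2 and 3 together, the three property files might differ only in these lines (ids, ports, and folders are illustrative choices that line up with the test commands in Step 5):

# config/server.properties
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs-0

# config/server1.properties
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1

# config/server2.properties
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2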

Step 4: To run all nodes

1. For Node 1:
./bin/kafka-server-start.sh config/server.properties &
2. For Node 2:
./bin/kafka-server-start.sh config/server1.properties &
3. For Node 3:
./bin/kafka-server-start.sh config/server2.properties &
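
Once all three brokers are up (with ZooKeeper already running from Step 1), you can check that each one registered itself. A quick sanity check, assuming the illustrative broker ids 0, 1, and 2 from the snippets above:

./bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
# the last line of output should list all three ids: [0, 1, 2]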

Step 5: Test your cluster

1. To create a topic replicated across all three brokers:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic test
2. To describe the topic and see which broker leads each partition:
./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
3. To produce data
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
4. To consume all data from beginning from node 1
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
5. To consume all data from beginning from node 2
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --from-beginning
6. To consume all data from beginning from node 3
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9094 --topic test --from-beginning
7. To produce data through another broker (the --broker-list flag can point at any node, here node 3)
./bin/kafka-console-producer.sh --broker-list localhost:9094 --topic test
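
As a final check, you can kill one broker and confirm the cluster stays available. A rough sketch (the process lookup is illustrative; pick the right PID on your machine):

ps aux | grep server2.properties   # find the broker started with server2.properties
kill <pid>
# the topic should still be describable, with leadership moved to the surviving brokers
./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test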

