
Kafka Interview Questions and Answers

Kafka is an open-source distributed streaming platform designed to handle large volumes of data in real time and is used in building real-time streaming data pipelines, powering analytics and machine learning applications, and supporting event-driven architectures. Prepare for your next Kafka interview with the top Apache Kafka interview questions and answers compiled by industry experts. These will help you crack your Kafka interview as a beginner, intermediate or expert DevOps professional. The following Apache Kafka interview questions discuss the key features of Kafka, how it differs from other messaging frameworks, partitions, broker and its usage, etc. With Kafka Interview Questions, you can be confident that you will be well-prepared for your next interview. So, if you are looking to advance your career in big data, this guide is the perfect resource for you. Prepare well and crack your interview with ease and confidence!


Beginner

Kafka is a messaging framework developed by the Apache Software Foundation. It is used to build messaging systems that provide a fault-tolerant cluster with low latency, ensuring end-to-end delivery of messages.

Key points about Kafka:

  • Kafka is a messaging system that provides fault-tolerant capabilities to prevent message loss.
  • It is designed on the publish-subscribe model.
  • Kafka supports both Java and Scala.
  • Kafka originated at LinkedIn and became an open-source Apache project in 2011.
  • It works seamlessly with Spark and other big data technologies.
  • It supports cluster-mode operation.
  • The Kafka messaging system can be used in web service architectures as well as big data architectures.
  • Kafka is easy to code and configure compared to other messaging frameworks.

Kafka requires other components, such as ZooKeeper, to create a cluster and act as a coordination server.

Kafka provides reliable delivery of messages from sender to receiver; apart from that, it has other key features as well.

  • Kafka is designed to achieve high throughput and fault-tolerant messaging services.
  • Kafka provides built-in partitioning through Topics.
  • It also provides replication.
  • Kafka provides a queue that can handle high volumes of data and transfer messages from sender to receiver.
  • Kafka persists messages on disk and can replicate messages across the cluster.
  • Kafka works with ZooKeeper for coordination and synchronization with other services.
  • Kafka has good built-in support for Apache Spark.

To utilize all these key features, we need to configure the Kafka cluster properly, along with the ZooKeeper configuration.

Nowadays Kafka is a key messaging framework, not only because of its features or the reliable transmission of messages from sender to receiver; the following points should also be considered.

  • Reliability − Kafka provides reliable delivery from publisher to subscriber with zero message loss.
  • Scalability − Kafka achieves this through clustering along with the ZooKeeper coordination server.
  • Durability − by using a distributed commit log, messages are persisted on disk.
  • Performance − Kafka provides high throughput and low latency for both publishing and subscribing applications.

Considering the above features, Kafka is one of the best options for big data technologies that need to handle large volumes of messages reliably.

This is one of the most frequently asked Apache Kafka interview questions for freshers in recent times.

There is a plethora of use cases where Kafka fits into real-world applications; the most frequently used ones are listed below.

  • Metrics: used for monitoring operational data, aggregating statistics from distributed systems for analysis.
  • Log aggregation: Kafka can be used across an organization to collect logs from multiple services and make them available to consumer services for analytical processing.
  • Stream processing: Kafka's strong durability is also very useful in the context of stream processing.
  • Asynchronous communication: in microservices, keeping a huge system synchronous is not desirable, because it can render the entire application unresponsive and defeat the purpose of splitting it into microservices in the first place. Kafka makes the data flow easier: it is distributed, highly fault tolerant, and broker nodes are constantly monitored through services like ZooKeeper.
  • Chat bots: chat bots are a popular use case when we require reliable messaging services for smooth delivery.
  • Multi-tenant solutions: multi-tenancy is enabled by configuring which topics can produce or consume data. There is also operational support for quotas.

The above are the use cases that predominantly call for a Kafka framework; other cases depend on the requirements and design.

Let's talk about modern sources of data. Transactional data such as orders, inventory, and shopping carts is being augmented with events such as clicks, likes, recommendations and searches on a web page. All this data is deeply important for analyzing consumer behavior, and it can feed a set of predictive analytics engines that can be a differentiator for companies.

  • Supports low-latency message delivery.
  • Handles real-time traffic.
  • Provides fault-tolerance guarantees.
  • Integrates easily with Spark applications to process high volumes of messaging data.
  • Can form a cluster of messaging brokers, monitored and supervised by a coordination server like ZooKeeper.

So, when we need to handle this kind of volume of data, we need Kafka to solve this problem. 

A Kafka process diagram comprises the following essential components, which are required to set up the messaging infrastructure.

  • Topic
  • Broker
  • Zookeeper
  • Partition 
  • Producer
  • Consumer


Communication between clients and servers is done with a simple, high-performance, language-agnostic TCP protocol. This protocol is versioned and maintains backwards compatibility with older versions.

A topic is a logical feed name to which records are published. Topics in Kafka support a multi-subscriber model, so a topic can have zero, one, or many consumers that subscribe to the data written to it.

  • A topic is a specific category that keeps a stream of messages.
  • A topic is split into partitions.
  • Each topic has at least one partition.
  • Each partition contains messages (payloads) in an unmodifiable, ordered sequence.
  • Each message within a partition has an identifier, which is called an offset.
  • A topic has a name, and it must be unique across the cluster.
  • Producers need a topic to publish their payloads.
  • Consumers pull those payloads from the topic.
  • For every topic, the cluster maintains a partitioned log, as described below.


Every partition is an ordered, immutable sequence of records that is continuously appended to, forming a structured commit log. The Kafka cluster durably persists all published records, whether or not they have been consumed, using a configurable retention period.

A Kafka topic is divided into partitions, which contain messages in an unmodifiable sequence.

  • A partition is a logical grouping of data.
  • Partitions allow you to parallelize a topic by splitting its data across multiple brokers.
  • A topic can have one or more partitions.
  • Each message within a partition has an identifier called an offset.
  • Each partition can be placed on a separate machine, allowing multiple consumers to read a topic in parallel.


This is one of the most frequently asked Apache Kafka interview questions and answers for freshers in recent times. 

The offset is a unique identifier of a record within a partition. It denotes the position of the consumer in the partition. Consumers can read messages starting from a specific offset and can read from any offset point they choose.

  • Each message within a partition has a unique sequence id called an offset.
  • Offsets are scoped to a partition; each partition maintains its own offset sequence.

A topic can also have multiple partition logs, as in the click-topic example. This allows multiple consumers to read from a topic in parallel.

  • Brokers are the servers responsible for maintaining the published data.
  • Each broker may host one or more partitions.
  • Kafka uses multiple brokers to balance the load.
  • Kafka brokers are stateless.
  • Example: if there are N partitions in a topic and N brokers, then each broker hosts one partition.

Intermediate

The answer to this question encompasses two main aspects – Partitions in a topic and Consumer Groups.

A Kafka topic is divided into partitions. The message sent by the producer is distributed among the topic’s partitions based on the message key. Here we can assume that the key is such that messages would get equally distributed among the partitions.

A Consumer Group is a way to bunch together consumers so as to increase the throughput of the consumer application. Each consumer in a group latches on to a partition in the topic: if there are 4 partitions in the topic and 4 consumers in the group, then each consumer reads from a single partition. However, if there are 6 partitions and only 4 consumers, some consumers read from more than one partition, so the parallelism is limited to 4. Hence it is ideal to maintain a 1-to-1 mapping of partitions to consumers in the group.

Now in order to scale up processing at the consumer end, two things can be done:

  1. The number of partitions in the topic can be increased (say, from the existing 1 to 4). 
  2. A consumer group can be created with 4 instances of the consumer attached to it.

Doing this would help read data from the topic in parallel and hence scale up the consumer from 2500 messages/sec to 10000 messages per second.
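As an illustration, here is a minimal consumer sketch (the broker address, group id and the topic name "orders" are assumptions). Running four instances of this snippet with the same group.id lets Kafka assign one of the four partitions to each instance, giving the parallel reads described above.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");        // assumed broker address
props.put("group.id", "order-processors");               // same group.id on every instance
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("orders"));  // assumed topic with 4 partitions
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("partition=%d offset=%d value=%s%n",
                record.partition(), record.offset(), record.value());
}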

Don't be surprised if this question pops up as one of the top interview questions on Kafka in your next interview.

Dumb broker/Smart consumer implies that the broker does not attempt to track which messages have been read by each consumer and retain only the unread ones; rather, the broker retains all messages for a set amount of time, and consumers are responsible for tracking which messages they have read.

Apache Kafka employs this model: the broker does the work of storing messages for a configurable time (7 days by default), while consumers are responsible for keeping track of which messages they have read using offsets.

The opposite of this is the Smart Broker/Dumb Consumer model wherein the broker is focused on the consistent delivery of messages to consumers. In such a case, consumers are dumb and consume at a roughly similar pace as the broker keeps track of consumer state. This model is followed by RabbitMQ.

Kafka is a distributed system wherein data is stored across multiple nodes in the cluster. There is a high probability that one or more nodes in the cluster might fail. Fault tolerance means that the data in the system is protected and available even when some of the nodes in the cluster fail.

One of the ways in which Kafka provides fault tolerance is by keeping copies of each partition. A commonly used replication factor is 3, which means that for every partition in a topic, two additional copies are maintained. In case one of the brokers fails, data can be fetched from a replica. This way Kafka can withstand N-1 broker failures, N being the replication factor.

Kafka also follows the leader-follower model. For every partition, one broker is elected as the leader while the others are designated as followers. The leader is responsible for interacting with producers and consumers. If the leader node goes down, one of the remaining followers is elected as the new leader.

Kafka also maintains a list of in-sync replicas (ISR). Say the replication factor is 3; that means there will be a leader partition and two follower partitions. However, the followers may not always be in sync with the leader. The ISR shows the list of replicas that are in sync with the leader.

As we already know, a Kafka topic is divided into partitions. The data inside each partition is ordered and can be accessed using an offset. An offset is the position within a partition of the next message to be read by a consumer. There are two types of offsets maintained by Kafka:

Current Offset

It is a pointer to the last record that Kafka has sent to a consumer in the most recent poll. This offset ensures that the consumer does not get the same record twice.

Committed Offset

It is a pointer to the last record that a consumer has successfully processed. It plays an important role during partition rebalancing: when a new consumer gets assigned to a partition, it can use the committed offset to determine where to start reading records from.

There are two ways to commit an offset:

  1. Auto-commit: enabled by default and can be turned off by setting the property enable.auto.commit to false. Though convenient, it might cause duplicate records to get processed.
  2. Manual commit: this implies that auto-commit has been turned off and the offset is committed manually once the record has been processed (a minimal sketch follows below).
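A minimal manual-commit sketch (broker address, group id and topic name are assumptions): auto-commit is disabled and commitSync() is called only after the records returned by a poll have been processed.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "payment-consumers");
props.put("enable.auto.commit", "false");                 // turn auto-commit off
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("payments"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.println("processing offset " + record.offset());   // placeholder for real processing
    }
    consumer.commitSync();   // commit offsets only after the batch has been processed
}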

Prior to Kafka v0.9, ZooKeeper was used to store topic offsets; from v0.9 onwards, the information regarding the offset on a topic's partition is stored in an internal topic called __consumer_offsets.

An ack or acknowledgment is sent by a broker to the producer to acknowledge receipt of the message. Ack level can be set as a configuration parameter in the Producer and it defines the number of acknowledgments the producer requires the leader to have received before considering a request complete. The following settings are allowed:

  • acks=0

In this case, the producer doesn't wait for any acknowledgment from the broker. No guarantee can be made that the broker has received the record.

  • acks=1

In this case, the leader writes the record to its local log file and responds without waiting for acknowledgment from all its followers. The message can be lost if the leader fails just after acknowledging the record but before the followers have replicated it.

  • acks=all

In this case, the leader waits for the entire set of in-sync replicas to acknowledge the record. This ensures that the record does not get lost as long as at least one replica is alive, and it provides the strongest possible guarantee. However, it also considerably lowers throughput, as the leader must wait for all followers to acknowledge before responding.

acks=1 is usually the preferred way of sending records as it ensures receipt of record by a leader, thereby ensuring high durability and at the same time ensures high throughput as well. For highest throughput set acks=0 and for highest durability set acks=all.
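For illustration, a minimal producer configuration sketch is shown below (the broker address and topic name are assumptions); changing the acks value trades durability against throughput as described above.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
props.put("acks", "all");                            // "0", "1" or "all" as discussed above
props.put("retries", "3");                           // retry transient send failures
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("audit-events", "key-1", "value-1"));  // assumed topic name
producer.close();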

Kafka is an open-source message broker system developed at LinkedIn and now maintained by the Apache Software Foundation. The underlying technologies are Java and Scala. It supports the publisher-subscriber model of communication, where publishers publish messages and subscribers get notified when any new message gets published. It is categorised as distributed streaming software. Earlier, message queues and enterprise messaging systems like RabbitMQ were used for the same purpose, but Kafka has become an industry leader in a short time. It is used for building streaming pipelines in high-volume applications to reliably transfer and transform data between different systems. It has an inbuilt fault-tolerant mechanism for storing messages, is distributed, and supports partitioning. Kafka runs in a clustered environment, which makes it highly available. It is gaining popularity due to its high message throughput in microservice architectures. The software component that publishes messages is called a producer, while consumers are the components to which messages are broadcast.


There are some important components of any Kafka architecture. Please find below an overview of components:

  1. Topic: the message categories that are declared and defined in Kafka.
  2. Producer: the systems responsible for publishing messages to specific topics defined in Kafka.
  3. Consumer: the systems subscribed to the different topics fall into this category.
  4. Kafka cluster: a group of servers working in fault-tolerant mode.
  5. Broker: a server capable of storing published messages.
  6. Consumer Group: the consumer systems that read data from the same topic by sharing its partitions.
  7. Partition: a topic can be stored in different partitions, and a consumer from one consumer group can read data from a specific partition of the topic.
  8. ZooKeeper: the system that facilitates managing the cluster topology.


ZooKeeper is an open-source system which helps in managing the cluster environment of Kafka. As we know, Kafka brokers work in a cluster environment where several servers process the incoming messages before broadcasting them to subscribed consumers. ZooKeeper is an integral part of Kafka's architecture: one cannot use Kafka without ZooKeeper, and client servicing becomes unavailable once ZooKeeper is down. It facilitates communication between the different nodes of the cluster. In a cluster environment, only one node works as the leader while the others are followers, and it is ZooKeeper's responsibility to help elect that leader. Whenever any server is added to or removed from the cluster, or a topic is added or removed, ZooKeeper sends that information to each node, which helps in better coordination between nodes. Choosing the leader node, synchronization between nodes and configuration management are the key roles of ZooKeeper.


The offset is a critical piece of information required by any producer/consumer system to write and read messages in different partitions. It is a unique identifier associated with each message within a partition, a sequence number that keeps increasing as messages arrive in the partition. The ZooKeeper system keeps track of the offsets associated with a specific topic stored in a specific partition. The ConsumerRecord class has an offset method which helps consumers get the offset details of a specific topic in a given partition. Once consumer systems know the offset, it helps them identify messages in topics. Kafka stores published messages for some time, depending upon configuration. If the retention period is defined as 4 days, then messages will be stored in Kafka for 4 days irrespective of whether or not they have been read by consumer systems; the space is freed up only after 4 days. The offset is very important for a record and is maintained by every consumer reading from the partition. As a consumer reads messages, it increases its offset linearly, but a consumer can also read messages in any order: it can go back to some old offset or move to a newer offset as per the requirement.


One of the most frequently posed Kafka scenario based interview questions, be ready for this conceptual question. 

In a cluster, partitions are distributed across nodes, and each server shares the responsibility for handling requests for its partitions. Partitions are also replicated across several nodes configured in the Kafka environment; this is one reason why Kafka is a fault-tolerant system. The node which acts as the primary server for a partition is termed the leader, while the other nodes where the data gets replicated are called followers. It is the leader's responsibility to serve reads and writes for its partitions, while follower systems passively replicate the leader's data. In case of failure of the leader, one of the follower systems becomes the leader. In a cluster, each node plays the role of leader for some partitions and follower for others, which makes Kafka fault tolerant. This is why Kafka has taken centre stage among stream messaging platforms.


It is one of the key concepts in Kafka. Replication ensures that data is safe and available even in case of system failure. Kafka stores the published messages which are broadcast to subscribed systems, but what if a Kafka server goes down; will the published messages still be available? The answer is yes, and it is possible due to replication, where messages are replicated across multiple servers or nodes. There can be multiple reasons for system failure, such as program failure, system error or frequent software upgrades. Replication safeguards the published data against any such failures. This fault-tolerant behaviour is one of the key reasons why Kafka has become a market leader in a very short period of time. ISR, which stands for in-sync replicas, tracks which followers are in sync with the leader. If a replica is not in the ISR, it means the follower system is lagging behind the leader and not catching up with the leader's activity.


As the name suggests, this exception occurs when producer systems send messages beyond the capacity of the broker system; the broker cannot handle them, its queue gets full, and no further incoming requests can be handled. Producer systems have no information about the capacity of the broker system, which results in such exceptions: messages overflow at the broker end. To avoid such a scenario, we should have multiple systems working as brokers so that messages can be evenly distributed across them. A cluster environment, where multiple nodes service the message processing, avoids such exceptions; clustering and partitioning help in avoiding them.

Although there are scores of benefits of using Kafka, we can list down some key benefits which make the tool popular among stream messaging platforms. The key benefits are summarized below:


  1. High throughput: it facilitates handling large volumes of high-velocity data without the need for heavy hardware investment. The Kafka system can process thousands of messages per second, which makes it a high-throughput system.
  2. Low latency: besides supporting high data volumes, Kafka processes them with latency in the order of milliseconds, which makes it suitable for big enterprises that have to handle large volumes of data quickly.
  3. Fault tolerance: the clustering model of Kafka's architecture makes it fault tolerant and ensures data persists even in case of system failure.
  4. Durability: once messages are written to Kafka, they get replicated across different nodes, which increases durability and ensures no data loss.
  5. Scalability: the clustering model allows Kafka to add or remove nodes as load increases or decreases. We do not need any downtime to achieve this; nodes can be added on the fly as load increases.

The leader and follower nodes serve the purpose of load balancing in Kafka. As we know, the leader does the actual writing and reading of data in a given partition, while follower systems do the same in passive mode. This ensures data gets replicated across different nodes, so that in case of any failure, such as a system fault or a software upgrade, the data remains available. If the leader system goes down for any reason, a follower system which was working in passive mode becomes the leader and ensures data remains available to external systems irrespective of any internal outage. A load balancer distributes load across multiple systems when load increases; in the same way, Kafka balances load by replicating messages on different systems, and when the leader goes down, another follower system becomes the leader and ensures data is available to subscriber systems.

A staple in Kafka technical interview questions and answers, be prepared to answer this one using your hands-on experience.

Advanced

  • A Kafka cluster is a group of more than one broker.
  • A Kafka cluster can be expanded with zero downtime.
  • The cluster manages the persistence and replication of message data.
  • The cluster offers strong durability due to its cluster-centric design.
  • In the Kafka cluster, one of the brokers serves as the controller, which is responsible for managing the states of partitions and replicas and for performing administrative tasks like reassigning partitions.

A producer is a client that sends or publishes records. Producer applications write data to topics and consumer applications read from topics.

  • A producer is a publisher that publishes messages to one or more Kafka topics.
  • Producers send data to the broker service.
  • Whenever the producer publishes a message, the broker simply appends the message to the last segment of the partition.
  • A producer can also send messages to the topic of its choice.

Messages sent by a producer to a topic partition will be appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.

A consumer is a subscriber that consumes the messages stored in partitions. A consumer is a separate process and can be a separate application altogether, running on an individual machine.

  • A consumer can subscribe to one or more topics.
  • A consumer maintains a counter for messages already consumed, based on the offset value.
  • If a consumer acknowledges a specific message offset, that means it has consumed all the previous messages.
  • Consumers issue asynchronous pull requests to the broker, which responds when bytes of data are ready for consumption.
  • The consumer offset value is tracked with the help of ZooKeeper.

If all the consumers fall into the same consumer group, then the messages are load-balanced over the consumer instances; if the consumer instances fall into different groups, then each message is broadcast to all consumer groups.

The working principle of Kafka follows the below order.

  • Producers send messages to a topic at regular intervals.
  • The Kafka broker stores the messages in the partitions configured for that topic.
  • Kafka ensures that if a producer publishes two messages, both messages are received by the consumer.
  • The consumer pulls messages from the allocated topic.
  • Once the consumer digests a message, Kafka pushes the offset value to ZooKeeper.
  • The consumer continuously polls Kafka, approximately every 100 ms, waiting for new messages.
  • The consumer sends an acknowledgement when a message is received.
  • When Kafka receives the acknowledgement, it updates the offset value and sends it to ZooKeeper; ZooKeeper maintains this offset value so that the consumer can read the next message correctly even during server outages.
  • This flow keeps repeating as long as requests keep coming.

Apart from other benefits, below are the key advantages of using the Kafka messaging framework.

  • Low Latency.
  • High throughput.
  • Fault tolerant.
  • Durability.
  • Scalability.
  • Support for real time streaming
  • High concurrency.
  • Message broker capabilities. 
  • Persistent capability. 

Considering all the above advantages, Kafka is one of the most popular frameworks used in microservice architecture, big data architecture, enterprise integration architecture and publish-subscribe architectures.

Expect to come across this, one of the most important Kafka interview questions for experienced professionals in data management, in your next interviews. 

Despite these advantages, setting up and configuring the Kafka ecosystem is a bit difficult and needs good knowledge to implement; apart from that, some limitations are listed below.

  • Lack of a full set of monitoring tools.
  • A wildcard option is not available for selecting topics.
  • For coordination between the brokers in a cluster, a third-party service called ZooKeeper is needed.
  • A deep understanding is needed to handle the cluster-based infrastructure of Kafka along with ZooKeeper.

ZooKeeper is a distributed, open-source configuration and synchronization service, along with a naming registry, for distributed applications.


ZooKeeper is a separate component; it is not part of Kafka itself, but when we need to implement a cluster, we have to set it up as the coordination server. Its main responsibilities include:

  • Electing the controller
  • Cluster membership management
  • Topic configuration
  • Quotas
  • Access control: who is allowed to read from and write to a topic

ZooKeeper plays a significant role in cluster management, for example in fault tolerance: it detects when a broker goes down so that the messages it held can be served from replicas on other brokers.

groupId = org.apache.spark
artifactId = spark-streaming-kafka-0-10_2.11
version = 2.2.0 
groupId = org.apache.zookeeper
artifactId = zookeeper
version=3.4.5

These dependencies come with child dependencies, which are downloaded and added to the application as part of the parent dependencies.

Here are the packages which need to be imported in Java/Scala:

  • import org.apache.kafka.clients.consumer.ConsumerRecord
  • import org.apache.kafka.common.serialization.StringDeserializer
  • import org.apache.spark.streaming.kafka010._
  • import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
  • import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

A must-know for anyone looking for top Kafka interview questions, this is one of the frequently asked interview questions on Kafka.

Kafka follows a pub-sub mechanism wherein a producer writes to a topic and one or more consumers read from that topic. However, reads in Kafka always lag behind writes, as there is always some delay between the moment a message is written and the moment it is consumed. This delta between the latest offset and the consumer offset is called consumer lag.

There are various open source tools available to measure consumer lag e.g. LinkedIn Burrow. Confluent Kafka comes with out of the box tools to measure lag.

With the Kafka messaging system, three different types of delivery semantics can be achieved.

  • At most once: the messaging system will never duplicate a message but might miss some messages occasionally.
  • At least once: the messaging system will never miss a message but might duplicate some messages occasionally.
  • Exactly once: the messaging system delivers every message exactly once, with neither loss nor duplication.

Kafka transactions help achieve exactly-once semantics between Kafka brokers and clients. In order to achieve this, we need to set the following properties at the producer end: enable.idempotence=true and transactional.id=<some unique id>. We also need to call initTransactions to prepare the producer to use transactions. With these properties set, if the producer (identified by its producer id) accidentally sends the same message to Kafka more than once, the Kafka broker detects and de-duplicates it.
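A minimal sketch of an idempotent, transactional producer (broker address, topic name and transactional id are assumptions), following the properties mentioned above:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("enable.idempotence", "true");                  // broker de-duplicates retried sends
props.put("transactional.id", "payments-producer-1");     // assumed unique id
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();                               // prepare the producer for transactions
producer.beginTransaction();
producer.send(new ProducerRecord<>("payments", "order-42", "debited"));
producer.commitTransaction();                              // in real code, abortTransaction() would be called on error
producer.close();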

Kafka is a durable, distributed and scalable messaging system designed to support high volume transactions. Use cases that require a publish-subscribe mechanism at high throughput are a good fit for Kafka. In case you need a point to point or request/reply type communication then other messaging queues like RabbitMQ can be considered. 

Kafka is a good fit for real-time stream processing. It uses a dumb broker/smart consumer model, with the broker merely acting as a message store. So a scenario wherein the consumer cannot be smart and requires the broker to be smart instead is not a good fit for Kafka. In such a case, RabbitMQ can be used, which uses a smart broker model with the broker responsible for consistent delivery of messages at a roughly similar pace.

Also in cases where protocols like AMQP, MQTT, and features like message routing are needed, in those cases, RabbitMQ is a better alternative over Kafka.

This is a common yet one of the most tricky Kafka interview questions and answers for experienced professionals, don't miss this one.

A producer publishes messages to one or more Kafka topics. The message contains information about which topic and partition it should be published to.

There are three different ways of sending messages with the producer API:


  1. Fire and forget – the simplest approach: it involves calling the send() method of the producer API to send the message to Kafka. In this case, the application doesn't care whether the message is successfully received by the broker or not.
  2. Synchronous producer – in this method, the calling application waits until it gets a response. In the case of success we get a RecordMetadata object, and in the event of failure we get an exception. Note that this limits throughput because you are waiting for every message to get acknowledged.
  3. Asynchronous producer – a better and faster way of sending messages to Kafka: this involves providing a callback function to receive the acknowledgment. The application doesn't wait for success/failure, and the callback function is invoked when the message is successfully acknowledged or in case of a failure (a sketch follows this list).
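For illustration, a minimal sketch of the asynchronous style (broker address and topic name are assumptions): send() returns immediately and the callback fires once the broker acknowledges the record or the send fails.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>("clicks", "user-7", "page-view");
producer.send(record, (metadata, exception) -> {
    if (exception != null)
        exception.printStackTrace();                                     // the send failed
    else
        System.out.println("acknowledged at offset " + metadata.offset());
});
producer.flush();
producer.close();

For the synchronous style, producer.send(record).get() would block until the RecordMetadata is returned or an exception is thrown.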

Kafka messages are key-value pairs. The key is used for partitioning messages being sent to the topic. When writing a message to a topic, the producer has an option to provide the message key. This key determines which partition of the topic the message goes to. If the key is not specified, then the messages are sent to partitions of the topic in round robin fashion.

Note that Kafka orders messages only inside a partition, hence choosing the right partition key is an important factor in application design.

Kafka supports data replication within the cluster to ensure high availability. But enterprises often need data availability guarantees to span the entire cluster and even withstand site failures.

The solution to this is Mirror Maker – a utility that helps replicate data between two Kafka clusters within the same or different data centers.

MirrorMaker is essentially a Kafka consumer and producer hooked together. The origin and destination clusters are completely different entities and can have a different number of partitions and offsets, however, the topic names should be the same between source and a destination cluster. The MirrorMaker process also retains and uses the partition key so that ordering is maintained within the partition.
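For illustration, a legacy MirrorMaker run typically looks like the sketch below; the config file names and the topic whitelist pattern are assumptions.

bin/kafka-mirror-maker.sh --consumer.config source-cluster.properties --producer.config target-cluster.properties --whitelist "orders|payments"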

Both belong to the same league of message streaming platforms. RabbitMQ belongs to the traditional league of messaging platforms, which support several protocols. It is an open-source message broker platform with a reasonable number of features, and it supports the AMQP messaging protocol with routing features.

Kafka was written in Scala and first introduced at LinkedIn to facilitate intra-system communication. Kafka is now developed under the umbrella of the Apache Software Foundation and is more suitable for an event-driven ecosystem. Comparing the two platforms: Kafka is distributed, scalable and offers much higher throughput than RabbitMQ. In terms of performance, Kafka also scores much better: RabbitMQ can process about 20,000 messages per second, while Kafka can process roughly five times more.


There are four core APIs available in Kafka. Please find below an overview of the core APIs:

  1. The Producer API: these APIs help producer systems publish messages to one or more topics. For any streaming platform the first task is publishing data to the brokers, so the producer APIs are the first to be consumed in Kafka.
  2. The Consumer API: the APIs in this group help subscriber systems receive messages belonging to one or more topics and process that data.
  3. The Streams API: these APIs help an application act as a stream processor, consuming an input stream from one or more topics and producing an output stream.
  4. The Connector API: the connector APIs help in building reusable producers and consumers that connect topics to existing systems. For example, if we need to capture every change to an existing RDBMS table, we can leverage the connector API.


Geo-replication enables replication across different data centers or different clusters. Kafka MirrorMaker enables geo-replication; this process is called mirroring. Mirroring is different from replication across different nodes in the same cluster. Kafka's MirrorMaker ensures that messages from topics belonging to one or more source Kafka clusters are replicated to the destination cluster with the same topic names.


We should use at least one MirrorMaker process to replicate one source cluster. We can have multiple MirrorMaker processes mirroring topics within the same consumer group; this increases throughput and makes the setup fault tolerant, because if one of the MirrorMaker processes goes down, the others can take over the additional load. One important point: the source and destination clusters are independent of each other and can have different partitions and offsets.

Kafka depends on ZooKeeper, so we must start ZooKeeper before starting the Kafka server. Please find below the step-by-step process to start the Kafka server:

1: Start ZooKeeper by typing the following command in a terminal:

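Assuming the ZooKeeper scripts bundled with the Kafka download are used, the command is typically:

$ bin/zookeeper-server-start.sh config/zookeeper.properties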

2: Once ZooKeeper is running, we can start the Kafka server by running the following command:

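From the Kafka installation directory, this is typically:

$ bin/kafka-server-start.sh config/server.properties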

3: The next step is checking the services running in the background with the command below:

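On a machine with the JDK installed, the running Kafka and ZooKeeper JVM processes can be listed with, for example:

$ jps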

4: Once the Kafka server is running, we can create a topic by running the command below:
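A typical topic-creation command looks like the sketch below; the topic name, partition count and replication factor are assumptions (newer Kafka versions use --bootstrap-server instead of --zookeeper):

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test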

5: We can check the available topics by running the command below in a terminal:

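For example (again assuming an older, ZooKeeper-based version):

$ bin/kafka-topics.sh --list --zookeeper localhost:2181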

A staple in Kafka advanced interview questions with answers, be prepared to answer this one using your hands-on experience. This is also one of the top interview questions to ask a Kafka developer.

As we know, producers publish messages to the different partitions of a topic. A message consists of a chunk of data, and along with the data the producer system can also send a key; this key is called the partition key. Data that comes with the same key is always stored in the same partition. Consider a real-world system where we have to track a user's activity while using the application: we can store that user's data in the same partition by using the partition key, so tagging the user's data with a key helps us achieve this objective. If we have to store user u0's data in partition p0, we can tag u0's data with a key which ensures that this user's data is always stored in partition p0. That does not mean partition p0 cannot store other users' data. To summarize, the partition key is used to determine the destination partition where a message will be stored.


SerDe stands for serialization and deserialization. A SerDe is a combination of a Serializer and a Deserializer (hence, Ser-De). Every application must offer serialization and deserialization support for record keys and values so that, when materialization is needed, it can be achieved easily. Serialization converts messages into a stream of bytes for transmission over the network; the array of bytes then gets stored in the Kafka log. Deserialization is just the reverse of serialization and converts the array of bytes back into meaningful data. When the producer system sends meaningful data to the broker, serialization ensures the data is transmitted and stored as a byte array. The consumer system reads data from the topic as a byte array, and at the consumer end that byte array must be deserialized successfully to convert it back into meaningful data.

Configuring SerDes:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

Properties settings = new Properties();
// Default serde for keys of data records (here: built-in serde for String type)
settings.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
// Default serde for values of data records (here: built-in serde for Long type)
settings.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass().getName());
StreamsConfig config = new StreamsConfig(settings);

Kafka stores messages in topics, which in turn are stored in different partitions. A partition is an immutable sequence of ordered messages that is continuously appended to. A message is uniquely identified in the partition by a sequential number called an offset. FIFO behaviour can only be achieved inside a partition. We can achieve FIFO behaviour by following the steps below:

1: We need to first set the enable.auto.commit property to false:

                       Set enable.auto.commit=false

2: Once messages get processed, we should not make a call to consumer.commitSync();

3: We can then call subscribe, which registers the consumer system to the topic.

4: A ConsumerRebalanceListener should be implemented, and within the listener we should call consumer.seek(topicPartition, offset).

5: Once the message gets processed, the offset associated with the message should be stored along with the processed message.

6: We should also ensure the processing is idempotent as a safety measure. The sketch below puts these steps together.
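Here is a minimal sketch of these steps; the broker address, group id and topic name are assumptions, and offsets are kept in an in-memory map purely for illustration (a real application would store them alongside the processed results):

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "ordered-readers");                  // assumed group id
props.put("enable.auto.commit", "false");                   // step 1: no auto-commit
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
Map<TopicPartition, Long> processedOffsets = new HashMap<>();   // offsets stored with processed records (step 5)

consumer.subscribe(Collections.singletonList("orders"), new ConsumerRebalanceListener() {   // step 3
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions) {                   // step 4: seek back to the stored offset
            Long offset = processedOffsets.get(tp);
            if (offset != null) consumer.seek(tp, offset + 1);
        }
    }
});

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        // process the record idempotently (step 6), then remember its offset (step 5)
        processedOffsets.put(new TopicPartition(record.topic(), record.partition()), record.offset());
    }
}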

Kafka by default does not handle large messages: the maximum message size is 1 MB by default, but there are ways to increase it. We should also make sure to increase the network buffers for our consumers and producers. We need to adjust a few properties to achieve this:

  1. The consumer system reads messages from a topic up to the maximum size driven by the property fetch.message.max.bytes. If we would like the consumer to read larger messages, we can increase this property accordingly.
  2. The Kafka broker also has a property, replica.fetch.max.bytes, which drives the maximum message size that can be replicated across the cluster. If this value is set too small, messages will not be committed successfully, resulting in messages not being available to consumer systems.
  3. Another broker-side property, message.max.bytes, determines the maximum size of a message that the Kafka broker can receive from a producer.
  4. The broker also has a per-topic property, max.message.bytes, which limits the maximum size of a message that can be appended to that topic. The size is measured before compression and applies to the topic only.

Now it is clear that sending large Kafka messages can be achieved by tweaking the few properties explained above. The broker-related config can be found at $KAFKA_HOME/config/server.properties, while the consumer-related config is found at $KAFKA_HOME/config/consumer.properties.
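For illustration, assuming a 15 MB limit, the relevant entries would look roughly like this (values are in bytes):

# $KAFKA_HOME/config/server.properties (broker side)
message.max.bytes=15728640
replica.fetch.max.bytes=15728640

# $KAFKA_HOME/config/consumer.properties (consumer side)
fetch.message.max.bytes=15728640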

A staple in Kafka advanced interview questions with answers, be prepared to answer this one using your hands-on experience. This is also one of the top interview questions to ask a Kafka developer.

Kafka and Flume are both Apache offerings, but there are some key differences. Please find below an overview of both to understand the differences:

Kafka 

  1. Kafka is a distributed publisher-subscriber messaging system. It enables subscribers to read precisely the messages they are keen on; subscriber systems subscribe to the topics (categories of messages) they are interested in.
  2. Kafka allows even late-joining consumers to read messages, as messages are persisted until they expire. This is why it is termed a pull framework.
  3. Kafka persists information for a configurable time, so the data can be reprocessed any number of times by any number of consumer groups; above all, the rate of those events won't overload the databases or the processes attempting to load the data into databases.
  4. It can be used by any framework to connect with other frameworks that require organization-level messaging (website activity tracking, operational metrics, stream processing and so on). It is a general-purpose publisher-subscriber system and can work with any subscriber or producer system.
  5. Kafka is truly adaptable and scalable. One of its key advantages is that it is easy to add a huge number of consumers without affecting performance and without downtime.
  6. High availability ensures the system is recoverable in case of downtime.

Flume 

  1. Flume was designed to ingest information into Hadoop. It is tightly integrated with Hadoop's monitoring framework, file system, file formats and utilities, and most Flume development aims at keeping it compatible with Hadoop.
  2. Flume is a push framework, which implies data loss when consumers can't keep up. Its primary purpose is sending messages to HDFS and HBase.
  3. Flume isn't as scalable as Kafka: adding more consumers to Flume means changing the Flume pipeline topology and replicating the channel to deliver the messages to a new sink. This isn't really a scalable arrangement when you have a huge number of consumers. Additionally, since the Flume topology has to be changed, it requires some downtime.
  4. Flume does not replicate events: in case of a flume-agent failure, you will lose the events in the channel.

When to use which:

     1. Flume: when working with non-relational data sources, such as log files, which are to be streamed into Hadoop. Kafka: when needing a very dependable and scalable enterprise-level framework to connect numerous different frameworks (including Hadoop).

     2. Kafka for Hadoop: Kafka resembles a pipeline that gathers information continuously and pushes it to Hadoop. Hadoop processes it internally and then, as per the requirement, either serves it to other consumers (dashboards, BI, and so on) or stores it for further processing.

Kafka vs Flume:

  • Apache Kafka is a general-purpose tool for multiple producers and consumers, whereas Apache Flume is a special-purpose tool for specific applications.
  • Kafka replicates the events; Flume does not replicate the events.
  • Kafka supports data streams for multiple applications, whereas Flume is specific to Hadoop and big data analysis.
  • Apache Kafka can process and monitor data in distributed systems, whereas Apache Flume gathers data from distributed systems into a centralized data store.
  • Kafka supports large sets of publishers, subscribers and multiple applications, whereas Flume supports a large set of source and destination types to land data on Hadoop.

One can easily follow the below steps to install Kafka :

Step 1: Ensure Java is installed on the machine by running the below command in a terminal:

$ java -version

You will see the Java version if it is installed. In case Java is not installed, we can follow the steps below to install it successfully:

1: Download the latest JDK by visiting the link: JDK

2: Extract the executables and then move them to the /opt directory.


3: The next step is setting the JAVA_HOME environment variable. We can set this by adding the export command to the ~/.bashrc file.
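For example, assuming the JDK was moved to /opt/jdk, the lines added to ~/.bashrc would look like:

export JAVA_HOME=/opt/jdk
export PATH=$PATH:$JAVA_HOME/bin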

4: Ensure the above changes take effect in the running shell (for example, by re-sourcing ~/.bashrc) and update the Java alternatives by invoking the alternatives command.

Step 2: The next step is the ZooKeeper framework installation, by visiting the link: ZooKeeper

1: Once the files have been extracted, we need to modify the config file before starting the ZooKeeper server. We can open "conf/zoo.cfg" with the command below:


After making the changes, ensure the config file is saved before executing the following command to start the server:

$ bin/zkServer.sh start

Once you execute the above command, the response below can be seen:

$JMX enabled by default
$Using config: /Users/../zookeeper-3.4.6/bin/../conf/zoo.cfg
$Starting zookeeper ...STARTED

 2: The next step is starting the ZooKeeper CLI:

$ bin/zkCli.sh

The above command connects to ZooKeeper, and the response below will appear:

Connecting to localhost:2181

……………………

……………………

…………………….

Welcome to ZooKeeper!

……………………

……………………

WATCHER::

WatchedEvent state:SyncConnected type: None path:null

[zk: localhost:2181(CONNECTED) 0]

We can also stop the ZooKeeper server after doing all the basic validations:

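The server can be stopped with:

$ bin/zkServer.sh stop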

Step 3: Now we can move to the Apache Kafka installation by visiting the link: Kafka

Once Kafka is downloaded locally, we can extract the files by running the command below:

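Assuming the downloaded archive is named kafka_<version>.tgz, it can be extracted with:

$ tar -xzf kafka_<version>.tgz
$ cd kafka_<version>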

The above command completes the Kafka installation. After installation, we need to start the Kafka server:

> bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576
(kafka.utils.VerifiableProperties)...

Shared message Queue

A shared message queue allows a stream of messages from a producer to serve a single consumer. Each message pushed to the queue is read only once and by only one consumer. Consumers pull messages from the end of the queue, and the queuing system removes a message from the queue once it has been pulled successfully.


Downsides: 

  1. Once one consumer pulls a message, it is erased from the queue.
  2. Shared message queues are better suited to imperative programming, where a message is much like a command to consumers belonging to the same domain, than to event-driven programming, where a single event can prompt various actions from different consumers, each in its own domain.
  3. While numerous consumers may connect to a shared queue, they must all fall in the same logical domain and execute the same functionality. The scalability of processing with a shared message queue is therefore restricted to a single domain of consumption.

Traditional Publisher Subscribe Systems

The publisher-subscriber model allows multiple publishers to publish messages to topics hosted by message brokers, which can be subscribed to by multiple subscribers. A message is thereby broadcast to all the subscribers of a topic.


Downsides: 

  1. The logical separation of the publisher from the consumer allows a loosely coupled architecture, but with limited scale. Scalability is limited because every subscriber must subscribe to every partition to access the messages from all partitions. Thus, while traditional publisher-subscriber models work for small systems, the instability increases with the growth in nodes.
  2. A side effect of the decoupling also shows up as a lack of reliability around message delivery.
  3. As each message is broadcast to all subscribers, scaling the processing of the streams is difficult because the subscribers are not in sync with each other.

Let's first understand the concept of the consumer in Kafka's architecture. Consumers are the systems or processes which subscribe to topics created on the Kafka broker. Producer systems send messages to topics, and only once messages are committed successfully are subscriber systems allowed to read them. A consumer group is a way of tagging consumer systems so that consumption becomes multi-threaded or multi-machine.


Consider, for example, two consumers (1 and 2) tagged in the same group, with each consumer reading data from a different partition of the topic. Some common characteristics of consumer groups are as follows:

  1. Consumer systems join a consumer group by using the same group.id.
  2. The consumer group supports parallel processing: one can have at most as many active consumers as there are partitions, and each partition gets mapped to one consumer instance from the group.
  3. The Kafka broker assigns each partition of the topic to a single consumer in the group, ensuring that only that consumer consumes messages belonging to the partition.
  4. This also ensures that each message is read by only one consumer within the group.
  5. Messages within a partition are ordered and appear in the same order they are committed.

The recommendation for a consumer group is to have the number of consumer instances in line with the number of partitions. If we go with a greater number of consumers, the excess consumers will sit idle, wasting resources. If the number of partitions is greater, the same consumers will read from more than one partition, which is not an issue as long as the ordering of messages is not important for the use case, because Kafka does not have inbuilt support for ordering messages across different partitions.

This is the reason why Kafka recommends having the same number of consumers as partitions, to maintain the ordering of messages.

The core part of the Kafka producer API is the KafkaProducer class. Once we instantiate this class (the connection to the Kafka broker is configured through its constructor), it provides a send method which allows the producer system to send messages to a topic asynchronously:

  • ProducerRecord - this class represents a record (a key-value pair) to be sent to a topic.
  • Callback - this function is called when the server acknowledges the message.

The Kafka producer also has a flush method, which blocks until all previously sent messages have actually been transmitted and the buffer has been emptied.

  • The (legacy) Producer API - the core class of this API is the Producer class. This class also has send methods to publish messages to single or multiple topics:
public void send(KeyedMessage<k,v> message)
- sends the data to a single topic, partitioned by key, using either the sync or async producer.
public void send(List<KeyedMessage<k,v>> messages)
- sends data to multiple topics.
Properties prop = new Properties();
prop.put("producer.type", "async");
ProducerConfig config = new ProducerConfig(prop);

The producer is broadly classified into two types: Sync & Async

A message is sent directly to the broker by a sync producer, while it is sent in the background by an async producer. The async producer is used when we need higher throughput.

The following are the important configuration settings exposed by the producer API (a configuration sketch follows the list):

  1. client.id - identifies the producer application to the brokers (useful for logging and quotas).
  2. producer.type - either sync or async (older producer API only).
  3. acks - controls the criteria under which a producer request is considered complete (e.g. 0, 1 or all in-sync replicas).
  4. retries - if a producer request fails, automatically retry up to the configured number of times.
  5. bootstrap.servers - the initial list of brokers used to discover the rest of the cluster.
  6. linger.ms - how long the producer waits before sending a batch; setting it above 0 reduces the number of requests by letting more records accumulate per batch.
  7. key.serializer - serializer class for the record key.
  8. value.serializer - serializer class for the record value.
  9. batch.size - the maximum size (in bytes) of a batch of records sent to a single partition.
  10. buffer.memory - the total amount of memory available to the producer for buffering records that have not yet been sent.
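A hedged sketch of how several of these settings might be supplied to the modern KafkaProducer; the broker list, client id and values below are illustrative assumptions rather than recommendations (producer.type applies only to the older producer API and is omitted):

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");   // assumed broker list
props.put("client.id", "billing-producer");                    // hypothetical client id
props.put("acks", "all");             // wait for all in-sync replicas to acknowledge
props.put("retries", 3);              // retry failed sends up to 3 times
props.put("linger.ms", 5);            // wait up to 5 ms so more records can be batched per request
props.put("batch.size", 16384);       // max bytes per partition batch
props.put("buffer.memory", 33554432); // total memory for buffering unsent records
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);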

  • The ProducerRecord API: this class is used for sending a key-value pair to the cluster. It has three different constructors (compared side by side in the sketch after this list):
public ProducerRecord (String topic, int partition, K key, V value)
  • Topic − user-defined topic name the record will be sent to.
  • Partition − the partition number the record should be written to.
  • Key − the key that will be included in the record.
  • Value − the record contents.
public ProducerRecord (String topic, K key, V value)

This constructor creates a record with a key and value but without an explicit partition.

  • Topic − the topic to which the record is assigned.
  • Key − the key for the record.
  • Value − the record contents.
public ProducerRecord (String topic, V value)

This constructor creates a record without a partition and without a key.

  • Topic − the topic to which the record is assigned.
  • Value − the record contents.
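The three constructor forms side by side, using a hypothetical topic "orders" (the partition number, key and value are illustrative):

// topic, partition, key, value - pin the record to partition 0
ProducerRecord<String, String> r1 = new ProducerRecord<>("orders", 0, "order-1", "created");

// topic, key, value - the partition is chosen by hashing the key
ProducerRecord<String, String> r2 = new ProducerRecord<>("orders", "order-1", "created");

// topic, value - no key; the default partitioner spreads records across partitions
ProducerRecord<String, String> r3 = new ProducerRecord<>("orders", "created");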

A typical microservices deployment has many microservices collaborating, and that becomes a huge problem if it is not handled properly. It is not practical for each service to hold a direct connection to every service it needs to talk to, for two reasons: first, the number of such connections grows quickly; second, the services being called may be down or may have moved to another server.

If you have 2 services, there are up to 2 direct connections. With 3 services there are 6, with 4 services there are 12, and so on. These connections can be seen as coupling between the objects in an object-oriented program: you have to interact with other objects, but the looser the coupling between their classes, the more maintainable your program is.

Message brokers are a way of decoupling the sending and receiving services through the idea of publish and subscribe. The sending service (producer) posts its message on the message queue, and the receiving service (consumer), which is listening for messages, picks it up. Message brokering is one of the key use cases for Kafka.

Another thing message brokers do is queue, or hold, the message until the consumer picks it up. If the consumer service is down or busy when the sender sends the message, it can always take it up later. As a result, the producer service does not need to worry about checking whether the message got through, retrying on failure, and so on.

Kafka is valuable here because it offers both pub-sub and queuing semantics (traditionally a broker supported only one or the other). It also guarantees that the ordering of messages within a partition is maintained and is not affected by network latency or other factors, and it allows messages to be broadcast to multiple consumers if necessary. Kafka's importance lies in making it possible to build reliable, scalable microservices solutions with minimal configuration.

Kafka has established itself as a market leader among stream processing platforms and is one of the most popular message broker platforms. It works on the publisher-subscriber model of messaging and provides decoupling between the producer and consumer systems: they are unaware of each other and work independently. The consumer system has no information about the source system that pushed the messages into Kafka. Producer systems publish messages to a topic (a named grouping of messages), and the messages are delivered to the consumer systems that have subscribed to those topics. It is an event-driven architecture and solves most of the problems faced by traditional messaging platforms. Key features like data partitioning, scalability, low latency and high throughput are the reasons why it has become a top choice for real-time data integration and data processing needs.

integration and data processing in kafka

The topic is a very important feature of the Kafka architecture. Messages are grouped into topics: a producer system sends messages to a specific topic, while a consumer system reads messages from a specific topic only. The messages in a topic are further distributed into several partitions. Partitions allow the data of one topic to be spread (and replicated) across multiple brokers, and an individual partition can reside on an individual machine, which allows messages from the same topic to be read in parallel. Multiple subscriber systems can process data from multiple partitions, which results in high messaging throughput. Each message within a partition is tagged with a unique identifier called an offset, which is incremented sequentially to preserve the ordering of messages. A subscriber system normally reads from its current offset, but it is also allowed to start reading from any other offset, as in the sketch below.

anatomy of the Kafka
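A rough sketch of reading from a specific offset using assign() and seek(), assuming a consumer configured as earlier and a hypothetical topic "orders" with partition 0; the offset value is illustrative:

TopicPartition partition = new TopicPartition("orders", 0);
consumer.assign(Collections.singletonList(partition));   // manual assignment, no consumer group rebalancing
consumer.seek(partition, 42L);                            // jump to offset 42 (illustrative)

ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records) {
    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
}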

Multi-tenancy allows a single system to serve multiple clients at the same time. Kafka has a degree of multi-tenancy built in if we are not concerned with isolation and security: everyone can read from and write to the Kafka broker. A real multi-tenant system, however, should provide isolation and security for the clients it serves. That can be achieved with the following setup:

  1. Authentication - the Kafka cluster should have an authentication mechanism so that anonymous users cannot connect to the broker. Setting up authentication is the first step towards multi-tenancy.
  2. Authorization - users/systems should be authorized to read from or write to a topic. Once users are authenticated, their access to the topic is validated before messages are read or written.
  3. Manage quotas - restricting message quotas to avoid network saturation is also required for multi-tenancy. Since Kafka can produce/consume very high volumes of data, managing quotas is a mandatory step; quotas can be set per user and/or per client.

Two-way SSL (mutual TLS) can be used for authentication, a token-based identity provider can serve the same purpose, and role-based access to topics can be set up using ACLs; a client-side SSL configuration sketch follows.
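A minimal sketch of what the client side of two-way SSL might look like; the listener address, keystore/truststore paths and passwords are placeholders, and the broker must be configured with an SSL listener and a truststore of its own separately:

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9093");                          // assumed SSL listener
props.put("security.protocol", "SSL");                                   // enable TLS
props.put("ssl.truststore.location", "/path/to/client.truststore.jks");  // placeholder path
props.put("ssl.truststore.password", "changeit");                        // placeholder
props.put("ssl.keystore.location", "/path/to/client.keystore.jks");      // client certificate for mutual TLS
props.put("ssl.keystore.password", "changeit");                          // placeholder
props.put("ssl.key.password", "changeit");                               // placeholder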

The first step for any consumer joining a consumer group is to send a request to the group coordinator. Within a consumer group there is a group leader, which is usually the first member to join. The group leader gets the list of all members from the coordinator: consumers that have recently sent heartbeats are considered alive, while the others are dropped from the group. It is the responsibility of the group leader to assign partitions to the individual consumers, which it does by implementing a PartitionAssignor.

There is a built-in partition assignment policy for assigning partitions to consumers. Once the assignment is done, the group leader sends it to the group coordinator, which in turn informs each consumer of its assignment. Individual consumers only know their own assignment, while the group leader keeps track of all assignments. This whole process is called partition rebalancing, and it happens whenever a consumer joins or leaves the group. This step is critical to performance and to the high throughput of messages; a consumer can hook into it as shown below.
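A consumer can observe rebalancing by passing a ConsumerRebalanceListener when subscribing; a minimal sketch, with the topic name again being an assumption:

consumer.subscribe(Collections.singletonList("orders"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // called before a rebalance takes partitions away; commit offsets / flush state here
        System.out.println("Revoked: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // called after the group leader's assignment has been applied to this consumer
        System.out.println("Assigned: " + partitions);
    }
});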

As we know, a consumer system subscribes to topics in Kafka, but it is the poll loop that tells the consumer whether any new data has arrived. The poll loop is responsible for handling coordination, partition rebalances, heartbeats, and data fetching. It is the core function in the consumer API that keeps polling the server for new data. Let's try to understand the poll loop in Kafka:

try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            log.debug("topic = %s, partition = %d, offset = %d," +
                " customer = %s, country = %s\n",
                record.topic(), record.partition(), record.offset(),
                record.key(), record.value());
            int updatedCount = 1;
            if (custCountryMap.containsKey(record.value())) {
                updatedCount = custCountryMap.get(record.value()) + 1;
            }
            custCountryMap.put(record.value(), updatedCount);
            JSONObject json = new JSONObject(custCountryMap);
            System.out.println(json.toString(4));
        }
    }
} finally {
    consumer.close();
}
  1. This section is an infinite loop: consumers keep polling Kafka for new data.
  2. consumer.poll(100): this call is very important, as the parameter determines the time interval (in milliseconds) the consumer waits for data to arrive from the Kafka broker. If a consumer stops polling, its assigned partitions are usually handed to another consumer because it is considered dead. If we pass 0 as the parameter, the call returns immediately.
  3. poll() returns a result set. Each record carries the topic and partition it belongs to, the offset of the record, and its key and value. We then iterate through the result set and do our custom processing.
  4. Once processing is completed, the result is written to a data store; here a running count of customers from each country is maintained by updating a hashtable.
  5. The ideal way to finish is to call close() before exiting, as in the finally block above. This closes the active network connections and sockets, and it also triggers a rebalance immediately rather than waiting for the group coordinator to discover that the consumer is gone and assign its partitions to other consumers.


Let's consider a scenario where we need to read data from a Kafka topic and, only after some custom validation, add the data to a data storage system. To achieve this we would develop a consumer application that subscribes to the topic, so that our application starts receiving messages on which the validation and storage process runs. Now suppose the rate at which messages are published to the topic exceeds the rate at which our consumer application can consume them.

If we stay with a single consumer, we may fall behind and fail to keep our system up to date with the incoming messages. The solution is to add more consumers and scale up consumption of the topic. This is easily achieved by creating a consumer group: a set of consumers with the same behaviour that read messages from the same topic by splitting the workload. Consumers from the same group each get their own share of the topic's partitions, which scales up message consumption and throughput. If we have a single consumer for a topic with 4 partitions, it will read messages from all partitions:

individual partition

The ideal architecture for the above scenario is shown below, where four consumers read messages from individual partitions:

architecture design

If there are more consumers than partitions, the extra consumers sit idle, which is also not a good architecture design:

consumer groups

There is another scenario as well, where more than one consumer group subscribes to the same topic:

consumer groups

This, along with other Kafka basic questions for freshers, is a regular feature in Kafka interviews; be ready to tackle it with the approach mentioned.

Description

Apache Kafka is an open-source stream-processing software platform developed by LinkedIn and donated to the Apache Software Foundation.

The increase in popularity of Apache Kafka has led to an extensive increase in demand for professionals who are certified in the field of Apache Kafka. It is a highly appealing option for data integration, as it offers unique attributes such as a unified, low-latency, high-throughput platform for handling real-time data feeds. Other features such as scalability, low latency, data partitioning and its ability to handle numerous diverse consumers make it even more desirable for data integration use cases. To mention, Apache Kafka has a market share of about 9.1%. It is the best opportunity to move ahead in your career. This is also the best time to enroll for Apache Kafka training.

There are many companies that use Apache Kafka. According to cwiki.apache.org, the top companies that use Kafka are LinkedIn, Yahoo, Twitter, Netflix, etc.

According to indeed.com, the average salary for an Apache Kafka architect ranges from $101,298 per year for a Senior Technical Lead to $148,718 per year for an Enterprise Architect.

With a lot of research, we have brought you a few Apache Kafka interview questions that you might encounter in your upcoming interview. These Apache Kafka interview questions and answers for experienced professionals and freshers alone will help you crack the Apache Kafka interview and give you an edge over your competitors. So, in order to succeed in the interview, you need to read, re-read and practice these Apache Kafka interview questions as much as possible. You can do this with ease if you are enrolled in a big data training program.

If you wish to make a career of it and have Apache Kafka interviews lined up, then you need not fret. Take a look at the set of Apache Kafka interview questions assembled by experts. These Kafka interview questions for experienced professionals as well as freshers, with detailed answers, will guide you in a whole new manner to crack the Apache Kafka interviews. Stay focused on the essential interview questions on Kafka and prepare well to get acquainted with the types of questions that you may come across in your interview on Apache Kafka.

Hope these Kafka Interview Questions will help you to crack the interview. All the best!
