This blog post deals with inter-process locking mechanisms and their advantages. It also includes an implementation of a distributed locking architecture using Apache Curator.
The idea of this blog is to maintain a list of all my learnings and resources related to machine learning concepts. This includes my Kaggle notebooks and some useful links that have helped me develop a good understanding of the concepts.
In my previous post, we discussed how different Kafka components work together through a hands-on exercise. But if you are looking for an in-depth understanding of Kafka, knowledge of its storage is very important. So, what do I mean by storage? Well, while doing the hands-on in my previous post, I was curious to find out how Kafka stores and retrieves the messages of a topic. So in this blog post, I try to explain the storage internals of Apache Kafka in a simple and practical way.
If you had a chance to go through my previous post, you should have developed a good understanding of Kafka's architecture. But how good is your understanding of a technology without hands-on practice? So, we take a step forward and practically see how these architectural components interact and work under the hood. The hands-on will cover the following points:
Apache Kafka is a distributed streaming platform. Well known for its scalability and fault tolerance, Apache Kafka is extensively used to build real-time data pipelines and streaming applications. It was originally developed at LinkedIn and was later open-sourced through the Apache Software Foundation. Apache Kafka is widely used in production by well-known companies like Uber, Netflix, Twitter, Spotify, and LinkedIn. You can find the complete list here. According to a recent article published by LinkedIn, it processes more than 7 trillion messages per day, which serves as a testament to Kafka's scale.
Given a binary tree, check whether it is a valid binary search tree.
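A minimal sketch of the standard min/max-bounds approach to this problem: every node's value must fall strictly within the range allowed by its ancestors. The `Node` class and its field names (`val`, `left`, `right`) are illustrative assumptions, not part of the original problem statement.

```python
class Node:
    """Illustrative binary-tree node; field names are assumptions."""
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right


def is_valid_bst(root, low=float("-inf"), high=float("inf")):
    # An empty subtree is trivially a valid BST.
    if root is None:
        return True
    # The node's value must lie strictly between the inherited bounds.
    if not (low < root.val < high):
        return False
    # Left subtree values must stay below root.val; right subtree above it.
    return (is_valid_bst(root.left, low, root.val)
            and is_valid_bst(root.right, root.val, high))
```

Passing the bounds down, rather than only comparing each node with its immediate children, catches the case where a deep descendant violates an ancestor's constraint.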