In recent years, designing software as a collection of services has become a popular way to build applications. In this post, we’ll cover the basics of service-oriented architecture with Rails and Kafka, and how event-driven processing can power your Rails services.
Kafka provides fault-tolerant communication between producers, which generate events, and consumers, which read those events. A single app can contain multiple producers and consumers. Kafka persists every event for a configured retention period, so multiple consumers can read the same event over and over. A Kafka cluster is composed of several brokers, which is just a fancy name for any instance running Kafka.
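To make the persistence and multiple-consumer behavior concrete, here is a toy in-memory model of a topic, written as a sketch rather than real client code (the `ToyTopic` class is a hypothetical stand-in, not the Kafka or ruby-kafka API): events are appended to a log, and each consumer tracks its own offset, so different consumers can read the same events independently.

```ruby
# A minimal in-memory model of a Kafka-like topic. Events are
# persisted in an append-only log, and each consumer keeps its own
# read offset, so consuming an event does not remove it for others.
class ToyTopic
  def initialize
    @log = []              # persisted events, in append order
    @offsets = Hash.new(0) # per-consumer read position
  end

  # Producer side: append an event to the log.
  def produce(event)
    @log << event
  end

  # Consumer side: read all events past this consumer's offset,
  # then advance the offset. Other consumers are unaffected.
  def consume(consumer_id)
    events = @log[@offsets[consumer_id]..] || []
    @offsets[consumer_id] = @log.size
    events
  end
end

topic = ToyTopic.new
topic.produce("user.created")
topic.produce("user.updated")

topic.consume("mailer")    # => ["user.created", "user.updated"]
topic.consume("analytics") # => ["user.created", "user.updated"]
topic.consume("mailer")    # => []
```

Real Kafka works with partitioned logs, consumer groups, and a broker process, but the core idea is the same: the log is the source of truth, and each consumer decides how far along it has read.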
Some of the properties that make Kafka valuable for event pipelines also make it an interesting fault-tolerant replacement for RPC between services. One challenge with direct RPC is that the upstream service is responsible for monitoring downstream availability: if the email service is having a really bad day, the upstream service must know whether it is available, and if it isn’t, the upstream service is also in charge of retrying any failing requests.
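The retry burden described above can be sketched as follows. This is an illustration, not real service code: `EmailService` is a hypothetical stand-in for a flaky downstream, and `call_with_retries` is the kind of loop the upstream has to own under RPC. With Kafka in between, the upstream would instead publish an event and move on, leaving redelivery to the consumer.

```ruby
# A hypothetical flaky downstream: it errors a few times before
# succeeding, simulating an email service having a bad day.
class EmailService
  def initialize(fail_times:)
    @failures_left = fail_times
  end

  def deliver(payload)
    if @failures_left > 0
      @failures_left -= 1
      raise "email service unavailable"
    end
    "delivered: #{payload}"
  end
end

# Under direct RPC, the upstream service must own this retry loop
# (and usually backoff, timeouts, and circuit breaking as well).
def call_with_retries(service, payload, attempts: 3)
  tries = 0
  begin
    service.deliver(payload)
  rescue RuntimeError
    tries += 1
    retry if tries < attempts
    raise
  end
end

flaky = EmailService.new(fail_times: 2)
call_with_retries(flaky, "welcome email") # => "delivered: welcome email"
```

Note that if the downstream stays down longer than the retry budget, the request is simply lost unless the upstream also persists it somewhere, which is exactly the gap an event log fills.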