Sunday, May 7, 2023

Microservice with Kotlin - Cloud Native Microservices

Microservices principles

Defining microservices principles will allow us to build scalable, easy-to-maintain enterprise applications. We will focus on the benefits and downsides as we review them. We understand that there could be some disagreement about some of them; however, we encourage you to review them all. Finally, we know that there are probably dozens more principles that could be included, but we chose the ones that made the most sense in this context.

Defining design principles

We need to choose a set of principles when we design microservices. Each of them has its own advantages, which will be reviewed later on, but defining them will also give us a consistent approach for different kinds of problems and will help others understand our architecture. The key principles that we are going to define are:

Modelled around business capabilities

Loosely coupled

Single responsibility

Hiding implementation

Isolation

Independently deployable

Build for failure

Scalability

Automation

Modelled around business capabilities 

A well-designed microservice should be modelled around the business capabilities it is meant to implement. Designing software has a component of abstraction: we are used to getting requirements and implementing them, but we must consider how everyone, including us, will understand the solution, now and in the future.

When we need to update, or even modify, our microservices, we need to abstract back to the original concept that defined them. In that process, we could realize that something was not as we originally understood, or that our design could not evolve. We may even discover that we have broken the boundaries of our business domain and no longer implement the original capability, or that it is actually implemented across a set of unrelated microservices. We could end up coupling our microservices together, and that is something we want to avoid.

The domain experts of these business capabilities have a clear understanding of how they operate and how those capabilities are combined and used. Working with them could make our microservices understandable for everyone, including our future selves, and will move our services to become not just an abstraction, but a mapping of the original business capability. Work as closely as you can with your domain experts; it will always benefit you.

Loosely coupled

No microservice exists on its own; every system needs to interact with others, including other microservices, but we need to do it as loosely coupled as we can. Let's say that we are designing a microservice that returns the available offers for a given customer. We may need a relation to our customer, for example, a customer ID, and that should be the maximum coupling that we accept.

Imagine a scenario in which a component that uses our offers needs to display the customer name alongside those offers. We could modify our implementation to call the customer microservice and add that information to our response, but in doing so, we are coupling ourselves to the customer microservice. In that case, if the customer name field changes, for example, to become not just a single name but separate forename and surname fields, we would need to change the output of our microservice. That type of coupling needs to be avoided; our microservices should return only the information that is really under their domain. Remember that our domain experts can help us understand whether a business capability owns a function; the experts in customer offers will probably know that the customer name is something that is handled in another business capability.

We need to take care of how we couple, not only between microservices, but with everything in our architecture, including external systems. That is one of the reasons why every microservice should own its own data, including the database where the data is persisted.
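To make this concrete, here is a minimal Kotlin sketch, with hypothetical names, of an offers service whose only coupling to the customer domain is a customer ID:

    // Hypothetical sketch: the offers domain references a customer only by ID.
    data class Offer(val id: String, val description: String, val customerId: String)

    interface OfferRepository {
        fun findByCustomerId(customerId: String): List<Offer>
    }

    class OfferService(private val repository: OfferRepository) {
        // A customer ID is the maximum coupling we accept with the customer domain.
        // We deliberately do not call the customer microservice to enrich the
        // response with the customer name; that data belongs to another capability.
        fun offersForCustomer(customerId: String): List<Offer> =
            repository.findByCustomerId(customerId)
    }

If the customer model changes, nothing here needs to change, because we never reached into the customer's domain.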

Single responsibility 

Every microservice should have responsibility over a single part of the functionality provided by the application, and that responsibility should be entirely encapsulated by the microservice. The design of the microservice should be narrowly aligned with that responsibility. We could adopt Robert C. Martin's definition of the principle applied to OOP, which states: "A class should have only one reason to change"; for this principle, we can say: a microservice should have only one reason to change. If we realize that changing a business function within our application modifies several microservices, or that a change cascades into unrelated microservices, it is time to reconsider how we design them. This does not mean that we should make microservices that do only one operation. It is probably a good idea to have a microservice that handles the customer operations, such as create, find, and delete, but it probably shouldn't handle operations such as adding offers to a customer.
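As a rough Kotlin sketch, with hypothetical names, of the boundary described above:

    // Hypothetical sketch: a customer microservice owns only customer operations.
    data class Customer(val id: String, val name: String)

    interface CustomerService {
        fun create(customer: Customer): Customer
        fun find(id: String): Customer?
        fun delete(id: String)
        // Deliberately absent: addOfferToCustomer(...). Attaching offers to a
        // customer is a different reason to change, so it belongs to the offers
        // microservice, not here.
    }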

Hiding implementation

Microservices usually have a clear and easy-to-understand interface that must hide the implementation details. We shouldn't expose the internal details, neither the technical implementation nor the business rules that drive it. Applying this principle, we reduce our coupling to others and the risk that any change in our details affects them. We prevent technical changes or improvements from impacting the overall architecture. We should always be able to change whatever we need, from where we persist our business model to the programming languages or frameworks that we use. But we also need to be able to modify our logic and rules to adapt to any change within our domain without affecting the overall application. Helping to handle change is one of the benefits of a well-designed microservice architecture.
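A minimal Kotlin sketch of the idea, using hypothetical names: consumers depend only on the interface, so everything behind it can change freely:

    // Consumers see only this contract; it exposes no implementation details.
    interface OfferCatalog {
        fun availableOffers(customerId: String): List<String>
    }

    // The detail (an in-memory map today, a database or external system tomorrow)
    // can be swapped without affecting anyone that uses the interface.
    class InMemoryOfferCatalog : OfferCatalog {
        private val offers = mapOf("customer-1" to listOf("10% off screens"))
        override fun availableOffers(customerId: String): List<String> =
            offers[customerId] ?: emptyList()
    }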

Isolation

A microservice should be physically and/or logically isolated from the infrastructure of the systems that it depends on. If we use a database, it must be our database; if we are running on a server, it should be on our server, and so on. With this, we guarantee that nothing external is affecting us and that we are not affecting anything external. This will help with everything from deployments to performance and monitoring, and even with building our continuous delivery pipeline. It will facilitate how we can be controlled and scaled independently, and will help the ops functions within our team to manage our microservices. We should move away from the days when a failure in some part of the architecture affected others. Containers are one of the key technologies to effectively achieve this principle.

Independently deployable

Microservices should be independently deployable; if they are not, it probably means that there is some kind of coupling within our architecture that needs to be solved. If we meet all the other principles but fail at this one, we are probably diminishing the benefits of this architecture. Having the ability to deliver constantly is one of the advantages of the microservices architecture; any constraint should be removed, just as we remove bugs from our applications. We should take care of deployments from the beginning of the design of our microservices and architecture; finding a constraint in this area at a late stage could have a big impact on the overall application.

Build for failure 

It doesn't matter how many tests we run on our microservice, how many controls are in place, or how many alerts could be triggered; if our microservice can fail, we need to design for that failure, to handle it as gracefully as possible, and to define how we could recover from it.

"Anything that can go wrong will go wrong." – Murphy

When we approach the initial design of a microservice, we need to start working on the most basic errors that we need to handle. As the design grows, we should think of all the edge scenarios, and finally what could go really wrong. Then, we need to assess how we are going to notify, monitor, and control those situations, how we could recover, and whether we have the right information and tools for solving them. Think of these areas when you design a microservice:

Upstream - Upstream means understanding how we are going to notify errors to our consumers, or whether we are going to notify them at all, always remembering to avoid coupling.

Downstream - Downstream refers to how we are going to handle failure in something that we depend on, such as another microservice, or even a system, like a database.

Logging - Logging is about taking care of how we are going to log any failure, considering whether we are doing it too often or too infrequently, the amount of information, and how it can be accessed. We should take special care with sensitive information and performance implications.

Monitoring - Monitoring needs to be designed thoughtfully. It is very problematic to handle a failure without the right information in the monitoring systems; we should consider which elements of the application provide meaningful information.

Alerting - Alerting is about understanding which signals could indicate that something is going wrong. It links to our monitoring, and probably to our logging, but for any well-designed application, it is not enough to just alert on anything strange. We require a deeper analysis of the signals and how they are related.

Recovery - Recovery is designing how we are going to act on those failures to get back to a normal state. Automatic recovery should be our target, but manual recovery should not be neglected, since automatic recovery could fail.

Fallbacks - Think about how, even if our microservices are failing, we can still respond to whoever uses them. For example, if we design a microservice that retrieves the offers for a customer but encounters a problem accessing the data layer, maybe it could return a default set of offers that allows the application to at least show some meaningful information. In the same way, if we consume an external service, we may have a fallback mechanism for when that service is not available. Fallbacks are a common pattern to prevent a problem within your architecture from affecting other parts of the system. If we have a good fallback, our application could keep working until that problem is fixed.
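Here is a minimal Kotlin sketch of the fallback described above, with hypothetical names; if accessing the data layer fails, we still answer with a default set of offers:

    data class Offer(val id: String, val description: String)

    interface OfferRepository {
        fun findByCustomerId(customerId: String): List<Offer>
    }

    class OfferService(private val repository: OfferRepository) {
        // Default data used when the data layer is unavailable.
        private val defaultOffers = listOf(Offer("default", "Welcome offer"))

        // If the repository throws, we degrade gracefully to the defaults, so
        // whoever consumes us still gets meaningful information.
        fun offersForCustomer(customerId: String): List<Offer> =
            runCatching { repository.findByCustomerId(customerId) }
                .getOrElse { defaultOffers }
    }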

Scalability 

Microservices should be designed to be independently scalable. If we need to increase how many requests we can handle or how many records we can hold, we should do it in isolation. We should avoid a situation in which, due to coupling in the architecture, the only way to scale our application is to scale several components together, or the system as a whole. Let's go back to the original SoA application example and handle a scenario where we need to scale our offers capability:



Example of scaling a coupled SoA application 

Even if what we need to scale is our offers capability, due to the coupling of the system, we need to do it as a whole. We will increase how many instances of the presentation and business layers we have, and we will increase our database, either with more instances or with a bigger database. We will probably also need to upgrade some of those servers, as the resources that they require will increase. In a microservices architecture, we could scale just the elements that are needed. Let's view how we could scale the same application using microservices:



We have just increased what was required for the offers capability, keeping the rest of the architecture intact. We also need to consider that in microservices, those servers are smaller and don't need as many resources, due to their limited scope. In a well-designed microservice architecture, we could effectively have more capacity with less infrastructure, since it could be optimized for more accurate usage and scaled independently.

Automation

Our microservices should be designed with automation in mind, from building and testing to deployment and monitoring. Since our services are going to be small and isolated, the cost of automating them should be low and the benefits should be high. With this principle, we increase the agility of our application and prevent unnecessary manual tasks from having an impact on the system. For those reasons, Continuous Integration and Continuous Delivery should be designed in from the beginning of our architecture.

Domain-Driven Design 

Using Domain-Driven Design (DDD) in our microservices will help us meet our microservices principles, but DDD will also help organize our product teams in ways that increase the value that we get from this type of architecture. But first, we need to understand what DDD actually is, and how we could apply it to create a clean architecture.

What is Domain-Driven Design

Domain-Driven Design is a software development approach that connects the implementation to an evolving model of a core domain. The term Domain-Driven Design was coined by Eric Evans in his book of the same title. When we approach a complex system, we usually abstract it to a model that describes the different selected aspects of the system and how we could use it to solve problems. When multiple models are in play, and the code base of different models is combined, the software becomes buggy, unreliable, and difficult to understand. It is often unclear in which context a model should, or should not, be applied. The domain is the sphere of knowledge that the users of our system understand and use to interact with our software; they are the domain experts. In DDD, we define the context within which a model applies; we explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas, keeping the model strictly consistent within these bounds.

Ubiquitous language 

In DDD, we should build a common and rigorous language between developers and users. This language should be based on the domain model; it will help us have a ubiquitous and fluid conversation with the domain experts, and it will prove essential when approaching testing. Since our domain model is part of our software, we should be precise, to avoid any ambiguity, and evolve both model and language as our knowledge of the domain grows. But when creating software, the ubiquitous language should be used not only in our domain model, but also in our domain logic and even our architecture. It will allow a ubiquitous understanding by any team member. Creating tests that use the domain language helps any team member understand our domain logic.
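As an illustration, a small Kotlin test sketch (assuming JUnit 5 and hypothetical names) whose wording follows the domain language, so any team member could read what it verifies:

    import org.junit.jupiter.api.Assertions.assertTrue
    import org.junit.jupiter.api.Test

    class LimitedTimeOfferTest {
        data class Offer(val name: String, val expired: Boolean)

        // Domain logic under test, named in the ubiquitous language.
        private fun availableOffers(offers: List<Offer>) = offers.filterNot { it.expired }

        @Test
        fun `expired limited-time offers are not available to customers`() {
            val offers = listOf(Offer("speed offer", expired = true))
            assertTrue(availableOffers(offers).isEmpty())
        }
    }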

Bounded context 

When a domain model grows, it becomes complicated to keep a unified domain model. Sometimes, we face a situation where we see two different representations of a concept; for example, let's examine the concept of family in a large model. In a shopping platform, we may have the concept of product families; for example, our fabulous 32" LCD screen and the classic 24" CRT screen are part of the screen family. On the other hand, our speed offers and last-day offers are part of our limited-time offer family. We can see that family is not exactly the same thing in products and in offers; they probably both have a unique name in their model, but in each context they may have a totally different model and logic. In DDD, we separate them into bounded contexts, a boundary that surrounds a model. This keeps the knowledge inside the boundary consistent, ignoring the outside world, so we can still have our ubiquitous language for our domain model.
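A minimal Kotlin sketch of this idea, with hypothetical names: the same term, family, gets its own model in each bounded context:

    // In the product context, a family groups products, such as screens.
    object ProductContext {
        data class Family(val name: String, val products: List<String>)
    }

    // In the offers context, a family groups limited-time offers instead.
    object OffersContext {
        data class Family(val name: String, val offerNames: List<String>)
    }

    fun main() {
        val screens = ProductContext.Family("screen", listOf("32\" LCD", "24\" CRT"))
        val limited = OffersContext.Family("limited-time offers", listOf("speed offer"))
        println("${screens.name} and ${limited.name} share a word, not a model")
    }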

Context mapping 

In a large application designed for several bounded contexts, we can lose sight of the global view. It is inevitable that the various bounded contexts will need to share or communicate data between each other. A context map is a global view of the system as a whole, showing how our bounded contexts should communicate with each other.



This is an oversimplified example that shows three bounded contexts and how they are mapped. In the product context, we have our product and the family that it belongs to. It holds all the operations for this domain context and does not have a direct dependency on any other context. Our offers bounded context has a dependency on the product context, but this is a weak relation that should purely reflect the ID of the product that a particular offer belongs to. This context defines the operations that contain the domain logic for this context. In our shopping bounded context, we have a weak relation with the product that belongs to a shopping list, and it holds the operations for this context. Finally, both the offers and shopping contexts have a relation with the customer, which probably belongs to a separate bounded context.

Using DDD in microservices

Now that we have a clearer understanding of what DDD is, we need to review how we are going to apply it to our microservices. We could start with the following points:

Bounded Context: We should never create a microservice that includes more than one bounded context; it is better if we can map that whole context to a single microservice, something that indicates that our context is really bounded

Ubiquitous Language: We need to ensure that the language that our microservice speaks is ubiquitous, so the operations and interfaces that are exposed are expressed in the context domain language

Context Model: The model that our microservice uses should be defined within the bounded context and use the ubiquitous language, even for entities that are not exposed in any of the interfaces that the microservice provides

Context Mapping: Finally, we need to review the context mapping of the whole system to understand the dependencies and coupling of our microservices

After reviewing these points, we will notice that we are in fact fulfilling the main principles defined before. Our microservices are modelled around business capabilities (our context domains), are loosely coupled (as our context mapping shows), and have a single responsibility (as a bounded context should). Microservices that implement a bounded context could easily hide their implementation, and they will be naturally isolated, so we could deploy them independently. Having those principles in place will make it easy to build for failure, scalability, and automation. Finally, having a microservice architecture that follows DDD will deliver a clean architecture that any team member could understand. The ubiquitous language of a well-designed bounded context will make many tasks easy in a microservice's life cycle, from working with the domain experts to tests or any tasks for the ops function of our team.

Reactive microservices

Reactive programming is currently a trending topic, mainly because of the benefits of implementing software using this new paradigm. Spring Framework 5.0 included numerous changes to take advantage of this programming model, and many components of the Spring family have evolved to support it. In fact, new Spring libraries have been created to add additional support for applications interested in what is called the reactive revolution. Additionally, Spring has rewritten the core of the framework using reactive technologies, to the benefit of the applications that use them. In this section, we will understand the basics and principles of reactive programming and how we could apply it to create reactive microservices.

Reactive programming

We are quite familiar with imperative programming: in our software, we ask for something to be done and expect a result; meanwhile, we wait, our action blocked until that result arrives.

Consider this small piece of pseudo code as an example:

    var someVariable = getData()
    print(someVariable)

In this couple of instructions, we set the content of a variable from the output of a function that returns data; when the data is available, we print it out. The really important part of this small piece of code is that our program stops until we completely get the data; this is what is called a blocking operation.

There is a way to bypass this limitation that has been used extensively in the past: we could create a separate thread to get our data. But effectively, that thread will be blocked until completion, and if different requests need to be handled, we end up creating a thread for each one, possibly using a pool, but eventually we will reach a limit on how many of those threads we can handle. This is how most traditional applications work, but now, this can be improved. Let's see some pseudo code for this in reactive programming:

    subscribe(::getData).whenDone(::print)

What we are trying to do here is to subscribe to an operation and, when that operation is complete, send the result to another operation. In this example, when we get the data, we print the results. The important part is that after that sentence our program continues, so it can process other things; this is what is called a non-blocking operation.

This can be applied not just to a single result: we could subscribe to a reactive stream of data, and when the stream starts to flow, it will be calling our function, which will progressively print the data that we receive. A reactive stream is a collection of data that will continuously flow as soon as it is ready, so imagine that, instead of querying a database for some results, the database starts sending results as soon as they are ready. Many modern database drivers support these concepts.

This new programming model allows us to have high-performance applications that process far more requests than in a more traditional blocking model. This approach utilizes resources more effectively, which could reduce the amount of infrastructure required for our applications. But now we need to understand the real principles of reactive programming.
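To make the pseudo code concrete, here is a minimal Kotlin sketch using Project Reactor (assuming the io.projectreactor:reactor-core dependency), the library that Spring Framework 5.0 builds on:

    import reactor.core.publisher.Flux

    fun main() {
        // Subscribe to a stream; the lambda is invoked as each element becomes
        // ready, instead of blocking while we wait for a complete result.
        Flux.just("offer-1", "offer-2", "offer-3")
            .map { it.uppercase() }
            .subscribe { println(it) }

        // The program continues here; with a real asynchronous source, the
        // elements above would arrive progressively while other work proceeds.
        println("subscription registered")
    }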

Responsive

Modern applications should respond in a timely manner, not only to the users of the system, but also when reacting to problems and errors; we are a long way from the days when our applications would freeze whenever something took longer to answer or failed for unknown reasons. We need to help users have a seamless and predictable experience, so they can work progressively in our system, and do so with consistent quality, so they will be encouraged to use the system, removing the old stigma of erratic user experiences.

Resilient 

We covered much of this topic under our build for failure and isolation principles, but the Reactive Manifesto also indicates that if we fail to be resilient, we tend to affect our responsiveness, something that we should handle. Some of these issues can be addressed by correctly applying our scalability principle, since we can achieve resilience through replication, and replication depends on our scalability.

Elastic 

Reactive systems should be elastic, so they effectively apply the scalability principle to stay responsive under varying workloads; but more than that, the system itself may have the capability of increasing or decreasing the resources that it allocates. In older architectures, resource planning was part of the design: we designed thread pools to handle our requests with a certain capacity, and we prepared our servers to be able to manage them. In reactive systems, our services can dynamically fetch more resources when required and free them when they are no longer needed.

Message-driven 

Reactive systems use asynchronous messaging to flow information through the different components, with very loose coupling, which allows us to interconnect those systems in isolation. We could think of this as connecting streams through pipes: one service could subscribe to another to get some information, and that second service could be subscribed to a couple of additional services, combining the data and returning it back to the original service.



Each of those services does not know why or how that information is used, so they have little information about their dependencies. This allows us to replace those pieces easily, and it also helps with handling errors: in case of failure, we could simply create a stream of errors, with other receivers that will handle and process them. But the Manifesto also speaks about applying back pressure; we need to understand that concept further.
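A small Kotlin sketch of this idea, again with Project Reactor and hypothetical data: two independent streams are combined by a third component that knows nothing about how they produce their data, and a failure is rerouted to a fallback stream:

    import reactor.core.publisher.Flux

    fun main() {
        // Two independent "services" exposed as streams.
        val names = Flux.just("32\" LCD", "24\" CRT")
        val prices = Flux.just(199.99, 59.99)

        // A third component combines them; neither source knows why or how.
        Flux.zip(names, prices) { name, price -> "$name costs $price" }
            .onErrorResume { error -> Flux.just("fallback after: ${error.message}") }
            .subscribe { println(it) }
    }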

Back pressure 

Back pressure arises when a reactive system publishes at a higher rate than a subscriber can handle; in other words, it is how a consumer of a reactive service says: please, I am not able to deal with the demand at the moment; stop sending data and do not waste resources (for example, buffer memory). There is a range of mechanisms for handling this, and they are usually close to the reactive implementation, from batching the messages to dropping them; but right now, we don't need to get into the details, just understand that any reactive system must deal with back pressure.
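As a rough Kotlin sketch using Project Reactor's BaseSubscriber: the subscriber controls the demand, requesting a small batch up front and one more element after each one it processes, so a fast publisher cannot overwhelm it:

    import org.reactivestreams.Subscription
    import reactor.core.publisher.BaseSubscriber
    import reactor.core.publisher.Flux

    fun main() {
        Flux.range(1, 1_000)
            .subscribe(object : BaseSubscriber<Int>() {
                // Start by asking for only five elements.
                override fun hookOnSubscribe(subscription: Subscription) = request(5)

                // After processing each element, ask for exactly one more.
                override fun hookOnNext(value: Int) {
                    println("processed $value")
                    request(1)
                }
            })
    }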

Reactive frameworks 

There are several reactive frameworks that we could use to create reactive applications. Let's list the most important ones:

Reactive Extensions (ReactiveX or Rx)

Project Reactor

Java Reactive Streams

Akka

Reactive Extensions 

Reactive Extensions is probably one of the most popular frameworks for creating reactive systems, and it supports a wide set of platforms and programming languages, from JavaScript, using RxJS, to Java, using RxJava, and even the .NET platform, using Rx.NET. It uses the observer pattern to perform non-blocking operations; most of the major reactive systems have been built using Rx.
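As a small illustration, a Kotlin sketch using RxJava (assuming the io.reactivex.rxjava3:rxjava dependency):

    import io.reactivex.rxjava3.core.Observable

    fun main() {
        // An Observable publishes items; the observer reacts as they arrive.
        Observable.just("32\" LCD", "24\" CRT")
            .filter { it.contains("LCD") }
            .subscribe { println("available: $it") }
    }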

Project Reactor

Project Reactor is a JVM reactive library that follows the Reactive Streams specification and provides a high-level library to easily create reactive applications. Spring Framework 5.0 uses Project Reactor extensively. More details can be found at https://projectreactor.io/.

Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure. You can refer to http://www.reactive-streams.org/.

Java reactive streams

Since Java 9, we have an implementation of reactive streams in the Java platform, and some projects are migrating existing Rx code to the new Java 9 libraries. More details can be found at https://community.oracle.com/docs/DOC-1006738.

Akka

Akka was created by Jonas Bonér, one of the main authors of the Reactive Manifesto, as a toolkit on the JVM, using Scala, to create concurrent and distributed applications. Akka emphasizes the actor-based model and has proven to support highly scalable distributed applications. More details can be found at https://akka.io/.

Reactive microservices 

Now that we have a better understanding of reactive systems, we need to consider why we should create reactive microservices. If we remember what drove SoA into microservices, we can see that the need to create more complex applications, and to produce a better system for our users, is what drives the architecture. With the new reactive programming model, we can create fast, non-blocking software that makes better use of the resources of our infrastructure. We can provide better responsiveness, and we can simplify our development to create highly reusable services that can be connected loosely with each other. Considering how aligned reactive systems are with our principles, and the extensive framework support that they have, we can conclude that the way forward for modern microservices is to become reactive.

