What Are Microservices and Why Do Developers Love Them?
Microservices are a hot topic. Thanks to a number of success stories, the microservices architecture (not to be confused with simply exposing endpoints) has proven to be a great way to build, deliver and deploy autonomous, independent and scalable features.
A simple way to understand microservices is to think of them as independent pieces of a puzzle, working together to deliver a major feature or functionality.
The purpose of this approach is to achieve the following:
Maximize the autonomy of multiple teams.
Each microservice encapsulates all the logic and data it requires, which means it can be developed independently of other components or features. This creates a work environment where teams can get more done without having to coordinate with other teams.
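To make the idea of encapsulation concrete, here is a minimal sketch of a service that owns both its data and its business rules. The service name, the stock data and the reserve operation are all hypothetical, purely for illustration: the point is that other teams consume only the public interface and never touch the underlying data store.

```python
# A minimal sketch of an encapsulated microservice: the service owns its
# data store and exposes behavior only through its public interface.
# InventoryService, the SKUs and the stock levels are all illustrative.

class InventoryService:
    def __init__(self):
        # Private, service-owned data; no other service reads it directly.
        self._stock = {"sku-1": 5, "sku-2": 0}

    def reserve(self, sku: str, quantity: int) -> bool:
        """Reserve stock if available; the business rule lives with the data."""
        available = self._stock.get(sku, 0)
        if available < quantity:
            return False
        self._stock[sku] = available - quantity
        return True

# Another team calls the interface, never the database behind it.
inventory = InventoryService()
reserved = inventory.reserve("sku-1", 3)   # True: 5 in stock
rejected = inventory.reserve("sku-2", 1)   # False: out of stock
```

Because the rule "you cannot reserve more than is in stock" lives inside the service, it can change without any coordination with consuming teams.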
Optimize development speed.
Microservices architecture empowers teams to build powerful services easily and quickly by reducing complexity, QA and deployment time, dependencies, and alignment/grooming discussions.
Focus on automation.
People make mistakes. More systems to operate means more room for error. So, how do you minimize risk? Automate everything. This way, small pieces can easily reach the deployment pipeline, where they can serve any other team or feature. In addition, reduced complexity means faster automated testing for UI, API, integration and so on.
Provide flexibility without compromising consistency.
Give teams the freedom to do what is right for their services, but have a set of standardized building blocks to keep things organized in the end. Aside from the development benefits, smaller pieces of code are easier to test and cover with unit or automation tests. Consequently, they’re easier to change and adapt.
They are built for resilience.
Systems can fail for a number of reasons, and a distributed system introduces a whole set of new failure scenarios. However, error management and failure-handling mechanisms are easier to implement, test and maintain, ensuring measures are in place to minimize impact. Note that multiple points of failure require a stronger focus on integration tests, because smoke tests for deployments become less relevant.
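One such failure-handling mechanism, sketched below under stated assumptions, is a circuit breaker: after repeated failures of a dependency, the caller stops hammering it and fails fast instead. The class and threshold here are illustrative; a production breaker would also add a timeout and a half-open state to probe for recovery.

```python
# A naive circuit breaker sketch: after max_failures consecutive errors,
# further calls fail fast instead of piling load onto a broken dependency.
# Names and thresholds are illustrative, not from any particular library.

class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.max_failures:
            # Fail fast; real breakers re-probe after a cooldown (half-open).
            raise CircuitOpenError("circuit open; dependency assumed down")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success closes the circuit again
        return result

breaker = CircuitBreaker(max_failures=3)
result = breaker.call(lambda: "ok")  # succeeds, failure counter stays at 0
```

Small, self-contained mechanisms like this are exactly what is easier to implement and test in a microservice than in a monolith, because the failure boundary is explicit.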
With a microservices architecture, instead of one codebase, you will have many. Have guidelines and tools in place to ensure consistency. If development is strongly guided by the same standards and practices, any team can easily pick up a microservice and change it because of the smaller, simplified codebase and encapsulated logic operations.
Due to the specialized nature of microservices, problem identification should theoretically be easier because we can pinpoint what service is the one responsible for the problem. Logging and data separation are mandatory to facilitate this feature.
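A common way to make that pinpointing possible, sketched here with illustrative field names, is to tag every log line with the service name and a correlation id, so one request can be followed across services and a failure attributed to exactly one of them.

```python
# A sketch of per-service logging separation: every event is a structured
# JSON line carrying the service name and a correlation id. The service
# name "pricing-service" and the field names are hypothetical.
import json
import logging

logging.basicConfig(level=logging.INFO)

def make_logger(service_name: str) -> logging.Logger:
    logger = logging.getLogger(service_name)
    logger.setLevel(logging.INFO)
    return logger

def log_event(logger: logging.Logger, correlation_id: str, message: str) -> str:
    # One JSON object per line lets a log aggregator filter by service
    # or follow a single correlation id across many services.
    line = json.dumps({
        "service": logger.name,
        "correlation_id": correlation_id,
        "message": message,
    })
    logger.info(line)
    return line

pricing_log = make_logger("pricing-service")
entry = log_event(pricing_log, "req-42", "price recalculated")
```

With this in place, "which service broke?" becomes a query over the `service` field rather than an archaeology exercise.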
It sounds almost too good to be true, and it certainly makes you wonder why companies are not implementing microservices architecture into all of their projects. But, while it sounds great on paper, like any other architectural pattern, microservices architecture comes with its own set of limitations that need to be taken into account.
Will You Benefit from Microservices Based on Your Business Domain?
One of the main requirements of microservices architecture is the ability to separate the application business domain into atomic subdomains, each with its own data and business logic. This takes small applications directly out of the equation.
I remember an interview several years ago for a Software Architect position, in which the focus of the interviewer was how I could implement the backend for a “Tic-Tac-Toe” game using microservices. It was clear this type of application would never benefit from any of the advantages of microservices, but we still lost two hours theorizing about how it could be done.
On the other hand, large scale applications usually come with a high degree of complexity in terms of features, data flows and data volume. As such, it might be difficult to create a clear separation of the domain in a way that facilitates the microservices architecture. Usually, to ensure a high degree of control, companies opt for a Component/Module based approach and migrate Components to microservices later on.
A financial client I worked with in the past had a monolith application that was very difficult to change and maintain. While we considered a migration to microservices, we chose a Component approach simply because coordinating development required precise knowledge of each feature, and there was no documentation and no one with a decent overview of the system. Microservices would have worked great once implemented, but doing it correctly would have taken years of development.
In this context, medium-sized applications are the most likely candidates for this pattern, as long as domain complexity is low enough to allow subdomain separation, and the data flows and data volume are manageable enough to fit microservices.
What Are Developers Saying About the Microservices Architecture?
Although microservices architecture is a trend on the rise and throngs of people are talking about it, very few Romanian companies have relevant success stories with this pattern. In turn, most developers do not possess working knowledge of the pattern, and their implementations, no matter how diligently done, may fall short of the standards and requirements of microservices.
If you’re curious enough to read a microservice guide, you might be surprised to see the rules and practices are generic and surprisingly lax, leaving it open to interpretation (this is reflected in the multitude of frameworks and approaches that facilitate microservices development).
In this context, and beyond the overall architecture of any system, individual developers need to be able to choose the correct approach, level of separation and communication channels, among many other factors, all of which can severely affect the development process and the desired result.
In my experience, I have seen teams introduce up to several dozen technologies and frameworks (each excellent in its own way) in a handful of microservices, leading to a “ball of mud” of dependencies and overlapping functionalities that could never be maintained by a single team. Furthermore, the effort to synchronize all these technologies, and the resources required to run them all, made the products unusable. For example (leaning towards a Delta Architecture approach), a few microservices would accommodate SQL and NoSQL databases alongside a distributed cache, two different messaging systems, synchronous and asynchronous processing, and several AWS services just to calculate and update some product prices. It took hours.
A unified understanding of the pattern established within the team and a more rigid set of practices are required for a successful implementation of this architectural pattern. Software Architecture analysis and System Design need to go hand in hand each iteration to make sure only the required implementations make it into the code base. Finally, focusing on smaller and more manageable commits and facilitating code review is key to keeping things on the right path.
How to Manage the Challenge?
Another typical challenge is the number of teams that participate in the development process. Ideally, having multiple teams allows separation of the development of each feature, ensuring parallel development of smaller pieces of code that together build the required feature.
Unfortunately, practice has shown that varying degrees of experience across teams lead to constant bottlenecks during delivery. One team might move slower than expected, leaving the other team waiting, or mocking the dependencies and having to revisit their code after the slower team delivers.
In order to manage this challenge, management needs to be more involved in the delivery process. The Agile ceremonies become even more important to the success of the product and constant grooming is required to align the process between teams. As stated by Conway’s Law, “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations” and as such, microservice communication would mimic the issues that appear in the communication between teams, leading to a strong necessity to improve and maintain a high level of communication.
Testing the Data
Monolithic or isolated systems are much easier to test than microservices. No external dependencies, encapsulated business logic, and a certain beneficial rigidity of the system, maintained by sufficient unit test coverage, make standard applications less challenging in terms of testing.
Microservices, on the other hand, imply external dependencies by definition. Even if unit tests ensure the correct intent of individual service implementations, business value can only be achieved through service communication.
As such, great effort is required to automate integration tests, ensuring that the collaboration of services, seen as a single feature, works as expected. This not only implies having automation QA specialists in each team, but also developing separate applications just to test the applications you are developing.
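The shape of such an integration test can be sketched in-process: two stand-ins for cooperating services, and a test that checks the feature they produce together rather than either one alone. The service names and the pricing rule below are entirely hypothetical.

```python
# A sketch of an automated integration test: the feature under test
# ("final price") only exists when two services cooperate. CatalogService,
# DiscountService and the 10% rule are illustrative stand-ins; in a real
# setup these would be deployed services reached over the network.

class CatalogService:
    def base_price(self, sku: str) -> float:
        return {"sku-1": 100.0}.get(sku, 0.0)

class DiscountService:
    def discount_for(self, sku: str) -> float:
        return 0.1 if sku == "sku-1" else 0.0

def final_price(catalog: CatalogService,
                discounts: DiscountService,
                sku: str) -> float:
    # The business value lives in the collaboration, not in either service.
    return catalog.base_price(sku) * (1 - discounts.discount_for(sku))

def test_final_price_combines_both_services():
    price = final_price(CatalogService(), DiscountService(), "sku-1")
    assert abs(price - 90.0) < 1e-9

test_final_price_combines_both_services()
```

This is the "separate application just to test the application" in miniature: the test harness has to know about both services and their contract, which is exactly the extra effort described above.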
Infrastructure needs to (at least) accommodate Continuous Delivery with automated execution of integrated tests and a notification system that is able to identify emerging problems after each microservice is deployed.
Another challenge related to automated testing is the correctness of data: generating a sufficient volume of correct data so that the automated tests are relevant. In my experience, this usually takes as much time as, if not more than, the actual implementation of the microservice, leading to an increase in development time and, obviously, higher development costs. In worst-case scenarios, integration testing becomes a bottleneck for delivery.
It’s a Matter of Resources
Let’s assume we have an application that is a perfect fit for the microservices approach and the teams are confident they can get it done within the time constraints. The next thing to take into account is the cost of the infrastructure.
We assumed, at some point in the system design process, that some parts of the system would need to accommodate a high volume of data and requests while others would be rarely used. As such, we try to separate these features based on usage, and design a deployment plan that would facilitate more instances of specific micro-services. Existing tools, like Kubernetes, really make it simple to orchestrate deployments but the trick is to have enough resources for your needs.
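As a hedged illustration of "more instances for the hot services only", the heavily used part of the system can be given a Kubernetes HorizontalPodAutoscaler while rarely used services keep a single replica. The deployment name and all thresholds below are assumptions, not taken from any real system.

```yaml
# Illustrative only: scale just the busy "pricing-service" deployment.
# The name, replica bounds and CPU target are hypothetical values.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pricing-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pricing-service
  minReplicas: 2          # rarely used services would stay at 1 replica
  maxReplicas: 10         # cap instance count so costs stay predictable
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The orchestration itself is the easy part; the `maxReplicas` ceiling is where the resource budget discussed below makes itself felt.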
On a number of occasions, in order to improve the performance of some features, I have seen resource costs increase tenfold, forcing us to drop the whole thing and accept that the response time for some requests would be seconds or minutes, as long as costs stayed manageable. For example, due to the nature of the product, batch processing was not possible, leading to sequential processing (requiring a service call for each article). Scaling to improve performance meant having hundreds of instances (to accommodate hundreds of articles and hundreds of parallel clients), which was not cost effective.
Diving into the Microservices Ocean
It is true that in some cases this pattern has proven to deliver great results and has greatly improved the quality and performance of various services and products.
In addition, it is easy to buy into the hype of microservices. But the truth is that not all projects need this, and for some of them it only makes work harder or more expensive.
Before diving into the microservices ocean, take the necessary time to understand the costs and benefits of the microservices architecture, make sure your teams understand what you want to achieve, and make sure they know how to do it right.