DZone

We all know that creating big, complicated monoliths is a bad idea and that creating microservices is the way to go — but why? Why are monoliths ‘bad’? Why are microservices ‘better’? Is there an optimum number of services we should aim for when building a system? This article takes a mathematical journey into complexity to uncover why microservices make sense, why monoliths are the great entropy monsters we think they are, and what happens if we try to minimise the complexity of our applications.

Microservices are a great way to structure software — they support Domain-Driven Design and allow us to align our deployment model with our delivered business value. Large monoliths, on the other hand, can be very complex, with a high degree of coupling between their parts. Sometimes we find it is better to split them apart, creating subsystems or, if we continue down this path, microservices. These smaller parts are easier to manage, fix and deploy. Practically speaking, the smaller parts seem less complex overall and are easier to handle.

Let’s try to turn these imprecise statements into something more concrete. Is there a way we might compute the complexity of our applications, and can we see what happens to that complexity ‘value’ when we divide a monolithic application into smaller parts?
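As a rough illustration of the kind of calculation involved — this is a toy model of my own, not the article's formula — suppose we treat complexity as the number of potential pairwise interactions between modules (n choose 2). Splitting a monolith of n fully coupled modules into k equal services then shrinks the internal complexity dramatically:

```python
from math import comb

def pairwise_complexity(n: int) -> int:
    """Potential interactions between n fully coupled modules: n choose 2."""
    return comb(n, 2)

def split_complexity(n: int, k: int) -> int:
    """Total internal complexity after splitting n modules into k equal services."""
    per_service = n // k
    return k * pairwise_complexity(per_service)

# A 100-module monolith vs the same modules in 10 services of 10 each:
print(pairwise_complexity(100))   # 4950 potential interactions
print(split_complexity(100, 10))  # 450 — an order of magnitude fewer
```

Of course, this sketch ignores the new complexity the split introduces — the inter-service communication paths — which is exactly why the question of an optimum number of services is worth asking.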
