Microservice Madness



This is an op-ed piece referring to "The Death of Microservice Madness in 2018".


Let's call the reader’s attention to the section in the above article that lists some of the challenges with microservices. All too often, when people talk about why they have decided to move to microservices, they inevitably leave out the discussion of the cost. Let's take a look at a few of the most common costs people encounter.


The cost of communication over the network is not trivial.


People often talk about moving to microservices to increase scale. However, they often gloss over the fact that what was a function call on the JVM (nanoseconds) has most likely become a network call (sometimes seconds).

For example, on one system our team saw a single request produce an average of 12 messages on a message queue and take 5 to 6 seconds for a response. If that request were an HTTP call into a single server that then queried a datastore and sent back a response, the call would be on the order of tens or possibly hundreds of milliseconds. Scaling this system was really difficult and required significant monetary resources.
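To make that gap concrete, here is a minimal, self-contained Java sketch that times an in-process method call against an HTTP call to a loopback server, using only the JDK. The class name, the `/add` path, and the trivial `add` function are illustrative assumptions, not details from the system described above:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CallCost {
    // Illustrative stand-in for "a function call on the JVM".
    static int add(int a, int b) { return a + b; }

    // Time a single in-process call: typically tens of nanoseconds.
    static long timeInProcess() {
        long t0 = System.nanoTime();
        add(2, 3);
        return System.nanoTime() - t0;
    }

    // Time the same "work" as an HTTP call to a loopback server:
    // typically hundreds of microseconds to milliseconds, even with
    // no real network in between.
    static long timeLoopbackHttp() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/add", exchange -> {
            byte[] body = "5".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:"
                            + server.getAddress().getPort() + "/add"))
                    .build();
            long t0 = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.ofString());
            return System.nanoTime() - t0;
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("in-process call: " + timeInProcess() + " ns");
        System.out.println("loopback HTTP call: " + timeLoopbackHttp() + " ns");
    }
}
```

Even over loopback, the HTTP round trip is typically thousands of times slower than the in-process call; add real network hops, serialization, and message queues, and the gap only grows.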

Do you know how big the largest AWS instance is that doesn’t require special approval from Amazon before you can spin it up?


The cost of development goes up.


At Cambium, we love the challenges of coordination in highly distributed systems, particularly the challenges of coordinating state and reporting errors when requests span numerous services and machines. Never mind cluster management, failover, versioning, and pagination of data.

Do any of these things sound like they are essential to meeting the needs of your customers, or do they just want software that works?

It’s hard to trace the thread of execution in a distributed system.

Features often span multiple services. That’s the point, right? Well, even if you use a unified logging service like Papertrail, it can be difficult to trace a request flowing through all of the services. This can be mitigated with logging standards, enforced across all services, that include a user-id and a request-id. However, even when you can see what is going on, it can be really hard to know what to do. Errors are notoriously hard to handle across microservices.
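As a sketch of that mitigation (the class and method names here are my own illustrative assumptions), the core of the pattern is a thread-local request-id stamped on every log line. In real Java services this is usually handled by SLF4J’s MDC or by trace ids from a tracing system, with the id forwarded as a header on every downstream call so all services share it:

```java
import java.util.UUID;

public class RequestLog {
    // Holds the current request's id for this thread, so every log line
    // written while handling the request carries the same id.
    private static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

    // Called once at the edge of the system when a request arrives.
    static void beginRequest(String userId) {
        REQUEST_ID.set(UUID.randomUUID().toString());
        log("request started for user-id=" + userId);
    }

    // Prefix every message with the request-id so that searching the
    // unified logs for one id reconstructs the whole request flow.
    static String format(String message) {
        return "[request-id=" + REQUEST_ID.get() + "] " + message;
    }

    static void log(String message) {
        System.out.println(format(message));
    }

    public static void main(String[] args) {
        beginRequest("user-42");
        log("calling inventory service");
        log("calling billing service");
    }
}
```

Tagged logs tell you where a request went; they still don’t tell you how to unwind a failure that happened three services deep, which is why the errors remain hard.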



How should we design software systems? Should we go back to writing monoliths? One big server and one large codebase to rule them all? What is the right level of modularity for the problem you are trying to solve?

We’ll continue soon with “Microservices vs. Monoliths: Is There a Middle Ground?” We’ll discuss how well various software architecture patterns match various problems, so you can start to get a feel for the right solution to fit your needs.



Humbly yours,

El Glaus'

Daniel Glauser