Microservices - Good Things Come In Small Packages

There is an old adage that the best way to solve a problem is to break it down into smaller problems. This may seem like an obvious approach - humans have probably been applying it since we lived in caves - but do a web search and you'll find many self-styled "Tech Gurus" and "Life-Hackers" explaining it as though it were some kind of new alchemy.

It is also not a new concept in software engineering - even those of us writing software in our bedrooms on a ZX-81 at the age of 13 were using ZX BASIC `GOSUB` commands to create reusable routines.

As we move through the history of third-generation programming languages - the introduction of OO concepts, Service-Oriented Architectures, functional programming and more - the goal of separating software into its logical components has, for one reason or another, been reinvented more than any other concept in modern computing.

So here we find ourselves looking at the newest reinvention of an old idea: microservices, the latest attempt to bring abstraction and reuse to enterprise software.

So Long, Monolith


Traditionally, software was built and deployed in its entirety to a single server and scaled horizontally, whether that server was a simple web server or a full-featured application server. Even when software had been "modularised" or service-based, those modules or services interacted with each other on a more-or-less one-to-one basis and shipped as a single deployment.

Of course there has always been some separation - queueing or messaging servers were often split out - but this was really as far as it went.

In the microservice world these large, single-deployment applications are known, somewhat pejoratively, as "Monoliths": single indivisible tablets of stone that - presumably, to extend the analogy - needed armies of workers to move them.

Hello Micro

The concept of microservices is to sub-divide the separate modules or services into their own stand-alone autonomous nodes within the application. This has benefits from both an architectural and performance point of view but also introduces risks and complexity to the project.

The benefits of a microservice-based architecture are largely the same as have been touted before, in that it supports abstraction and encapsulation in the design and implementation of software services. The ethos of "do one job and do it well" is embraced: each microservice is self-contained, responsible for its own area of the application and its own domain of data. Services generally should not share databases and should rarely be inter-dependent. Each microservice communicates with the others using network-based technologies such as HTTP, and where necessary hands off work for which it is not responsible to the other microservices in the network.
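To make the hand-off idea concrete, here is a minimal sketch of two hypothetical services - an order service and an inventory service - each owning its own data, co-operating over an HTTP-style request/response boundary. The services, endpoints and data are all invented for illustration, and the "network" is simulated in-process for brevity; in a real system each `handle` call would be an HTTP request to a separate deployment.

```python
class InventoryService:
    """Owns the stock data - no other service touches this 'database'."""
    def __init__(self):
        self._stock = {"widget": 5}          # private, service-local data store

    def handle(self, request):               # think: GET /stock/<item>
        item = request["item"]
        return {"item": item, "available": self._stock.get(item, 0)}


class OrderService:
    """Owns orders; hands stock checks off to the inventory service."""
    def __init__(self, network):
        self._network = network              # stand-in for an HTTP client
        self._orders = []

    def handle(self, request):               # think: POST /orders
        # Hand off the stock check - the order service never reads stock directly.
        stock = self._network["inventory"].handle({"item": request["item"]})
        if stock["available"] < request["qty"]:
            return {"status": "rejected", "reason": "out of stock"}
        self._orders.append(request)
        return {"status": "accepted"}


# Wire the simulated 'network' together and place two orders.
network = {}
network["inventory"] = InventoryService()
network["orders"] = OrderService(network)

print(network["orders"].handle({"item": "widget", "qty": 2}))   # accepted
print(network["orders"].handle({"item": "widget", "qty": 99}))  # rejected
```

The point of the sketch is the ownership boundary: the order service can only reach stock data by asking the inventory service, never by sharing its database.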

Small but Mighty

This approach has several strengths that go beyond the logical modularisation that was striven for in the past. Whilst it is true that the physical separation of these services enforces a more rigid approach to design, architecture and development, there are more tangible benefits that come from that separation too.


As microservices are deployed separately, each can be scaled independently - indeed this can often be done dynamically in response to fluctuations in load. For example, when a major marketing campaign is due, the messaging services can be briefly scaled up to provide greater capacity and throughput. This is especially useful in cloud-based hosting, where computing resources are abundant but paying for unused capacity can be very expensive.

Microservices do not need to share an architecture; in fact, legacy services can be wrapped and containerised to work within a mixed architecture, communicating seamlessly with more modern systems.
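Wrapping a legacy service is often just a thin adapter. The sketch below - entirely hypothetical names, with a made-up "untouchable" legacy routine - shows the shape: the adapter exposes the same request/response interface the newer services use, while the old code runs unchanged inside.

```python
def legacy_tax_calc(amount_pence):
    """Imagine this is decades-old code nobody dares modify."""
    return amount_pence * 20 // 100          # flat 20% tax, integer pence


class TaxServiceAdapter:
    """Thin wrapper: the rest of the system sees a modern service interface."""
    def handle(self, request):               # think: POST /tax
        tax = legacy_tax_calc(request["amount_pence"])
        return {"amount_pence": request["amount_pence"], "tax_pence": tax}


print(TaxServiceAdapter().handle({"amount_pence": 1000}))
```

Put that adapter in a container and, from the outside, the legacy code is indistinguishable from any other node in the network.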

The separation of responsibilities across the service architecture means that communications between services can be asynchronous where possible, which can have a positive impact on overall service latency.

Communication can be facilitated by a queue system and "handed off" to be processed later - for example, data loading for reporting can happen in near-real-time without putting any strain on the core systems.
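The reporting example above can be sketched with an in-process queue and a worker - a stand-in for a real message broker, with invented event fields - to show how the core order path only enqueues and moves on, while the reporting work happens off the critical path.

```python
import queue
import threading

events = queue.Queue()     # stand-in for a message broker topic
report_rows = []           # stand-in for the reporting database

def reporting_worker():
    """Drains the queue off the critical path, at its own pace."""
    while True:
        event = events.get()
        if event is None:                        # shutdown sentinel
            break
        report_rows.append({"order_id": event["id"], "total": event["total"]})

worker = threading.Thread(target=reporting_worker)
worker.start()

# The core order path: enqueue and return immediately - no reporting latency.
for i in range(3):
    events.put({"id": i, "total": 10 * i})

events.put(None)           # tell the worker to stop once the queue drains
worker.join()
print(report_rows)
```

The core system's latency is just the cost of the `put`; whether the reporting worker is fast, slow, or briefly down, the orders keep flowing.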

Because each service is properly self-contained, it is much easier for development teams to work in parallel on software that has had its dependencies removed, and unit and functional testing are also much easier to achieve.

To a man with a hammer…

As Mark Twain (allegedly) said, "to a man with a hammer, everything looks like a nail". Microservices need to be applied as part of an overall design and planned architecture. They are not the architecture; they are one tool available to developers and DevOps staff when designing and building a system.


Microservices are in no way a panacea, nor are they without their own pitfalls and bear-traps. 

One of the biggest mistakes teams make with microservices is building a system that is far too "chatty". This can easily happen when the ethos leads the design and services are created at too granular or distributed a level, so that a simple business operation is spread across many nodes that all depend on one another. The result is a huge amount of network traffic between the nodes and major latency issues.

Perhaps the best-known example of this is Dell Computers' migration project - this was more a failure of design and management than of technology, but it is a sobering example of how project myopia can create major problems.


Building towards a Microservice future

Of course, as with any architecture, it is much easier to adopt on a "green-field" project than on a live system. However, as described above, there is no reason that this transition cannot still be achieved.

New services can be built in the microservice architecture and can talk to the existing monolith. In fact, this is sometimes a better way to introduce the team to a new technology, using the new project as a "sandbox" in which to understand the new architecture - failing fast and early, and truly understanding the concepts, the rules, and when those rules should be broken.
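One common shape for this incremental migration is a routing facade in front of the monolith (often called a "strangler fig" approach): migrated paths go to the new microservice, everything else falls through to the monolith, and the routing table grows one entry at a time. The sketch below uses invented paths and handler names purely to show the mechanism.

```python
def monolith_handler(path, request):
    """The existing system - still serves everything not yet migrated."""
    return {"served_by": "monolith", "path": path}

def new_orders_service(path, request):
    """A freshly extracted microservice, taking over one slice of traffic."""
    return {"served_by": "orders-microservice", "path": path}

# As each service is extracted, its path prefix is added here.
MIGRATED_PREFIXES = {"/orders": new_orders_service}

def route(path, request=None):
    """Facade: send migrated prefixes to new services, the rest to the monolith."""
    for prefix, handler in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return handler(path, request)
    return monolith_handler(path, request)

print(route("/orders/123"))   # handled by the new microservice
print(route("/billing/42"))   # still handled by the monolith
```

Because the facade owns the split, each newly extracted service is a small, reversible delivery - very much in keeping with the rapid, iterative releases described below.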

A successful migration will probably take a year or more, but as with Agile, deliveries should be rapid and iterative and as with any aspect of architectural change, strong management and good communication are the keys to success.

Also published on LinkedIn.