Micro Services on the Client?

The Micro Service Architecture is all about splitting up your application's (or group of applications') functionality into separate services that communicate with each other in some standardized way, e.g. JSON over HTTP. While this approach is rapidly gaining adoption in cloud environments, I wonder: is there also a case to be made for micro services on the client? Does it make any sense to apply this pattern in browser-based applications, and native (or less native) desktop applications?

To answer this question, let's take a step back and consider the reasons to implement micro services in general:

Micro services...

  • Decrease coupling — by making all dependencies and calls very explicit, it becomes relatively expensive to expose and call another service (as opposed to writing and calling a method on some library), encouraging the grouping of related logic and a stricter separation of components at the service level.
  • Encourage continuous refactoring and rewriting — by enforcing stricter decoupling, it becomes easier to internally refactor or rewrite a service, as long as its API doesn't change in a backwards-incompatible manner.
  • Lower dependency on a single technology — if services communicate over some technology-independent medium (like HTTP or ZeroMQ), it doesn't matter what technologies are used to implement a service internally. This opens up the ability for teams to pick their own technologies and play with new ones.
  • Scale independently — if one service is receiving more load than it can handle, it can be scaled up independently from the rest of the infrastructure.
  • Localize failures — if one service crashes, for whatever reason, it doesn't take down the rest of the system (when built with resilience in mind; see the next point). Independent monitoring of services can also make it easier to localize problems and bugs.
  • Are designed for failure — because services may live far away across a network, a whole new set of failure scenarios opens up, forcing "resilience thinking". Services that depend on other services therefore have to be resilient to such failures.
  • Can be released on independent release cycles — rather than having a combined release cycle of the whole system, teams can release their own services as frequently as they like.
  • Require zero-downtime deployment — not strictly a property of micro services, but especially important with independent release cycles: you don't want to introduce downtime whenever somebody updates a service.

As we have moved from thin clients (basic "HTML 1.0" web pages) to thick clients with a lot of logic running on the client (whether native on the desktop, in the browser, or in some hybrid like Electron), do any of the listed properties of micro services make sense in the context of the client too?

  • Decrease coupling — sure
  • Encourage continuous refactoring and rewriting — yep
  • Lower dependency on a single technology — on the desktop this can be valuable (part written in C++, part in Python, part in Objective-C). In the browser it applies perhaps to some extent, although interoperability is probably easier because everything is JS in the end; it may still simplify things like a WebAssembly component communicating with an Elm codebase.
  • Scale independently — this probably doesn't make much sense on the client.
  • Localize failures — yep, if your C++ service segfaults, it doesn't have to take down the whole system. In the context of a browser: when your web worker (assuming services are deployed as independent web workers) crashes, or blocks its event loop, it doesn't impact all other services.
  • Are designed for failure — yep.
  • Can be released on independent release cycles — can make sense: perhaps you iterate on certain parts of the system more often, and updates can be shipped for just one service.
  • Require zero-downtime deployment — probably less of an issue on the desktop, but for applications that are long-running and always in use, it may be favorable to not have to restart the app for every software update. Also, by being able to actively push updates, you avoid the problem of having to support legacy client versions.

So, how could this work in practice?

Let's look at two contexts:

  1. a native desktop app written in C++ and Python; and
  2. a browser-based app built using JavaScript.

Native desktop

Service implementation: services can run as separate OS-level processes.

Communication: ZeroMQ or even HTTP can be used; messages can be encoded in JSON, msgpack, protobufs, or Thrift. Both Python and C++ have excellent support for all of these.
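As a minimal sketch of the JSON-over-HTTP option, here is a toy "ping" service and a client call, using only the Python standard library. The service name, message shape, and helper functions are hypothetical, purely for illustration; a real service would expose a richer API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class PingHandler(BaseHTTPRequestHandler):
    """A hypothetical micro service answering JSON "ping" requests."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        reply = json.dumps({"pong": request.get("ping")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep the demo quiet


def start_service(port=0):
    """Run the service on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), PingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


def call_service(port, payload):
    """Send a JSON message to the service and decode its JSON reply."""
    req = Request(
        f"http://127.0.0.1:{port}/",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())
```

With the service started, `call_service(server.server_port, {"ping": 42})` returns `{"pong": 42}` — the caller only sees JSON over a socket, so the service behind that port could just as well be implemented in C++.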

Service manager: A single "parent" process has the job of managing the lifetimes and discovery of all services in the system.

Browser

Service implementation: services run as separate Web Workers.

Communication: postMessage message passing between web workers and the main browser thread.

Service manager: The main browser thread manages the lifetimes and message passing between all services in the system.

So what would this service manager do?

  • It manages service discovery — either by performing DNS-like service discovery, letting services know how to talk to each other, or by acting as a simple service bus, itself relaying messages between services.
  • It manages service lifetimes — when the system starts, it boots up all services and restarts them when they crash; it also performs health checks and restarts services that stop responding.
  • It manages service updates — it polls some update server to see if any updates are available and, if so, downloads them transparently. It then spawns a new version of the service, waits for it to come up, and gracefully shuts down the old version once all ongoing requests to it have completed.
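The lifetime-management part can be sketched in a few lines of Python, in the desktop scenario where each service is an OS-level child process. The `ServiceManager` class below is hypothetical; a real health check would probe the service's API for responsiveness rather than only checking whether the process is alive, and the update logic is omitted:

```python
import subprocess
import sys


class ServiceManager:
    """Hypothetical parent process managing OS-level service processes."""

    def __init__(self):
        self.services = {}  # service name -> (argv, Popen handle)

    def start(self, name, argv):
        """Boot (or re-boot) a service as a child process."""
        proc = subprocess.Popen(argv)
        self.services[name] = (argv, proc)
        return proc

    def health_check(self):
        """Restart any service whose process has exited; return their names."""
        restarted = []
        for name, (argv, proc) in list(self.services.items()):
            if proc.poll() is not None:  # process is no longer running
                self.start(name, argv)
                restarted.append(name)
        return restarted

    def shutdown(self):
        """Ask every service process to stop and wait for it to exit."""
        for _, proc in self.services.values():
            proc.terminate()
            proc.wait()
```

Calling `health_check()` periodically from the manager's main loop gives crash-restart behavior; the same loop is a natural place to hook in the discovery and update responsibilities.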

That sounds cool. Everything comes at a cost, though. Here are the issues I see:

  • Added communication overhead — rather than making method calls, you now send messages between processes, which is much more expensive. Even with lightweight protocols like ZeroMQ and fast message encodings like protobufs, this is going to be much slower than in-process calls. If the volume or payload size of these messages is high, this may become a bottleneck.
  • Added memory overhead — rather than running one process, you now boot up a whole slew of processes, each possibly with its own copies of libraries.
  • Extra complexity — what used to be a simple method or function call now involves sending an RPC.
  • What about UI? Client apps are often UI-heavy; how do micro services support that? The "Building Microservices" book has some good ideas on how to handle this, though.

So, is anybody doing this today? On Twitter, people pointed me to OpenFin, a toolset for building desktop apps for the financial industry. It seems to be based on small, independently deployable services as well, all built with HTML/JS on top of Electron.

Know of any other examples? Let me know.

What about Egnyte? The whole reason I'm thinking and writing about this is that we are discussing it in the context of future versions of our desktop applications. Sound interesting? We're hiring.