This article was a collaboration between Antoine and Kevin.


Defining the sidecar pattern
Distributed systems require careful attention to their overall architecture. Fortunately, there are popular design patterns we can follow so that we don’t have to reinvent the wheel. In particular, the sidecar (or sidekick) pattern involves deploying components of an application into a separate process or container in order to provide isolation and encapsulation. Like a sidecar attached to a motorcycle, the child component shares the lifecycle of its parent application.

Related functionality vs Different services
Peripheral tasks like logging, networking services, and configuration can run in the same process as the parent application, making efficient use of shared resources. The downside is weak isolation: an outage in one of these components can jeopardize the entire application. The alternative is a decomposition pattern that separates the application into distinct services, providing better isolation so that a component outage does not take down the entire application or other components. Each service can be built using different languages and technologies. Although this approach offers more flexibility, it also means that each component has its own dependencies and requires language-specific libraries to access the underlying platform and any resources shared with the parent application. Unfortunately, this can add latency and complexity to hosting and deployment.

How to approach this dichotomy


We can address this dichotomy directly by co-locating a cohesive set of tasks with the primary application, but placing them inside their own process or container, which provides a homogeneous interface for platform services across languages. When used with containers, the sidecar pattern is often referred to as a sidecar container or sidekick container. This way, the sidecar service is not strictly part of the application, yet it remains loosely coupled to it. Sidecars act as supporting processes or services deployed alongside the primary application.
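To make this concrete, here is a minimal sketch of the idea in Python. The port numbers, the `X-Sidecar` header, and the app’s response are illustrative assumptions, not details from any particular deployment: a sidecar runs as its own process, clients talk to it, and it forwards requests to the primary application over localhost, adding functionality the application itself knows nothing about.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_PORT = 8080      # primary application (assumed port for illustration)
SIDECAR_PORT = 9090  # sidecar proxy (assumed port for illustration)

class PrimaryApp(BaseHTTPRequestHandler):
    """The primary application: it knows nothing about the sidecar."""
    def do_GET(self):
        body = b"hello from the primary app"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence request logging
        pass

class Sidecar(BaseHTTPRequestHandler):
    """Sidecar: forwards to the app on localhost and adds a response header."""
    def do_GET(self):
        with urllib.request.urlopen(f"http://127.0.0.1:{APP_PORT}{self.path}") as up:
            body = up.read()
        self.send_response(200)
        self.send_header("X-Sidecar", "proxied")  # functionality added by the sidecar
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

def serve(handler, port):
    """Run an HTTP server for `handler` in a background thread."""
    server = HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

serve(PrimaryApp, APP_PORT)
serve(Sidecar, SIDECAR_PORT)

# Clients talk only to the sidecar, which delegates to the primary app.
resp = urllib.request.urlopen(f"http://127.0.0.1:{SIDECAR_PORT}/")
sidecar_header = resp.headers["X-Sidecar"]
body_text = resp.read().decode()
print(sidecar_header, body_text)
```

In a real deployment the two handlers would be separate processes or containers on the same host; the point is that the application’s code is untouched while the sidecar layers behavior on top.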

Advantages:
No significant latency when communicating between the sidecar and the primary application, because of their proximity.

A sidecar can monitor its own system resources in addition to the primary application’s.

Applications that don’t provide extensibility can use a sidecar to extend functionality by attaching it as its own process in the same host or sub-container as the primary application.
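As one sketch of the monitoring advantage above, a sidecar can periodically probe the primary application’s health endpoint. The `/health` path and port here are assumptions for illustration, not a standard the pattern prescribes:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_PORT = 8081  # assumed port for the primary application

class PrimaryApp(BaseHTTPRequestHandler):
    """Primary application exposing an assumed /health endpoint."""
    def do_GET(self):
        self.send_response(200 if self.path == "/health" else 404)
        self.end_headers()
    def log_message(self, *args):  # silence request logging
        pass

def check_health(port):
    """One probe from the monitoring sidecar: True if the app answers 200."""
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, or HTTP error
        return False

server = HTTPServer(("127.0.0.1", APP_PORT), PrimaryApp)
threading.Thread(target=server.serve_forever, daemon=True).start()

healthy = check_health(APP_PORT)
print("primary app healthy:", healthy)
```

A real monitoring sidecar would loop on a timer and ship results to a metrics backend, but the probe itself stays this simple because both processes share the same host.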

Potential pitfalls
Before putting functionality into a sidecar, it’s important to consider whether the service you’re seeking to isolate would work better as a traditional daemon. Alternatively, the functionality could be implemented as a library or through a traditional extension mechanism. Keep in mind that language-specific libraries may offer deeper integration and less network overhead. Perhaps your app will benefit from less complexity; in some cases it’s better to bake in the functionality you’re looking for.

Identifying the best time to use this pattern
This design approach can be used when a component is owned by a remote team. The sidecar pattern is also recommended if you need a service that shares the overall lifecycle of your main application but can be updated independently.

It’s not advisable to use this pattern when the service needs to scale differently than, or independently from, the main application. If so, it may be better to deploy the feature as a separate service.

A real-world example courtesy of Dave’s Two Cents:

“Say you have a web application hosted over HTTP and you want to secure it with SSL. You could dig into the application and make the necessary changes; perhaps resources were hardcoded. That may be too much work, or maybe you don’t even have the source code. What can you do then? First, host the web application in a container and expose a local endpoint (127.0.0.1). Now host the NGINX service in a sidecar container. Configure NGINX to terminate the SSL connection and forward the requests to the web application. Because the sidecar is on the same host, it can access the local endpoint. The web application didn’t have to change to support this new functionality.”
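The NGINX side of that setup might look roughly like the following config fragment. The certificate paths and the app’s port are placeholders, not values from the article:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/app.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/app.key;

    location / {
        # Forward decrypted traffic to the web app's local endpoint.
        proxy_pass http://127.0.0.1:8080;
    }
}
```

NGINX terminates SSL on port 443 and proxies plain HTTP to the application listening only on localhost, so the app never needs to know TLS is involved.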

With this common distributed-system design pattern, you decouple your system into different parts. Each part has its own responsibilities, and each solves a different problem.

At Whiteblock, we’re actively working on providing the best tools to test and optimize distributed systems. Join us for our official launch on January 15th and stay tuned for all of our upcoming updates via Telegram.

Get Free Early Access To The Beta
