The State of the Service Mesh, Part 1
Some of the loudest buzz in the IT industry during 2018 was around the concept of service mesh. Here at NGINX we began to experiment with implementing a service mesh using the proven capabilities of NGINX Plus to facilitate and secure interprocess communication among containerized applications. Innovation and commercialization of service mesh solutions have advanced rapidly since we first addressed the question "What Is a Service Mesh?" in April 2018. In this post, we summarize and highlight some of the major new developments as 2019 begins.
Only a small fraction of Infrastructure & Operations and DevOps professionals in enterprises have experimented with service mesh so far. They are typically the trusted strategists and technology innovators responsible for delivering public and enterprise apps with speed, security, scalability, and fault tolerance. Meeting service‑level objectives for availability, performance, and utilization while providing more value than the competition is a primary strategic motivation for these innovators, even as they envision how service mesh might develop as a platform for improved DevOps processes yet to come.
Since our first post, some technologies related to service mesh have emerged as dominant – Docker for containers and Kubernetes for orchestration, for example. Istio, an open source service mesh project backed by Google, IBM, and Lyft, continues to garner interest among enterprise architects. However, Istio is complex, which can make it hard to use, and it requires a substantial infrastructure footprint. As a result, a variety of alternative early service mesh experiments and projects have been introduced, enhanced, or consolidated, including these:
- Solo.io introduced SuperGloo as a “service mesh orchestrator” to enable organizations to use multiple service meshes.
- HashiCorp repackaged Consul as a service mesh, bundling its Consul Connect tool for managing sidecar proxies with the well‑known service registry tool.
- Conduit merged with Linkerd, and the Cloud Native Computing Foundation (CNCF) has adopted Linkerd 2.0 as an official project.
At the same time, some early‑stage experiments designed to test service mesh capabilities and conceptual use cases ran their course or consolidated paths with other projects. For example, we are no longer actively developing our nginMesh project, and have handed off development and support to the open source community. At our annual NGINX Conf conference last October, we announced that we are pursuing a bold new direction: delivering enterprise‑grade service mesh solutions based on the NGINX Application Platform.
We anticipate exciting ongoing rapid development and commercialization of the service mesh and look forward to helping you succeed in delivering the service mesh capabilities that your business demands. In future posts in this series, we will explore several service mesh topics. First up, we consider the tipping point – how to tell when the benefits of a service mesh outweigh the costs and risks, and also how to improve your app delivery infrastructure even before you reach that point. Then we’ll look at critical aspects of service mesh functionality in more detail: availability, performance, security, flexibility, scalability, policy management, and ease of use. We hope you’ll join us on the journey!