Introducing NGINX Service Mesh

We are pleased to introduce a development release of NGINX Service Mesh (NSM), a fully integrated lightweight service mesh that leverages a data plane powered by NGINX Plus to manage container traffic in Kubernetes environments. NSM is available for free download. We hope you will try it out in your development and test environments, and look forward to your feedback on GitHub.

Adopting microservices methodologies comes with challenges as deployments scale and become more complex. Communication between the services is intricate, debugging problems can be harder, and more services imply more resources to manage.

NSM addresses these challenges by enabling you to centrally provision security, traffic management, and observability across the mesh.

NSM secures applications in a zero‑trust environment by seamlessly applying encryption and authentication to container traffic. It delivers observability and insights into transactions to help organizations deploy applications and troubleshoot problems quickly and accurately. It also provides fine‑grained traffic control, enabling DevOps teams to deploy and optimize application components while empowering Dev teams to build and easily connect their distributed applications.

What Is NGINX Service Mesh?

NSM consists of a unified data plane for east‑west (service-to-service) traffic and a native integration of the NGINX Plus Ingress Controller for north‑south traffic, managed by a single control plane.

The control plane is designed and optimized for the NGINX Plus data plane and defines the traffic management rules that are distributed to the NGINX Plus sidecars.

With NSM, sidecar proxies are deployed alongside every service in the mesh. They integrate with open source solutions including SPIRE (for certificate management), NATS, Prometheus, Grafana, and Zipkin.

Features and Components

NGINX Plus as the data plane spans the sidecar proxy (east‑west traffic) and the Ingress controller (north‑south traffic), intercepting and managing container traffic between services. Features include mutual TLS encryption and authentication, traffic‑control policies such as access control, rate limiting, and circuit breaking, and built‑in observability through Prometheus, Grafana, and Zipkin.

Getting Started with NGINX Service Mesh

To get started with NSM, you first need access to a Kubernetes cluster, the kubectl command‑line tool, and the NSM release package, with its container images pushed to a private Docker registry that your cluster can reach.

To deploy NSM with default settings, run the following command. During the deployment process, the trace confirms successful deployment of mesh components and finally that NSM is running in its own namespace:

$ DOCKER_REGISTRY=your-Docker-registry ; MESH_VER=0.6.0 ; \
  ./nginx-meshctl deploy \
    --nginx-mesh-api-image "${DOCKER_REGISTRY}/nginx-mesh-api:${MESH_VER}" \
    --nginx-mesh-sidecar-image "${DOCKER_REGISTRY}/nginx-mesh-sidecar:${MESH_VER}" \
    --nginx-mesh-init-image "${DOCKER_REGISTRY}/nginx-mesh-init:${MESH_VER}" \
    --nginx-mesh-metrics-image "${DOCKER_REGISTRY}/nginx-mesh-metrics:${MESH_VER}"
Created namespace "nginx-mesh".
Created SpiffeID CRD.
Waiting for Spire pods to be running...done.
Deployed Spire.
Deployed NATS server.
Created traffic policy CRDs.
Deployed Mesh API.
Deployed Metrics API Server.
Deployed Prometheus Server nginx-mesh/prometheus-server.
Deployed Grafana nginx-mesh/grafana.
Deployed tracing server nginx-mesh/zipkin.
All resources created. Testing the connection to the Service Mesh API Server...
 
Connected to the NGINX Service Mesh API successfully.
NGINX Service Mesh is running.

For additional command options, including nondefault settings, run:

$ nginx-meshctl deploy -h

To verify that the NSM control plane is running properly in the nginx-mesh namespace, run:

$ kubectl get pods -n nginx-mesh
NAME                                 READY   STATUS    RESTARTS   AGE
grafana-6cc6958cd9-dccj6             1/1     Running   0          2d19h
mesh-api-6b95576c46-8npkb            1/1     Running   0          2d19h
nats-server-6d5c57f894-225qn         1/1     Running   0          2d19h
prometheus-server-65c95b788b-zkt95   1/1     Running   0          2d19h
smi-metrics-5986dfb8d5-q6gfj         1/1     Running   0          2d19h
spire-agent-5cf87                    1/1     Running   0          2d19h
spire-agent-rr2tt                    1/1     Running   0          2d19h
spire-agent-vwjbv                    1/1     Running   0          2d19h
spire-server-0                       2/2     Running   0          2d19h
zipkin-6f7cbf5467-ns6wc              1/1     Running   0          2d19h

By default, NSM injects NGINX sidecar proxies into deployed applications automatically; deployment options let you switch to manual injection instead. To learn how to disable automatic injection, see our documentation.
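
If automatic injection is disabled, you can add the sidecar at deployment time with the nginx-meshctl inject command. The following is a minimal sketch that assumes inject reads a manifest on standard input and writes the augmented manifest to standard output; run nginx-meshctl inject -h to confirm the exact invocation for your release:

$ # Add the NGINX Plus sidecar to the manifest, then apply the result
$ nginx-meshctl inject < sleep.yaml > sleep-injected.yaml
$ kubectl apply -f sleep-injected.yaml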

For example, if we deploy the sleep application in the default namespace and then examine the Pod, we see that two containers are running – the sleep application and the associated NGINX Plus sidecar:

$ kubectl apply -f sleep.yaml
$ kubectl get pods -n default
NAME                     READY   STATUS    RESTARTS   AGE
sleep-674f75ff4d-gxjf2   2/2     Running   0          5h23m
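
For reference, a sleep.yaml along these lines is enough to reproduce the example; the image and resource names here are illustrative assumptions, not files shipped with NSM. The READY column reads 2/2 because the NGINX Plus sidecar runs next to the single application container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep                      # the only container we define; NSM injects the sidecar
        image: busybox:1.32              # illustrative image
        command: ["sleep", "1000000"]    # keep the Pod alive so we can inspect it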

You can also monitor the sleep application with the native NGINX Plus dashboard by running this command to expose the sidecar to your local machine:

$ kubectl port-forward sleep-674f75ff4d-gxjf2 8080:8886

Then navigate to http://localhost:8080/dashboard.html in your browser. You can also connect to the Prometheus server to monitor the sleep application.
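
The deployment trace above shows the Prometheus server running as nginx-mesh/prometheus-server, so a quick way to reach it is another port-forward. The port 9090 below is the Prometheus default, an assumption rather than a value documented in the NSM release:

$ kubectl port-forward -n nginx-mesh svc/prometheus-server 9090:9090

Then browse to http://localhost:9090 to query the metrics scraped from the sidecars.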

You can use custom resources in Kubernetes to configure traffic policies like access control, rate limiting, and circuit breaking. For more information, see the documentation.
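
As an illustration, an access-control policy in the SMI style might look like the sketch below. The API group, version, and workload names (sleep, backend) are assumptions based on the SMI specification rather than values taken from the NSM release, and the field layout varies between SMI versions, so check the documentation for the kinds your version supports:

apiVersion: access.smi-spec.io/v1alpha1   # SMI traffic-access API; version is an assumption
kind: TrafficTarget
metadata:
  name: allow-sleep-to-backend
  namespace: default
destination:                              # the workload that receives traffic
  kind: ServiceAccount
  name: backend
  namespace: default
sources:                                  # workloads permitted to send traffic to it
- kind: ServiceAccount
  name: sleep
  namespace: default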

Summary

NGINX Service Mesh is freely available for download at the F5 portal. Please try it out in your development and test environments, and submit your feedback on GitHub.

To try out the NGINX Plus Ingress Controller, start your free 30-day trial today or contact us to discuss your use cases.
