Multi-Mesh
Multiple meshes can be connected to each other by configuring Grey Matter API objects. This document walks through different methods of configuring multi-mesh, as well as how identity is propagated through request flows that span multiple meshes.
To get services in one mesh to talk to services in another, a cluster should be created that points to the Host/IP(s) of the other mesh's ingress edge. The following is an example of a cluster configuration that could be applied in Mesh A which points to the location of Mesh B. It also tells any proxies that try to route to this cluster what certs they should have on disk in order to connect. Note that if the ingress edge you're pointing to is behind some kind of load balancer, you will also need additional configuration so that the proxy can direct traffic to the correct upstream host.
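The exact shape of this object can vary between Grey Matter versions, so treat the following as a minimal sketch of a cluster object. The hostname, port, and certificate paths are placeholders for wherever the Mesh B ingress edge actually lives and wherever the certs are mounted on disk:

```json
{
  "zone_key": "zone-default-zone",
  "cluster_key": "cluster-mesh-b",
  "name": "mesh-b-edge",
  "instances": [
    {
      "host": "mesh-b-edge.example.com",
      "port": 10808
    }
  ],
  "require_tls": true,
  "ssl_config": {
    "cert_key_pairs": [
      {
        "certificate_path": "/etc/proxy/tls/sidecar/server.crt",
        "key_path": "/etc/proxy/tls/sidecar/server.key"
      }
    ],
    "trust_file": "/etc/proxy/tls/sidecar/ca.crt"
  }
}
```

The `instances` array lists the Host/IP(s) of the Mesh B ingress edge, and `ssl_config` names the cert, key, and trust material a proxy must present when connecting.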
We can then tell the mesh how to route to this cluster using a `shared_rules` object. The `light` array contains a list of clusters to which requests will be sent. In this simple case we want all traffic routed to the Mesh B cluster and can link to it using the `cluster_key`:
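A sketch of such a `shared_rules` object is shown below; the zone key is a placeholder, and some versions of the API may expect additional fields on each `light` entry:

```json
{
  "zone_key": "zone-default-zone",
  "shared_rules_key": "mesh-b-shared-rules",
  "name": "mesh-b",
  "default": {
    "light": [
      {
        "cluster_key": "cluster-mesh-b",
        "weight": 1
      }
    ]
  }
}
```

Because `cluster-mesh-b` is the only entry in the `light` array, all traffic governed by this rule is sent to Mesh B.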
Once the Mesh B cluster has been created, routes can point to it just like any other service within the mesh, as long as those service sidecars have the correct certs on disk. The following is an example of a route configuration for a service called `service-a` that uses the traffic rule shown above (`mesh-b-shared-rules`) to route to Mesh B. If a request with the path `/mesh-b/` is made to `service-a`'s proxy, it will rewrite `/mesh-b/` to a forward slash `/` and send the request along.
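Here is a sketch of that route object, assuming `service-a`'s sidecar serves a domain with key `service-a` (exact field names may differ slightly by Grey Matter version):

```json
{
  "zone_key": "zone-default-zone",
  "route_key": "service-a-to-mesh-b",
  "domain_key": "service-a",
  "route_match": {
    "path": "/mesh-b/",
    "match_type": "prefix"
  },
  "prefix_rewrite": "/",
  "shared_rules_key": "mesh-b-shared-rules"
}
```

With this in place, a request to `service-a`'s proxy at `/mesh-b/ping` would be forwarded to the Mesh B edge as `/ping`.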
If you wanted `service-a` in Mesh A to only route to a specific service in Mesh B, let's call it `service-b`, you could use `prefix_rewrite` to point directly to it:
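For instance, if the Mesh B edge exposes `service-b` under a path such as `/services/service-b/` (a hypothetical path; use whatever route the Mesh B edge actually serves `service-b` on), the same route could rewrite to that prefix instead of `/`:

```json
{
  "zone_key": "zone-default-zone",
  "route_key": "service-a-to-service-b",
  "domain_key": "service-a",
  "route_match": {
    "path": "/mesh-b/",
    "match_type": "prefix"
  },
  "prefix_rewrite": "/services/service-b/",
  "shared_rules_key": "mesh-b-shared-rules"
}
```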
Another way to achieve a multi-mesh setup is to stand up a dedicated egress edge that handles cross-mesh traffic. Instead of pointing each service to the ingress edge of the other mesh as in the example above, only the egress proxy knows about the second mesh and all services route to it instead. This is beneficial for security since the egress proxy is the only entity with the necessary credentials to connect to Mesh B, and it removes the need for every sidecar to have those certs on disk. It's also a good pattern to choose if you want to monitor cross-mesh traffic.
To achieve this setup, you can deploy a standalone proxy like any other service. Then add a route which points to the Mesh B cluster:
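Assuming the egress proxy serves a domain with key `egress` (a placeholder), a catch-all route on that domain can reuse the `mesh-b-shared-rules` traffic rule defined earlier:

```json
{
  "zone_key": "zone-default-zone",
  "route_key": "egress-to-mesh-b",
  "domain_key": "egress",
  "route_match": {
    "path": "/",
    "match_type": "prefix"
  },
  "shared_rules_key": "mesh-b-shared-rules"
}
```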
Then update your service routes to point to the egress proxy's `shared_rules`:
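For example, if the egress proxy is registered in Mesh A behind a hypothetical cluster `cluster-egress` with a matching traffic rule `egress-shared-rules`, `service-a`'s route now references that rule instead of the Mesh B cluster directly:

```json
{
  "zone_key": "zone-default-zone",
  "route_key": "service-a-to-egress",
  "domain_key": "service-a",
  "route_match": {
    "path": "/mesh-b/",
    "match_type": "prefix"
  },
  "prefix_rewrite": "/",
  "shared_rules_key": "egress-shared-rules"
}
```

Only the egress proxy's cluster configuration needs the Mesh B certs; `service-a`'s sidecar talks to the egress proxy over the mesh's normal internal TLS.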
The Grey Matter `inheaders` (Ingress Headers) filter should be enabled on each mesh's ingress edge in order to correctly propagate user and service identity throughout the mesh. This is configurable on the proxy object by adding it to the `active_proxy_filters` array. `gm_inheaders` also has a debug option which is helpful when looking at the proxy logs:
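Below is a sketch of an edge proxy object with the filter enabled; the filter naming (`gm.inheaders` in the active list, `gm_inheaders` in the config block) follows common Grey Matter conventions but may differ in your version, and the other keys are placeholders:

```json
{
  "zone_key": "zone-default-zone",
  "proxy_key": "edge",
  "name": "edge",
  "domain_keys": ["edge"],
  "listener_keys": ["edge"],
  "active_proxy_filters": ["gm.metrics", "gm.inheaders"],
  "proxy_filters": {
    "gm_inheaders": {
      "debug": true
    }
  }
}
```

With `debug` enabled, the filter's log output makes it easier to trace how `USER_DN` and the system DN headers change at each hop.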
User and service identity flows through the meshes as follows:
1. The client uses PKI or an OAuth token to hit the Mesh A edge.
2. The `inheaders` filter on the Mesh A edge proxy grabs the `USER_DN` from the incoming headers as well as the `DN` from the SSL certificate.
3. The `service-a` proxy propagates the headers as the request flows through.
4. When the request exits Mesh A and hits the edge proxy of Mesh B, the `inheaders` filter checks what is already set. `USER_DN` already exists, so it keeps passing it along; however, it rewrites the `EXTERNAL_SYS_DN` and `SSL_CLIENT_S_DN` headers to reflect the `DN` of the last certificate in the chain. In this example, that would be the `DN` of the `server.crt` we configured for `cluster-mesh-b`.
5. `service-b` finally receives the request. It has the `USER_DN` of the client that first initiated the request and the identity of the service that last touched the request in Mesh A.
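As a concrete but hypothetical illustration, the identity headers `service-b` ends up seeing might look something like this, with the user DN unchanged from the original client and the system DNs reflecting the certificate configured for `cluster-mesh-b` (all DN values below are placeholders):

```
USER_DN:          cn=jane.doe,ou=people,o=mesh-a
EXTERNAL_SYS_DN:  cn=service-a,ou=services,o=mesh-a
SSL_CLIENT_S_DN:  cn=service-a,ou=services,o=mesh-a
```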