Route Forwarding
Route forwarding determines how a sidecar handles traffic to reach the next hop. In Grey Matter, every incoming request that matches a route, based on its path and any configured routing rules, is forwarded to a specified cluster.
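As an illustrative sketch, a Grey Matter route that matches a path and forwards through a shared rules object might look like the following (the key values here are hypothetical; field names follow the route object described below):

```json
{
  "route_key": "example-route",
  "domain_key": "example-domain",
  "zone_key": "default-zone",
  "path": "/services/example/",
  "shared_rules_key": "example-shared-rules"
}
```

The shared_rules_key links the route to a shared rules object, which carries the cluster constraints used for forwarding.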
Configuration Reference

| Attribute | Description | Type | Default |
| --- | --- | --- | --- |
| shared_rules.default | Defines the types of traffic the shared rules will serve to the specified upstream clusters. | JSON map of constraintType : constraint | {} |
Routing Rules
The Grey Matter shared rules object has a required default field that specifies a cluster constraint. The values configured in that constraint serve as the default cluster forwarding rules for any route with a shared_rules_key linking to that shared rules object. Thus, if no cluster constraints are configured in the routing rules, all requests matched to the route will be forwarded using this default cluster constraint.
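For illustration, a minimal shared rules object with only a default constraint might look like the following sketch (keys are hypothetical; the "light" constraint type denotes ordinary live traffic):

```json
{
  "shared_rules_key": "example-shared-rules",
  "zone_key": "default-zone",
  "default": {
    "light": [
      {
        "constraint_key": "",
        "cluster_key": "example-service",
        "metadata": null,
        "properties": null,
        "response_data": {},
        "weight": 1
      }
    ]
  }
}
```

Any route linked to this shared rules object and matching no more specific rule would forward all requests to the cluster example-service.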
Each routing rule set on a Grey Matter route has an optional constraints field. This field takes a cluster constraint object identical to the one for default in the shared rules object. If a rule is configured with a constraint, any request matching this route will be forwarded to the clusters configured in this field. See the traffic splitting documentation for information on more complex traffic configurations.
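As a sketch, a rule carrying its own constraints might send matching POST requests to a different cluster than the default (rule and cluster names are hypothetical):

```json
{
  "rule_key": "example-rule",
  "methods": ["POST"],
  "matches": [],
  "constraints": {
    "light": [
      {
        "constraint_key": "",
        "cluster_key": "example-service-writes",
        "weight": 1
      }
    ]
  }
}
```

Requests that do not match this rule would fall back to the default constraint on the shared rules object.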
Response Data
Response data is a set of response headers and/or cookies added to responses when the given weighted cluster is sent requests.
Response data can be configured directly on the Grey Matter route object, or in the response_data field of any cluster constraint object set on a shared rules or routing rules object. All response data set in any of these ways is merged to generate the response data added to the weighted cluster.
In both routes and shared_rules, the response_data field has the following values:
| Attribute | Description | Type | Default |
| --- | --- | --- | --- |
| response_data.headers | A list of header response data added to the headers of the response | Array of responseData | [] |
| response_data.cookies | A list of cookie response data added to the cookies of the response | Array of responseData | [] |
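For example, a response_data object that adds one header and one cookie to responses might look like the following sketch (field names within each responseData entry are an assumption; the header and cookie values are illustrative):

```json
{
  "headers": [
    {
      "name": "x-served-by",
      "value": "example-service"
    }
  ],
  "cookies": [
    {
      "name": "stage",
      "value": "prod"
    }
  ]
}
```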
Metadata Matching
Envoy can be configured with load balancer subsets, which divide the hosts within an upstream cluster into subsets based on the metadata attached to each host. Routes can also be configured with metadata matchers to determine which instances within the cluster receive traffic. This takes route matching one step further: the mesh can specify not only which incoming requests route to a specific cluster, but which incoming requests route to a specific instance of a cluster.
When an incoming request matches a route, the route will only send its requests to the instances within the specified cluster that contain the required metadata.
Metadata matching is configured on the cluster constraint used for forwarding a route. As described above, this is configured either in the rules set on a Grey Matter route, the rules set on a Grey Matter shared rules object, or (by default) the default field set on the shared rules object.
Metadata Match In Cluster Constraint
With metadata matching configured, the cluster constraint will look something like the following:
{
  "constraint_key": "",
  "cluster_key": "example-service",
  "metadata": [
    {
      "key": "stage",
      "value": "prod"
    }
  ],
  "properties": null,
  "response_data": {},
  "weight": 1
}
This configuration tells any route using this cluster forwarding constraint to route only to instances in the cluster "example-service" that contain the metadata key "stage" with value "prod".
Subset load balancing will be enabled on the cluster by setting the load balancer subset config on the upstream cluster whose key matches the cluster_key in the constraint. The load balancer subset config on the cluster will have a fallback policy of "ANY_ENDPOINT", which means that if no host in the cluster meets the metadata match criteria, any other host endpoint will be used. With subset load balancing enabled, the subset config keeps track of which hosts in the cluster contain which metadata, and groups them accordingly by their keys.
When a request is routed to the cluster with a metadata matcher configured, the load balancer looks for hosts in the subset with keys matching those of the metadata matcher on the request, compares their values, and routes accordingly. The load balancer subset config created for the cluster with key example-service will look like the following:
"lb_subset_config": {
  "fallback_policy": "ANY_ENDPOINT",
  "subset_selectors": [
    {
      "keys": [
        "stage"
      ]
    }
  ]
}
Metadata matching will then also be configured on this route. The metadata match criteria set on the weighted cluster created by this constraint will configure the envoy.lb filter to look for metadata with key stage and value prod when considering upstream hosts for load balancing.
Note that in the implementation, because the fallback policy on the upstream cluster is ANY_ENDPOINT, this configuration will not actually block requests from being sent to an instance that doesn't match the metadata criteria.
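For comparison, Envoy's subset load balancer also supports a NO_FALLBACK policy, which fails requests outright when no host matches the metadata criteria. A strict version of the subset config above would look like the following sketch (this is standard Envoy configuration and is not necessarily exposed through Grey Matter):

```json
"lb_subset_config": {
  "fallback_policy": "NO_FALLBACK",
  "subset_selectors": [
    {
      "keys": [
        "stage"
      ]
    }
  ]
}
```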