# Load Balancing

Load balancing determines how requests are distributed across the multiple instances of a given service. This document shows how to configure load balancing policies in Grey Matter.

## Table of Contents

* [Configuration Reference](#configuration-reference)
* [Detailed Configuration and Usage](#detailed-configuration-and-usage)
  * [`lb_policy` Options](#lb_policy-options)
* [Example `cluster`](#example-cluster)
* [External Resources](#external-resources)

## Configuration Reference

To configure load balancing, set the `lb_policy` field on the `cluster` object.

| Attribute   | Description                                               | Type     | Default         |
| ----------- | --------------------------------------------------------- | -------- | --------------- |
| `lb_policy` | How requests are distributed to instances of the cluster. | `string` | `least_request` |
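
For example, this minimal sketch switches a cluster from the default policy to round robin. Only the fields relevant to the change are shown; the remaining cluster fields would match your existing cluster definition, as in the [example below](#example-cluster):

```javascript
{
  "cluster_key": "edge-service",   // key of the cluster being edited
  "lb_policy": "round_robin"       // overrides the default "least_request"
  // ...remaining cluster fields unchanged
}
```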

## Detailed Configuration and Usage

### `lb_policy` Options

| Value             | Description                                                                                                                                                                                                                                                                                                                          |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `"least_request"` | New requests will be sent to the instance with the fewest active requests.                                                                                                                                                                                                                                                           |
| `"round_robin"`   | Requests will cycle through each "next" host in the available instances                                                                                                                                                                                                                                                              |
| `"ring_hash"`     | Ring Hash is one of the load balancer policies that does consistent hashing to upstream hosts, provided requests are made in a way that supplies a value to hash on. See the full explanation in the Envoy [docs](https://www.envoyproxy.io/docs/envoy/v1.15.0/intro/arch_overview/upstream/load_balancing/load_balancers#ring-hash) |
| `"random"`        | New requests will be sent randomly to a new host.                                                                                                                                                                                                                                                                                    |
| `"maglev"`        | Maglev is another load balancer that performs consistent hashing to upstreams, but is dependent on hashable information available in the request. See more in the Envoy [documentation](https://www.envoyproxy.io/docs/envoy/v1.15.0/intro/arch_overview/upstream/load_balancing/load_balancers#maglev).                             |

## Example `cluster`

```javascript
{
  "zone_key": "default-zone",
  "cluster_key": "edge-service",
  "name": "edge",
  "instances": [],
  "circuit_breakers": {
    "max_connections": 500,
    "max_requests": 500
  },
  "lb_policy": "random",
  "outlier_detection": null,
  "health_checks": []
}
```

## External Resources

* [Envoy Load Balancers](https://www.envoyproxy.io/docs/envoy/v1.15.0/intro/arch_overview/upstream/load_balancing/load_balancing)
