Getting Envoy configured locally is a great way to start learning how its pieces fit together. A minimal setup is fast, especially if you lean on the defaults.
Send your first request through the proxy to a service running on your laptop, and learn how to connect more services to Envoy.
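To make that concrete, here is a minimal sketch of a static config that proxies everything on port 10000 to a process listening on port 8080 on your laptop. The names, ports, and the local service itself are placeholders, not requirements:

```yaml
# envoy.yaml -- a minimal static config (illustrative; names and ports are placeholders)
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: local_service
    connect_timeout: 1s
    type: STATIC
    load_assignment:
      cluster_name: local_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
```

Run `envoy -c envoy.yaml`, then `curl http://localhost:10000/` to send a request through the proxy. Connecting more services is a matter of adding a cluster and a route for each one.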
Stuck? Hit an edge case? Here's where to go to get answers, and the information you'll need to dig into any problem.
RDS lets you move routing configuration out of static configs and into a type-safe API, making typos and merge conflicts a thing of the past.
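As a sketch (assuming a management server reachable through an `xds_cluster` that you define in your bootstrap), swapping a static `route_config` for RDS inside the HTTP connection manager might look like:

```yaml
# Inside the http_connection_manager config: fetch routes from a management
# server instead of embedding them (cluster and route names are placeholders).
rds:
  route_config_name: local_routes
  config_source:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
```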
Envoy’s logs contain a lot of data that you can’t get from the auto-generated summaries. Decide what’s useful to you, configure access logging on your listeners to emit it, then parse the logs and forward them to an appropriate consumer.
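A minimal sketch of file-based access logging inside the HTTP connection manager; the path and the set of format operators here are examples, not a recommendation:

```yaml
# Inside the http_connection_manager config: write one line per request to a
# local file (path and format are placeholders to adjust for your consumer).
access_log:
- name: envoy.access_loggers.file
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /var/log/envoy/access.log
    log_format:
      text_format_source:
        inline_string: "[%START_TIME%] %REQ(:METHOD)% %REQ(:PATH)% %RESPONSE_CODE% %DURATION%\n"
```

From there, a log shipper can tail the file and forward entries to whatever consumer you use.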
Information on one Envoy is great, but information about your entire environment is better. Use the Envoy primitives to aggregate information about clusters/services, domains/routes, and servers/nodes.
Unlike traditional serving layers, Envoy’s behavior can change without human intervention. Use the admin interface to inspect the current state of an Envoy instance, including backend health and other runtime metrics.
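Enabling the admin interface is a one-stanza bootstrap change (the port is a placeholder; keep it bound to localhost or otherwise locked down):

```yaml
# Bootstrap-level admin listener; bind to localhost so it isn't exposed.
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
```

`curl http://127.0.0.1:9901/clusters` dumps per-host health and connection state, and `/stats` exposes the runtime metrics.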
How do you get traffic into your service from the internet? Envoy can take care of terminating TLS, translating traffic from HTTP/1.1 to HTTP/2, and more.
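As a hedged sketch, those two pieces look roughly like this; the certificate paths are placeholders, and the second fragment goes on the upstream cluster rather than the listener:

```yaml
# On the listener's filter chain: terminate TLS (cert paths are placeholders).
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    common_tls_context:
      tls_certificates:
      - certificate_chain: { filename: /etc/envoy/certs/cert.pem }
        private_key: { filename: /etc/envoy/certs/key.pem }

# On the upstream cluster: speak HTTP/2 to the backend.
typed_extension_protocol_options:
  envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
    "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
    explicit_http_config:
      http2_protocol_options: {}
```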
By co-locating Envoy with your code, you can let Envoy handle the complexities of the network. This makes service-to-service communication safer and more reliable, while alleviating the need to re-implement this functionality within each service.
Use your existing deployment and production tooling to get Envoy on your infrastructure. There’s no need to reinvent the wheel to put Envoy in place as a front proxy or sidecar.
Fail quickly and apply back pressure downstream instead of letting connections pile up. Start with circuit-breaker limits around 1,000% of your max expected load on most services, so they only trip when something is genuinely wrong.
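For example, if you expect roughly 100 concurrent requests at peak, that starting point works out to limits of about 1,000 on the upstream cluster (the numbers below are illustrative):

```yaml
# On the upstream cluster: trip well above normal load (numbers illustrative;
# ~100 concurrent requests expected at peak -> limits of 1000).
circuit_breakers:
  thresholds:
  - priority: DEFAULT
    max_connections: 1000
    max_pending_requests: 1000
    max_requests: 1000
```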
Fine-tune retries to ensure that hiccups don't lead to downtime, without making things worse when backends are unhealthy.
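A sketch of a route-level retry policy; the values are starting points to tune, not recommendations:

```yaml
# On a route: retry connection failures and 5xx responses, but keep the
# attempt count low and each attempt bounded so retries can't amplify an outage.
retry_policy:
  retry_on: "connect-failure,reset,5xx"
  num_retries: 2
  per_try_timeout: 0.5s
```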
Identify unhealthy hosts within services and automatically remove them from the load balancing rotation until they become healthy again.
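One way to do that is passive outlier detection on the upstream cluster; the thresholds below are examples:

```yaml
# On the upstream cluster: eject hosts that return consecutive 5xx responses,
# then re-admit them after the ejection period (thresholds are examples).
outlier_detection:
  consecutive_5xx: 5
  interval: 10s
  base_ejection_time: 30s
  max_ejection_percent: 50
```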
Separate deploy from release. First, deploy new versions without taking traffic. Then, shift 1% of traffic over to the new version and check the metrics. If everything looks good, try 10%, 50%, 100%. It takes all of the stress out of releasing.
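The traffic-shifting step can be expressed with weighted clusters on a route; the cluster names are placeholders, and you bump the weights as the metrics stay clean:

```yaml
# On a route: send 1% of traffic to the new version, 99% to the old.
weighted_clusters:
  clusters:
  - name: my_service_v1
    weight: 99
  - name: my_service_v2
    weight: 1
```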
Stop trying to run all 5,000 microservices on your laptop. You don’t have enough RAM. You’ll never have enough RAM.
Moving to Kubernetes? Going multi-cloud? Deconstructing your monolith? Run traffic to both versions simultaneously and capture metrics as you build.
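One way to run both stacks at once is request mirroring: keep serving from the existing cluster while shadowing a copy of each request to the new one. The cluster names here are placeholders, and mirrored responses are discarded, so the new stack can be measured without user impact:

```yaml
# On a route: serve from the existing cluster, mirror 100% of requests
# to the new environment for metrics only (names are placeholders).
route:
  cluster: monolith
  request_mirror_policies:
  - cluster: kubernetes_service
    runtime_fraction:
      default_value: { numerator: 100, denominator: HUNDRED }
```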