Intro to Load Balancing on AWS for Software Engineers

Avni Patel
2 min read · Jan 1, 2021

Introduction

This is a really basic explanation of routing traffic to applications in AWS.

In order for traffic to reach your application, you need an endpoint: the specific address (typically an IP address) a user connects to in order to access the application. However, applications and their endpoints can be ephemeral, so a given IP may not always point to the same application. Applications can go down and come back up at any time, possibly with a different endpoint. Most applications also don’t run as a single instance, but as many instances, each with its own endpoint that can change at any time.

So how does a user know which endpoint points to an application? Enter load balancers. A load balancer exposes a single, stable endpoint and routes traffic from it to the endpoints of the application’s instances. It can also distribute traffic using different policies, so that no single instance gets hit with a ton of requests, or so that a user lands on an instance that is closer to them.
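
As a mental model only, here is a toy Python sketch of the idea. This is not AWS code; the instance endpoints and the round-robin policy are made up for illustration.

from itertools import cycle

class ToyLoadBalancer:
    def __init__(self, instance_endpoints):
        # The instances behind the balancer; in real life these change at any time.
        self.update_endpoints(instance_endpoints)

    def update_endpoints(self, instance_endpoints):
        # Swap in a fresh set of endpoints when instances come and go.
        self._endpoints = cycle(instance_endpoints)

    def route(self, request):
        # Round-robin policy: each request goes to the next instance in turn.
        endpoint = next(self._endpoints)
        return f"sending {request!r} to {endpoint}"

lb = ToyLoadBalancer(["10.0.1.12:8080", "10.0.2.47:8080", "10.0.3.9:8080"])
for i in range(4):
    print(lb.route(f"GET / #{i}"))

The user only ever talks to the load balancer; the instance endpoints can be swapped out underneath without the user noticing.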

Load balancers in AWS

Load balancers are all over AWS. Some AWS services require more manual configuration, while others can be set directly as the target of a load balancer.

In AWS, load balancers route traffic using listeners. A listener accepts incoming connections on a configured port and protocol and forwards them to a target group. Target groups are groups of one or more targets, such as EC2 instances, ECS tasks, or Lambda functions.
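
As a rough example, here is a minimal boto3 sketch of that wiring against an existing Application Load Balancer. The target group name, VPC ID, and load balancer ARN below are placeholders.

import boto3

elbv2 = boto3.client("elbv2")

# A target group: a group of one or more targets (here, EC2 instances).
tg = elbv2.create_target_group(
    Name="my-app-targets",          # hypothetical name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# A listener: accepts traffic on the load balancer's port and protocol
# and forwards it to the target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/abc123",  # placeholder
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)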

Here’s how some AWS services use load balancers (a rough boto3 sketch follows the list):

EC2

Individual instances can be registered as targets in a target group, and the load balancer spreads requests across them.

ECS

A service can be pointed at a target group, and ECS registers and deregisters its tasks automatically as they start and stop.

Lambda

An Application Load Balancer can use a Lambda function as a target, invoking it once for each incoming request.
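
For a concrete picture, here is a hedged boto3 sketch of how each of these ends up behind a load balancer. Every name, ID, and ARN below is a placeholder, and the ECS service settings are stripped down to the parts that matter here.

import boto3

elbv2 = boto3.client("elbv2")
ecs = boto3.client("ecs")
lam = boto3.client("lambda")

# EC2: register instances directly in an instance-type target group.
elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/ec2-tg/111",  # placeholder
    Targets=[{"Id": "i-0123456789abcdef0"}],
)

# ECS: the service definition references the target group; ECS then
# registers and deregisters tasks as they start and stop.
ecs.create_service(
    cluster="my-cluster",            # hypothetical cluster
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=2,
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/ecs-tg/222",  # placeholder
        "containerName": "web",
        "containerPort": 8080,
    }],
)

# Lambda: a lambda-type target group invokes the function for each request.
lambda_tg = elbv2.create_target_group(Name="lambda-tg", TargetType="lambda")
lambda_tg_arn = lambda_tg["TargetGroups"][0]["TargetGroupArn"]
lam.add_permission(
    FunctionName="my-function",      # hypothetical function
    StatementId="allow-alb",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=lambda_tg_arn,
)
elbv2.register_targets(
    TargetGroupArn=lambda_tg_arn,
    Targets=[{"Id": "arn:aws:lambda:...:function:my-function"}],  # placeholder ARN
)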

Conclusion

In each case, the user does not need to worry about the underlying endpoint of each instance of an application. Here we spoke about load balancing at the application layer; however, load balancing can also be done at other layers of the OSI model, for example at the transport layer with a Network Load Balancer.
