Today's distributed systems can consist of hundreds or even thousands of servers, and getting them to work together efficiently is a challenge. Load balancing offers a range of techniques that help SREs meet this challenge.
In this course, you'll explore how front-end load balancing works and its associated techniques, concepts, and capabilities. You'll examine the characteristics of load balancers, their use in application delivery and security, and the use of DNS load balancers. You'll outline strategies for virtual IP load balancing, cloud load balancing, and handling overload. Finally, you'll learn how the Google Front End Service, the Andromeda virtualization stack, the Maglev network load balancing service, and the Envoy edge and service proxy are used for load-balancing tasks.
define what is meant by front-end load balancing, recognize how it improves performance, classify load balancer types, and describe three load balancer algorithms
list considerations when implementing load balancing and outline several techniques to achieve it
name possible uses of the concepts associated with front-end load balancing
outline how to balance loads using DNS load balancers
outline how to balance loads using virtual IP load balancers
describe how load balancing should be performed when working with virtualization, the cloud, and containers
describe the features of load balancers and their use in application delivery and security
outline methods for managing and handling overload
relate how the Google Front End Service is used to manage loads
indicate how the Andromeda virtualization stack is used as a software-defined network (SDN)
describe the architecture and components of the Maglev network load balancing service and how it's used for high availability
relate how the Envoy edge and service proxy works and recognize the benefits of its use
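To give a flavor of the load balancer algorithms the objectives mention, here is a minimal sketch of three common ones: round robin, least connections, and source IP hashing. This is an illustrative example only; the course doesn't specify which three algorithms it covers, and the backend addresses and function names below are hypothetical.

```python
import zlib
from itertools import cycle

# Hypothetical backend pool used for illustration.
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def round_robin(backends):
    """Yield backends in a fixed rotation, spreading requests evenly."""
    return cycle(backends)

def least_connections(active):
    """Pick the backend currently serving the fewest connections.

    `active` maps each backend to its open-connection count.
    """
    return min(active, key=active.get)

def ip_hash(client_ip, backends):
    """Hash the client IP so the same client lands on the same backend."""
    return backends[zlib.crc32(client_ip.encode()) % len(backends)]

rr = round_robin(BACKENDS)
print([next(rr) for _ in range(4)])  # rotation wraps back to the first backend
print(least_connections({"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}))  # → 10.0.0.2
print(ip_hash("198.51.100.7", BACKENDS))  # same client IP always maps to the same backend
```

Round robin assumes roughly uniform request cost; least connections adapts to uneven workloads; IP hashing trades even spread for session affinity.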