Load Balancers
The Brightbox Cloud Load Balancer service distributes traffic between a pool of your Cloud Servers, allowing you to scale your systems and have automatic fault tolerance. The balancer runs continuous health checks and removes any unresponsive servers from the pool until they recover.
Load Balancers are configurable via our API, and therefore also via our CLI tool and GUI. For a step-by-step guide on using the CLI to manage Load Balancers, see the Load Balancer CLI Guide.
Load Balancers require a Cloud IP to be reachable and support both IPv4 and IPv6.
Listeners
Each Load Balancer consists of one or more listeners which control the ports and protocols it responds to.
Listeners have five attributes:
- `in-port` is the port that the Load Balancer listens on for incoming connections.
- `out-port` is the port the Load Balancer will connect to on your back-end servers. This will usually be the same as `in-port`.
- `type` is the protocol the listener should use. It can be `http`, `https` or `tcp` (see below).
- `proxy_protocol` is the version of the proxy protocol the listener should use to talk to the back end. It can be `v1`, `v2`, `v2-ssl` or `v2-ssl-cn`. If not specified, the proxy protocol is switched off.
- `timeout` is the time (in milliseconds) after which inactive connections will be closed. It defaults to 50 seconds if not specified.
So, if your back-end servers have a web service running on port 3000 but you want the Load Balancer to serve requests on port 80, you would use a listener with an `in-port` of 80 and an `out-port` of 3000.
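As an illustration, that listener could be written out as settings like the following sketch. The keys simply mirror the attribute names described above; the exact field names expected by the API or CLI may differ, so check the API reference or the Load Balancer CLI Guide.

```python
# Illustrative listener settings for the port 80 -> 3000 example above,
# using the attribute names from this guide (not necessarily the exact API field names).
listener = {
    "in-port": 80,             # port the Load Balancer accepts connections on
    "out-port": 3000,          # port it connects to on each back-end server
    "type": "http",            # "http", "https" or "tcp"
    "timeout": 50000,          # milliseconds; defaults to 50000 if omitted
    # "proxy_protocol": "v1",  # proxy protocol is off unless a version is given
}
```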
There are three listener types:
TCP
If the type is set to `tcp`, the Load Balancer makes a straight, unmodified TCP connection to the back-end servers. This is useful for load balancing non-HTTP services such as SMTP or SSH.
HTTP
If the listener type is `http`, the Load Balancer parses the request and adds an `X-Forwarded-For` header, so your back-end servers can see the IP address of the client.

The `X-Forwarded-Proto` header is explicitly removed from HTTP requests.
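As an illustration of how a back-end application can use this header, here is a minimal WSGI sketch (the framework choice and port are just an example and not part of the Load Balancer service) that extracts the original client address:

```python
# Minimal WSGI sketch: read the client IP supplied by the Load Balancer.
# X-Forwarded-For can contain a comma-separated chain if other proxies are
# involved; the left-most entry is the original client.
def app(environ, start_response):
    forwarded_for = environ.get("HTTP_X_FORWARDED_FOR", "")
    client_ip = forwarded_for.split(",")[0].strip() or environ.get("REMOTE_ADDR", "unknown")
    body = f"Hello, {client_ip}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    # Listen on port 3000 to match the out-port example above.
    make_server("", 3000, app).serve_forever()
```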
HTTPS
The `https` type supports secure connections using TLS 1.0, 1.1, 1.2 and 1.3. To prevent the use of older TLS versions, you can require a minimum supported version using the `ssl_minimum_version` parameter. The cipher suite follows Mozilla’s own recommendations.

The Load Balancer adds an `X-Forwarded-Proto: https` header to requests so that back-end servers can distinguish them as having been encrypted.

All HTTPS listeners on a Load Balancer use the same X.509 certificate and private key. There are two ways to specify certificates, manual and automatic. See the certificates section below for more details.
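If you want to confirm which TLS version clients actually negotiate, for example after setting `ssl_minimum_version`, a quick check like this sketch works from any machine (the hostname is a placeholder for a domain that resolves to a Cloud IP mapped to your Load Balancer):

```python
# Sketch: report the TLS version and cipher negotiated with the Load Balancer.
import socket
import ssl

HOST = "www.example.com"   # placeholder: a domain pointing at your Cloud IP
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated:", tls.version())   # e.g. "TLSv1.3"
        print("cipher:", tls.cipher()[0])
```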
WebSockets
With `http` and `https` listener types, Load Balancers can accept both standard HTTP and WebSocket connections over the same port. The `timeout` (either default or specified) is applied to standard HTTP connections, while WebSocket connections are given a fixed timeout of one day.
Timeout
The timeout setting determines how long inactive connections remain open before they are closed by the Load Balancer. The timeout is specified in milliseconds and must be between `5000` and `86400000` (one day). By default the timeout is `50000` milliseconds (50 seconds).
Proxy Protocol Support
The Proxy Protocol passes details of the client connection to the back end in a way similar to the `X-Forwarded-For` and `X-Forwarded-Proto` headers in HTTP, but does so in a protocol-agnostic manner. This allows you to pass details to other protocols like SMTP, but it works just as well with HTTP connections. The software at the back end has to support the Proxy Protocol and have that support switched on. Once you have done that, set the version of the proxy protocol supported by the back-end software in the Load Balancer configuration.
| Version | Description |
|---|---|
| `v1` | Sends the version 1 text-based protocol to the back end. |
| `v2` | Sends the version 2 binary protocol to the back end. |
| `v2-ssl` | Adds the SSL information extension to the v2 protocol. |
| `v2-ssl-cn` | Adds the SSL information extension to the v2 protocol, along with the Common Name from the subject of the client certificate (if any). |
Bear in mind that the proxy protocol, if activated, will be added to health check connections as well.
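To make the version 1 format concrete, here is a small sketch of the parsing a back end would do on the text header it receives at the start of each connection when `proxy_protocol` is set to `v1` (the sketch is illustrative, not part of the Load Balancer service):

```python
# Sketch: parse a Proxy Protocol v1 header line. The format is
# "PROXY <TCP4|TCP6|UNKNOWN> <src addr> <dst addr> <src port> <dst port>\r\n",
# sent ahead of the normal traffic on each connection (including health checks).
def parse_proxy_v1(header_line: bytes) -> dict:
    parts = header_line.rstrip(b"\r\n").decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a Proxy Protocol v1 header")
    if parts[1] == "UNKNOWN":
        return {"family": "UNKNOWN"}
    family, src, dst, src_port, dst_port = parts[1:6]
    return {
        "family": family,              # "TCP4" or "TCP6"
        "client_addr": src,            # the original client address
        "client_port": int(src_port),
        "dest_addr": dst,              # the address the client connected to
        "dest_port": int(dst_port),
    }

print(parse_proxy_v1(b"PROXY TCP4 203.0.113.7 10.0.0.5 56324 3000\r\n"))
```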
HTTPS Certificates
Load Balancers support two different means of serving HTTPS certificates.
Automated certificate management
Automatic certificate management uses the Let’s Encrypt certificate authority to generate and renew free certificates, without the need for you to create your own key or CSR.
Just provide a list of domain names and a certificate will be generated for you automatically and kept renewed.
For the certificate to be generated successfully, each domain name you specify must resolve to a Cloud IP mapped to the Load Balancer.
If you specify domains on a Load Balancer with no Cloud IPs mapped, a certificate will not be generated until you map a Cloud IP to it. This allows you to seamlessly transition from a Cloud Server or another Load Balancer.
Each domain associated with a Load Balancer has a status:

- `pending` means the domain is awaiting a certificate; either the generation request is in progress or the Load Balancer does not yet have a Cloud IP mapped.
- `valid` means that a certificate has been successfully created for the domain.
- `invalid` means that there was an error creating the certificate for the domain. It could mean the domain doesn’t resolve to a Cloud IP mapped to the Load Balancer, or it may not resolve or exist at all.
Invalid domains are dropped from the certificate, leaving only the valid domains. If you rectify the problem with a domain, re-add it to the Load Balancer and the certificate will be updated.
If a valid domain later becomes invalid, it will remain on the certificate until you update the domain list or until the automatic renewal time, which is currently every 90 days. You will receive a notification by email of the invalid domains before the renewal period, giving you time to rectify the problem.
If you don’t rectify an invalid domain before the renewal time, it will be dropped from the certificate leaving only the valid domains.
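Because an `invalid` status is usually a DNS problem, a quick check like the following sketch (the Cloud IP address and domain names are placeholders) can confirm that each domain resolves to a Cloud IP mapped to the Load Balancer before you re-add it:

```python
# Sketch: check that each domain resolves to the Cloud IP mapped to the
# Load Balancer. The address and hostnames below are placeholders.
import socket

CLOUD_IP = "203.0.113.10"                      # placeholder Cloud IP address
domains = ["example.com", "www.example.com"]   # placeholder domain names

for domain in domains:
    try:
        resolved = {info[4][0] for info in socket.getaddrinfo(domain, 443, socket.AF_INET)}
    except socket.gaierror:
        print(f"{domain}: does not resolve")
        continue
    if CLOUD_IP in resolved:
        print(f"{domain}: ok")
    else:
        print(f"{domain}: resolves to {', '.join(sorted(resolved))}")
```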
Manual certificate management
Certificates purchased elsewhere can also be provided manually in PEM format, along with the key. Any intermediate certificates should go after the main certificate in the certificate PEM file.
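For example, a combined PEM file in that order could be assembled with a small sketch like this (the file names are placeholders for your own certificate and intermediate files):

```python
# Sketch: build the certificate PEM with the main certificate first and any
# intermediate certificates after it. File names are placeholders.
from pathlib import Path

parts = ["www_example_com.crt", "intermediate_ca.crt"]   # main certificate first
combined = "\n".join(Path(p).read_text().strip() for p in parts) + "\n"
Path("certificate.pem").write_text(combined)
```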
HTTPS Redirect
For Load Balancers with HTTPS configured, the `https_redirect` option can be set, which will automatically redirect all HTTP requests to HTTPS and issue an HSTS header to browsers for future requests.
HTTP/2
For increased performance over HTTPS connections, Load Balancers will use HTTP/2 where clients support it.
Health Checks
Each Load Balancer can have one health check. A health check defines how the Load Balancer detects problems with your back-end servers. The Load Balancer will not send requests to unhealthy back-end servers until they recover.
Port and timings
The health check has several options. The `port` is the TCP port that the Load Balancer will attempt to connect to on each back-end server. `timeout` is how long in milliseconds the Load Balancer will wait for the connection to complete before deciding the health check failed. `interval` is the time in milliseconds between each health check.

The health checks are run for each listener, so if you specify a 20-second interval and have 2 listeners, you will actually see checks every 10 seconds on the back-end servers.
Types
The `type` option specifies whether the health check is a standard TCP connect attempt or a more detailed HTTP check. It can be set to `tcp` or `http`.

When `type` is set to `http`, the `request` option defines the path to be used by the Load Balancer when making the HTTP health check request.

When `type` is set to `tcp`, the `request` option is ignored.
Thresholds
There are two “thresholds” associated with the health check which are used to control when back-end servers are considered unhealthy.
`threshold_down` sets the number of consecutive health checks that must fail for the server to be considered unhealthy. This helps prevent a transient error on one of your servers from immediately marking it unhealthy.

`threshold_up` sets the number of consecutive health checks that must succeed for an unhealthy back-end server to be considered healthy again and ready for new requests. This helps when a back-end server isn’t completely unhealthy and some health checks are succeeding; in that case you usually do not want the server to start receiving requests.
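Putting the options above together, a health check definition might look something like this sketch. The keys mirror the option names used in this guide; the exact field names expected by the API may differ, so check the API reference.

```python
# Illustrative health check settings, using the option names from this guide.
health_check = {
    "type": "http",           # "tcp" or "http"
    "port": 3000,             # TCP port checked on each back-end server
    "request": "/healthz",    # path used for http checks; ignored for tcp
    "interval": 20000,        # milliseconds between checks (per listener)
    "timeout": 5000,          # milliseconds before a check counts as failed
    "threshold_down": 3,      # consecutive failures before a server is unhealthy
    "threshold_up": 2,        # consecutive successes before it is healthy again
}
```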
Balancing Policies
The Load Balancer policy defines how requests are distributed between servers.
Round Robin
When the `policy` is set to `round-robin`, the Load Balancer simply passes each new request to each back-end server in turn. So request 1 goes to server A, request 2 to server B, request 3 to server C and request 4 to server A again.
This policy is best when your requests tend to take about the same amount of time to complete.
Least Connections
When the `policy` is set to `least-connections`, the Load Balancer passes each new request to the back-end server with the fewest connections currently open to it. This policy is good for when the amount of time your requests take to complete varies a lot.
Source Address
When the `policy` is set to `source-address`, the Load Balancer tries to send requests from the same source IP address to the same back-end server. This provides a kind of best-effort session stickiness, which may help with performance if you have some localised caching in your app.
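The behaviour of the three policies can be illustrated with a toy selection sketch. This is purely illustrative and is not how the Load Balancer is implemented internally.

```python
# Toy illustration of the three balancing policies described above.
import itertools
import zlib

servers = ["srv-a", "srv-b", "srv-c"]

# round-robin: each new request goes to the next server in turn
rr = itertools.cycle(servers)
print([next(rr) for _ in range(4)])       # ['srv-a', 'srv-b', 'srv-c', 'srv-a']

# least-connections: pick the server with the fewest open connections
open_connections = {"srv-a": 12, "srv-b": 3, "srv-c": 7}
print(min(open_connections, key=open_connections.get))    # srv-b

# source-address: the same client IP consistently maps to the same server
def pick_by_source(client_ip: str) -> str:
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]

print(pick_by_source("203.0.113.7") == pick_by_source("203.0.113.7"))   # True
```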
IPv6
IPv6 connections are accepted by the Load Balancer and turned into IPv4 connections to the back end. This allows you to easily IPv6-enable your deployments without modification. Just advertise your Cloud IP’s IPv6 address in your DNS.

For `http` listeners, client IPv6 source addresses will be provided in the `X-Forwarded-For` header.