HOWTO
WWW
HTTPS ROUTING WITH HAPROXY

Published: 20210105

Tested on:
* Ubuntu Server 20.04.1 on a VMware VM.
* Ubuntu for IoT 20.04.1 (arm64) on a Raspberry Pi 4 Model B 4GB.

-

INDEX
01. Intro
02. Setup HTTP
03. Adding HTTPS
04. IPv6 traffic bypassing the load balancer
05. Load balancer routes encrypted IPv4 traffic (aka "SNI Routing")
06. TLS termination on load balancer with re-encryption
07. Testing with curl

BONUS - Using the load balancer as intended
08. Symmetrical IPv6/IPv4 setup with SNI routing
09. Symmetrical IPv6/IPv4 setup with mixed SNI routing and TLS termination

  01. INTRO

    The idea is the following...

    You have two different websites, which you want to serve from two different web servers. You have many IPv6 addresses, but you have only one IPv4 address. You also want this to work with both HTTP (with 301 redirect) and HTTPS (with HSTS).

    The setup is the following:

  02. Setup HTTP

    There are 5 configurations to be made:

    1. Configure HAProxy on the load balancer
    2. Configure HTTP on webserver1 port 80
    3. Configure HTTP on webserver1 port 8080
    4. Configure HTTP on webserver2 port 80
    5. Configure HTTP on webserver2 port 8080

    On loadbalancer1:

    $ sudo apt install haproxy
    $ sudo vi /etc/haproxy/haproxy.cfg
    

    Add these lines to the bottom:

    ### HTTP ###
     
    frontend fe_http
      mode http
      bind *:80
      acl host_site1 hdr(host) -i site1.example.com
      acl host_site2 hdr(host) -i site2.example.com
      use_backend be_webserver1_8080 if host_site1
      use_backend be_webserver2_8080 if host_site2
      default_backend be_webserver1_8080
     
    backend be_webserver1_8080
      mode http
      server webserver1 [2001:db8:2::2]:8080 check send-proxy
     
    backend be_webserver2_8080
      mode http
      server webserver2 [2001:db8:2::3]:8080 check send-proxy
    

    It is not necessary to name your frontend fe_name, or your backend be_name, but it is useful.

    Restart HAProxy:

    $ sudo systemctl restart haproxy.service
    
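    Tip: you can ask HAProxy to validate the configuration file before restarting, so a typo never takes the frontend down. A small sketch (assumes the haproxy binary installed above; shown as an operational fragment):

```
# -c runs HAProxy in check mode against the config file given with -f;
# the restart only runs if the check passes.
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg && sudo systemctl restart haproxy.service
```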

    On webserver1:

    $ sudo apt -y install apache2
    $ sudo mkdir -p /home/www/site1.example.com
    $ sudo vi /etc/apache2/ports.conf
    

    Make sure that these two lines are in ports.conf:

    Listen 80
    Listen 8080
    

    Continue (on webserver1):

    $ sudo vi /etc/apache2/sites-available/site1.example.com-80.conf
    

    Put this inside the file:

    <VirtualHost *:80>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/www/site1.example.com
      ServerName    site1.example.com
      ErrorLog      /var/log/apache2/site1.example.com-error.log
      CustomLog     /var/log/apache2/site1.example.com-access.log combined
     
      <Directory /home/www/site1.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    Also create this file:

    $ sudo vi /etc/apache2/sites-available/site1.example.com-8080.conf
    

    Put this inside the file:

    <VirtualHost *:8080>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/www/site1.example.com
      ServerName    site1.example.com
      ErrorLog      /var/log/apache2/site1.example.com-error.log
      CustomLog     /var/log/apache2/site1.example.com-access.log combined
     
      # requires remoteip and apache 2.4.31
      RemoteIPProxyProtocol on
     
      <Directory /home/www/site1.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    Enable the RemoteIP module:

    $ sudo a2enmod remoteip
    $ sudo systemctl restart apache2
    

    Enable the two configurations:

    $ sudo a2ensite site1.example.com-80.conf
    $ sudo a2ensite site1.example.com-8080.conf
    $ sudo systemctl reload apache2
    

    Both of these configurations are set up so that the directory /home/www/site1.example.com/ serves the website http://site1.example.com/

    The first configuration sets up the homepage on port 80. The second configuration sets up the homepage on port 8080.

    This distinction is important, because the configuration on port 8080 has one extra line: RemoteIPProxyProtocol on

    Together with the word send-proxy in the haproxy.cfg file on the load balancer:

    backend be_webserver1_8080
      mode http
      server webserver1 [2001:db8:2::2]:8080 check send-proxy
    

    This makes sure that when an HTTP request is routed through the load balancer, HAProxy passes along where the request came from to Apache on port 8080. Apache picks up the request and logs the original address that requested the page. This way, Apache won't log everything that went through the load balancer with the address of the load balancer. The logs will be nice!
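    Concretely, send-proxy makes HAProxy prepend one plain-text line (the PROXY protocol, version 1) to the connection before the HTTP data, which Apache's RemoteIPProxyProtocol then consumes. A sketch of what that line looks like, with made-up addresses:

```shell
# PROXY protocol v1 header: protocol, client address, proxy address,
# client port, frontend port, terminated by CRLF.
# (The addresses below are illustrative, not from a real capture.)
hdr='PROXY TCP6 2001:db8:1::10 2001:db8:2::1 54321 80'
printf '%s\r\n' "$hdr"
```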

    As you probably have guessed already, the configuration of webserver2 is just like the configuration of webserver1.
    Change every "site1" to "site2" in the Apache conf files and you're good.

  03. Adding HTTPS

    This article will not go into detail on how to set up HTTPS, 301 redirects, HSTS, keys and certificates. I've already written a recent article on how to install Apache with HTTPS.

    However, there are some details you need to know when encryption gets added into the load balancer.

    The network topology is almost identical, only the ports are changed:

    pic_3.0.svg

    However... The configuration gets more complicated with the added encryption. With this topology, there are many possible cases.

    This article will cover the cases listed in the index above (sections 04 through 07).

    There are more cases than the ones I mentioned, e.g. TLS/SSL offloading on the load balancer. At this time I will not write about it. There are already so many abominations on this topic. (Yes, I know there are also valid cases.)

  04. IPv6 traffic bypassing the load balancer

    The HTTPS client connects directly to the web server.

    There's not much to say. The client has a connection with E2EE (end-to-end encryption) directly to the right web server. The web server already has HTTPS configured for the web page in question.

    pic_4.0.svg

    It might be a good idea to have a separate dir for your HTTPS site (e.g. /home/wwws/site1.example.com). This way it will be easier to have 301 redirection on your HTTP site, and HSTS settings on your HTTPS site.
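    With that layout, the port-80 vhost can be reduced to a plain 301 redirect. A minimal sketch using Apache's Redirect directive (from mod_alias; adapt names and paths to your setup):

```
<VirtualHost *:80>
  ServerName site1.example.com
  # "permanent" makes this a 301 redirect
  Redirect permanent / https://site1.example.com/
</VirtualHost>
```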

  05. Load balancer routes encrypted IPv4 traffic (aka "SNI Routing")

    This one is interesting. The client maintains the E2EE, while the load balancer looks at the SNI (Server Name Indication) data to discern which server to send the request to.

    pic_5.0.svg

    The configuration is very similar to the HTTP example above.

    There are 5 configurations to be made:

    1. Configure HAProxy on the load balancer
    2. Configure HTTPS on webserver1 port 443
    3. Configure HTTPS on webserver1 port 8443
    4. Configure HTTPS on webserver2 port 443
    5. Configure HTTPS on webserver2 port 8443

    On loadbalancer1:

    $ sudo vi /etc/haproxy/haproxy.cfg
    

    Add these lines to the bottom:

    ### HTTPS ###
     
    frontend fe_https
      mode tcp
      bind *:443
      tcp-request inspect-delay 5s
      tcp-request content accept if { req_ssl_hello_type 1 }
      use_backend be_webserver1_8443  if { req_ssl_sni -i site1.example.com }
      use_backend be_webserver2_8443  if { req_ssl_sni -i site2.example.com }
      default_backend be_webserver1_8443
     
    backend be_webserver1_8443
      mode tcp
      server webserver1 [2001:db8:2::2]:8443 check send-proxy
     
    backend be_webserver2_8443
      mode tcp
      server webserver2 [2001:db8:2::3]:8443 check send-proxy
    

    It is not necessary to name your frontend fe_name, or your backend be_name, but it is useful.

    Restart HAProxy:

    $ sudo systemctl restart haproxy.service
    

    On webserver1:

    $ sudo vi /etc/apache2/ports.conf
    

    Make sure that these lines are in ports.conf (they probably already are):

    <IfModule ssl_module>
      Listen 443
      Listen 8443
    </IfModule>
     
    <IfModule mod_gnutls.c>
      Listen 443
      Listen 8443
    </IfModule>
    

    Continue (on webserver1):

    $ sudo vi /etc/apache2/sites-available/site1.example.com-443.conf
    

    Put this inside the file:

    <VirtualHost *:443>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/wwws/site1.example.com
      ServerName    site1.example.com
     
      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
      SSLCertificateFile      /etc/letsencrypt/live/site1.example.com/cert.pem
      SSLCertificateKeyFile   /etc/letsencrypt/live/site1.example.com/privkey.pem
      SSLCACertificateFile    /etc/letsencrypt/live/site1.example.com/chain.pem
     
      ErrorLog      /var/log/apache2/site1.example.com-error.log
      CustomLog     /var/log/apache2/site1.example.com-access.log combined
      <Directory    /home/wwws/site1.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    Also create this file:

    $ sudo vi /etc/apache2/sites-available/site1.example.com-8443.conf
    

    Put this inside the file:

    <VirtualHost *:8443>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/wwws/site1.example.com
      ServerName    site1.example.com
     
      # HAProxy with send-proxy to retrieve original IP-address, Apache 2.4.31
      # sudo a2enmod remoteip
      RemoteIPProxyProtocol on
     
      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
      SSLCertificateFile      /etc/letsencrypt/live/site1.example.com/cert.pem
      SSLCertificateKeyFile   /etc/letsencrypt/live/site1.example.com/privkey.pem
      SSLCACertificateFile    /etc/letsencrypt/live/site1.example.com/chain.pem
     
      ErrorLog      /var/log/apache2/site1.example.com-error.log
      CustomLog     /var/log/apache2/site1.example.com-access.log combined
      <Directory    /home/wwws/site1.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    If you haven't already, enable the SSL module and the RemoteIP module:

    $ sudo a2enmod ssl
    $ sudo a2enmod remoteip
    $ sudo systemctl restart apache2
    

    Don't forget the new dir where the site is stored:

    $ sudo mkdir -p /home/wwws/site1.example.com
    

    Enable the two configurations:

    $ sudo a2ensite site1.example.com-443.conf
    $ sudo a2ensite site1.example.com-8443.conf
    $ sudo systemctl reload apache2
    

    Both of these configurations are set up so that the directory /home/wwws/site1.example.com/ serves the website https://site1.example.com/

    The first configuration sets up the homepage on port 443. The second configuration sets up the homepage on port 8443.

    The configuration of webserver2 is just like the configuration of webserver1.
    Change every "site1" to "site2" in the Apache conf files and you're good.

  06. TLS termination on load balancer with re-encryption, mixed with SNI routing

    Sometimes a site (site1.example.com) needs to terminate the TLS connection on the load balancer itself, but you also need the traffic to be encrypted when it leaves the load balancer:

    pic_6.0.svg

    But you still have IPv6 traffic bypassing the load balancer:

    pic_6.1.svg

    And you still have another site (site2.example.com) that uses SNI routing, with E2EE, when going over IPv4:

    pic_6.2.svg

    But you still have IPv6 traffic bypassing the load balancer:

    pic_6.3.svg

    The solution is to route all HTTPS traffic through the SNI "sorter" and then set up another port that has TLS termination for the server name that has to terminate at the load balancer.

    The configuration will look like this.

    On loadbalancer:

    $ sudo vi /etc/haproxy/haproxy.cfg
    

    In the HTTPS section, in frontend fe_https, change one line.
    Then also add one new frontend and two new backends:

    ### HTTPS ###
    frontend fe_https
      mode tcp
      bind *:443
      tcp-request inspect-delay 5s
      tcp-request content accept if { req_ssl_hello_type 1 }
      # use_backend be_webserver1_8443   if { req_ssl_sni -i site1.example.com } # remove this line
      use_backend be_loopback_site1_8008 if { req_ssl_sni -i site1.example.com } # add this line
      use_backend be_webserver2_8443     if { req_ssl_sni -i site2.example.com }
      default_backend be_webserver1_8443
     
    backend be_webserver1_8443
      mode tcp
      server webserver1 [2001:db8:2::2]:8443 check send-proxy
     
    backend be_webserver2_8443
      mode tcp
      server webserver2 [2001:db8:2::3]:8443 check send-proxy
     
    # new backend, specific for site1 
    backend be_loopback_site1_8008
      mode tcp
      server fe_loopback_site1_8008 [::1]:8008 check send-proxy
     
    # new (internal) frontend, specific for site1
    frontend fe_loopback_site1_8008
      mode tcp
      bind [::1]:8008 accept-proxy ssl crt /etc/haproxy/keys/site1.example.com/haproxy.pem
      default_backend be_webserver1_8443_local_ca
     
    # new backend for webserver1, alternative setup with local CA
    backend be_webserver1_8443_local_ca
      mode tcp
      server webserver1 [2001:db8:2::2]:8443 check send-proxy verify required sni str(site1.example.com) ca-file /etc/ssl/certs/ca-certificates.crt
    


    It might seem counterintuitive to have an alternative backend pointing to webserver1 on port 8443, but with the local CA instead. It works because, on the web server end, the key and certificate to use are set per site configuration, not per port.

    Here's the flow of the request (as described by the configuration above):

    1. The request for site1.example.com is caught by the frontend fe_https
    2. fe_https then routes the request to a backend called be_loopback_site1_8008
    3. be_loopback_site1_8008 then routes the request to a new frontend, fe_loopback_site1_8008, that is only reachable via a loopback IPv6 address.
    4. fe_loopback_site1_8008 terminates the TLS connection with a certificate/pemfile. This pemfile has three items (simple concatenation): the private key, the public certificate and the intermediate certificate (aka "fullchain.pem").
    5. fe_loopback_site1_8008 then sends the request (now un-encrypted) to the new backend called be_webserver1_8443_local_ca
    6. be_webserver1_8443_local_ca then re-encrypts the request, by "calling" webserver1, but with a local CA certificate. This works by specifying where the intermediate certificate for the local CA can be found.

    As you can see in the configuration, when requests go from HAProxy to HAProxy, the backend's "send-proxy" is caught in the frontend with "accept-proxy".

    Also, the last line of the configuration is quite specific:

    server webserver1 [2001:db8:2::2]:8443 check send-proxy verify required sni str(site1.example.com) ca-file /etc/ssl/certs/ca-certificates.crt
    

    Let's break it down:

    * check: checks that the backend webserver1 is available.
    * send-proxy: sends the original IPv4 address to the web server logs.
    * verify required: forces verification of webserver1's certificate.
    * sni str(site1.example.com): forces verification of a specific SNI.
    * ca-file /etc/ssl/certs/ca-certificates.crt: tells HAProxy which file holds the intermediate certificate for the backend's local CA.

    And no, HAProxy doesn't allow a long line to be broken up into a multi-line configuration.

    Also, as you might have picked up by now: you more or less need to set up a backend and a frontend for each site that TLS terminates on the load balancer, so each such site will take up a port. The good news is that you can use IPv6 internally, so you don't need to sacrifice a lot of ports; you could just as well give each site its own local IPv6 address on the loopback interface.

    My suggestion is that you keep port 8008 for loopback traffic. It is an IANA-registered port for http-alt, and it will help you not to mix it up with 8080 and 8443.

    I also suggest that you allocate a ULA IPv6 range on your loopback device (see RFC 4193).
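    Combined, a second terminated site would look something like this (the fd00::-range address is an illustrative ULA, and site2 is only hypothetically terminated here; in this article's setup it uses SNI routing):

```
# hypothetical second terminated site on its own ULA loopback address
frontend fe_loopback_site2_8008
  mode tcp
  bind [fd00:aaaa:bbbb::2]:8008 accept-proxy ssl crt /etc/haproxy/keys/site2.example.com/haproxy.pem
  default_backend be_webserver2_8443_local_ca
```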

    Also, concerning keys...

    There are three places where you need to have a private key (and certificate):

    1. On webserver1 port 443
    2. On HAProxy port 443
    3. On webserver1 port 8443

    In this article I use Let's Encrypt, but almost any other SSL/TLS certificate vendor should be fine.

    What I do is run Let's Encrypt's certbot on webserver1, since certbot does the "ACME standard" challenge over port 80, which HAProxy is already configured to route to the correct backend web server. Then a crontab script copies the key and certificate over to HAProxy (with scp or rsync) and reuses the key there.

    I don't think this is a "beautiful" way of solving TLS termination with re-encryption, but it is a way.

    Here's my crontab script (just make sure the dir /etc/haproxy/keys/site1.example.com exists on loadbalancer1):

    #!/bin/sh
     
    certbot renew
     
    rm /etc/letsencrypt/live/site1.example.com/haproxy.pem
    touch /etc/letsencrypt/live/site1.example.com/haproxy.pem
    chmod 0600 /etc/letsencrypt/live/site1.example.com/haproxy.pem
    cat /etc/letsencrypt/live/site1.example.com/privkey.pem >> /etc/letsencrypt/live/site1.example.com/haproxy.pem
    cat /etc/letsencrypt/live/site1.example.com/fullchain.pem >> /etc/letsencrypt/live/site1.example.com/haproxy.pem
     
    rsync -e 'ssh -i ~/.ssh/my_keyfile.key' -vac /etc/letsencrypt/live/site1.example.com/haproxy.pem root@loadbalancer1:/etc/haproxy/keys/site1.example.com/
    
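    The order of the two cat lines matters: HAProxy wants the private key and the certificate chain together in one PEM file, and the script writes the key first. A throwaway sketch of the same concatenation with placeholder files (no real keys involved):

```shell
# Placeholder stand-ins for certbot's output files.
printf -- '-----BEGIN PRIVATE KEY-----\nkey\n-----END PRIVATE KEY-----\n' > /tmp/privkey.pem
printf -- '-----BEGIN CERTIFICATE-----\ncert\n-----END CERTIFICATE-----\n' > /tmp/fullchain.pem

# Same order as the crontab script: key first, then chain.
cat /tmp/privkey.pem /tmp/fullchain.pem > /tmp/haproxy.pem
head -n 1 /tmp/haproxy.pem   # the key block comes first
```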

    The Apache configuration for port 443 (on webserver1) is:

    <VirtualHost *:443>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/wwws/site1.example.com
      ServerName    site1.example.com
     
      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
      SSLCertificateFile      /etc/letsencrypt/live/site1.example.com/cert.pem
      SSLCertificateKeyFile   /etc/letsencrypt/live/site1.example.com/privkey.pem
      SSLCACertificateFile    /etc/letsencrypt/live/site1.example.com/chain.pem
     
      ErrorLog      /var/log/apache2/site1.example.com-error.log
      CustomLog     /var/log/apache2/site1.example.com-access.log combined
      <Directory    /home/wwws/site1.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    As you can see, this configuration hasn't changed. However, the Apache configuration on port 8443 is different:

    <VirtualHost *:8443>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/wwws/site1.example.com
      ServerName    site1.example.com
     
      # HAProxy with send-proxy to retrieve original IP-address, Apache 2.4.31
      # sudo a2enmod remoteip
      RemoteIPProxyProtocol on
     
      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
      SSLCertificateFile      /etc/apache2/ssl/site1.example.com/site1.example.com.crt
      SSLCertificateKeyFile   /etc/apache2/ssl/site1.example.com/site1.example.com.key
      SSLCACertificateFile    /etc/apache2/ssl/site1.example.com/myCA.crt
     
      ErrorLog      /var/log/apache2/site1.example.com-error.log
      CustomLog     /var/log/apache2/site1.example.com-access.log combined
      <Directory    /home/wwws/site1.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    The private key and certificate come from a local CA. This article will not go into detail on how to create a CA. If you want to know more, you can read my article on how to create your own CA.

    You also need to install the CA certificate on loadbalancer1.

    First, put your myCA.crt in /usr/local/share/ca-certificates/ on loadbalancer1. Then run this command:

    $ sudo update-ca-certificates
    
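    If you want to double-check that the bundle now validates certificates issued by your CA, you can run the verification manually with openssl (copy the issued certificate, site1.example.com.crt from webserver1, to the load balancer first; the filename comes from the Apache config above):

```
$ openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt site1.example.com.crt
```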

    You're finished!

    But...
    You probably want to test this...
    You might even need to debug this... A lot...

    Read the next section!

  07. Testing with curl

    You're probably going to run into problems and you'll need good tools to debug. curl is very handy, and so is choosing proper "testpoints": servers that are inside and outside your network.

    In this picture I have chosen 3 points that are good places to test from (the machines marked orange). As you can see, testpoint2 and testpoint3 are basically the same, but if you are on the same network as the hosted servers, it is good to be clear about what constitutes a proper test.

    pic_7.0.svg

    On testpoint1 it makes sense to see if the site is working as intended. This is a good point to test from, since the DNS records will point to the public IP-addresses. You should test if you can reach port 80 and port 443, both on IPv6 and IPv4.

    $ curl http://site1.example.com/ -6
    $ curl http://site1.example.com/ -4
    $ curl https://site1.example.com/ -6
    $ curl https://site1.example.com/ -4
    

    To see extensive information on the TLS handshakes and the certificate data, use curl with -v

    $ curl https://site1.example.com/ -6 -v
    $ curl https://site1.example.com/ -4 -v
    

    On testpoint2 (the loadbalancer itself) you should test:

    1. webserver1 works on port 80, without proxy protocol.
    2. webserver1 works on port 8080, with proxy protocol.
    3. webserver1 works on port 443, without proxy protocol.
    4. webserver1 has a proper global certificate on port 443.
    5. loadbalancer1 has the intermediate certificate needed to validate the webserver1 certificate, which comes from the local CA.
    6. webserver1 works on port 8443, with proxy protocol.
    7. webserver1 has a proper certificate on port 8443 from the local CA.

    However, when using curl towards webserver1 you have to add more flags: "calling" webserver1 with the site FQDN will give you the external IP-addresses, while calling webserver1 with the internal IP-address lacks the SNI needed to get the right page and certificate.

    Also, you need to adjust curl when you are connecting to an Apache server that has RemoteIPProxyProtocol enabled. The proxy protocol is not a factor when testing externally, but it is when testing internally (in this particular setup, when testing against port 8080 and port 8443).

    Here are some examples.

    Example 1:

    $ curl https://site1.example.com:8443 --haproxy-protocol
    

    --haproxy-protocol is added when you curl a page that Apache serves with RemoteIPProxyProtocol. It will not work without it.

    Example 2:

    $ curl https://site1.example.com:8443 --resolve site1.example.com:8443:[2001:db8:2::2]
    

    --resolve is like a one-line /etc/hosts.

    You write the normal curl one-liner (with the proper URL, as it sets the SNI).
    Then you add --resolve with some special values, like this:

    $ curl https://<url_of_the_site>:<port_of_the_site> --resolve <hostname_to_translate>:<port_of_the_site>:<address_you_are_connecting_to>
    

    Example 3:

    So let's say you're at testpoint2 or testpoint3 and you want to see if webserver1 is properly set up for site1.example.com on port 8443, the port we have set up with our HAProxy:

    $ curl https://site1.example.com:8443 --haproxy-protocol --resolve site1.example.com:8443:[2001:db8:2::2]
    

    This means you are trying to reach the site site1.example.com on port 8443 with the proxy protocol active, and you're telling curl's resolver that, for this particular command, "site1.example.com:8443" can be reached at 2001:db8:2::2
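    If you test this a lot, the pattern is easy to wrap in a small shell helper. This one is hypothetical (pure string assembly, names made up for illustration):

```shell
# Assemble the internal curl test command for a given site, port
# and backend address (IPv6 addresses in brackets).
make_curl_cmd() {
  site=$1; port=$2; addr=$3
  printf 'curl https://%s:%s --haproxy-protocol --resolve %s:%s:%s\n' \
    "$site" "$port" "$site" "$port" "$addr"
}

make_curl_cmd site1.example.com 8443 '[2001:db8:2::2]'
```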

BONUS - Using the load balancer as intended

The following examples are lacking some details, like which packages to install, etc.
Please read the previous sections for a more complete walkthrough.

  08. Symmetrical IPv6/IPv4 setup with SNI routing

    You can of course skip the asymmetrical setup and have both IPv6 and IPv4 take the same route through the load balancer. It can make key & cert management simpler, and you can use the load balancer more for its intended purpose: balancing traffic to a group of nodes. You can still have additional IPv6 addresses on each web server: one for uplink (pulling down updates, etc.) and a "private" one between the load balancer and the web server (see "Local IPv6 Unicast Addresses" in RFC 4193 Section 3).
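    For example, if site1 later gets a second node, the backend from the HTTP section just grows. A sketch (webserver1b and its address are hypothetical):

```
backend be_webserver1_8080
  mode http
  # distribute requests across the nodes in turn
  balance roundrobin
  server webserver1  [2001:db8:2::2]:8080 check send-proxy
  server webserver1b [2001:db8:2::4]:8080 check send-proxy
```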

    The setup is the following:

  09. Symmetrical IPv6/IPv4 setup with mixed SNI routing and TLS termination

    In this last example both site1.example.com and site2.example.com have symmetrical setups, meaning that both the AAAA and the A records point to the load balancer.

    Then on the load balancer the sites are different.

    site1.example.com goes through the "SNI sorter" on loadbalancer1 and gets routed to an "internal frontend" on port 8008. This "internal frontend" holds the key and certificate for site1.example.com and does TLS termination. The request then gets re-encrypted and routed to webserver1.

    The flow is explained in more detail in 06 - TLS termination on load balancer with re-encryption.

    pic_9.0.svg

    site2.example.com goes through the "SNI sorter" and gets routed ("SNI routed") to webserver2.

    pic_9.1.svg

    Configure HAProxy:

    $ sudo vi /etc/haproxy/haproxy.cfg
    

    Make sure these lines are there:

    ### HTTP ###
     
    frontend fe_http
      mode http
      bind *:80
      acl host_site1 hdr(host) -i site1.example.com
      acl host_site2 hdr(host) -i site2.example.com
      use_backend be_webserver1_8080 if host_site1
      use_backend be_webserver2_8080 if host_site2
      default_backend be_webserver1_8080
     
    backend be_webserver1_8080
      mode http
      server webserver1 [2001:db8:2::2]:8080 check send-proxy
     
    backend be_webserver2_8080
      mode http
      server webserver2 [2001:db8:2::3]:8080 check send-proxy
     
    ### HTTPS ###
     
    frontend fe_https
      mode tcp
      bind *:443
      tcp-request inspect-delay 5s
      tcp-request content accept if { req_ssl_hello_type 1 }
      use_backend be_loopback_site1_8008 if { req_ssl_sni -i site1.example.com }
      use_backend be_webserver2_8443     if { req_ssl_sni -i site2.example.com }
      default_backend be_webserver1_8443
     
    backend be_webserver1_8443
      mode tcp
      server webserver1 [2001:db8:2::2]:8443 check send-proxy
     
    backend be_webserver2_8443
      mode tcp
      server webserver2 [2001:db8:2::3]:8443 check send-proxy
     
    backend be_loopback_site1_8008
      mode tcp
      server fe_loopback_site1_8008 [::1]:8008 check send-proxy
     
    frontend fe_loopback_site1_8008
      mode tcp
      bind [::1]:8008 accept-proxy ssl crt /etc/haproxy/keys/site1.example.com/haproxy.pem
      default_backend be_webserver1_8443_local_ca
     
    backend be_webserver1_8443_local_ca
      mode tcp
      server webserver1 [2001:db8:2::2]:8443 check send-proxy verify required sni str(site1.example.com) ca-file /etc/ssl/certs/ca-certificates.crt
    

    Configure site1 on webserver1:

    $ sudo vi /etc/apache2/sites-available/site1.example.com-8080.conf
    

    Put this in the file:

    <VirtualHost *:8080>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/www/site1.example.com
      ServerName    site1.example.com
      ErrorLog      /var/log/apache2/site1.example.com-error.log
      CustomLog     /var/log/apache2/site1.example.com-access.log combined
     
      # requires remoteip and apache 2.4.31
      RemoteIPProxyProtocol on
     
      <Directory /home/www/site1.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    Also edit this file on webserver1 (which uses the local CA):

    $ sudo vi /etc/apache2/sites-available/site1.example.com-8443.conf
    

    Put this in the file:

    <VirtualHost *:8443>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/wwws/site1.example.com
      ServerName    site1.example.com
     
      # HAProxy with send-proxy to retrieve original IP-address, Apache 2.4.31
      # sudo a2enmod remoteip
      RemoteIPProxyProtocol on
     
      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
      SSLCertificateFile      /etc/apache2/ssl/site1.example.com/site1.example.com.crt
      SSLCertificateKeyFile   /etc/apache2/ssl/site1.example.com/site1.example.com.key
      SSLCACertificateFile    /etc/apache2/ssl/site1.example.com/myCA.crt
     
      ErrorLog      /var/log/apache2/site1.example.com-error.log
      CustomLog     /var/log/apache2/site1.example.com-access.log combined
      <Directory    /home/wwws/site1.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    Don't forget to enable the site configuration on webserver1:

    $ sudo a2ensite site1.example.com-8080.conf
    $ sudo a2ensite site1.example.com-8443.conf
    $ sudo systemctl reload apache2
    

    Configure site2 on webserver2:

    $ sudo vi /etc/apache2/sites-available/site2.example.com-8080.conf
    

    Put this in the file:

    <VirtualHost *:8080>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/www/site2.example.com
      ServerName    site2.example.com
      ErrorLog      /var/log/apache2/site2.example.com-error.log
      CustomLog     /var/log/apache2/site2.example.com-access.log combined
     
      # requires remoteip and apache 2.4.31
      RemoteIPProxyProtocol on
     
      <Directory /home/www/site2.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    Also edit this file on webserver2 (which uses Let's Encrypt):

    $ sudo vi /etc/apache2/sites-available/site2.example.com-8443.conf
    

    Put this in the file:

    <VirtualHost *:8443>
      ServerAdmin   webmaster@example.com
      DocumentRoot  /home/wwws/site2.example.com
      ServerName    site2.example.com
     
      # HAProxy with send-proxy to retrieve original IP-address, Apache 2.4.31
      # sudo a2enmod remoteip
      RemoteIPProxyProtocol on
     
      SSLEngine on
      SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
      SSLCertificateFile      /etc/letsencrypt/live/site2.example.com/cert.pem
      SSLCertificateKeyFile   /etc/letsencrypt/live/site2.example.com/privkey.pem
      SSLCACertificateFile    /etc/letsencrypt/live/site2.example.com/chain.pem
     
      ErrorLog      /var/log/apache2/site2.example.com-error.log
      CustomLog     /var/log/apache2/site2.example.com-access.log combined
      <Directory    /home/wwws/site2.example.com>
        Require all granted
      </Directory>
    </VirtualHost>
    

    Don't forget to enable the site configuration on webserver2:

    $ sudo a2ensite site2.example.com-8080.conf
    $ sudo a2ensite site2.example.com-8443.conf
    $ sudo systemctl reload apache2
    

    If you need more info, you should carefully read:
    06 - TLS termination on load balancer with re-encryption and
    07 - Testing with curl.

    However, the parts concerning the keys become easier thanks to the symmetry: there's only one path for each site (IPv6 and IPv4 take the same route).