LTM Clone Pools & SSL Bridging
Security teams place intrusion detection systems (IDS) on the network in order to monitor traffic for signs of intrusion. Often this is accomplished with network taps and/or switch monitor ports, which provide the security team with every packet on the network. As companies migrate all of their sites to SSL, payloads are encrypted and attack traffic looks the same as legitimate traffic. What security teams really need is the unencrypted traffic flow.
F5 devices are high-performance SSL platforms and often act as a central decryption/encryption point for applications. That makes them a great place to strip off SSL and send the decrypted flow to an IDS appliance. Fortunately, F5 makes such configurations easy with the clone pool feature. Clone pools are configured on a per-virtual-server basis and copy traffic from either the client side or the server side of the proxy. This works great when you're not running any SSL, or when the F5 is performing SSL offload. In the case of SSL offload, you simply assign a clone pool to the server side of the proxy, and clear-text traffic is sent to the IDS systems.
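For the offload case, the attachment really is that simple: a clone pool referenced on the server side of the virtual server. Here is a minimal sketch; the VIP, pool, and profile names are illustrative, and the clone-pools stanza uses the same syntax as the bridging example later in this post:

# Illustrative SSL offload VIP: client-ssl only, so the server side
# of the proxy carries cleartext, which the clone pool mirrors to the IDS.
ltm virtual /Common/offload-example-vip {
    clone-pools {
        /Common/ids-clone-pool {
            context serverside
        }
    }
    destination /Common/10.1.50.100:443
    ip-protocol tcp
    mask 255.255.255.255
    pool /Common/web-servers-pool
    profiles {
        /Common/clientssl {
            context clientside
        }
        /Common/http { }
        /Common/tcp { }
    }
    source 0.0.0.0/0
}

Because the only SSL profile is on the client side, everything leaving the server side of the proxy is already decrypted, and that is what gets cloned.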
However, if the F5 is performing SSL bridging, things get slightly more complex. This is due to how the F5's proxy chain (or HUD chain) works. The clone pool action is performed very early on the client side and very late on the server side; in fact, it happens prior to SSL decryption on the client side of the proxy and after SSL re-encryption on the server side. Because of this, we need to set up two VIPs: the first VIP terminates SSL from the client, then sends the cleartext HTTP to a second VIP. The second VIP re-encrypts with SSL and sends the HTTP request to the servers. We can then place a clone pool on the server side of the first VIP and mirror unencrypted traffic to our IDS system.
For more details on clone pools see: F5's Configuring the BIG-IP system to send traffic to an intrusion detection system
Configuration example
Let's walk through how to create a configuration that uses VIP targeting VIP to clone cleartext traffic while the F5 performs SSL bridging.
First, create two pools: one for the server-side traffic, the other for the IDS system we'll clone traffic to:
ltm pool /Common/sitefoo.bar.com-clone-pool {
    members {
        /Common/ids-system:12345 {
            address 10.2.200.200
        }
    }
    monitor /Common/tcp_half_open
}
ltm pool /Common/sitefoo.bar.com-server-pool {
    members {
        /Common/venkman:443 {
            address 10.2.1.51
        }
    }
    monitor /Common/tcp_half_open
}
Next, let's create the server-side VIP. This VIP uses the pool containing the back-end web servers, and has a server-ssl profile but no client-ssl profile. It accepts cleartext HTTP on its client side and sends SSL-encrypted HTTP to the servers on its server side.
NOTE: You should secure this virtual server so that it cannot be accessed by end users; you can do this with AFM rules, or by using a non-routable destination address.
ltm virtual /Common/sitefoo.bar.com-server-side-vip {
    destination /Common/10.1.50.160:8443
    ip-protocol tcp
    mask 255.255.255.255
    pool /Common/sitefoo.bar.com-server-pool
    profiles {
        /Common/http { }
        /Common/serverssl-insecure-compatible {
            context serverside
        }
        /Common/tcp { }
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
}
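One way to satisfy the note above is to restrict which VLANs this listener accepts connections on. As a sketch, assuming an internal VLAN named "internal" exists in your environment:

# Hypothetical hardening: only answer on the internal VLAN, so
# external clients can never reach the server-side VIP directly.
tmsh modify ltm virtual sitefoo.bar.com-server-side-vip vlans add { internal } vlans-enabled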
Next, let's create an iRule that targets the server-side VIP from the client-side VIP:
ltm rule /Common/sitefoo.bar.com-vip-target-vip-ir {
    when HTTP_REQUEST {
        virtual sitefoo.bar.com-server-side-vip
    }
}
Finally, create the client-side VIP. This VIP has a client-ssl profile on the client side of the proxy that terminates SSL, the clone pool on the server side of the proxy, and the iRule that tells the F5 to send traffic to the other virtual server. Notably, it does not have a default pool.
ltm virtual /Common/sitefoo.bar.com-client-side-vip {
    clone-pools {
        /Common/sitefoo.bar.com-clone-pool {
            context serverside
        }
    }
    destination /Common/10.1.50.160:443
    ip-protocol tcp
    mask 255.255.255.255
    profiles {
        /Common/clientssl {
            context clientside
        }
        /Common/http { }
        /Common/tcp { }
    }
    rules {
        /Common/sitefoo.bar.com-vip-target-vip-ir
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
}
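Before pointing a production IDS at the clone pool, it's worth confirming that cleartext is actually being mirrored. A quick sanity check from the BIG-IP shell (the address and port come from the clone pool above; 0.0 captures on all interfaces):

# Watch the cloned traffic headed for the IDS pool member; the HTTP
# payload should be readable since cloning happens after decryption.
tcpdump -nni 0.0 -A host 10.2.200.200 and port 12345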
GTM Translation addresses for Generic Host objects
A reading of F5's Configuring BIG-IP GTM server objects for BIG-IP devices that reside behind a firewall NAT might lead one to believe that translation addresses for non-BIG-IP server objects have a similar effect as they do for BIG-IP objects. However, translation addresses are ONLY supported for BIG-IP objects monitored via iQuery or the "bigip" health monitor in GTM. The translation-address and translation-port configuration options have no effect on generic host objects, or on any other server or virtual server object not monitored via iQuery.
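For contrast, here is a sketch of the case where translation addresses do work: a server object of product bigip, monitored over iQuery with the bigip monitor. All names and addresses below are illustrative.

gtm server /Common/ltm-pair01 {
    addresses {
        192.0.2.10 {
            device-name ltm01
            translation 172.20.1.10
        }
    }
    datacenter /Common/DataCenter01
    monitor /Common/bigip
    product bigip
    virtual-servers {
        app-vip {
            destination 192.0.2.20:443
            translation-address 172.20.1.20
            translation-port 443
        }
    }
}

Here the GTM learns status over iQuery via the translated (private) addresses, while still handing out the public destination address in DNS responses.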
This can be frustrating; the user experience for this configuration in the F5 web interface is honestly pretty awful. One would think that if translation addresses do nothing for non-iQuery-monitored server objects, the GUI and configuration objects would not allow them to be set; permitting them only causes further confusion.
For an overview of split DNS configuration on the GTM see: Split DNS with GTM
Broken Configuration
The configuration below does not work as one might expect.
One likely expects the GTM to issue the following health monitors:
- gateway_icmp to 172.20.1.31, the server object address.
- tcp_half_open to 172.20.1.31:80 for the “GenericServer01-internal” virtual server object.
- tcp_half_open to 172.20.1.31:80 for the “GenericServer01-external” virtual server object, due to the inclusion of the translation-address and translation-port.
gtm server /Common/GenericServer01 {
    addresses {
        172.20.1.31 {
            device-name GenericServer01
        }
    }
    datacenter /Common/DataCenter01
    monitor /Common/gateway_icmp
    virtual-servers {
        GenericServer01-external {
            destination 192.0.2.31:80
            monitor /Common/tcp_half_open
            translation-address 172.20.1.31
        }
        GenericServer01-internal {
            destination 172.20.1.31:80
            monitor /Common/tcp_half_open
        }
    }
}
However, what actually happens is as follows:
- gateway_icmp to 172.20.1.31, the server object address.
- tcp_half_open to 172.20.1.31:80 for the “GenericServer01-internal” virtual server object.
- tcp_half_open to 192.0.2.31:80 for the “GenericServer01-external” virtual server object, because translation addresses are ignored for objects not monitored via iQuery or the bigip monitor.
This often leaves the public or external virtual server object marked down, as many organizations do not configure their firewalls for hairpin NAT.
Working Configuration
First, a few considerations. I'd argue the best possible fix is to put the public IP addresses on your LTMs and let them act as your primary edge device. This puts all the management and configuration for your public presence on a highly secure, highly scalable, best-of-breed system.
Failing that, the next best fix is to set up or enable hairpin NAT on the firewall or router performing the external NAT. In an ideal world the F5 would monitor via the public IP address, so that if the device performing NAT goes down the F5 correctly marks the system as down.
If all that is not possible and you must monitor the private address while handing out the public address, then continue reading. The way to configure a generic host server/virtual server object for NAT translation is to use an alias address on a health monitor assigned to the public virtual server object. This should sound familiar from LTM: an alias address on a health monitor lets us attach that health check to any arbitrary virtual server or server object in the GTM, while actually sending the health check to a different address/port.
The disadvantage of this configuration is that you’ll end up creating and maintaining a lot of very specific GTM monitors with address aliases for all of your NATed generic objects.
gtm monitor tcp-half-open /Common/GenericServer01-tcp-half-open {
    defaults-from /Common/tcp_half_open
    destination 172.20.1.31:80
    interval 30
    probe-attempts 3
    probe-interval 1
    probe-timeout 5
    timeout 120
}
gtm server /Common/GenericServer01 {
    addresses {
        172.20.1.31 {
            device-name GenericServer01
        }
    }
    datacenter /Common/DataCenter01
    monitor /Common/gateway_icmp
    virtual-servers {
        GenericServer01-external {
            destination 192.0.2.31:80
            monitor /Common/GenericServer01-tcp-half-open
        }
        GenericServer01-internal {
            destination 172.20.1.31:80
            monitor /Common/tcp_half_open
        }
    }
}
- Note: Unfortunately, GTM monitors require both a destination IP and a port when a specific IP is used. Ideally, we could configure a monitor with a specific destination address and a wildcard port.
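Once the custom monitor is in place, a quick way to confirm the fix is to check object status on the GTM and then query the listener. These commands are a sketch; the wide IP and listener names are hypothetical, and output formats vary by TMOS version:

# The external virtual server should now show as available.
tmsh show gtm server /Common/GenericServer01

# A query from an external client should return the public address.
dig @gtm.example.com app.example.com +short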