I swear people have had bad luck or I am just lucky. Maybe I am not testing my ACE or using it to its full capacity. So far! Phwoar. What a device. I've not hulked, raged or got angry at it. I've seen people cuss and curse and even grow grey hairs before my eyes. I myself have not done any of it! Medal? Maybe. Serious load? Maybe not 🙂
On the heels of my previous article regarding Cisco ACE load balancers, I am following up with a basic configuration to get your ACE servicing traffic. In that article we established the load balancer's role in the network and how it works to deliver increased uptime and performance.
** Disclaimer – I have worked on these in a lab environment and dealt with a handful in a production space. I do not profess to be a rock star, and the explanations below have been made with my best efforts and understanding. Feel free to point out any major no-nos or inconsistencies. **
Remember to allow connectivity first.
In the Admin context – tighten this up afterwards if you want to disable HTTP/HTTPS or other access methods.
access-list ALL line 8 extended permit ip any any
access-list RMT-MGMT-ACL line 8 extended permit ip any any
access-list RMT-MGMT-ACL line 16 extended permit icmp any any
The virtualized environment is divided into objects called contexts. Each context behaves like an independent ACE appliance with its own policies, interfaces, domains, server farms, real servers, and administrators.

While the server load balancing design doesn't require multiple contexts for successful implementation, the ACE 4710 appliance is provisioned with one user context on top of the default Admin context. This approach provides better implementation flexibility in the future. One of the features such a setup makes available is active/active implementation with load sharing between redundant appliances. Active/active mode of operation requires multiple user contexts to be provisioned and started, so this option is left for potential expansion in the future.

Each user context is initially defined in the Admin context, which contains the basic settings for each virtual device or context. Each context has a number of SVIs associated with it for communication.
ACE4710-01/Admin# sh context
Number of Contexts = 3
Name: Admin , Id: 0
  Config count: 137
  Description:
  Resource-class: default
  FT Auto-sync running-cfg configured state: enabled
  FT Auto-sync running-cfg actual state: enabled
  FT Auto-sync startup-cfg configured state: enabled
  FT Auto-sync startup-cfg actual state: enabled

Name: WWW-CXT , Id: 1
  Config count: 113
  Description: WWW Frontend Context
  Resource-class: WWW-RC
  Vlans: Vlan100-101
  FT Auto-sync running-cfg configured state: enabled
  FT Auto-sync running-cfg actual state: enabled
  FT Auto-sync startup-cfg configured state: enabled
  FT Auto-sync startup-cfg actual state: enabled

Name: DNS-CXT , Id: 2
  Config count: 167
  Description: DNS Lookup Context
  Resource-class: DNS-RC
  Vlans: Vlan110-111
  FT Auto-sync running-cfg configured state: enabled
  FT Auto-sync running-cfg actual state: enabled
  FT Auto-sync startup-cfg configured state: enabled
  FT Auto-sync startup-cfg actual state: enabled
** Note here that FT auto-sync shows the running config and startup config are being synchronised between members of the fault-tolerant group. **
Resource-Classing – Class those resources boy – maximise your balanced load.
One benefit of having contexts is the ability to allocate a share of the physical device's resources to each virtual context. In the example below we split 50 percent of total chassis resources to the WWW context and 30 percent to the DNS context. This leaves 20 percent reserved for the base Admin context so the device does not become overloaded.
resource-class DNS-RC
  limit-resource all minimum 20.00 maximum unlimited
  limit-resource mgmt-connections minimum 20.00 maximum unlimited
  limit-resource sticky minimum 20.00 maximum unlimited
  limit-resource rate mgmt-traffic minimum 20.00 maximum unlimited
  limit-resource throughput minimum 30.00 maximum equal-to-min
resource-class WWW-RC
  limit-resource all minimum 20.00 maximum unlimited
  limit-resource mgmt-connections minimum 20.00 maximum equal-to-min
  limit-resource sticky minimum 20.00 maximum equal-to-min
  limit-resource rate mgmt-traffic minimum 20.00 maximum equal-to-min
  limit-resource throughput minimum 50.00 maximum equal-to-min

THO-EST-SLB-01/Admin# sh resource allocation | begin throughput
---------------------------------------------------------------------------
     Parameter         Min        Max       Class
---------------------------------------------------------------------------
     throughput       0.00%     80.00%     default
                     50.00%     50.00%     WWW-RC
                     30.00%     30.00%     DNS-RC
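Because the guaranteed minimums are all carved out of a single 100% pool, it's worth sanity-checking that they don't over-commit the chassis. A quick sketch of the 50/30/20 split above in Python (an illustrative helper of my own making, not an ACE tool):

```python
# Sanity-check guaranteed minimums across resource classes.
# Values mirror the WWW-RC / DNS-RC throughput example above.
def check_minimums(classes):
    """Return the unreserved percentage; raise if guarantees exceed 100%."""
    total = sum(classes.values())
    if total > 100.0:
        raise ValueError("Guaranteed minimums (%.2f%%) exceed chassis capacity" % total)
    return 100.0 - total

# 50% to WWW-RC, 30% to DNS-RC leaves 20% for the Admin/default class.
remaining = check_minimums({"WWW-RC": 50.0, "DNS-RC": 30.0})
print("Left for Admin/default: %.1f%%" % remaining)  # 20.0%
```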
These devices are in a test lab and I am generating my own traffic, so the values here should not be taken as gospel. Those with far more knowledge of ACE and SLB principles should comment here if you read this – I'd love to know what DC gurus would recommend.
Fault Tolerance and FT Groups
It is possible to share contexts between devices. This gives us failover if an ACE drops: connection redundancy for traffic passing to the servers, as well as device redundancy that lets us continue servicing requests if we need to update an ACE peer or lose one.
ft interface vlan 100
  ip address 169.254.0.1 255.255.255.252
  peer ip address 169.254.0.2 255.255.255.252
  no shutdown

hostname ACE4710-01
peer hostname ACE4710-02

ft peer 1
  heartbeat interval 300
  heartbeat count 10
  ft-interface vlan 100

ft group 10
  peer 1
  peer priority 110
  associate-context Admin
  inservice
ft group 20
  peer 1
  associate-context WWW-CXT
  inservice
ft group 30
  peer 1
  associate-context DNS-CXT
  inservice

shared-vlan-hostid 1
peer shared-vlan-hostid 2
Here my FT VLAN allows keep-alives to be passed between the peers. We define the device hostname and the peer's hostname, then set up peer 1 with how often the FT heartbeats are sent and how many must be missed before failure is declared. Then I assign groups to associate contexts to, which allows sharing of context configuration. Then I set the remote peer id and voila! Friendship and rainbows!
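To put numbers on it: failure is declared after `heartbeat count` consecutive missed heartbeats, so detection time is roughly interval × count. A rough sketch, assuming the heartbeat interval is in milliseconds (the ACE default interval is 300 ms):

```python
# Rough failover-detection math for the FT peer settings above.
# Assumes interval is in milliseconds and failure is declared after
# `missed_count` consecutive missed heartbeats.
def detection_time_seconds(interval_ms, missed_count):
    return (interval_ms * missed_count) / 1000.0

# heartbeat interval 300, heartbeat count 10:
# the peer is considered dead roughly 3 seconds after it goes quiet.
print(detection_time_seconds(300, 10))  # 3.0
```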
Let's start by discussing server farms and defining real servers. Below we define the following real servers in the ACE.
rserver host WWW01
  ip address 192.168.10.10
  inservice
rserver host WWW02
  ip address 192.168.10.11
  inservice
rserver host WWW03
  ip address 192.168.10.12
  inservice
rserver host WWW04
  ip address 192.168.10.13
  inservice
rserver host DNS01
  ip address 192.168.20.10
  inservice
rserver host DNS02
  ip address 192.168.20.11
  inservice
rserver host DNS03
  ip address 192.168.20.12
  inservice
rserver host DNS04
  ip address 192.168.20.13
  inservice
Simple enough to define a real server. The important trick to remember is inservice – treat it like no shut! Now that we have defined our real servers we need to nest them inside a virtual server farm.
This server farm sits behind the IP that is presented to the world. It will distribute requests using round-robin load sharing and service them accordingly.
serverfarm host WWW-FRONTEND-SF
  predictor roundrobin
  rserver WWW01
    inservice
  rserver WWW02
    inservice
  rserver WWW03
    inservice
  rserver WWW04
    inservice
serverfarm host DNS-SF
  predictor roundrobin
  rserver DNS01
    inservice
  rserver DNS02
    inservice
  rserver DNS03
    inservice
  rserver DNS04
    inservice
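For intuition, the roundrobin predictor simply deals each new connection to the next in-service real server in the farm, wrapping around at the end. A minimal Python sketch of that behaviour (server names borrowed from the WWW farm above; illustrative only):

```python
# Round-robin predictor in miniature: connections are handed to the
# next real server in turn, wrapping back to the first afterwards.
from itertools import cycle

farm = cycle(["WWW01", "WWW02", "WWW03", "WWW04"])

# Five incoming connections: the fifth wraps back to WWW01.
assignments = [next(farm) for _ in range(5)]
print(assignments)  # ['WWW01', 'WWW02', 'WWW03', 'WWW04', 'WWW01']
```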
Simple enough there. Now comes the head-scratching part!
Map inside my Map so we can discover while we discover.
Follow this key and decipher! It does make sense – trust me!
Define the class-map WWW-CMAP. This matches traffic destined for the listed virtual IP. The multi-match policy map MATCH-REQUEST-ACTION-PMAP matches our WWW-CMAP first and applies what is contained in the policy map. The second policy map then assigns the traffic to a server farm.
** Disclaimer – As far as my little mind understands, this is how it all works. Feel free to correct me. I have been reading a lot and there isn't much info out there! **
class-map match-all WWW-CMAP
  2 match virtual-address 192.168.10.1 tcp eq www

policy-map multi-match MATCH-REQUEST-ACTION-PMAP
  class WWW-CMAP
    loadbalance vip inservice
    loadbalance policy LB-WWW-PMAP
    loadbalance vip icmp-reply

policy-map type loadbalance http first-match LB-WWW-PMAP
  class class-default
    serverfarm WWW-FRONTEND-SF

class-map match-all DNS-CMAP
  3 match virtual-address 192.168.20.1 tcp eq dns

policy-map multi-match MATCH-REQUEST-ACTION-PMAP
  class DNS-CMAP
    loadbalance vip inservice
    loadbalance policy LB-DNS-PMAP
    loadbalance vip icmp-reply

policy-map type loadbalance http first-match LB-DNS-PMAP
  class class-default
    serverfarm DNS-SF
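One way to keep the chain straight is to think of it as three lookups: the class-map matches the VIP and port, the multi-match policy maps that class to a load-balance policy, and the load-balance policy hands the connection to a server farm. A toy Python model of that flow (names mirror the config above; the dictionaries themselves are purely illustrative):

```python
# Toy model of ACE classification: (VIP, port) -> class-map ->
# load-balance policy -> server farm. Not real ACE behaviour, just
# the lookup chain made explicit.
class_maps = {
    ("192.168.10.1", 80): "WWW-CMAP",
    ("192.168.20.1", 53): "DNS-CMAP",
}
lb_policies = {"WWW-CMAP": "LB-WWW-PMAP", "DNS-CMAP": "LB-DNS-PMAP"}
serverfarms = {"LB-WWW-PMAP": "WWW-FRONTEND-SF", "LB-DNS-PMAP": "DNS-SF"}

def dispatch(vip, port):
    """Walk class-map -> policy-map -> serverfarm for one connection."""
    cmap = class_maps[(vip, port)]
    return serverfarms[lb_policies[cmap]]

print(dispatch("192.168.10.1", 80))  # WWW-FRONTEND-SF
print(dispatch("192.168.20.1", 53))  # DNS-SF
```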
Alright – because this setup is a one-armed ACE install, we need to point a static route back to the SVI. Now our traffic goes to a server of the ACE's choosing, dealt out in round-robin fashion.
So far so good. The ACE for me has been reliable and as customisable as I need. The next little post will cover health-checking probes, which allow a server farm to mark a real server offline. Great if you need to upgrade, install, change or fix. It's a lot to take in, but I am enjoying what this product can do.
Oh, and expect a rant towards programmers soon about how they put data onto the wire and think that they know best.