Cisco ACE is ACE. I am finding that this statement is a semi-truth. Before inheriting this network with a few ACE clusters, I did some reading. A light google confirmed what I had read on the Twitters, but I wanted to find out for myself. It was as if Cisco threw a 44 gallon drum of petrol into a bonfire with this device – the catalyst for open revolt. My doppelgänger and Data Center hero Tony Bourke (@tbourke) summed up what he thought of the ACE – especially after teaching it for a long time and now seeing it appear like a wild Weedle on the CCIE DC exam blueprint.
So what is this Cisco ACE?
Why the love/hate relationship with it? Or hate/hate? Let’s first start by delving into what the ACE actually is and how it can help you.
The Application Control Engine is Cisco’s offering in the load-balancer market, a market with well-known vendors such as F5 Networks, Barracuda, Riverbed and Foundry. A simple google search shows Cisco’s ACE 4710 is a hot topic with contrasting reviews. That doesn’t bode well in a market dominated by F5 – at least, judging from all the data centers I walk into 🙂 .
Load-balancing is the act of distributing traffic across a number of servers. These devices increase the number of concurrent users a service can handle and improve the reliability of the application. The performance gain comes from pooling resources and maintaining application/network sessions.
There are two levels of load balancing: Layer 4 and Layer 7. If we refer to the OSI model, Layer 4 includes juicy stuff such as TCP and UDP, so a Layer 4 load balancer distributes traffic based on transport information like ports. Layer 7 load balancing allows distribution of traffic based upon application layer protocols – in this case, and I am guessing the most used, HTTP.
The next part is the distribution of requests. Both levels of load balancer receive requests and distribute them to a particular server based on algorithms. Crazy black voodoo is used here. Also known as maths. These names could ring a bell for anyone who uses QoS a lot:
- Least connections
- Least response time
- Round Robin
- Weighted Round Robin
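To make the voodoo a little less black, here is a minimal sketch of two of these algorithms – round robin and least connections – in plain Python. This is nothing ACE-specific; the server IPs are made up for illustration.

```python
import itertools

servers = ["10.1.1.1", "10.1.1.2", "10.1.1.3"]

# Round robin: hand servers out in a fixed rotation.
_rotation = itertools.cycle(servers)

def round_robin():
    return next(_rotation)

# Least connections: pick the server with the fewest active sessions.
active = {s: 0 for s in servers}

def least_connections():
    server = min(active, key=active.get)
    active[server] += 1  # the new session lands here
    return server
```

Weighted round robin is the same rotation idea, except each server appears in the cycle in proportion to its weight, so beefier boxes take more of the load.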
I was also surprised when configuring the device to find you could further manipulate requests based on specific data such as headers, cookies, request type, and more.
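As a rough illustration of the idea – the rule and farm names here are invented for the sketch, this is not ACE syntax – a Layer 7 policy boils down to matching on request data and picking a server farm:

```python
# Toy L7 policy: inspect the path and cookies of an HTTP request
# and choose a server farm. Farm names are hypothetical.
def pick_farm(path: str, cookies: dict) -> str:
    if path.startswith("/images/"):
        return "static-farm"   # static content goes to dedicated servers
    if cookies.get("session") is not None:
        return "sticky-farm"   # keep users with a session on one farm
    return "default-farm"      # everything else
```

The real device expresses this with class maps and policy maps, but the decision it makes per request is essentially this kind of match-then-forward.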
Why do I need one?
Beyond the nice features offered by a load balancer thus far, the best part I believe is the ability for a server to go down (or be upgraded) without clients losing connectivity. How is this achieved? Through virtual server farms.
Let’s say I run an online store. I would have a cluster of 15 front end web servers set up. On the load balancer I define the real server IPs (10.1.1.1-15). I can then assign them to a server farm and assign that farm a virtual IP address (10.1.1.254). This virtual IP address becomes the IP address of the store.
With some CLI magic – I will delve into this further in upcoming posts – it is possible to have machines fail, drop, or be pulled offline; a health check detects this and session state moves to a new real server, maintaining connection information. Without this you’d just get a timeout.
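To picture how this fits together, here is a toy model (plain Python again, not ACE config) of the store example: fifteen reals behind the VIP, with failed servers filtered out of the rotation by a stubbed-in health check.

```python
# Real servers and VIP from the store example above.
reals = [f"10.1.1.{i}" for i in range(1, 16)]  # 10.1.1.1 - 10.1.1.15
vip = "10.1.1.254"
down = set()  # servers a health probe has marked as failed

def healthy():
    """Only servers that passed their last health check take traffic."""
    return [s for s in reals if s not in down]

def handle_request(n):
    """Send the n-th request to the VIP round robin across healthy reals."""
    pool = healthy()
    return pool[n % len(pool)]

# A probe fails for 10.1.1.1: it drops out of the pool,
# and clients hitting the VIP never notice.
down.add("10.1.1.1")
```

The clients only ever know about 10.1.1.254; which real answers is the load balancer’s problem, which is exactly why a box can be pulled for an upgrade mid-day.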
Wow Ant, this is a lot to take in.
Well, I have had a crash course in load balancers and learnt a lot very quickly. I am currently attempting to copy run start what I have learnt. There is a lot more to discover and share, and I will follow up with a basic config including clustering the load balancers, plus posts covering health probes, checks, failover and more.
Thoughts thus far?
I don’t think I could run a web server now without one. The protection it can provide is a serious benefit to any 24/7 deployment. Although Cisco’s ACE can be cost-prohibitive for some, there are cheaper alternatives.