My previous post focused on access lists based upon Fully Qualified Domain Names. This approach recently provided a solution for some work I have undertaken. Even though it might seem quite straightforward to implement, there are some considerations that need to be addressed beforehand.
DNS Time To Live
There are quite a few websites we deal with daily that live behind a load balancer, which allows the provider to deliver a resilient service. It also means they use low TTL values in their DNS answers. In the case of maps.google.com (my focus for this work) the TTL was 96 seconds.
This TTL works rather well in my environment, but I do need to consider the CPU load of continuous polling. This is especially true if you are dealing with something like Akamai or Facebook, whose records use TTLs of 6 and 12 seconds respectively. Polling at that rate can place quite a load upon your ASA.
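As a sketch of the setup under discussion (interface names, server addresses and the object name are hypothetical), an FQDN-based rule on the ASA looks something like this; the `dns expire-entry-timer` controls how long a resolved entry is retained beyond its DNS TTL:

```
! Enable DNS lookups so the ASA can resolve FQDN objects
dns domain-lookup outside
dns server-group DefaultDNS
 name-server 192.0.2.53
! Keep resolved entries for an extra minute after the DNS TTL expires
dns expire-entry-timer minutes 1
! Network object keyed on the FQDN rather than a fixed IP
object network obj-maps-google
 fqdn maps.google.com
! Permit outbound HTTPS to whatever the FQDN currently resolves to
access-list INSIDE-IN extended permit tcp any object obj-maps-google eq 443
access-group INSIDE-IN in interface inside
```

The ASA re-resolves the FQDN as entries age out, which is exactly where the polling load discussed above comes from.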
Trust thy DNS
Now this should go without saying: trust your DNS server. Your new rules are built from DNS records, which resolve to the IP addresses used to create the access lists. If your DNS were compromised or poisoned, you might actually be allowing in traffic that you aren't expecting.
A low latency, trusted (internal) server is a great place to start.
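Pointing the ASA at that trusted internal resolver is only a couple of lines (addresses and domain name are illustrative):

```
! Use an internal, trusted resolver for FQDN resolution
dns server-group DefaultDNS
 name-server 10.1.1.10
 domain-name example.local
```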
I am not your URL filter
It is worth noting that although this can permit or deny traffic based upon FQDN, it is not a URL filtering mechanism. Here are some reasons why you should not use it as one:
- The FQDN access-list purely provides dynamic entries for ACLs.
- Intermittent access caused by low DNS TTLs combined with the ASA's own entry timer.
- Multiple host names resolving to a single IP address.
- Multiple names serving a single site.
If you need URL filtering, enable it through the features of the ASA or set up a proxy server such as Squid.
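If Squid is the chosen proxy, a minimal destination-domain filter looks like the fragment below (the domain and subnet are illustrative):

```
# /etc/squid/squid.conf (fragment)
acl localnet src 10.0.0.0/8
# Match the domain and any subdomain
acl blocked_sites dstdomain .example-blocked.com
http_access deny blocked_sites
http_access allow localnet
http_access deny all
```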
You might need to spend a little time with packet captures identifying the average TTL for the sites you require. It is important to be aware of the caveats and requirements before you implement this. I feel the real benefits will come with the next generation of internet services, where IPv6 and DNS AAAA records are used, though why not capitalise now!
Network virtualization allows an abstraction from the physical topology and takes the notion of a logical network further. This abstraction allows segregation of the routing table and can be used in a flexible way. The ability to define a security policy per domain and perform L3 traffic isolation is just the beginning. Creating multiple logical networks over a single physical network allows for explosive growth.
This may include parameters such as access control. We can allow or isolate traffic through that control, and we can provide path isolation, including the transport, for compliance and security reasons. We could tier and control access to single- and multi-tenant services, allowing policies to be applied on a per-partition basis.
VRFs are one way to achieve this. I see VRFs the same way VLANs slice the CAM table: with a VRF you gain a separate RIB and FIB, giving sound isolation and control. You can even reuse IP addresses. VRF-Lite is a cheeky way of delivering per-VRF packet-processed network virtualization, providing a unique control and data plane per VRF.
- You can use sub-interfaces on L3 trunks to carry VRF information. Be aware that the Catalyst 4500 platform doesn't support this. It also requires manual, hands-on configuration and can be clunky to scale.
- It is possible to deliver VRF abstraction over GRE tunnels between devices. This requires a few additional prerequisites, such as a tunnel, a loopback and a client interface per VRF. Although easy to configure, it has limited scale. You also need to factor in the additional 28 bytes of overhead along the path: a 20-byte outer IP header plus a 4-byte GRE header (8 bytes with the key extension) on top of the original packet.
- Easy Virtual Network (EVN) delivers all the features listed above end to end and is supported on the Catalyst 4500! What it adds is the automatic configuration of trunks when a new VRF is created; you could say it propagates new sub-interfaces the way VTP spreads VLANs! This helps with scaling and ensures all devices in a VRF network are consistent.
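As a rough sketch of the difference between the first and last approaches (VRF names, VLAN/vnet tags, interfaces and addresses are all hypothetical): VRF-Lite needs one manually built sub-interface per VRF on every trunk, while EVN tags the VRF once and a single `vnet trunk` carries them all:

```
! VRF-Lite: one manually configured sub-interface per VRF, per trunk
ip vrf TENANT-A
interface GigabitEthernet0/1.100
 encapsulation dot1Q 100
 ip vrf forwarding TENANT-A
 ip address 10.100.0.1 255.255.255.252

! EVN: tag the VRF once, then one vnet trunk carries every tagged VRF
vrf definition TENANT-B
 vnet tag 200
 address-family ipv4
interface GigabitEthernet0/2
 vnet trunk
 ip address 10.200.0.1 255.255.255.252
```

With EVN, adding a new VRF with a `vnet tag` automatically extends it across every `vnet trunk`, which is the scaling win described above.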
Once you achieve the required level of abstraction, you can police inter-VRF communication. With firewall-style zoning in mind, you can leak routes between VRFs, pass traffic through transparent firewalls, or simply keep the VRFs isolated. There is a growing trend of establishing BGP internet peerings in a VRF and then leaking only the required routes to customers or partners. Think of the benefits in a multi-tenant environment: you can confidently isolate A from B and C. If needs change and B and C require full access, easily done. Flexibility that is abstracted from hardware.
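Route leaking between VRFs on a single box is commonly done with route targets under BGP; a hedged sketch (the ASN, RDs and VRF names are invented for illustration):

```
! SHARED exports its routes; TENANT-A imports them, but not vice versa
ip vrf SHARED
 rd 65000:1
 route-target export 65000:1
ip vrf TENANT-A
 rd 65000:2
 route-target export 65000:2
 route-target import 65000:1
router bgp 65000
 address-family ipv4 vrf SHARED
  redistribute connected
 address-family ipv4 vrf TENANT-A
  redistribute connected
```

Adding or removing a single `route-target import` line is what makes the "A isolated from B and C today, B and C fully open tomorrow" flexibility practical.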
This post was inspired by Next Generation Campus Architectures (BRKCRS-2663), which is available for viewing at Ciscolive365.