It's no secret that IPv6 has been standardized for nearly two decades ([RFC 2460]). Adoption stayed close to zero until about 3-4 years ago, when early adopters began getting their feet wet; [Google's statistics show a climb from 2% to 18% in the past three years, based on its users' request origins]. The IPv6 BGP route table is also up to roughly 39k prefixes, per [cidr-report.org].
Although it may look a bit daunting at first, it is actually quite simple to get started. Start by contacting your ISP to see if they can provide IPv6 service on your circuit; they will usually provide several /64s (for transit and internal usage) or a larger range, depending on your size and business needs. If you prefer to have your own PI space, you can request a prefix through ARIN (or your region's RIR); generally, from what I've seen, allocations range anywhere from a /56 for a small business, to a /40 for a medium enterprise, to a /32 for a large enterprise or small ISP, and even larger from there. If your ISP doesn't support IPv6 and you would still like to play around with it, don't fret: [you can utilize an IPv6 tunnel broker such as Hurricane Electric].
Once you are ready to peer with your provider or tunnel broker, here is a simple guide on how to configure your side of the connection.
1\. The first step is to check whether your device/license supports both IPv6 and BGP. For Cisco, you can check this using their [Feature Navigator]. For my example, I am using a Cisco 3560, which needs the IP Services license to support both.
2\. Configure your peering transit interface for IPv6. Usually this is provided by means of a /64 for strict transit usage. My example uses an SVI:
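A sketch of what this can look like; the VLAN number and addresses are illustrative (documentation space), not real ISP assignments. Note that on a 3560, IPv6 routing also requires the dual-stack SDM template:

```
! The 3560 needs the dual-stack SDM template for IPv6 (reload required)
sdm prefer dual-ipv4-and-ipv6 default
!
ipv6 unicast-routing
!
vlan 900
 name IPV6-TRANSIT
!
interface Vlan900
 description IPv6 transit to ISP
 ipv6 address 2001:db8:ffff:1::2/64
!
interface GigabitEthernet0/1
 description Uplink to ISP
 switchport mode access
 switchport access vlan 900
```

Once this is up, a `ping ipv6 2001:db8:ffff:1::1` (your provider's side of the /64) confirms L3 connectivity before moving on to BGP.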
3\. After verifying L3 connectivity, let's get started on the BGP configuration. First, let's configure our inbound/outbound filters. For the sake of this example, let's assume default-only in and two custom prefixes out. These "filters" will be prefix-lists, with route-maps configured for flexibility at the BGP configuration level.
- Inbound is easy enough; allow default
- Outbound is also simple, just specify YOUR prefixes, whether from your ISP or RIR
- And route-maps matching our configured prefix-lists; route-maps allow for flexibility and should always be used
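Putting those three bullets together, a minimal filter set might look like the following. The names and the outbound prefixes are placeholders (documentation space); substitute your own assignments:

```
! Inbound: accept only a default route from the ISP
ipv6 prefix-list PL-ISP-IN seq 5 permit ::/0
!
! Outbound: advertise only OUR prefixes (illustrative /56s)
ipv6 prefix-list PL-ISP-OUT seq 5 permit 2001:db8:dddd::/56
ipv6 prefix-list PL-ISP-OUT seq 10 permit 2001:db8:eeee::/56
!
! Route-maps wrapping the prefix-lists, for flexibility at the BGP level
route-map RM-ISP-IN permit 10
 match ipv6 address prefix-list PL-ISP-IN
!
route-map RM-ISP-OUT permit 10
 match ipv6 address prefix-list PL-ISP-OUT
```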
4\. Once our filters are ready, we can get going with the actual BGP process configuration. First, enter BGP router configuration mode and configure your peer ISP neighbor; at this point, the neighbor will remain disabled until your IPv6 address-family configuration exists. Second, enter the 'address-family ipv6 unicast' context and specify the networks to advertise along with your neighbor activate/route-map commands. This should complete your basic IPv6 BGP configuration, and we can move on to some verification.
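As a sketch, using private/documentation ASNs (64500 locally, 64496 for the ISP), the illustrative transit addressing from step 2, and route-maps named as in step 3:

```
router bgp 64500
 bgp router-id 192.0.2.1
 ! Neighbor is defined here but stays idle until activated in the AF
 neighbor 2001:db8:ffff:1::1 remote-as 64496
 !
 address-family ipv6 unicast
  network 2001:db8:dddd::/56
  network 2001:db8:eeee::/56
  neighbor 2001:db8:ffff:1::1 activate
  neighbor 2001:db8:ffff:1::1 route-map RM-ISP-IN in
  neighbor 2001:db8:ffff:1::1 route-map RM-ISP-OUT out
 exit-address-family
```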
* If you haven't done so already, make sure that your prefixes exist in the global RIB ('`show ipv6 route`') so that BGP can inject them into the BGP IPv6 RIB. The prefixes you specified in the BGP configuration will not be advertised if they don't exist in the global RIB. If they do not, you can create a placeholder route for now with the 'ipv6 route' command (example: `ipv6 route 2001:db8:dddd::/56 Null0`). Directing traffic towards Null0 essentially discards it when it is processed by the routing engine. This is useful if you plan to distribute your larger prefix into smaller /64 subnets, because traffic that doesn't match a more specific route will be discarded, but that is a whole different topic to go on about =). For now, it allows the route to be propagated into the BGP IPv6 RIB, since Null0 is always reachable.
5\. You can quickly verify IPv6 BGP neighbor status with '`show bgp ipv6 unicast summary`', which lists your neighbors; if the BGP state is 'Established', you will not see a state at all, but instead a count of prefixes received from that specific neighbor:
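Output along these lines (trimmed, values illustrative) indicates an established session; note the prefix count in the last column instead of a state name:

```
Switch# show bgp ipv6 unicast summary
BGP router identifier 192.0.2.1, local AS number 64500
...
Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
2001:db8:ffff:1::1
                4        64496     125     118       12    0    0 01:32:10        1
```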
* And you can verify received prefixes by checking the global and BGP IPv6 RIBs
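The relevant commands, assuming the same default-in/prefixes-out setup as above:

```
! Global IPv6 RIB - the received default route should appear here
show ipv6 route
!
! BGP IPv6 RIB - shows both received prefixes and our locally
! injected networks (including any Null0 placeholder statics)
show bgp ipv6 unicast
```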
You can see that the received route made it successfully into the respective IPv6 RIBs, accompanied by the static routes created earlier for injection into BGP. The routes should propagate fairly quickly to the many looking glass servers available on the web today and can also be verified via traceroute (you can use a web tool such as [http://ping.eu/traceroute] if you do not have IPv6 client access).
After successfully deploying IPv6 at your network edge, there are several options for moving IPv6 into your core infrastructure. You can go dual-stack from the edge all the way to the server, which is the most difficult. Another option is to choose an IPv4/IPv6 segregation point in your network, to perform translations or act as an IPv6 proxy towards your IPv4 servers. If you're a hosting or cloud service provider, the latter seems to be the best option, as it provides an easy migration path from IPv4 to IPv6 on the application side if done right; you can support IPv6 to your load balancer, proxying traffic to your IPv4 servers, with the option to add IPv6 servers to the LB pool once you are ready and migrate to them seamlessly.
Hope this helped as a primer and gives you a basic path for moving towards IPv6 at the network infrastructure level (or at the very least, for testing it).
Feel free to ask any questions or drop a comment!
[RFC 2460]: https://tools.ietf.org/html/rfc2460 "RFC 2460"
[Google's statistics show a climb from 2% to 18% in the past three years, based on its users' request origins]: https://www.google.com/intl/en/ipv6/statistics.html#tab=ipv6-adoption&tab=ipv6-adoption "Google user IPv6 adoption"
[cidr-report.org]: http://www.cidr-report.org/v6/as2.0/ "IPv6 cidr report"
[you can utilize an IPv6 tunnel broker such as Hurricane Electric]: https://ipv6.he.net/ "Hurricane Electric IPv6"
[Feature Navigator]: http://cfn.cloudapps.cisco.com/ITDIT/CFN/jsp/index.jsp "Cisco Feature Navigator"
[http://ping.eu/traceroute]: http://ping.eu/traceroute/ "ping.eu/traceroute"
It is a best practice to have your production network devices connected to their respective production networks and also to an out-of-band network, for strict management purposes. Most Cisco switches, such as a Cisco Nexus 5k or a Cisco Catalyst 2960-S, have a built-in physical management interface. This interface differs from the other interfaces on the switch in that it has its own default gateway, keeping out-of-band routing self-contained within the out-of-band network. On the Cisco Nexus, the interface is assigned to a default management VRF.
Not all switches have this dedicated management interface implemented in hardware, so we can create one, mimicking the intended design of a physical management interface. To do this, we have to do a few things (this method is intended for use on L3 Catalyst switches):
1. Dedicate an interface on the switch for management purposes, to be connected to your out-of-band and/or management network.
2. Create a separate "management only" routing/forwarding table, aka VRF
3. Assign your dedicated interface to use this management VRF, via SVI
On a Cisco Catalyst licensed for VRF usage, this can be achieved. I am using a 3560-CX running IOS 15.2, which does not have a dedicated management interface.
To start, you need to prepare your physical interface by assigning it to your respective management/OOB VLAN; my example uses VLAN 10.
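A sketch; the port number is just an example, use whichever port you dedicate to OOB:

```
interface GigabitEthernet0/12
 description OOB-MGMT uplink
 switchport mode access
 switchport access vlan 10
```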
Next, we need to prepare the VRF before associating our new interface with it. Note that I create the VRF definition and then enter the IPv4 address-family; entering the address-family is mandatory, and your VRF will not route/forward IPv4 traffic if it does not exist. If you intend to use IPv6 for management on this switch, add its address-family as well.
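A minimal definition (the VRF name MGMT is my choice for this example):

```
vrf definition MGMT
 ! IPv4 address-family is mandatory, or the VRF will not forward IPv4
 address-family ipv4
 exit-address-family
 ! Uncomment if you also manage this switch over IPv6:
 ! address-family ipv6
 ! exit-address-family
```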
Now that the definition is created, we can add a route to our new VRF! Since this is for strict management purposes and we don't need inter-VLAN routing, we only need a default route, which mimics the default-gateway command used on a Catalyst switch with a dedicated physical management interface.
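For example, with an illustrative OOB gateway address:

```
! Default route inside the MGMT VRF only; the global table is untouched
ip route vrf MGMT 0.0.0.0 0.0.0.0 192.0.2.254
```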
With this method we are utilizing an SVI, so it needs to be created for the respective VLAN. If it has not been created already, create it and assign it to your new VRF with the 'vrf forwarding VRFNAME' command.
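Continuing the example (management address illustrative); note that applying 'vrf forwarding' strips any existing IP configuration from the SVI, so the address goes on afterwards:

```
interface Vlan10
 description OOB management SVI
 vrf forwarding MGMT
 ip address 192.0.2.11 255.255.255.0
 no shutdown
```

You can then test reachability from within the VRF with `ping vrf MGMT 192.0.2.254`.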
This completes the configuration, and you should now be able to reach your switch via the new management VRF. Note that if the SVI existed before you added it to the VRF, the assignment may have removed IP-related information such as the IP address. Also, this traffic will be routed using the new default route you added to the management VRF, so make sure the next-hop device you set can route back to the intended management workstation.
If you have any questions or comments, please leave them below!
So you've introduced a Nexus stack in your datacenter instead of using those deployed 3750X switches because, well, you want DC-grade switches running in your DC.
One thing you may come across is tuning for the different types of traffic passing through your new NX switches, whether iSCSI, NFS, FCoE, or just normal web traffic with little payload. With different traffic types, you may think about classifying the traffic with QoS and putting policies in place. Let's start simple by classifying your iSCSI traffic for jumbo frames. The first thing we need to do is match the traffic with an ACL:
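A sketch with one IPv4 and one IPv6 ACL; the storage subnets are placeholders, and I'm matching on the standard iSCSI port (TCP 3260) in both directions:

```
ip access-list ISCSI-V4
  10 permit tcp 10.10.10.0/24 any eq 3260
  20 permit tcp any eq 3260 10.10.10.0/24

ipv6 access-list ISCSI-V6
  10 permit tcp 2001:db8:10::/64 any eq 3260
  20 permit tcp any eq 3260 2001:db8:10::/64
```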
Now we can start by creating a class map for the QoS type of traffic using the above-referenced ACL:
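Assuming ACL names like ISCSI-V4/ISCSI-V6 from the previous step, this might look like the following. (On some NX-OS platforms a qos class map may only accept a single ACL match; in that case, split this into one class map per ACL.)

```
class-map type qos match-any ISCSI-CLASS
  match access-group name ISCSI-V4
  match access-group name ISCSI-V6
```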
Now we have the class, which matches the networks we specified for jumbo frames, and we can create a policy to mark the traffic accordingly. To keep this example simple, we set a QoS group of 2, which is an internal label local to the Nexus; you can learn more about them in this article by Cisco. The next steps are to use these QoS markings to queue the traffic, set bandwidth, and so on. First, we create our qos policy map:
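A sketch, using the illustrative class name from above:

```
policy-map type qos ISCSI-QOS-POLICY
  class ISCSI-CLASS
    ! Tag matched iSCSI traffic with internal label qos-group 2
    set qos-group 2
```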
Now the traffic is officially marked. But if we were to apply it at this point, it still wouldn't do anything: we configured the classes and applied them to the traffic, but we have no network-qos or queuing policies to actually act on those class markings. So let's start by configuring the queuing policy to guarantee at least 50% of bandwidth for the jumbo frame traffic. By default, the default class is set to 100%, so the following policy limits it to 50% as well:
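A sketch; queuing classes key off the qos-group label set earlier (names again my own, and exact class-default handling can vary by platform):

```
class-map type queuing ISCSI-QUEUING
  match qos-group 2

policy-map type queuing ISCSI-QUEUING-POLICY
  class type queuing ISCSI-QUEUING
    bandwidth percent 50
  class type queuing class-default
    bandwidth percent 50
```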
And finally, configure the network-qos policy to actually allow for the jumbo frames, setting the MTU to a colossal size to accommodate them:
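For example, with 9216 as the jumbo MTU (a common maximum on Nexus gear):

```
class-map type network-qos ISCSI-NQ
  match qos-group 2

policy-map type network-qos ISCSI-NQ-POLICY
  class type network-qos ISCSI-NQ
    ! Jumbo frames for the iSCSI class only
    mtu 9216
  class type network-qos class-default
```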
So now the class and policy maps are complete, and we are ready to apply them to your new Nexus switches. To do this, we need to override the default policies by specifying our three policies (qos, queuing, and network-qos) as the preferred system QoS policies. With the above example policies, this looks like:
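Using the illustrative policy names from the previous steps:

```
system qos
  service-policy type qos input ISCSI-QOS-POLICY
  service-policy type queuing input ISCSI-QUEUING-POLICY
  service-policy type queuing output ISCSI-QUEUING-POLICY
  service-policy type network-qos ISCSI-NQ-POLICY
```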
So there you have it. For the jumbo frame traffic specified in our ACLs for IPv4 and IPv6, we have now set the proper markings to classify the traffic, given it a larger MTU, and configured queuing. To verify that it is working, you can use the 'show queuing interface ...' command, like so:
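For example, against a port facing your storage (port number illustrative); look for your qos-group's MTU, bandwidth share, and drop counters in the output:

```
show queuing interface ethernet 1/1
```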
You want to make sure you don't have any discarded packets or other signs of issues, which could indicate many things, such as improper configuration of the classes or policies, or issues on the ports connecting to, for example, your ESXi server. As mentioned above, this is a basic set of policies to get the point across and could be expanded much further. You don't have to match on an access list; you could also match on protocol or other traits, allowing some more advanced markings.
Let me know if you have any questions or improvements to this basic tutorial.
You often find yourself sifting through the logs when an issue arises on a network device, finding useful information such as flapping indications, failed SLA operations, and several other things. These are all event-driven log messages, written when a certain event occurs on the platform (software or hardware). It can also be useful to write your own messages to the local syslog instance, for various reasons:
* To keep track of timestamps for maintenance and research events; useful for incident reporting (ITIL) and to aid in troubleshooting
* To leave notes for the next on-shift engineer, such as including a CM ticket # before a change, for the next person to reference
Well, this is a very simple thing to do: enter the TCL shell, bind to syslog, write your message, and then close the syslog connection. This is done like so:
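A sample IOS session (the hostname, ticket number, and message text are made up); the `syslog:` filesystem is opened like a file from within tclsh:

```
router# tclsh
router(tcl)# set log [open "syslog:" w+]
router(tcl)# puts $log "MAINTENANCE BEGIN - CM000123 - line card upgrade"
router(tcl)# close $log
router(tcl)# tclquit
```

The message then lands in the local log buffer (and any configured syslog servers) with a normal timestamp, alongside the platform's own messages.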
And from there, if any issues arise between your MAINTENANCE BEGIN/END tags, you can clearly see the errors and correlate them with timestamps, etc. Also, by leaving notes such as the ticket #, you can reference much more information, such as what changes were made and any issues that came up during the maintenance.
To view history of maintenance events for further troubleshooting:
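Assuming you tagged your messages with a consistent keyword like MAINTENANCE, as in the example above:

```
show logging | include MAINTENANCE
```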
There are other situations in which writing custom messages would be useful, be creative! I hope this is informative to anyone who hasn’t done anything like this before.
There are many situations where you need to be able to capture traffic at ANY point in the network, whether testing from the client side, the server side, or any transit component, such as your ASA. Since the traditional tools such as tcpdump or Wireshark aren't available there, you will have to use the capture tools on the ASA itself.
You will start off by creating an ACL to match the traffic you want to capture. This should be a strict match, such as the source and destination that are experiencing the issue you are troubleshooting:
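A sketch, with made-up host addresses on either side of the VPN; note both directions are permitted so we capture the return traffic too:

```
access-list CAPTURE-ACL extended permit ip host 10.1.1.10 host 192.168.50.20
access-list CAPTURE-ACL extended permit ip host 192.168.50.20 host 10.1.1.10
```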
Now that you have configured the access list to match your traffic, you can start the capture. I am capturing traffic before and after it goes over the VPN, so I will monitor the interface closest to the source, which is the 'inside' interface. There aren't any required options besides the name, but if you don't specify an ACL and interface, you won't match any traffic. There are a few options you may want to use beyond those two:
* type (default is raw-data but there are other options such as ISAKMP)
* headers-only (no data, just headers; similar to tcpdump -q)
* buffer (default is 512KB. Increase with caution, you don’t want to use up all your memory)
* real-time (have output display in your terminal)
* \*more\* — use ‘capture ?’ to see all options
We will go with the basics; access-list and interface:
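Using the example ACL from above and a capture name of my choosing (CAPIN):

```
capture CAPIN access-list CAPTURE-ACL interface inside
```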
After this, you can check your console to see if the traffic is matching your ACL and being captured with 'show capture'; if the byte count is non-zero and increasing, it's working!
If you want to see the content, you have to specify the capture name:
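Output along these lines (timestamps and addresses illustrative, matching the example hosts above) would show a healthy ICMP exchange:

```
ciscoasa# show capture CAPIN
4 packets captured
   1: 12:01:15.422133       10.1.1.10 > 192.168.50.20 icmp: echo request
   2: 12:01:15.423668       192.168.50.20 > 10.1.1.10 icmp: echo reply
   3: 12:01:16.424105       10.1.1.10 > 192.168.50.20 icmp: echo request
   4: 12:01:16.425571       192.168.50.20 > 10.1.1.10 icmp: echo reply
4 packets shown
```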
So if you were troubleshooting connectivity and you see successful ICMP traffic like this, the loss you are experiencing must be somewhere else. Now you can be sure that packets are hitting the firewall, and this should help you verify your high-level packet flow to hopefully pinpoint where the issue is. If you want to see more information, there are a few more useful options when displaying the capture:
* access-list (yes, you can use an additional access-list to filter the output of ‘show capture’)
* count (# of packets to display)
* detail (display more information such as TTL, DF bit, etc)
* dump (display the hex dump for each packet)
* \*more\* — use ‘show capture ?’ to view all options
So if you would like to view the hex dump of 1 packet, you can with:
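Combining the count and dump options listed above, against the example capture name:

```
show capture CAPIN count 1 dump
```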
Often, you may have a large capture that you don't want to sift through on the ASA; you might want to look at the capture file in Wireshark or tcpdump instead. You can do so by copying the file via TFTP to a workstation or server. But how will the output be formatted: just headers, hex dumps, or...? Well, when you transfer it via copy, you can specify pcap format, which just about any packet analysis program can read, as it is the most commonly used. You can do this via:
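With the example capture name from above and a made-up TFTP server address:

```
copy /pcap capture:CAPIN tftp://192.0.2.50/capin.pcap
```

The `/pcap` keyword is what makes the transferred file a standard pcap rather than the ASCII text you see with 'show capture'.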
Awesome! So although the ASA capture utility isn't as versatile as tcpdump or Wireshark, for example, it is still great for getting started and verifying packet flow. And if you want to save a larger capture for future analysis, it can be copied off and opened with another packet analysis tool, where you can use your advanced filters and more.
I hope you enjoyed the basic tutorial on how to use the ASA capture utility… perhaps soon I will write one up on how to effectively use packet-tracer, which is yet another useful ASA command line tool for verifying packet flow and it will also give you a better understanding of how the ASA processes packets. (order of NAT/ACL processing, route lookup, etc.)