WHOIS is a protocol created, among other purposes, to store ownership information for domains. If you need to know who owns a domain or how to contact them, you can query a hosted WHOIS database by submitting a request to one of the WHOIS servers responsible for serving owner information for a specific TLD (Top-Level Domain). Examples of common TLDs are .com, .net, and .org. There are also designated country-code TLDs and TLDs for other purposes; you can see a detailed list on Wikipedia.
You can perform WHOIS lookups via online web utilities or more commonly, through the use of the Unix JWhois client, which comes installed on most Linux distributions and OS X. If you would like to use the client and primarily use Windows, you can use Cygwin and choose the ‘whois’ package as part of the installation process.
How does my client know what WHOIS server to query?
For the most part, the client knows which server to use based on static configuration, where each TLD has a server to perform WHOIS lookups towards. For JWhois, there would be a block of configuration in the jwhois.conf global configuration file under the whois-servers section, where you specify the TLD(s) using regular expressions followed by the given server for that TLD match. Examples:
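For illustration, a mapping block might look roughly like this (the exact jwhois.conf syntax varies by version, and the server names should be verified against the file shipped with your distribution):

```
whois-servers {
    "\\.com$" = "whois.verisign-grs.com";
    "\\.net$" = "whois.verisign-grs.com";
    "\\.org$" = "whois.pir.org";
}
```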
However, with the increasing number of TLDs, it is not scalable to have all these mappings in a local configuration file. Of course, the client falls back to a default WHOIS server in case there is no mapping for a given uncommon TLD, but this will fail if the requested information does not exist on that server. So for a new TLD, WHOIS lookups will likely fail on any fresh whois installation, and on most online WHOIS web clients out there, without manual WHOIS server specification.
Querying uncommon TLDs
Luckily the JWhois client allows you to also specify the server that you want to query for a given lookup but you would need to know of this server beforehand. The Internet Assigned Numbers Authority (IANA) maintains a database of information for each TLD, served conveniently to you on this web page. Each page will give you a load of information on that specific TLD, including the WHOIS server. So querying a specific domain name using the Unix JWhois client would look like this for the ‘bingo’ TLD (using the -h flag to specify a specific WHOIS server):
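Assuming the IANA page for 'bingo' lists whois.nic.bingo as its WHOIS server (check the page for the current value), the lookup looks like:

```
$ whois -h whois.nic.bingo somedomain.bingo
```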
One step further: automating the WHOIS server lookup
When browsing the IANA Root Zone Database, I noticed that the URIs for all of the TLDs used the same format; “/domains/root/db/(TLD_HERE).html”. This allows for easy web scraping, using a simple script.
I wrote a Python script, ‘Phois’, that takes a domain argument as JWhois does. However unlike JWhois which checks the domain name against its configuration file for the WHOIS server, Phois strips the TLD from the query, caches the IANA page for that TLD, scrapes the WHOIS server from there, then performs a normal WHOIS lookup but by specifying the WHOIS server to perform the lookup on. You can find this Phois script on my GitHub and can simply git clone it and pip install. Alternatively, you can throw it in your “/usr/bin” directory or another executable directory in the $PATH on your machine as long as you have the Python pre-reqs installed (don’t forget to make the script executable; chmod +x). It works like so:
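The approach can be sketched in Python roughly like this (an illustration of the logic, not Phois's actual code; the function names and the regex against IANA's page markup are my assumptions and may need adjusting):

```python
import re
import urllib.request

def tld_of(domain):
    """Strip everything up to the last dot: 'example.bingo' -> 'bingo'."""
    return domain.rstrip('.').rsplit('.', 1)[-1].lower()

def iana_whois_server(tld):
    """Scrape the WHOIS server from the IANA Root Zone Database page for a TLD."""
    url = "https://www.iana.org/domains/root/db/%s.html" % tld
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    match = re.search(r"WHOIS Server:</b>\s*([\w.\-]+)", html)
    return match.group(1) if match else None
```

From there, the scraped server name is passed to a normal WHOIS query, e.g. by shelling out to the system whois client with -h.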
Although it may look a bit daunting at first, it is actually quite simple to get started. You can start off by contacting your ISP to see if they can provide IPv6 service on your circuit, in which case they will usually provide you with several /64s (for transit and internal usage) or a larger range, depending on your size and business needs. If you prefer to have your own PI space, you can request a prefix through ARIN (or your region's RIR); generally, from what I've seen, you can get anywhere from a /56 for a small business, to a /40 for a medium enterprise, to a /32 for a large enterprise or small ISP, and even larger from there. If your ISP doesn't support IPv6 and you would still like to play around with it, don't fret: you can utilize an IPv6 tunnel broker such as Hurricane Electric.
Once you are ready to peer with your provider or IPv6 broker, here is a simple guide on how to configure your side of the connection.
1. The first step is to check whether your device/license supports both IPv6 and BGP. For Cisco, you can check this using their Feature Navigator. For my example, I am using a Cisco 3560, which needs the IP Services license to support both.
2. Configure your peering transit interface for IPv6. Usually this is provided by means of a /64 for strict transit usage. My example uses SVI:
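A hedged sketch of such a transit SVI (the VLAN number and addresses are placeholders; 2001:db8::/32 is documentation space):

```
ipv6 unicast-routing
!
interface Vlan100
 description IPv6 transit to ISP
 ipv6 address 2001:DB8:FFFF::2/64
 no shutdown
```

Note that on a 3560 you may also need to change the SDM template (e.g. 'sdm prefer dual-ipv4-and-ipv6 default') and reload before IPv6 routing is available.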
3. After verifying L3 connectivity, let's get started on the BGP configuration. First, let's configure our inbound/outbound filters. For the sake of this example, let's assume default-only in and two custom prefixes out. These "filters" will be prefix-lists, with route-maps configured for flexibility at the BGP configuration level.
Inbound is easy enough: allow default.
Outbound is also simple: just specify YOUR prefixes, whether from your ISP or RIR.
And route-maps matching our configured prefix-lists; route-maps allow for flexibility and should always be used.
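A sketch of what those filters might look like together (prefix-list/route-map names and prefixes are placeholders):

```
ipv6 prefix-list DEFAULT-IN seq 5 permit ::/0
!
ipv6 prefix-list MY-PREFIXES-OUT seq 5 permit 2001:DB8:AAAA::/48
ipv6 prefix-list MY-PREFIXES-OUT seq 10 permit 2001:DB8:BBBB::/48
!
route-map ISP-IN permit 10
 match ipv6 address prefix-list DEFAULT-IN
!
route-map ISP-OUT permit 10
 match ipv6 address prefix-list MY-PREFIXES-OUT
```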
4. Once we are ready with our filters, we can move on to the actual BGP process configuration. First, enter the BGP router configuration and configure your ISP peer neighbor; at this point, it will be disabled until your IPv6 address-family configuration exists. Second, enter the 'address-family ipv6 unicast' context and specify the networks to advertise, along with your neighbor activate/route-map commands as you see below. This should complete your basic IPv6 BGP configuration, and we can move on to some verification.
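A sketch of that BGP configuration (AS numbers, the neighbor address, prefixes, and route-map names are all placeholders):

```
router bgp 65001
 neighbor 2001:DB8:FFFF::1 remote-as 64512
 !
 address-family ipv6 unicast
  network 2001:DB8:AAAA::/48
  network 2001:DB8:BBBB::/48
  neighbor 2001:DB8:FFFF::1 activate
  neighbor 2001:DB8:FFFF::1 route-map ISP-IN in
  neighbor 2001:DB8:FFFF::1 route-map ISP-OUT out
 exit-address-family
```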
If you haven't done so already, make sure that your prefixes exist in the global RIB ('show ipv6 route') in order for BGP to inject them into the BGP IPv6 RIB; the prefixes you specified in the BGP configuration will not be advertised if they don't exist there. If they are not, you can create a placeholder route for now with the 'ipv6 route' command (example: ipv6 route 2001:db8:dddd::/56 Null0). Directing traffic towards Null0 essentially discards the traffic when it is processed by the routing engine. This is good if you plan to distribute your larger prefix into smaller /64-size subnets, because it will discard traffic that doesn't match a more specific route, but that is a whole different topic =). For now, it will allow the route to be propagated into the BGP IPv6 RIB, since Null0 is always reachable.
5. You can quickly verify IPv6 BGP neighbor status with ‘show bgp ipv6 unicast summary’, which will list the neighbors and if your BGP state is ‘Established’, then you will not see a BGP state, but instead, a count of prefixes received from that specific neighbor:
And you can verify received prefixes by checking the global and BGP IPv6 RIBs
You can see that the received route made it successfully into the respective IPv6 RIBs, accompanied by the static routes created earlier for injection into BGP. The routes should propagate fairly quickly to the many looking glass servers available on the WWW today, and can also be verified via traceroute (you can use a web tool such as http://ping.eu/traceroute if you do not have IPv6 client access).
After successfully deploying IPv6 at your edge network, there are several different options for moving IPv6 into your core infrastructure. You can go dual-stack from the edge to the server, which would be the most difficult. Another option would be to choose an IPv4/IPv6 segregation point in your network to perform translations or act as an IPv6 proxy towards your IPv4 servers. If you're a hosting or cloud service provider, the latter seems to be the best option, as it provides an easy migration path from IPv4 to IPv6 on the application side if done right; you can support IPv6 to your load balancer, proxying traffic to your IPv4 servers, with the option to add IPv6 servers to the LB pool once you are ready and migrate to them seamlessly.
Hope this helped as a primer and to give you a basic path for moving towards IPv6 at the network infrastructure level (or just to test it at the very least).
It is a best practice to have your production network devices connected to their respective production networks and also to an out-of-band network, for strict management purposes. Most Cisco switches, such as a Cisco Nexus 5k or a Cisco Catalyst 2960-S, have a built-in physical management interface. This interface is often different from the other interfaces on the switch in that it has its own default gateway, keeping the out-of-band routing self-contained in the out-of-band network. On the Cisco Nexus, the interface is assigned to a default management VRF.
Not all switches implement this dedicated management interface in hardware, so we can create one, mimicking the intended design of a physical management interface. To do this, we have to do a few things (this method is intended for use on L3 Catalyst switches):
Dedicate an interface on the switch for management purposes, to be connected to your out-of-band and/or management network.
Create a separate “management only” routing/forwarding table, aka VRF
Assign your dedicated interface to use this management VRF, via SVI
On the Cisco Catalyst, this can be achieved when the switch is licensed for VRF usage. I am using a 3560-CX running IOS 15.2, which does not have a dedicated management interface.
To start, you need to prepare your physical interface by assigning it to your respective management/OOB VLAN; my example uses VLAN 10.
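For example (the interface is a placeholder):

```
interface GigabitEthernet0/16
 description OOB/management uplink
 switchport mode access
 switchport access vlan 10
```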
Next, we need to prepare the VRF before associating our new interface with it. Note that I create the vrf definition and then enter the IPv4 address-family; entering the address-family is mandatory, and your VRF will not route/forward IPv4 traffic if it does not exist. If you intend to use IPv6 for management on this switch, also add its address family.
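A sketch of that VRF definition ('MGMT' is a placeholder name):

```
vrf definition MGMT
 address-family ipv4
 exit-address-family
```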
Now that the definition is created, we can add a route to our new VRF! Since this is for strict management purposes and we don't need to inter-VLAN route, we only need to add a default route, which mimics the default-gateway command used on a Catalyst switch with a dedicated physical management interface.
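For example, assuming 192.168.10.1 is the OOB network's gateway:

```
ip route vrf MGMT 0.0.0.0 0.0.0.0 192.168.10.1
```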
With this method we are utilizing an SVI, so one needs to exist for the respective VLAN. If it has not been created already, create it and assign it to your new VRF with the 'vrf forwarding VRFNAME' command.
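For example (addresses are placeholders; since adding 'vrf forwarding' clears any existing IP configuration on the SVI, the address goes on afterwards):

```
interface Vlan10
 vrf forwarding MGMT
 ip address 192.168.10.5 255.255.255.0
 no shutdown
```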
This completes the configuration, and you should now be able to reach your switch via the new management VRF. Note that if the SVI existed before you added it to the VRF, the VRF assignment may have deleted IP-related information such as the IP address. Also, this traffic will be routed using the new default route you added to the management VRF, so make sure that the next-hop device you set can route back to the intended management workstation.
If you have any questions or comments, please leave them below!
After my yearly lease came up for my VPS service, I decided to move providers and ended up going with AWS. I also did not want to use Wordpress anymore and liked the idea of static site generators, and with some thought, I chose Jekyll. Since I was not using GitHub Pages, I wanted an alternative way to have my site re-generated every time I have a successful git push. This led me to my next few choices:
Run my blog on Docker
Respawning the site after every change made Docker seem like a perfect choice, and I had some experience with it already. After I had created my site in Jekyll, I gathered all the dependencies needed to run it in Docker.
Use nginx for control
Using just Jekyll leaves you with a working static www root but no control over rewrites/redirects or custom magic. No offense to Jekyll; it does a great job with static site generation, but you should serve it with a real web server. Jekyll supports dumping the site content into a folder, which makes it very compatible with nginx or Apache.
I worked with Jekyll locally and created a template I admired, which started from martin308's left-stripe theme, with a handful of modifications to make it work for me. These modifications included category support, social share buttons, a complete color theme change, and a few others.
After my testing, I had come up with a solid config, my Dockerfile below:
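Roughly, it looks like this (an illustrative sketch rather than my exact file; the package list, paths, and the start.sh services script name are placeholders):

```
FROM ubuntu:15.10

# Update the image and install dependencies (nginx + Ruby toolchain for Jekyll)
RUN apt-get update && apt-get install -y nginx ruby ruby-dev build-essential \
    && gem install jekyll

# Copy the blog repository and nginx configs into the container
COPY . /srv/blog
COPY nginx/ /etc/nginx/

# Create the www root and build the site from the copied repo
RUN mkdir -p /var/www/blog \
    && jekyll build --source /srv/blog --destination /var/www/blog

EXPOSE 80 443

# Services script that basically just starts nginx in the foreground
CMD ["/srv/blog/start.sh"]
```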
The above configuration does a few things:
Sets the base image to Ubuntu 15.10
Updates the image and installs my dependencies
Copies my complete repository to the container
Copies my nginx configs to the container
Creates the www root and runs Jekyll using my copied repository files as source and www root as destination
Exposes ports 80 and 443
Starts my services script, which basically just starts nginx
So this gets my blog up and running, but I would need to manually build and run my Docker images. I want pushes to my private BitBucket repo to automatically restart my container with the new content. To do this, I need BitBucket to notify Docker Hub, and Docker Hub to relay a successful-build notification to my server. This is the fun part :)
the automagical devopsy stuff
After having my docker/nginx/jekyll work completed, I needed to create a workflow for committing changes and triggering automatic builds. Having all my BitBucket repo pushes trigger a DockerHub build was pretty simple to set up. Now I needed an endpoint for DockerHub to send towards after a successful build. After some searching, I found captainhook, a nice project by bketelsen, which is a web listener that can run scripts based on the URL called, aka a "webhook."
However, captainhook does not run over SSL, so I recommend having nginx forward requests to captainhook, which is how I set mine up (SSL reverse proxy). Also, run captainhook from cron so that it never dies, or alternatively, use supervisord.
This ensures that if captainhook has a hiccup and dies, it will start again under a different PID. My configuration for restarting my docker containers looks like this:
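Something along these lines (the Docker Hub image and container names are placeholders):

```bash
#!/bin/bash
# Pull the freshly built image and replace the running container
docker pull myuser/blog:latest
docker stop blog 2>/dev/null
docker rm blog 2>/dev/null
docker run -d --name blog -p 80:80 -p 443:443 myuser/blog:latest
```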
My ultimate workflow
Test my content changes locally with Jekyll using 'jekyll serve'
If satisfied, I delete the temp Jekyll _site destination directory and git commit & git push
When searching for A10 help, I have come up short most of the time. The good thing about A10 aFleX rules is that you can reference F5 iRule documentation, because both are based on TCL. But for other operational or troubleshooting help, I am not so lucky. Coming from an F5 background, I would like to start a short series to log some of my findings when working with the AX series ADC, such as basic A10 commands, syntax, and troubleshooting methods.
Similar to the F5 (post 9.x), the A10 AX also has a very useful command line, but for some tasks the GUI is much cleaner/faster.
Let's start with some known terms, the configuration hierarchy, and how it differs from F5.
On the F5, you had your VIPs (virtual servers), which were tied to a specific IP:Port and directly tied to a pool with pool members (server nodes). So you typically had multiple virtual servers for the same IP if you needed to expose multiple ports.
Things are designed a bit differently on the A10; instead, you will see:
You will only have one Virtual Server per IP address. In the VS configuration, you will have port mappings, known as 'Virtual Services', which map VIP ports to service groups such as:
Service Groups are very much similar to F5 pools, where you will configure member servers, load balancing algorithm, server priority, health checks, etc.
So looking back at the A10's configuration hierarchy, the Virtual Server is just an abstraction layer that makes the GUI feel cleaner. You have one Virtual Server per IP, which is represented as one configuration page in the web UI. From there, you configure ports to direct requests to specific Service Groups. But wait… what about 'Virtual Services'? These are generated when you map a port to an SG, and to edit one, you are brought to a new configuration page for each Virtual Service. In the text configuration, they will be noted as _10.1.1.1_HTTP_80 if you happened to map port 80 as HTTP to VIP 10.1.1.1. This is nothing daunting, just a little different, with a small learning curve for someone coming from F5.
As far as the look and feel, the GUI is very easy
(The above picture is of the Thunder series, not AX, but the OS is the same.)
The UI is broken down into 'Monitor' and 'Config' modes. In Monitor mode you see graphs and counters related to the objects you are looking at, whereas Config is strictly for configuring.
The A10 has a proprietary HA engine, with an Active/Standby node but also a VCS Primary/Secondary. You make changes on the VCS Primary, but traffic flows through the Active node, which is not necessarily the same device, so you don't always make your changes on the "Active" node. As for network interfaces, or in F5 speak, Self-IPs: these are controlled via VRRP. You can have multiple VRRP domains, or you can throw all your networks into the default VRID domain.
Hope this helped as an introduction. I will write a few posts in the near future about basic configurations and also troubleshooting methods using the tools present on the ACOS CLI.
A cool tip here if you’ve ever lost access to your primary or secondary SRX device, you can still access them via another cluster node.
For example, if you want to gain access to node1 in the cluster from the primary device, node0, you can do so using the 'request routing-engine login node <node-id>' command syntax, as demonstrated below:
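From node0, the session looks like:

```
{primary:node0}
root@srx> request routing-engine login node 1
```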
This is very helpful if you don't have OOB serial access, or your company operates its application environments in co-location facilities and it's not as easy to walk over to the DC and plug in a console cable.
So you've introduced a Nexus stack in your datacenter instead of using those deployed 3750X switches because, well, you want DC-grade switches running in your DC.
One thing you may come across is particular tuning for different types of traffic passing through your new NX switches, whether it be iSCSI, NFS, FCoE, or just normal web traffic with little payload. With different traffic, you may think about classifying it with QoS and putting policies in place. Let's start simple by classifying your iSCSI traffic for jumbo frames. The first thing we need to do is to match the traffic with an ACL:
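A sketch of such ACLs, matching hypothetical iSCSI storage networks for both IPv4 and IPv6 (names and networks are placeholders):

```
ip access-list ISCSI-V4
  10 permit ip 10.50.50.0/24 any
  20 permit ip any 10.50.50.0/24
!
ipv6 access-list ISCSI-V6
  10 permit ipv6 2001:db8:50::/64 any
  20 permit ipv6 any 2001:db8:50::/64
```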
Now we can create a 'type qos' class map matching the above-referenced ACL:
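For example (class and ACL names are placeholders):

```
class-map type qos match-any ISCSI-CLASS
  match access-group name ISCSI-V4
  match access-group name ISCSI-V6
```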
Now we have the class, which matches the networks we specified for jumbo frames, and we can create a policy to mark the traffic accordingly. To keep it simple for this example, we can set a QoS group of 2, which is an internal label local to the Nexus; you can learn more about them in this article by Cisco. The next steps are to use these QoS markings to queue the traffic, set bandwidth, etc. First we will create our marking policy map:
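A sketch of the marking policy (names are placeholders):

```
policy-map type qos ISCSI-QOS-POLICY
  class ISCSI-CLASS
    set qos-group 2
```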
Now the traffic is officially marked. But if we were to apply it at this point, it still isn't doing anything: we configured the classes and applied them to the traffic, but we have no network or queuing policies to actually act on those class markings. So let's start off by configuring the queueing policies to allow at least 50% bandwidth for the jumbo frame traffic. By default, the default class is set to 100%, so the following policy will limit that to 50% as well:
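A sketch of the queuing configuration: a queuing class-map matching qos-group 2, then the 50/50 bandwidth split (names are placeholders):

```
class-map type queuing ISCSI-QUEUING
  match qos-group 2
!
policy-map type queuing ISCSI-QUEUING-POLICY
  class type queuing ISCSI-QUEUING
    bandwidth percent 50
  class type queuing class-default
    bandwidth percent 50
```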
And finally, to configure the network policies to actually allow for the jumbo frames, we use a policy that sets the MTU to a colossal size to accommodate them:
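A sketch of the network-qos policy, setting a jumbo MTU for the marked traffic (names are placeholders):

```
class-map type network-qos ISCSI-NQ
  match qos-group 2
!
policy-map type network-qos ISCSI-NQ-POLICY
  class type network-qos ISCSI-NQ
    mtu 9216
```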
So now the class and policy mappings are complete. We are ready to apply them to your new nexus switches. To do this, we need to override the default policies. To override, you need to specify your three policies (qos, queueing, and network-qos) as the preferred system QoS policies. With the above policy examples, this looks like:
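Assuming policy names ISCSI-QOS-POLICY, ISCSI-QUEUING-POLICY, and ISCSI-NQ-POLICY (placeholders), this looks like:

```
system qos
  service-policy type qos input ISCSI-QOS-POLICY
  service-policy type queuing output ISCSI-QUEUING-POLICY
  service-policy type network-qos ISCSI-NQ-POLICY
```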
So there you have it. For the jumbo-frame traffic specified in our ACL (IPv4 and IPv6), we have now set the proper markings to classify it for the larger MTU size, with queuing and all. To verify that it is working, you can use the show queuing interface... command, like so:
You want to make sure you don't have any discarded packets or other signs of issues, which could mean many things, such as improper configuration of the classes or policies, or issues on the ports connecting to your ESXi server, for example. As mentioned above, this is a basic set of policies to get the point across and could be expanded much more. You don't have to match based on an access list; you could also match based on protocol or other traits, leading to more advanced markings.
Let me know if you have any questions or improvements to this basic tutorial.
You often find yourself sifting through the logs when an issue arises on any given network device. You find useful information such as flapping indications, failed SLA ops, and several other things. These are all event-driven log messages that are written when a certain event occurs on the platform (software or hardware). It may be useful to write your own messages to the local syslog instance for various reasons:
To keep track of timestamps for various maintenance and research events. Useful for reporting incidents (ITIL) and also to aid in troubleshooting.
Notes for the next on-shift engineer, such as including a CM ticket # before a change for the next person to reference.
Well, this is a very simple thing to do, which involves entering the TCL shell, binding to syslog, writing your message, and then closing the syslog connection. This is done like so:
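On IOS, the sequence looks like this (the message text is just an example):

```
switch# tclsh
switch(tcl)# set log [open "syslog:" w+]
switch(tcl)# puts $log "MAINTENANCE BEGIN - CM12345 - core uplink migration"
switch(tcl)# close $log
switch(tcl)# tclquit
```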
And from there, if any issues arise between your MAINTENANCE BEGIN/END tags, you can clearly see the errors and correlate with timestamps, etc. Also, by leaving notes such as the ticket #, you can reference much more information, such as what changes were made and any issues that came up during the maintenance.
To view history of maintenance events for further troubleshooting:
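For example, if your messages share a common tag:

```
switch# show logging | include MAINTENANCE
```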
There are other situations in which writing custom messages would be useful, be creative! I hope this is informative to anyone who hasn’t done anything like this before.
Recently, I had to troubleshoot an issue where there was some improper API use, and it was being blamed on the application. The traffic is SNAT'd, so essentially the backend servers are being proxied; from the backend servers' perspective, the requests appear to come from the F5, not the original source IP. Since SSL is offloaded on the F5, we can only trace unencrypted traffic on the backend, and because it is so noisy and all requests appear to come from the F5 self IP, this becomes very difficult to troubleshoot. So we need to troubleshoot on the frontend, where the public source IP is preserved. That traffic is encrypted, so we need a means of viewing it unencrypted. This can be done with tcpdump and ssldump, both of which are installed on the F5 by default.
So now we are ready to start capturing the traffic. You can do so with tcpdump, but you should be very strict with your parameters, as you could cause performance issues by sending too much garbage to stdout. You will want to save to a file so we can later decrypt with ssldump. Here is an example, listening on the frontend VLAN:
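A hedged example ('external' is a placeholder VLAN name and 203.0.113.50 a placeholder client IP):

```
tcpdump -ni external -s0 -w /var/tmp/frontend.pcap host 203.0.113.50 and port 443
```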
Now you should have the PCAP file in your present directory, and you can view it via tcpdump, Wireshark, or any other packet analysis tool that you have available, as PCAP is the industry-standard packet capture format. If you were to open the file in Wireshark, you would notice that the SSL/TLS payload displays 'Record Layer: Handshake Protocol: Encrypted Handshake Message', so you aren't able to view the unencrypted data natively, which is expected. This is where ssldump comes in, which can utilize your F5 private keys to decrypt the trace. You need to identify which SSL cert/key pair is used on the VIP you are troubleshooting. You can find this out by looking over the VIP configuration, which will use a specific SSL profile; looking into that profile, it will mention which cert/key pair it uses. You will be able to find these certs/keys in /config/ssl/ssl.crt & ssl.key on 9.x/10.x, or in /config/filestore/files_d/Common_d/certificate_d & certificate_key_d on 11.x. Once you find the key, you can decrypt the PCAP, using the following example:
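The decryption might look like this (paths are placeholders; -r reads the capture, -k points at the private key, -d displays the decrypted application data):

```
ssldump -r /var/tmp/frontend.pcap -k /config/ssl/ssl.key/www.example.com.key -d host 203.0.113.50
```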
So there you have it: you can decrypt SSL traffic with only tcpdump and ssldump if you have the private key. You can perform the same task by using tcpdump to output a PCAP and then using the private key in Wireshark to decrypt the traffic, although I find it easier to troubleshoot using the tools on the F5 if I can.
Let me know if you have any questions on the subject or any suggestions to improve this method.
There are many situations where you need to be able to capture traffic at ANY point in the network, whether it's testing from the client side, the server side, or any transit component, such as your ASA. Since the traditional tools such as tcpdump or Wireshark aren't available there, you will have to use the proprietary tools on the ASA itself.
You will start off by creating an ACL to match the traffic you are wanting to capture. This should be strict, such as a source and destination that are experiencing the issue you are troubleshooting:
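For example (hosts are placeholders; match both directions):

```
access-list CAPTURE-ACL extended permit ip host 10.1.1.10 host 10.2.2.20
access-list CAPTURE-ACL extended permit ip host 10.2.2.20 host 10.1.1.10
```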
Now that you have configured the access list to match your traffic, you can start the capture. I am capturing traffic before and after it goes over the VPN, so I will monitor the interface closest to the source, which is the 'inside' interface. There aren't any "required" options besides the name, but if you don't specify an ACL and interface, you won't match any traffic. Here are a few options you may want to use beyond those two:
type (default is raw-data but there are other options such as ISAKMP)
headers-only (no data, just headers; similar to tcpdump -q)
buffer (default is 512KB. Increase with caution, you don’t want to use up all your memory)
real-time (have output display in your terminal)
*more* — use ‘capture ?’ to see all options
We will go with the basics; access-list and interface:
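Using the placeholder ACL name from above, that looks like:

```
capture CAP1 access-list CAPTURE-ACL interface inside
```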
After this, you can check whether the traffic is matching your ACL and being captured with 'show capture'; if the byte count is non-zero and increasing, it's working!
If you want to see the content, you have to specify the capture name:
So if you were troubleshooting connectivity and you see this successful ICMP traffic, the loss you are experiencing must be somewhere else. Now you can be sure that packets are hitting the firewall, and this should help you verify the high-level packet flow to hopefully pinpoint where the issue is. If you want to see more information, there are a few more useful options when displaying the capture:
access-list (yes, you can use an additional access-list to filter the output of ‘show capture’)
count (# of packets to display)
detail (display more information such as TTL, DF bit, etc)
dump (display the hex dump for each packet)
*more* — use ‘show capture ?’ to view all options
So if you would like to view the hex dump of 1 packet, you can with:
Often, you may have a large capture that you don't want to sift through on the ASA. You might want to look at the capture file in Wireshark or tcpdump instead. You can do so by copying the file via TFTP to a workstation or server. But how will the output be formatted: just headers, hex dumps, or…? When you transfer it via copy, you specify pcap format, which you will be able to read in just about any packet analysis program, as it is the most commonly used. You can do this via:
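For example (the TFTP server IP and filenames are placeholders):

```
copy /pcap capture:CAP1 tftp://192.168.1.50/cap1.pcap
```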
Awesome! So although the ASA capture utility isn't as versatile as tcpdump or Wireshark, it is still great for getting started and verifying packet flow. And if you want to save a larger capture for later analysis, it can be copied off and opened with another packet analysis tool, where you can use your advanced filters, etc.
I hope you enjoyed the basic tutorial on how to use the ASA capture utility… perhaps soon I will write one up on how to effectively use packet-tracer, which is yet another useful ASA command line tool for verifying packet flow and it will also give you a better understanding of how the ASA processes packets. (order of NAT/ACL processing, route lookup, etc.)
I was messing around with HAProxy yesterday and thought it would be useful to integrate Nagios downtime into the process for taking a node off the load balancer. This method uses Xinetd to emulate HTTP headers and isn't limited to HAProxy; it can be used with any LB that supports basic HTTP health checks… so, all of them?
The required components to make this demonstration work are:
Nagios, monitoring your web server
Nagios-api, running alongside Nagios
Xinetd, installed on the web server
HAProxy (or another LB supporting HTTP health checks)
Now to get started, I used this guide to get Nagios-api up and running. Once you have the Nagios-api running, you should be able to query the status of your webserver via:
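Assuming nagios-api is listening on port 8080 (use whatever port you started it with), the query is just:

```
curl -s http://192.168.33.10:8080/state
```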
So if you notice above, “scheduled_downtime_depth” is the status we are looking for, which is currently 0, so there is currently no downtime set. We can easily grab that value with the following one-liner and save for later:
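A hedged one-liner (the exact JSON layout depends on your nagios-api version, so adjust the parsing accordingly):

```
depth=$(curl -s http://192.168.33.10:8080/state \
  | grep -o '"scheduled_downtime_depth": *"\{0,1\}[0-9]*' \
  | grep -o '[0-9]*$' | head -1)
echo $depth
```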
So now the fun part begins: creating the Xinetd script to emulate the HTTP header. What we want is to return a 200 (OK) if our scheduled_downtime_depth query returns 0, and a 5xx (BAD) if it returns a non-zero value, meaning downtime is set. So there are a few things we need to do:
Write our script, which will return a 200 if our check passes; otherwise it will return a 503. In the script below, 192.168.33.10 is the Nagios server and prod-web01 is the Nagios-configured host for our web server. The script will reside on the webserver, since that is where the health check from HAProxy will be directed:
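A sketch of such a script (the script path, Nagios address, and JSON parsing are assumptions; the key part is emitting a raw HTTP status line based on the downtime depth):

```shell
#!/bin/bash
# Hypothetical xinetd check script, e.g. /usr/local/bin/nagioschk.sh
NAGIOS_API="http://192.168.33.10:8080"   # Nagios server running nagios-api

get_depth() {
    # Grab scheduled_downtime_depth for this host from nagios-api's /state
    curl -s --max-time 2 "$NAGIOS_API/state" \
        | grep -A20 '"prod-web01"' \
        | grep -o '"scheduled_downtime_depth": *"\{0,1\}[0-9]*' \
        | grep -o '[0-9]*$' | head -1
}

respond() {
    # Emit a raw HTTP response: 200 when depth is 0, 503 otherwise
    if [ "$1" = "0" ]; then
        printf 'HTTP/1.1 200 OK\r\n\r\nUP: no downtime scheduled\r\n'
    else
        printf 'HTTP/1.1 503 Service Unavailable\r\n\r\nDOWN: downtime set\r\n'
    fi
}

respond "$(get_depth)"
```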
Add the service name to the tail of /etc/services
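For example, picking port 8189 ('nagioschk' is a placeholder service name):

```
echo "nagioschk       8189/tcp" >> /etc/services
```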
Add the xinetd configuration with the same service name as above:
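A sketch of the xinetd service definition (the script path and the HAProxy IP in only_from are placeholders):

```
service nagioschk
{
    disable         = no
    socket_type     = stream
    protocol        = tcp
    port            = 8189
    wait            = no
    user            = nobody
    server          = /usr/local/bin/nagioschk.sh
    only_from       = 192.168.33.5
}
```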
Now the web portion is complete. You can test it by curling the configured xinetd service port from HAProxy, or from any other host if you didn't limit access via 'only_from':
Now that it works, we can configure HAProxy. To do so, let's look over the current backend config for our webserver. Here is the excerpt from /etc/haproxy/haproxy.cfg:
We need to modify this by adding the httpchk and specifying the check port:
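Something like this (the backend name, server address, and timers are placeholders):

```
backend web-backend
    option httpchk GET /
    server prod-web01 192.168.33.20:80 check port 8189 inter 2000
```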
Now let's reload HAProxy and check the status:
Excellent! Now let's put the host into maintenance mode (downtime) in Nagios and see what comes of it!
And now we check the Nagios downtime, hit the xinetd script remotely from HAProxy on port 8189, and check the status of the BACKEND resource:
As we can see, Nagios is reporting a non-zero value for downtime, the web server shows our script working correctly and returning a 503, and HAProxy shows the node as down. Awesome! Now let's cancel the downtime to see it come back up:
SUCCESS! So effectively, this xinetd script can be deployed on all the webservers by just changing which host the script queries via Nagios-api. Using xinetd scripts in this fashion, you can also perform many other "checks" on the server behind the load balancer: anything that can be performed in a Bash (or language of your choice) script can be transformed into the boolean state needed to bring the node online/offline.
I’d like to see if anyone else has done something similar to this or has any suggestions to improve! Please comment!
DISCLAIMER: Please test thoroughly before using this solution in a production environment. I am not liable for your mistakes 😉
Just recently, I've had the need to map an IP address to an AS number very quickly for use in another project. I started by looking for methods of obtaining the data necessary to create the IP-to-ASN map. I do not personally own a router that is running full BGP tables, and I didn't want to abuse one of the looking glass routers for obvious reasons. In my search, I came across two well-formatted indexes: one mapping prefixes to ASNs and the other mapping ASNs to owner names.
This is great, because I could create two hashes from those keys/values. Since I am new to Ruby, I started out by searching for suitable libraries to scrape the data from the two maps above. I found a nice ruby-curl library on GitHub, which had some handy features, including a scan() method that would let me curl directly into an array using a regex… how nice! After creating my initial script and pulling the data, for some reason curl wouldn’t put all the data into the array; it would stop somewhere between 80.x.x.x and 90.x.x.x every time. After messing around with it a bit, I couldn’t get it to work how I wanted, so I switched to curb, which didn’t have the nice scan() method, but I was able to massage the data just how I wanted in no time. I got the script working by putting all the key/values into a Ruby hash, and it was easy to pull the values out, but this meant scraping the data from the tables above every time I ran the script. I started searching for a datastore and considered MongoDB, but after some further testing I decided to go with Redis, due to the simple shape of the data I was playing with (just a bunch of key/values). Now the fun part begins: storing the data for persistence. The Redis Ruby client is awesome and very easy to use. You import the values with redis.set and fetch them with redis.get… nice and simple.
This script needs to support ANY IPv4 address, but the table contains only the specific netmask announced by each AS, which may not be a /24 or any other mask easily identified by simple string matching; this makes things a little bit more difficult. What we need to do is convert the IP address into a value that can easily be compared against any other, and if there is no match, test wider masks until there is one (for example, starting from a /24: test /24, /23, /22, /21… until a match is found, which eventually you will run into). This is accomplished by converting the IP to a long. Ruby does not have this function built in, so after researching and testing, I was able to come up with:
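The snippet itself isn’t shown in this excerpt; a minimal sketch of such an ip2long conversion in Ruby (the method name is taken from the description above, but the exact implementation in the original script may differ):

```ruby
# Convert a dotted-quad IPv4 address into its unsigned 32-bit integer
# ("long") form: each octet shifts the accumulator left by 8 bits.
def ip2long(ip)
  ip.split('.').map(&:to_i).inject(0) { |long, octet| (long << 8) + octet }
end

puts ip2long('1.2.3.4')  # prints 16909060
```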
For 1.2.3.4, this should effectively return 16909060. The script then takes that long IP and searches Redis. If no match is found, the next common network boundary is tested, and the search continues until redis.get no longer returns nil. For example:
16909060 → no match: redis.get(16909060) = nil
16909056 → MATCH: redis.get(16909056) = 15169
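The search loop described above might be sketched like this in Ruby. The method name asn_for and the FakeRedis stand-in are invented here for illustration; the real script talks to an actual Redis server via the redis gem’s get:

```ruby
# Walk common network boundaries from /32 down to /8, zeroing the host
# bits at each step, until the store holds an entry for the resulting
# network address. `store` is anything with a Redis-like get method.
def asn_for(long_ip, store)
  32.downto(8) do |prefix|
    network = long_ip & ((0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF)
    asn = store.get(network.to_s)
    return asn unless asn.nil?
  end
  nil
end

# In-memory stand-in for Redis, for demonstration only.
FakeRedis = Struct.new(:data) do
  def get(key)
    data[key]
  end
end

store = FakeRedis.new({ '16909056' => '15169' })
puts asn_for(16909060, store)  # prints 15169 (the /24 boundary matches)
```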
Now that the logic part is finished, it’s time to make the script usable and make some final housekeeping changes. The ip2long code was put into a method, along with the search for a matching long netmask. I then added argument support, so you can pass the IP address to the script. Since the Redis import would otherwise occur on every run, I had to find a nice way to stop this. I first tried counting the Redis keys and initiating the import only if there were fewer than 50k, but the “redis.keys(‘*’).count” command would sometimes take up to 10 seconds to complete, which wouldn’t give me the fast lookups I wanted to achieve. I then decided to just use optparse and add an option that initiates the Redis import; otherwise, the script skips it and runs as if the necessary Redis keys were already there. The final code I ended up with is:
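The final script isn’t reproduced in this excerpt; as a sketch of just the argument-handling portion, assuming a hypothetical parse_args helper (the --charge flag is the option the post describes, everything else is illustrative):

```ruby
require 'optparse'

# Parse command-line arguments for the lookup script: an optional
# --charge flag to trigger the one-time scrape/Redis import, plus the
# IPv4 address to look up.
def parse_args(argv)
  options = { charge: false }
  OptionParser.new do |opts|
    opts.banner = 'Usage: ip2asn.rb [--charge] IPV4_ADDRESS'
    opts.on('--charge', 'Scrape the source tables and import them into Redis') do
      options[:charge] = true
    end
  end.parse!(argv)
  options[:ip] = argv.shift
  options
end

opts = parse_args(['--charge', '8.8.8.8'])
puts opts[:charge]  # prints true
puts opts[:ip]      # prints 8.8.8.8
```

In the real script, the import (the curl scrape plus the redis.set loop) would run only when options[:charge] is true.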
The initial Redis import is initiated with the --charge option when running the script. The import usually takes around 1 minute to complete, but the IP-to-ASN queries only take around 200ms afterwards, which is pretty fast. When running the script after the initial “charge”, it gets the values directly from Redis and no longer uses curl to fetch them. That’s the nice thing about a persistent cache: you can run the script over and over without losing the data, as you would if you stored it in a Ruby array or hash. If you want to use this script, make sure redis-server is running and that you have the necessary Ruby libraries installed, which you can install with gem install. Here is how the script functions: