Category: linux

How to perform WHOIS lookups on uncommon TLDs

What is WHOIS?
------

WHOIS is a protocol created, in part, to store and serve ownership information for domain names. If you need to know who owns a domain or how to contact them, you can query a hosted WHOIS database by submitting a request to one of the WHOIS servers responsible for serving the owner information for a specific TLD (Top Level Domain). Common TLDs include .com, .net, and .org; there are also designated country-code TLDs and TLDs for other purposes. You can see a detailed list on [Wikipedia][1].

You can perform WHOIS lookups via online web utilities or, more commonly, through the Unix JWhois client, which comes installed on most Linux distributions and OS X. If you primarily use Windows and would like to use the client, you can install [Cygwin][4] and choose the 'whois' package as part of the installation process.

How does my client know what WHOIS server to query?
------

For the most part, the client relies on static configuration, where each TLD is mapped to a server to perform WHOIS lookups against. For JWhois, this is a block of configuration in the jwhois.conf global configuration file under the whois-servers section, where each TLD is matched by a regular expression followed by the server for that TLD. Examples:
"\\.com$" = "";
"\\.edu$" = "";
"\\.gov$" = "";
However, with the ever-increasing number of TLDs, keeping all of these mappings in a local configuration file does not scale. The client does fall back to a default WHOIS server when there is no mapping for a given uncommon TLD, but the query will fail if the requested information does not exist on that server. So for a new TLD, WHOIS lookups will likely fail on any fresh whois installation, and on most online WHOIS web clients, unless the WHOIS server is specified manually.

Querying uncommon TLDs
------

Luckily, the JWhois client also lets you specify the server to query for a given lookup, but you need to know that server beforehand. The Internet Assigned Numbers Authority (IANA) maintains a database of information for each TLD, [served conveniently to you on this web page][2]. Each TLD's page gives you a wealth of information, including its WHOIS server. Querying a domain under the 'bingo' TLD with the Unix JWhois client would look like this (using the -h flag to specify the WHOIS server):
$ whois -h
(.. Omitting query results ..)
One step further: automating the WHOIS server lookup
------

When browsing the [IANA Root Zone Database][2], I noticed that the URIs for all of the TLDs share the same format: "/domains/root/db/(TLD_HERE).html". This makes web scraping easy with a simple script. I wrote a Python script, '[Phois][3]', that takes a domain argument just as JWhois does. Unlike JWhois, which checks the domain name against its configuration file to find the WHOIS server, [Phois][3] strips the TLD from the query, caches the IANA page for that TLD, scrapes the WHOIS server from it, and then performs a normal WHOIS lookup against that server. You can find the [Phois script on my GitHub][3]; simply git clone it and pip install. Alternatively, you can drop it into "/usr/bin" or another executable directory in the $PATH on your machine, as long as the Python prerequisites are installed (don't forget to make the script executable with chmod +x). It works like so:
$ phois
(.. Omitting query results ..)
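For the curious, the core of that approach can be sketched in a few lines of Python. This is a simplified illustration, not Phois's actual code: the URL format comes from the IANA observation above, while the "WHOIS Server:" label and the tag-stripping regex are assumptions about the page markup.

```python
import re

# Observed URL format of the IANA Root Zone Database pages.
IANA_URL = "https://www.iana.org/domains/root/db/{tld}.html"

def tld_of(domain):
    """Return the TLD portion of a domain name, e.g. 'play.bingo' -> 'bingo'."""
    return domain.rstrip(".").rsplit(".", 1)[-1].lower()

def iana_page_url(domain):
    """Build the IANA Root Zone Database URL for the domain's TLD."""
    return IANA_URL.format(tld=tld_of(domain))

def scrape_whois_server(html):
    """Pull the WHOIS server out of an IANA TLD page.

    The page contains a line like 'WHOIS Server: whois.nic.bingo';
    the exact markup is an assumption and may change, so we strip
    tags first and then match the label.
    """
    text = re.sub(r"<[^>]+>", " ", html)
    match = re.search(r"WHOIS Server:\s*([\w.-]+)", text)
    return match.group(1) if match else None

# The lookup itself would then shell out to the client, e.g.:
#   whois -h <scraped server> <domain>
```

Caching the fetched page per TLD, as Phois does, avoids hitting IANA on every lookup.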
Let me know what you think!

[1]: "List of Internet top-level domains - Wikipedia"
[2]: "Root Zone Database"
[3]: "Phois - automatic TLD lookup JWhois wrapper script"
[4]: "Cygwin"

Configure HAProxy to remove host on Nagios scheduled downtime

I was messing around with HAProxy yesterday and thought it would be useful to integrate Nagios downtime into the process of taking a node off the load balancer. This method uses Xinetd to emulate an HTTP response and isn't limited to HAProxy; it can be used with any load balancer that supports basic HTTP health checks... so, all of them? The required components for this demonstration are:

* Linux webserver with Xinetd
* HAProxy server
* Nagios server with [Nagios-api][1] installed
* Root access to the above servers!

To get started, I used [this guide][2] to get Nagios-api up and running. Once you have the Nagios-api running, you should be able to query the status of your webserver via:
[[email protected] ~]$ curl -s | python -mjson.tool
{
    "content": {
        "acknowledgement_type": "0",
        "active_checks_enabled": "1",
        "check_command": "check-host-alive",
        "check_execution_time": "0.010",
        "check_interval": "5.000000",
        "check_latency": "0.024",
        "check_options": "0",
        "check_period": "",
        "check_type": "0",
        "comment": [],
        "current_attempt": "1",
        "current_event_id": "0",
        "current_notification_id": "0",
        "current_notification_number": "0",
        "current_problem_id": "0",
        "current_state": "0",
        "downtime": [],
        "event_handler": "",
        "event_handler_enabled": "1",
        "failure_prediction_enabled": "1",
        "flap_detection_enabled": "1",
        "has_been_checked": "1",
        "host": "prod-web01",
        "host_name": "prod-web01",
        "is_flapping": "0",
        "last_check": "1428676190",
        "last_event_id": "0",
        "last_hard_state": "0",
        "last_hard_state_change": "1428674980",
        "last_notification": "0",
        "last_problem_id": "0",
        "last_state_change": "1428674980",
        "last_time_down": "0",
        "last_time_unreachable": "0",
        "last_time_up": "1428676200",
        "last_update": "1428676315",
        "long_plugin_output": "",
        "max_attempts": "10",
        "modified_attributes": "0",
        "next_check": "1428676500",
        "next_notification": "0",
        "no_more_notifications": "0",
        "notification_period": "24x7",
        "notifications_enabled": "1",
        "obsess_over_host": "1",
        "passive_checks_enabled": "1",
        "percent_state_change": "0.00",
        "plugin_output": "PING OK - Packet loss = 0%, RTA = 0.06 ms",
        "problem_has_been_acknowledged": "0",
        "process_performance_data": "1",
        "retry_interval": "1.000000",
        "scheduled_downtime_depth": "0",
        "services": [],
        "should_be_scheduled": "1",
        "state_type": "1",
        "type": "hoststatus"
    },
    "success": true
}
If you notice above, "scheduled\_downtime\_depth" is the value we are looking for. It is currently 0, so no downtime is set. We can easily grab that value with the following one-liner and save it for later:
[[email protected] ~]$ curl -s | python -mjson.tool | grep time_depth | awk -F'"' '{print $4}'
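The grep/awk pipeline works, but it is brittle against formatting changes in the API output. A sketch of the same extraction using Python's json module (the fetching itself is left to the caller; the function only parses the response text shown above):

```python
import json

def downtime_depth(payload):
    """Extract scheduled_downtime_depth from a Nagios-api host response.

    payload is the raw JSON text the API returns; the value arrives
    as a string (e.g. "0"), so cast it to int before comparing.
    """
    data = json.loads(payload)
    return int(data["content"]["scheduled_downtime_depth"])
```

A depth of 0 means no downtime; any positive value means one or more overlapping downtime windows are active.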
Now the fun part begins: creating the Xinetd script that emulates an HTTP response. We want to return a 200 (OK) when our scheduled\_downtime\_depth query returns 0, and a 5xx (BAD) when it returns a non-zero value, meaning downtime is set. There are a few things we need to do:

1. Write our script, which will return a 200 if our check passes and a 503 otherwise. In the script below, the Nagios-api server is queried for prod-web01, the Nagios-configured host for our web server. The Xinetd script will reside on the webserver, since that is where the health check from HAProxy will be directed:

#### /opt/serverchk

#!/bin/bash

DOWN=`curl -s | python -mjson.tool | grep time_depth | awk -F'"' '{print $4}'`

if [ "$DOWN" == "0" ]
then
        # server is online, return http 200
        /bin/echo -e "HTTP/1.1 200 OK\r\n"
        /bin/echo -e "Content-Type: text/plain\r\n"
        /bin/echo -e "\r\n"
        /bin/echo -e "No downtime scheduled.\r\n"
        /bin/echo -e "\r\n"
else
        # server is offline, return http 503
        /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
        /bin/echo -e "Content-Type: text/plain\r\n"
        /bin/echo -e "\r\n"
        /bin/echo -e "**Downtime is SCHEDULED**\r\n"
        /bin/echo -e "\r\n"
fi
2. Add the service name to the tail of /etc/services
serverchk	8189/tcp		# serverchk script
3. Add the xinetd configuration with the same service name as above: #### /etc/xinetd.d/serverchk
# default: on

# description: serverchk

service serverchk
{
        flags           = REUSE
        socket_type     = stream
        port            = 8189
        wait            = no
        user            = nobody
        server          = /opt/
        log_on_failure  += USERID
        disable         = no
        only_from       =
        per_source      = UNLIMITED
}
4. Restart xinetd
[[email protected] ~]$ sudo service xinetd restart
Redirecting to /bin/systemctl restart  xinetd.service
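The decision logic in the step-1 script boils down to mapping the downtime depth onto an HTTP status line. The same logic sketched in Python for clarity (the function name is mine, not part of the setup; the wording mirrors the bash script above):

```python
def health_response(depth):
    """Build the HTTP response the check script emits.

    depth is the scheduled_downtime_depth value from Nagios;
    0 means no downtime is set, so the node should stay in the pool.
    """
    if depth == 0:
        status, body = "200 OK", "No downtime scheduled."
    else:
        status, body = "503 Service Unavailable", "**Downtime is SCHEDULED**"
    return f"HTTP/1.1 {status}\r\nContent-Type: text/plain\r\n\r\n{body}\r\n"
```

Any 2xx satisfies HAProxy's httpchk; anything else (or no response at all) marks the check as failed.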
Now the web portion is complete. You can test it by curling the configured xinetd service port from the HAProxy server (or any other host, if you didn't restrict access via 'only_from'):
[[email protected] ~]$ curl -s
Content-Type: text/plain

No downtime scheduled.

[email protected]:~#
Now that it works, we can configure HAProxy. To do so, let's look at the current backend config for our webserver. Here is the excerpt from /etc/haproxy/haproxy.cfg:
backend nagios-test_BACKEND
  balance roundrobin
  server nagios-test check
We need to modify this by adding the httpchk option and specifying the check port:
backend nagios-test_BACKEND
  option httpchk HEAD
  balance roundrobin
  server nagios-test check port 8189
Now let's reload HAProxy and check the status:
[email protected]:~# sudo /etc/init.d/haproxy reload
 * Reloading haproxy haproxy                                                                                                                                                                             [ OK ]
[email protected]:~# echo 'show stat' | socat unix-connect:/var/lib/haproxy/stats stdio | grep test | cut -d',' -f1,18
[email protected]:~#
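The stat socket output is plain CSV, one row per proxy/server. The `cut -d',' -f1,18` one-liner pulls the proxy name and status columns; a sketch of the same extraction in Python (the field positions match the one-liner; the sample row in the test is illustrative, not real output):

```python
import csv

def backend_status(stat_csv, name):
    """Return (pxname, status) pairs from HAProxy 'show stat' CSV output
    for rows whose proxy name contains `name`.

    Field 1 is pxname and field 18 is status, matching the
    `cut -d',' -f1,18` pipeline above.
    """
    results = []
    for row in csv.reader(stat_csv.splitlines()):
        if len(row) >= 18 and name in row[0]:
            results.append((row[0], row[17]))
    return results
```

Feed it the raw text from `echo 'show stat' | socat unix-connect:/var/lib/haproxy/stats stdio`; the header row (which starts with `# pxname`) is filtered out by the name match.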
Excellent! Now let's put the host into maintenance mode (downtime) on Nagios and see what comes of it!
[[email protected] nagios-api]~$ ./nagios-cli -H localhost -p 8080 schedule-downtime prod-web01 4h
[2015/04/10 15:16:59] {diesel} INFO|Sending command: [1428679019] SCHEDULE_HOST_DOWNTIME;prod-web01;1428679019;1428693419;1;0;14400;nagios-api;schedule downtime
[[email protected] nagios-api]~$
And now let's check the Nagios downtime, hit the xinetd script remotely from HAProxy on port 8189, and check the status of the BACKEND resource:
[email protected]:~# curl -s | python -mjson.tool | grep time_depth
        "scheduled_downtime_depth": "1",
[email protected]:~# curl -s
Content-Type: text/plain

**Downtime is SCHEDULED**

[email protected]:~# curl -sI
HTTP/1.1 503 Service Unavailable

[email protected]:~# echo 'show stat' | socat unix-connect:/var/lib/haproxy/stats stdio | grep test | cut -d',' -f1,18
[email protected]:~#
As we can see, Nagios is reporting a non-zero value for downtime, the web server shows our script working correctly and returning a 503, and HAProxy shows the node as down. Awesome! Now let's cancel the downtime and watch it come back up:
[[email protected] nagios-api]~$ ./nagios-cli -H localhost -p 8080 cancel-downtime prod-web01
[2015/04/10 15:24:09] {diesel} INFO|Sending command: [1428679449] DEL_HOST_DOWNTIME;4
[[email protected] nagios-api]~$
[email protected]:~# echo 'show stat' | socat unix-connect:/var/lib/haproxy/stats stdio | grep test | cut -d',' -f1,18
[email protected]:~#
SUCCESS! This xinetd script can effectively be deployed on all the webservers; just change which host the Nagios-api query targets in each copy of the script. Using xinetd scripts in this fashion, you can also perform many other "checks" on the servers behind the load balancer: anything you can express in a BASH (or language of your choice) script can be turned into the boolean online/offline decision for the node. I'd like to hear if anyone else has done something similar or has suggestions for improvement. Please comment!

DISCLAIMER: Please test thoroughly before using this solution in a production environment. I am not liable for your mistakes 😉

[1]:
[2]: