Automating and sending speedtest.net data to web services

Recently I got frustrated by a series of broadband service failures. I realized they were difficult for both me and my service provider (who, by the way, was very helpful) to diagnose, because it was hard to determine exactly when they occurred and whether the issue was with the broadband connection or my wireless router. This weekend, inspired by this Make: Magazine feature, I hooked up a Raspberry Pi to my broadband router and set it up to periodically query speedtest.net (using speedtest-cli) and log the results.

I’m not a particular fan of IFTTT, which I find too linear and limiting (not to mention a certain arrogance towards third-party content providers), so I looked for alternative ways to post my speedtest results to an online place where I could obsessively check them whenever I’m out of the house. I liked this post describing how to use the same speedtest-cli with Loggly instead of IFTTT. But of course I wasn’t satisfied with hacking together a bunch of Perl one-liners, so I found this script to manipulate speedtest-cli output and modified it so it can log results to a CSV file, or post them to IFTTT, Loggly or any URL that accepts JSON, such as Zapier:

./speedtest-extras.sh [-d] [-c] [-h] [-i secret-key] [-l customer-token] [-j url]
    -d: debugging mode (reuses the previously logged speedtest result instead of querying speedtest - faster)
    -c: CSV mode
    -h: Print CSV header (only if used together with the -c flag)
    -i: IFTTT mode. Takes an IFTTT Maker Channel secret key as argument (required)
    -l: Loggly mode. Takes a Loggly Customer Token as argument (required)
    -j: JSON mode. Posts the result as a JSON document to any URL passed as argument (required)

My modified command-line interface to speedtest.net is available on GitHub, where I’ve also posted a few usage examples. Here, I will concentrate on how to use it to post to Zapier.

How to automatically send speedtest results to Zapier

First, take care of dependencies. My script makes use of speedtest-cli, which in turn is written in Python. Assuming you’ve got a working install of Python, you can use your favourite package manager to get hold of speedtest-cli:

$ pip install speedtest-cli

Then download my code, either as a ZIP archive or by using git:

$ git clone https://github.com/timtomch/speedtest-cli-extras.git

Once you have downloaded my repository, navigate to the bin folder[1] that’s inside it:

$ cd speedtest-cli-extras
$ cd bin

Then you can try running my script in CSV mode to make sure everything is working properly:

$ ./speedtest-extras.sh -c
2016-03-29 02:33:38 UTC;2016-03-29 02:34:19 UTC;Start Communications;XXX.XXX.XX.XXX;SoftLayer Technologies, Inc. (Toronto, ON);8.53 km;17.794 ms;23.97 Mbit/s;1.95 Mbit/s;http://www.speedtest.net/result/XXXXXXXX.png

Depending on the speed of your Internet connection, it should take about a minute to run the test. If you see output similar to the above, things are working.
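
If all you want is a local log, the CSV mode can simply be redirected to a file. A minimal sketch (speedtest-log.csv is just an example name, and this assumes -c -h prints only the header line; check the script’s help if it behaves differently):

$ ./speedtest-extras.sh -c -h > speedtest-log.csv   # write the column names once
$ ./speedtest-extras.sh -c >> speedtest-log.csv     # append one measurement per run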

It is now time to set up Zapier to receive your data. If you haven’t got an account yet, go ahead and create one (the free plan should work just fine). Then click the bright red “Make a Zap” button to get started.

Using the search box, choose “Webhooks by Zapier” as your trigger, then select the “Catch Hook” option. Leave the next screen (options) empty and click Next until you reach a screen that should look like this:

Screenshot of the Zapier interface showing which URL to send JSON data to.
Setting up a Webhook on Zapier.

Zapier will issue a custom webhook URL to trigger your events. Copy that URL to the clipboard.
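
If you want to check that the hook is reachable before wiring up the script, you can post a dummy JSON document to it with curl. The field names below are made up for illustration; the script sends its own set of fields:

# post a hand-crafted test payload to the webhook
$ curl -X POST -H "Content-Type: application/json" \
       -d '{"download":"23.97 Mbit/s","upload":"1.95 Mbit/s"}' \
       <PASTE YOUR ZAPIER URL HERE>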

Now run

$ ./speedtest-extras.sh -j <PASTE YOUR ZAPIER URL HERE>

and wait again for the prompt to reappear. If nothing else shows up in your Terminal, it’s a good sign. Go back to your browser and click the blue “OK, I did this” button. After a short while, Zapier should display a nice green message saying the test was successful. Go ahead and click on the “view your hook” link to check what data was sent to Zapier. You should see something like this:

Screenshot of the Zapier interface, showing data submitted via a JSON Webhook.
Testing the Zapier Webhook to ensure the JSON data was properly received.

Then you can decide what to do with that data. I chose to have each event add a new line to a Google Spreadsheet:

Screenshot of the Zapier interface, showing options to set up a Google Spreadsheets app.
Setting up Zapier to add rows to a Google Spreadsheet.

Go ahead and test your setup, then save your Zap once you are happy with the results. Don’t forget to turn on your Zap.

Now, every time you fire

$ ./speedtest-extras.sh -j <PASTE YOUR ZAPIER URL HERE>

Zapier will execute the operation you specified (adding a row to a Google Spreadsheet, in my example). Of course, if you had to run the script manually to get each measurement, that would defeat the whole purpose, so the last step is to add a cron job so the script runs automatically:

$ crontab -e

This lets you edit your crontab. To run a speed test every hour, add the following line to it:

0 * * * * /absolute/path/to/speedtest-extras.sh -j <YOUR ZAPIER URL>

Note that you need to specify the whole path to the speedtest-extras.sh script in your crontab for it to work.
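
Also note that cron runs with a minimal environment, so the job can fail silently if speedtest-cli (installed via pip) isn’t on cron’s PATH. A slightly more defensive crontab sketch, with the PATH value and log location as examples to adapt to your own setup:

PATH=/usr/local/bin:/usr/bin:/bin
0 * * * * /absolute/path/to/speedtest-extras.sh -j <YOUR ZAPIER URL> >> /var/tmp/speedtest.log 2>&1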

Now watch the data slowly pile up, and start drafting that email to your broadband provider.

Next step: full Raspberry Pi tutorial?

A recent conversation with a friend facing the same issue made me think I could also write up a short tutorial on how to replicate my Raspberry Pi speed tester setup from scratch. Anything to avoid working on more useful things, like getting ahead on my MLIS research or freshening up my resume for this position I’m considering applying to…

  1. This directory structure is not entirely necessary but is a leftover from the original speedtest-cli-extras which I forked.

Migrating to a WordPress Network

I use this website to host a bunch of (mostly unrelated) services: wikis, my feed reader, and a couple of blogs for family members I like to keep separate. Those blogs used to each have their own WordPress install, which was not only a pain to keep up to date but also finally ate up all the SQL databases and subdomains I was allocated as part of my hosting plan. Setting up my wife’s new portfolio was an excellent excuse to find a better solution than firing up yet another CMS instance. I decided to migrate the whole mess to a WordPress Network (previously called WordPress Multi-User), which turned out to be much easier than I thought. Here’s how I did it and what I learned on the way.

1. Start fresh

I started with an (almost) fresh install of WordPress, the one that had been powering this blog since November. Since I had used Softaculous to install it, I was able to set up automatic backups and updates while I was at it. I decided to move it to a subdirectory first, to clean things up a bit in my home directory. According to the documentation, this would prevent me from creating subdomain sites (e.g. things like this.timtom.ch[1] and that.timtom.ch), but I was able to find a way around this limitation by using the WP subdomain plugin; more on that later.

After moving WordPress to a new subdirectory, I checked that everything was still working on the main site. Since I already had a few posts live on that WordPress install, I backed everything up for good measure before I started the process.

2. Enable the Network feature

This is as easy as adding a single line to wp-config.php and clicking through a few options in the admin interface. Since I was now running WordPress in its own directory, I knew running my Network under the subdomain model (this.timtom.ch) would not be straightforward, so I chose to run it under the subdirectory model instead (timtom.ch/that).
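
For reference, the single line in question is WordPress’s multisite switch, and it needs to sit above the “That’s all, stop editing!” comment in wp-config.php. A minimal sketch of adding it from a shell on the server, assuming GNU sed and that default marker (editing the file by hand works just as well):

# insert the multisite switch just above the "stop editing" marker (GNU sed)
$ sed -i "/stop editing/i define( 'WP_ALLOW_MULTISITE', true );" wp-config.php

Once the line is in place, the Tools > Network Setup screen appears in the admin interface.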

Once my embryonic network was set up, I verified that the main site (this blog) was still working fine.

3. Import the other blogs into the new network

For each of the standalone WP instances I wanted to replace, I exported all content using the Tools > Export function.

Back in my Network admin interface, I then created a new site for each of them. I didn’t worry too much about naming the new sites, knowing I would fix their addresses later on. I chose unique names that I knew would not conflict with any pages I’d like to create on this blog in the future, and named them something like timtom.ch/sub-familynews etc.

Before importing the WXR files I had exported out of the old sites, I needed to install the WordPress Importer plugin. Despite being an official WP plugin, it unfortunately has a pretty bad reputation, and justifiably so, because of its poor interface and error management. It basically gives no feedback during the import process, which is unnerving and problematic if anything goes wrong. Fortunately, nothing bad happened to me. I imported each blog into the new sites I had just created, making sure to reallocate posts to the users that already had an account on my network, or to allow WordPress to create new accounts for those that didn’t. I chose to import all media, which is important since it will make a new local copy of all images and files that were referenced in the old blogs. Since I planned to delete the old blogs once the process was over, copying media was essential. I then armed myself with patience and a cup of herbal tea while the import plugin did its unnerving thing.

Once each import was done, I visited the new sites and made sure everything was in order.

4. Create subdomain redirects

I now had a suite of sites (e.g. timtom.ch/sub-familynews, timtom.ch/sub-projectx etc.) mirroring the old independent installs of WordPress that were all living in subdomains (e.g. familynews.timtom.ch). Since I wanted all URLs to continue working, I now had to map the old URL structure to the new sites.

I started by renaming each of the old blogs: 1. do a full backup (I may have skipped that step, but if you’re following along, you shouldn’t), 2. change the blog’s URL in Settings > General (it will instantly stop working, but not to worry), and 3. rename the subdomain it operates in accordingly, e.g. in cpanel. I ended up with all my old blogs living at addresses such as projectx-old.timtom.ch. I will likely keep them around for a short while to make sure all is well with the new sites, before deleting them and freeing up some badly needed database space.

Then it was time for magic. I started by installing the WordPress MU Domain Mapping plugin, setting it up (a file with the slightly worrying name of sunrise.php notably has to be copied out of the plugin directory into wp-content) and network-activating it.

I then went back to cpanel and created a new “alias” (also known as a “parked domain”) for each of the subdomains I needed for my sites. Yes, even though they were all subdomains of timtom.ch (e.g. projectx.timtom.ch), I still needed to treat them as aliases for this to work:

Creating a new alias in cpanel
Creating a new alias in cpanel.

All the aliases I thus created point to the main timtom.ch directory. At first I thought I had to point them to the subdirectory in which my main WordPress install lives, but that turned out to be wrong: all subdomain aliases have to point to the home directory of your site for this to work.

As an aside, I found that I was able to make this work only by creating “aliases” using the procedure above. Merely adding a type “A” record in my host’s DNS using cpanel’s “Advanced Zone Editor” didn’t work, probably because the IP address my site uses is shared with other customers. The “alias” function presumably applies the extra settings needed so that the DNS entries point to the virtual server that’s allocated to me.

Back in WordPress, I then assigned the new subdomains to each of my new sites. The interface to do so is in My Sites > Network Admin > Settings > Domains. Unhelpfully, WordPress MU Domain Mapping’s interface asks for the “site ID” of each site to set this up, which isn’t that obvious to find out. One way I found to identify which ID corresponds to each site is to navigate to the Sites list panel of WordPress Network admin and hover over each site name. The ID will be visible in the URL for each site:

Screenshot of the WordPress Site admin, showing the mouse cursor hovering over a site URL to reveal its ID.
How to identify the site ID of a WordPress Network Site.

Once this was done, the last step was to set each site’s main URL to the new subdomain; this is done in the Settings > General tab of each site.

Then, for a short while, none of the new sites worked, which was normal: the new DNS information hadn’t had time to propagate through the Internet yet. This can take up to an hour, depending on settings, so it was a good time to do something else, like starting to work on this blog post!
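
A quick way to check whether the new records are visible yet is to query a public resolver directly; the subdomain below is one of my example names and 8.8.8.8 is Google’s public DNS:

$ dig +short projectx.timtom.ch @8.8.8.8    # should return the same IP as timtom.ch once propagated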

Once the DNS information was fully propagated, I verified that each of my sites was now working well, each in its own subdomain! I also checked that the permalink structure was still the same as the one I had been using for each of my old sites, so that the URLs to posts and pages were unchanged. Migration complete!

I am now the proud owner of a WordPress Network and I can create a new site in a few minutes. All I need to do once I’ve created the site is go to cpanel, register a new subdomain using the “Alias” function and assign it to the new site.

5. Fixing HTTPS

There was one extra step in my case, since I’m using HTTPS encryption on this website (and you should too) and wanted it to work across all my subdomain WordPress sites as well. The certificate I had for timtom.ch did not cover the subdomains I had just created, so my browser raised a security alarm when I tried to navigate to my new subdomains using HTTPS. Since I’m now using the Let’s Encrypt cpanel module to handle encryption, the only way to alter the certificate to include my new subdomains was to delete the old certificate and immediately create a new one. I made sure to include all the new subdomains when creating the new certificate, and bingo: instant HTTPS across all my sites.

There were a few remaining caveats, however. Since the blogs I had just imported had not been using HTTPS, all the images I had embedded from Flickr used HTTP in their <img> tags and thus raised mixed-content errors. I therefore had to go through all the affected posts and make sure every <img> tag used HTTPS.
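
I did this by hand, but for a larger backlog a bulk rewrite is an option. A sketch, assuming WP-CLI is available on the host (‘farm’ matches the old Flickr image hostnames such as farm1.staticflickr.com; keep a database backup and preview with --dry-run first):

$ wp search-replace 'http://farm' 'https://farm' --all-tables --dry-run   # preview the changes first
$ wp search-replace 'http://farm' 'https://farm' --all-tables             # then apply them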

  1. N.B. all the URLs and directories mentioned here are examples and not actual URLs to anything on this site

Generating TLS/SSL certificates with Let’s Encrypt on hosted websites

At the OLA Super Conference Hackfest last week, I chose to work on trying to enable TLS/SSL encryption on library websites using Let’s Encrypt. Digital privacy has been a hot topic of discussion in libraries lately, most prominently around the efforts of the Library Freedom Project. Enabling TLS/SSL (HTTPS) encryption on library websites and online catalogues is among the first steps recommended by the LITA Patron Privacy Technologies Interest Group to protect the privacy of patrons.

Unfortunately, setting up TLS/SSL encryption is a fairly complex (and costly) process. While large institutions often manage their own web servers and have complete control over them, many small public libraries resort to web hosting services. Some of these services will arrange for HTTPS certificates at a premium, but the process is often not straightforward. Enter Let’s Encrypt, which opened as a public beta service last December. As a “free, automated, and open certificate authority (CA)”, Let’s Encrypt aims to enable anyone who owns a domain name to receive a trusted certificate at no cost, and more easily than by using commercial certificate authorities.

With the helpful support of Dan Scott, we spent the morning trying to install and run Let’s Encrypt, generate certificates and enable them on our own personal websites and, for one brave librarian, on her live public library page.

Installing

The first step was to download and run Let’s Encrypt. We all had Mac laptops and so the following may only apply to Mac OS X. The source code for generating certificates is distributed on GitHub and the following steps were required to make it work, following this tutorial.

  1. Install git. We all had different setups on our laptops. Using Homebrew to download and install git as well as the other required packages worked well. As for me, I found out that recently upgrading the OS to El Capitan had broken my Xcode toolchain, including git. This fix found on Stack Exchange worked well to restore the Xcode command-line tools without having to go to the trouble of downloading the whole Xcode package:
    $ xcode-select --install
  2. Download (git clone) Let’s Encrypt from GitHub:
    $ git clone https://github.com/letsencrypt/letsencrypt
    $ cd letsencrypt
  3. Install any remaining dependencies. It turns out that letsencrypt-auto checks for missing dependencies every time it is run, so running the following will install everything that is needed using Homebrew (and install Homebrew itself if it is not already present):
    $ ./letsencrypt-auto --help

    Since it checks dependencies on every run and will potentially want to install missing ones in /usr/local and /usr/bin, Let’s Encrypt requests root access every time it is run. This is not only a bit unexpected and unsettling (root access should not be required to generate certificates that will not be applied locally), but it also causes problems when retrieving the generated certificates, as we will see below.

Generating certificates

Once Let’s Encrypt was up and running on our laptops, the next step was to generate the certificates. By default, letsencrypt-auto assumes it is run on the same server that is hosting the website, but in our case we wanted to set up encryption on hosted domains. For this, we needed to tell Let’s Encrypt to only generate the certificates, which we would then upload to our hosting providers. This is done using the certonly option.

The syntax we used was

./letsencrypt-auto certonly -a manual --rsa-key-size 4096 -d domain.com -d www.domain.com

Note that this will produce a single certificate covering both domain.com (without www) and www.domain.com; the domain-validation step described below is simply repeated for each name. Again, this will ask for the root password, as discussed above.

The console turns into a quaint pseudo-interactive mode to ask for an email address, request acceptance of the terms of use and warn that the IP address of the machine requesting the certificate will be logged:

letsencrypt-auto dialog box

The next step is important to ensure that the user requesting a new certificate for a particular domain has legitimate claims to that domain. To this end, Let’s Encrypt will generate a hash string that needs to be copied to a specific file on the server that hosts the domain in question. The hash, the name of the file and the path are provided:

Make sure your web server displays the following content at
http://domain.com/.well-known/acme-challenge/weoEFKS-aasksdSKCKEIFIXCNKSKQwa3d35ds30_sDKIS before continuing:

weoEFKS-aasksdSKCKEIFIXCNKSKQwa3d35ds30_sDKIS.Rso39djaklj3sdlkjckxmsne3a

If you don't have HTTP server configured, you can run the following
command on the target server (as root):

mkdir -p /tmp/letsencrypt/public_html/.well-known/acme-challenge
cd /tmp/letsencrypt/public_html
printf "%s" weoEFKS-aasksdSKCKEIFIXCNKSKQwa3d35ds30_sDKIS.Rso39djaklj3sdlkjckxmsne3a > .well-known/acme-challenge/weoEFKS-aasksdSKCKEIFIXCNKSKQwa3d35ds30_sDKIS

This can be copied over to the web server using FTP, a web-based file manager, or by opening an SSH connection to the server. If the SSH route is chosen, the mkdir, cd and printf commands provided above can be used to generate the challenge file. Note that the hash and filename shown above are examples; use the actual text provided by Let’s Encrypt.

screenshot of cpanel file manager

screenshot of cpanel file editor
Uploading the challenge file on the server using the cpanel file editor.
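
Before hitting enter, it can be worth checking from the command line that the challenge file is actually being served (using the same placeholder hash as in the example above):

$ curl http://domain.com/.well-known/acme-challenge/weoEFKS-aasksdSKCKEIFIXCNKSKQwa3d35ds30_sDKIS   # should print the challenge string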

Once the file is live on the website, hit enter to finish the process. Let’s Encrypt will visit the newly created page on our website, check that it’s ours and generate the certificates. If more than one domain were specified (e.g. with and without www), the process is repeated for each domain.

The generated key and certificate files are saved in /etc/letsencrypt/live/domain.com. However, since Let’s Encrypt was run as root (see above), these files belong to the root user and trying to display them returns a Permission denied error. To display them, use sudo:

$ sudo ls -l /etc/letsencrypt/live/domain.com
total 32
lrwxr-xr-x  1 root  wheel  33 Feb  5 22:03 cert.pem -> ../../archive/domain.com/cert1.pem
lrwxr-xr-x  1 root  wheel  34 Feb  5 22:03 chain.pem -> ../../archive/domain.com/chain1.pem
lrwxr-xr-x  1 root  wheel  38 Feb  5 22:03 fullchain.pem -> ../../archive/domain.com/fullchain1.pem
lrwxr-xr-x  1 root  wheel  36 Feb  5 22:03 privkey.pem -> ../../archive/domain.com/privkey1.pem

Note that if you specified more than one domain when running letsencrypt-auto, Let’s Encrypt will generate a single certificate covering all specified domains. It will appear in /etc/letsencrypt/live under the name of the first domain that you specified.

As can be seen, the files stored in /etc/letsencrypt/live are actually symlinks to files stored in /etc/letsencrypt/archive. Using sudo every time we want to access those files is bothersome, so we can use chown  to change their owner back to us:

$ sudo chown -R username /etc/letsencrypt/

Update: Enabling TLS/SSL on hosted websites

The next and final step is to copy the contents of the generated keys into the SSL setup interface of the web host. Unfortunately, it turned out that neither of us had this functionality turned on by our service providers, and we ended up having to write to them to request that SSL be enabled.

I asked my provider to enable SSL on my domain, and after a week or so they wrote back saying that they had not only made the necessary configuration changes to allow me to upload my own certificates, but had also implemented a cpanel extension allowing their customers to generate their own certificates without going through the hassle described above! This is excellent news, and I shall soon try it out for my other domains, but for now I wanted to try loading the certificates I had generated. Here’s how I did it (my hosting admin interface uses cpanel 54.0.15).

The previous version of this post linked to this post (in German), which provides a screenshot of a different configuration interface.

Under Security -> SSL/TLS, I chose the “Install and Manage SSL for your site (HTTPS)” option, which installs the certificate and key in a single step. I then selected my main domain name in the drop-down menu, and copied the contents of cert.pem into the field labeled CRT and privkey.pem into the KEY field. I left the CABUNDLE field blank and hit “Install Certificate”.

On Mac OS X, piping anything to pbcopy will place it in the clipboard, ready to be pasted anywhere. This is how I copied the contents of the certificate file before pasting it into the cpanel form:

$ cat cert1.pem | pbcopy
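
The same trick works for the key before pasting it into the KEY field (the path below assumes the default live/ location and the chown step above):

$ cat /etc/letsencrypt/live/domain.com/privkey.pem | pbcopy   # paste into the KEY field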

And that was it! I got a nice confirmation message that the certificate was installed, detailing all domain names it was covering. It also helpfully listed other domain names I have pointing to my site but not covered by the certificate, warning me that using them will cause browsers to raise a security issue.

I’m using WordPress to manage this blog, and it needed to be reconfigured to serve all inserted images over HTTPS and thus avoid mixed-content issues: I set the main URL of the WordPress install to HTTPS.

Once this was done, my browser started rewarding me with a nice padlock icon on all my pages, confirming that I had successfully enabled HTTPS on my domain! I also ran it through an SSL checker for good measure.

Screenshot of the Chrome navigator security message, confirming timtom.ch is secure.
The security details as displayed by Chrome. Yay green lock!
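
For a command-line check of what is actually being served, openssl can connect to the site and dump the certificate details (replace timtom.ch with your own domain):

# show the served certificate's subject, issuer and validity dates
$ echo | openssl s_client -connect timtom.ch:443 -servername timtom.ch 2>/dev/null | openssl x509 -noout -subject -issuer -dates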

Conclusion

The availability of Let’s Encrypt as a free and open alternative to commercial certificate authorities is an important step towards a more secure Internet. However, the current beta version of Let’s Encrypt still requires some familiarity with command-line interfaces, web development tools and an understanding of how TLS/SSL works. Better documentation and a more user-friendly interface will certainly go a long way in making the process easier. The need to run the client as root is another barrier that will hopefully be lifted as the software evolves. Finally, even though generating certificates is now free, setting them up on hosted websites still requires the service provider to activate this option.

There is an extension for cpanel that claims to allow end-users to easily set up Let’s Encrypt certificates on their websites. Maybe as demand grows, hosting providers will begin enabling this extension for their customers, and HTTPS will then truly become an easy option for everyone to use. Since my hosting provider recently enabled this option, I plan to try it out soon and will report back if I do.

Thanks a lot to Dan and the OLA Super Conference Hackathon organisers and facilitators, as well as the other attendees with whom I worked on this project! I certainly learned a lot.