I like going to art fairs. Even though a future in which I can waltz through such an event with a chequebook and pick up original art for the walls of my perfectly decorated lodgings will probably remain a fiction, it is a fantasy I like indulging in. And sometimes I do end up buying a print, and I have never regretted it. Here are some of my favourite artists from this year's Artist Project Toronto.
By reducing the resolution of his photographs of Toronto high-rises, McLean created a series of abstract images. The strict geometry of pixels meets that of the glass facades, while only slight variations in colour hint at the reflections of other buildings. Some look like Sim City blow-outs, others verge on the abstract. Simple, yet clever.
From the streets of Toronto to the realm of Canada’s singing astronaut Chris Hadfield, who reportedly owns some of Malvada’s otherworldly creations. Her acrylic planetary bodies straddle the border between hyperrealism and the blurriness of imagination. Also, they would make a great cover for the next La Planète Bleue album.
Recently I got frustrated by a series of broadband service failures. I realized they were difficult to diagnose both by me and my service provider (who, by the way, was very helpful) because it was difficult to determine when exactly they occurred and whether the issue was with the broadband connection or my wireless router. This weekend, inspired by this Make: Magazine feature, I hooked up a Raspberry Pi to my broadband router, set it up to periodically query speedtest.net (using speedtest-cli) and log the results.
./speedtest-extras.sh [-d] [-c] [-h] [-i secret-key] [-l customer-token] [-j url]
-d: debugging mode (reuses a previously logged speedtest result instead of querying speedtest - faster)
-c: CSV mode
-h: Print CSV header (only if used together with the -c flag)
-i: IFTTT mode. Takes an IFTTT Maker Channel secret key as argument (required)
-l: Loggly mode. Takes a Loggly Customer Token as argument (required)
-j: JSON mode. Posts the result as a JSON document to any URL passed as argument (required)
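Conceptually, the -j mode boils down to wrapping the speedtest results in a JSON document and POSTing it to the given URL. A minimal sketch (the function name and JSON field names are my own illustration, not the script's actual schema):

```shell
# Hypothetical sketch of what the -j option does: wrap the measured
# speeds in a JSON document and POST it to the URL given as argument.
# Field names here are illustrative only.
post_speedtest_json() {
  url="$1"; download="$2"; upload="$3"
  payload=$(printf '{"download_mbps":%s,"upload_mbps":%s}' "$download" "$upload")
  # Webhook receivers like Zapier's Catch Hook accept an arbitrary JSON body
  curl -s -X POST -H "Content-Type: application/json" -d "$payload" "$url"
}
```

Called as `post_speedtest_json "$url" 42.5 9.8`, this is essentially what happens on every run.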
How to automatically send speedtest results to Zapier
First, take care of dependencies. My script makes use of speedtest-cli, which in turn is written in Python. Assuming you’ve got a working install of Python, you can use your favourite package manager to get hold of speedtest-cli:
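For example, with pip (speedtest-cli is distributed as a Python package):

```shell
# Install speedtest-cli with pip; other package managers carry it too
pip install speedtest-cli
```

Running `speedtest-cli --simple` afterwards is a quick way to confirm the install works: it prints just the ping, download and upload figures.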
Depending on the speed of your Internet connection, it should take about a minute to run the test. If you see output similar to the above, things are working.
It is now time to set up Zapier to receive your data. If you haven’t got an account yet, go ahead and create one (the free plan should work just fine). Then click the bright red “Make a Zap” button to get started.
Using the search box, choose “Webhooks by Zapier” as your trigger, then select the “Catch Hook” option. Leave the next screen (options) empty and click Next until you reach a screen that should look like this:
Zapier will issue a custom webhook URL to trigger your events. Copy that URL to the clipboard.
$ ./speedtest-extras.sh -j <PASTE YOUR ZAPIER URL HERE>
and wait again for the prompt to reappear. If nothing else shows up in your Terminal, that’s a good sign. Go back to your browser and click the blue “OK, I did this” button. After a short while, Zapier should display a nice green message saying the test was successful. Go ahead and click on the “view your hook” link to check what data was sent to Zapier. You should see something like this:
Then you can decide what to do with that data. I chose to have each event add a new line to a Google Spreadsheet:
Go ahead and test your setup, then save your Zap once you are happy with the results. Don’t forget to turn on your Zap.
Now, every time you fire
$ ./speedtest-extras.sh -j <PASTE YOUR ZAPIER URL HERE>
Zapier will execute the operation you specified (add a row to a Google Spreadsheet in my example). Now, if you had to manually run the script to get a measurement, that would defeat the whole purpose, so the last step is to add a cron job so the script is run automatically:
$ crontab -e
This lets you edit your crontab. To run a speed test every hour, add the following line to it:
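The line might look like this (the path is a placeholder — adjust it to wherever you saved the script):

```
# m h dom mon dow  command  (runs at minute 0 of every hour)
0 * * * * /full/path/to/speedtest-extras.sh -j <PASTE YOUR ZAPIER URL HERE>
```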
Note that you need to specify the whole path to the speedtest-extras.sh script in your crontab for it to work.
Now watch the data slowly pile up, and start drafting that email to your broadband provider.
Next step: full Raspberry Pi tutorial?
A recent conversation with a friend facing the same issue made me think I could also write up a short tutorial on how to replicate my Raspberry Pi speed tester setup from scratch. Anything to avoid working on more useful things, like getting ahead on my MLIS research or freshening up my resume for this position I’m considering applying to…
This directory structure is not entirely necessary but is a leftover from the original speedtest-cli-extras which I forked. ↩
As I’m in the early stages of my final research project for my studies in Library Science, I’m looking at different ways to organise my thoughts and materials, and taking it as an opportunity to try some of the tools that are defining the current trend towards open and reproducible research. Things like version control can however quickly become complex and might scare away the bravest. It is certainly one of the most challenging topics I’ve had to teach during Software Carpentry workshops. And I’m far from understanding all of it. That’s why this Plain Person’s Guide to Plain Text Social Science looks to be a fantastic resource, laying out a complete workflow using open formats. As for writing the actual paper, there is still no tool that will replace me. Although that might soon change, as a novel written by a computer almost won a literary prize in Japan. The wind-up bird got creative.
Words in transit
I like it when a subway station is being refurbished and traces of the past are briefly brought to light again while walls are being resurfaced. This happened recently at the Paris Métro’s Trinité station. Together with a glimpse of swanky typefaces and yellowing memories, one learns in passing that this operation in French is called décarrossage.
Of course they couldn’t resist trying to identify the contents of the library. And it is only a matter of time before Shackleton’s collection is dutifully catalogued on LibraryThing.
Earlier this month, the same Royal Geographical Society was also hosting my friend and land-art artist Sylvain Meyer for the annual conference of the Society of Garden Designers. I’m very happy that Sylvain is getting recognized for his fantastic work! I also miss my print of his early piece Ondulation, which I loaned to another friend when I left Switzerland.
Open.Theremin is an open-source theremin from a makerspace in Lucerne, Switzerland. It is a digital version, based on an Arduino, but it sounds great!
DIY medicine is of course not new, but new fabrication technology and cheap materials are now putting relatively sophisticated procedures within reach of the home tinkerer. Like, for example, orthodontics.
Crowe also links to Why Children Still Need to Read (and Draw) Maps [PBS], which reminded me I still need to find a good atlas I can share with my daughter. I gave her a map of Ontario for a camping trip we did last summer so she could keep track of our journey. I agree that learning to use (and appreciate) maps is still an important life skill.
Note that this map already includes the Spadina line’s extension, scheduled to open at the end of 2017. Isn’t it interesting to note that many such fan-fiction versions of the TTC map4 include future or imaginary lines? Cartographic wishful thinking…
Update 2: And here’s a map of the Toronto subway with the approximate walking time between stations5:
this reminds me of the Chuchichästli-Orakel, which places visitors on a map of Switzerland based on how they pronounce 10 words, with uncanny precision. ↩
A good candidate for a possible post on travel planning tools, I think. ↩
Reminding me that I’m still looking for a data source for my idea of trying to map Switzerland’s patronyms by popularity… ↩
Listing my favourites here would be another idea for a post. ↩
Next time I cross the Atlantic, I shall stop a moment to reflect on everything that’s happening in the background to make this possible. AeroSavvy has a great post explaining how the North Atlantic Tracks system works.
The Anatomy of a Tweet. Unsurprisingly, a tweet is composed of much more than 140 characters. There’s a scary amount of metadata coming along with it.
Serialization formats are not toys. Things to watch out for if you are building a web application that takes YAML, XML or JSON input. Watch it even if you don’t: being aware of how easy it is to break software is sobering.
I’m currently reading Mark Miodownik’s Stuff Matters and discovering little snippets of information about materials I wasn’t aware of. For example, that the reason reinforced concrete works so well is because “as luck would have it, steel and concrete have almost identical coefficients of expansion” (p. 75). In passing, he also warns against the simplistic equation concrete = ugly:
But the truth is that cheap design is cheap design whatever the material. Steel can be used in good or bad design, as can wood or bricks, but it is only with concrete that the epithet of ‘ugly’ has stuck. There is nothing intrinsically poor about the aesthetics of concrete.
I use this website to host a bunch of (mostly unrelated) services: wikis, my feed reader, and a couple of blogs for family members I like to keep separate. Those blogs used to each have their own WordPress install, which was not only a pain to keep up to date, but also finally ate up all the SQL databases and subdomains I was allocated as part of my hosting plan. Setting up my wife’s new portfolio was an excellent excuse to find a better solution than firing up yet another CMS instance. I decided to migrate the whole mess to a WordPress Network (previously called WordPress Multi-User). Which turned out to be much easier than I thought. Here’s how I did it and what I learned on the way.
1. Start fresh
I started with an (almost) fresh install of WordPress, the one that had been powering this blog since November. Since I had used Softaculous to install it, I was able to set up automatic backups and updates while I was at it. I decided to move it to a subdirectory first to clean things up a bit in my home directory. According to the documentation, this would prevent me from creating subdomain sites (e.g. things like this.timtom.ch1 and that.timtom.ch), but I was able to find a way around this limitation by using the WP subdomain plugin, more on that later.
After moving WordPress to a new subdirectory, I checked that everything was still working on the main site. Since I already had a few posts live on that WordPress install, I backed everything up for good measure before I started the process.
2. Enable the Network feature
This is as easy as adding a single line to wp-config.php, and clicking through a few options on the admin interface. Since I was now running WordPress in its own directory, I knew running my Network under the subdomain model (this.timtom.ch) would not be straightforward, so I chose to run it under the subdirectory model instead (timtom.ch/that).
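For reference, the single line in question is WordPress’s documented multisite switch, which goes in wp-config.php just above the “stop editing” comment:

```php
/* Enables the Network Setup item under the Tools menu in the admin interface */
define( 'WP_ALLOW_MULTISITE', true );
```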
Once my embryonic network was set up, I verified that the main site (this blog) was still working fine.
3. Create the new sites and import content
Back in my Network admin interface, I then created a new site for each of the old blogs I wanted to migrate. I didn’t worry too much about naming the new sites, knowing I would fix their addresses later on. I chose unique names that I knew would not be conflicting with any pages I’d like to create on this blog in the future. I named them something like timtom.ch/sub-familynews etc.
Before importing the WXR files I had exported out of the old sites, I needed to install the WordPress Importer plugin. Despite being an official WP plugin, it unfortunately has a pretty bad reputation, and justifiably so, because of its poor interface and error management. It basically gives no feedback during the import process, which is unnerving and problematic if anything goes wrong. Fortunately, nothing bad happened to me. I imported each blog into the new sites I had just created, making sure to reallocate posts to the users that already had an account on my network, or to allow WordPress to create new accounts for those that didn’t. I chose to import all media, which is important since it will make a new local copy of all images and files that were referenced in the old blogs. Since I planned to delete the old blogs once the process was over, copying media was essential. I then armed myself with patience and a cup of herbal tea while the import plugin did its unnerving thing.
Once each import was done, I visited the new sites and made sure everything was in order.
4. Create subdomain redirects
I now had a suite of sites (e.g. timtom.ch/sub-familynews, timtom.ch/sub-projectx etc.) mirroring the old independent installs of WordPress that were all living in subdomains (e.g. familynews.timtom.ch). Since I wanted all URLs to continue working, I now had to map the old URL structure to the new sites.
I started by renaming each of the old blogs: 1. do a full backup (I didn’t always, but if you’re following along, you should), 2. change the blog’s URL in Settings > General (the blog will instantly stop working, but not to worry) and 3. rename the subdomain it operates in accordingly, e.g. in cpanel. I ended up with all my old blogs living at addresses such as projectx-old.timtom.ch. I will likely keep them around for a short while to make sure all is well with the new sites, before deleting them and freeing up some badly needed database space.
I then went back to cpanel and created a new “alias” (also known as “parked domains”) for each of the subdomains I needed for my sites. Yes, even though they were all subdomains of timtom.ch (e.g. projectx.timtom.ch), I still needed to treat them as aliases for this to work:
All the aliases I thus created point to the main timtom.ch directory. At first I thought I had to redirect them to the subdirectory in which my main WordPress install lives, but that turned out to be wrong. All subdomain aliases have to point to the home directory of your site for this to work.
As an aside, I found that I was able to make this work only by creating “aliases” using the procedure above. Merely adding a type “A” record in my host’s DNS using cpanel’s “Advanced Zone Editor” didn’t work, probably because the IP address my site uses is shared with other customers. The “alias” function probably takes care of the extra configuration needed so that those DNS entries point to the virtual server that’s allocated to me.
Back in WordPress, I then assigned the new subdomains to each of my new sites. The interface to do so is in My Sites > Network Admin > Settings > Domains. Unhelpfully, WordPress MU Domain Mapping’s interface asks for the “site ID” of each site to set this up, which isn’t that obvious to find out. One way I found to identify which ID corresponds to each site is to navigate to the Sites list panel of WordPress Network admin and hover over each site name. The ID will be visible in the URL for each site:
Once this was done, the last step was to set each site’s main URL to the new subdomain; this is done in the Settings > General tab of each site.
Then for a short while none of the new sites worked, which was normal, as the new DNS information hadn’t had time to propagate through the Internet yet. This can take up to one hour, depending on settings, so it was a good time to do something else, like starting to work on this blog post!
Once the DNS information was fully propagated, I verified that each of my sites was now working well, each in its own subdomain! I also checked that the permalink structure was still the same as on each of my old sites, so that URLs to existing posts and pages kept working. Migration complete!
I am now the proud owner of a WordPress Network and I can create a new site in a few minutes. All I need to do once I’ve created a site is go to cpanel, register a new subdomain using the “Alias” function and then assign it to the new site.
5. Fixing HTTPS
There was one extra step in my case since I’m using HTTPS encryption on this website (and you should too) and wanted it to work across all my subdomain WordPress sites too. The certificate I had for timtom.ch did not contain the subdomains I had just created, therefore my browser raised a security alarm when I tried to navigate to my new subdomains using HTTPS. Since I’m now using the Let’s Encrypt cpanel module to handle encryption, the only way to alter the certificate and include my new subdomains was to delete the old certificate and immediately create a new one. I then made sure to include all the new subdomains when creating the new certificate, and bingo, instant HTTPS across all my sites.
Facebook and the New Colonialism (the Atlantic). The article cites a scholar in post-colonialism identifying traits of colonialist behaviours. Prompting the slightly unsettling exercise of trying to apply said list to other things, such as proponents of Gold Open Access:
Early 2016 is brutal to artists. But it’s the news of Umberto Eco’s death that unsettled me the most, and Geoff Manaugh of BLDGBLOG echoed that feeling perfectly in his eulogy. Foucault’s Pendulum is one of my favourite novels of all time.
Last week saw a brief spike in interest from non-librarians towards the “digital divide” in academic publishing (an overly simplistic way to put it would be Open Access vs paywall-model). Besides the fact that none of it is new, this gem of an interaction with SciHub is indicative of how broken the model of the librarian-as-an-information-broker is:
My friend Xavier sent me the coolest belated Christmas gift ever, a tablecloth printed with the beautiful 1:50,000 swisstopo map of Western Switzerland. He used a service called Spoonflower to custom-print the fabric. Of course I immediately started thinking of all the cool things I could do, notably a wallpaper with contour lines, e.g. of the Niagara escarpment…
This is our 4th winter in Toronto and so far it’s been much milder than the previous years. It snowed again last night, but nothing like the snows of yesteryear.
Keepalive is a piratebox hidden in a boulder in northern Germany. Lighting a fire on its side will generate enough power to bring the server to life and share PDF survival guides over WiFi. I just hope there is a USB port on the boulder to power up the lost wanderers’ devices (and that they remembered how to start a fire without needing a survival guide)…
Unfortunately, TLS/SSL encryption is a fairly complex (and costly) process. While large institutions often manage their own web servers and have complete control over them, many small public libraries resort to web hosting services. Some of these services will arrange for HTTPS certificates at a premium, but the process is often not straightforward. Enter Let’s Encrypt, which opened as a public beta service last December. As a “free, automated, and open certificate authority (CA)”, Let’s Encrypt aims to enable anyone who owns a domain name to receive a trusted certificate at no cost, and more easily than by using commercial certificate authorities.
With the helpful support of Dan Scott, we spent the morning trying to install and run Let’s Encrypt, generate certificates and enable them on our own personal websites and, for one brave librarian, on her live public library page.
The first step was to download and run Let’s Encrypt. We all had Mac laptops and so the following may only apply to Mac OS X. The source code for generating certificates is distributed on GitHub and the following steps were required to make it work, following this tutorial.
Install git. We all had different setups on our laptops. Using Homebrew to download and install git as well as the other required packages worked well. As for me, I found out that recently upgrading the OS to El Capitan broke my Xcode toolchain, including git. This fix found on Stackexchange worked well to restore the Xcode command-line tools without having to go to the trouble of downloading the whole Xcode package:
$ xcode-select --install
Download (git clone) Let’s Encrypt from GitHub:
$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
Install any remaining dependencies. It turns out that letsencrypt-auto checks for missing dependencies every time it is run, so running the following will install everything that is needed using Homebrew (as well as install Homebrew itself if it is not already present).
$ ./letsencrypt-auto --help
Since it checks dependencies at every run and will potentially want to install missing ones in /usr/local and /usr/bin, Let’s Encrypt will request root access every time it is run. Not only is this a bit unexpected and unsettling, because root access should not be required to generate certificates that will not be applied locally, but it will cause other problems when retrieving the generated certificate, as we will see below.
Once Let’s Encrypt was up and running on our laptops, the next step was to generate the certificates. By default, letsencrypt-auto assumes it is run on the same server that is hosting the website, but in our case we wanted to set up encryption on hosted domains. For this, we need to tell Let’s Encrypt to only generate the certificates, which we would then upload to our hosting providers. This is done using the certonly option.
The syntax we used was
$ ./letsencrypt-auto certonly -a manual --rsa-key-size 4096 -d domain.com -d www.domain.com
Note that this covers both domain.com (without www) and www.domain.com with a single request. Again, this will ask for the root password, as discussed above.
The next step is important to ensure that the user requesting a new certificate for a particular domain has legitimate claims to that domain. To this end, Let’s Encrypt will generate a hash string that needs to be copied to a specific file on the server that hosts the domain in question. The hash, the name of the file and the path are provided:
Make sure your web server displays the following content at
http://domain.com/.well-known/acme-challenge/weoEFKS-aasksdSKCKEIFIXCNKSKQwa3d35ds30_sDKIS before continuing:
If you don't have HTTP server configured, you can run the following
command on the target server (as root):
mkdir -p /tmp/letsencrypt/public_html/.well-known/acme-challenge
cd /tmp/letsencrypt/public_html
printf "%s" weoEFKS-aasksdSKCKEIFIXCNKSKQwa3d35ds30_sDKIS.Rso39djaklj3sdlkjckxmsne3a > .well-known/acme-challenge/weoEFKS-aasksdSKCKEIFIXCNKSKQwa3d35ds30_sDKIS
This can be copied over to the web server either using FTP, a web-based file manager, or by opening an SSH connection to the server. If the SSH route is chosen, the commands provided above can be used to generate the challenge file. Note that the hash and filename shown here are examples; use the actual text provided by Let’s Encrypt.
Once the file is live on the website, hit enter to finish the process. Let’s Encrypt will visit the newly created page on our website, check that it’s ours and generate the certificates. If more than one domain were specified (e.g. with and without www), the process is repeated for each domain.
The generated key and files are saved in /etc/letsencrypt/live/domain.com. However since Let’s Encrypt was run as root (see above), these files belong to the root user and trying to display them returns a Permission denied error. To display them, use sudo:
$ sudo ls -l /etc/letsencrypt/live/domain.com
lrwxr-xr-x 1 root wheel 33 Feb 5 22:03 cert.pem -> ../../archive/domain.com/cert1.pem
lrwxr-xr-x 1 root wheel 34 Feb 5 22:03 chain.pem -> ../../archive/domain.com/chain1.pem
lrwxr-xr-x 1 root wheel 38 Feb 5 22:03 fullchain.pem -> ../../archive/domain.com/fullchain1.pem
lrwxr-xr-x 1 root wheel 36 Feb 5 22:03 privkey.pem -> ../../archive/domain.com/privkey1.pem
Note that if you specified more than one domain when running letsencrypt-auto, Let’s Encrypt will generate a single certificate covering all specified domains. It will appear in /etc/letsencrypt/live under the name of the first domain you specified.
As can be seen, the files stored in /etc/letsencrypt/live are actually symlinks to files stored in /etc/letsencrypt/archive. Using sudo every time we want to access those files is bothersome, so we can use chown to change their owner back to us:
$ sudo chown -R username /etc/letsencrypt/
Update: Enabling TLS/SSL on hosted websites
The next and final step is to copy the contents of the generated keys into the SSL setup interface on our web host. Unfortunately, it turned out that none of us had this functionality turned on by our service providers, and we ended up having to write to them to request that SSL be enabled.
I asked my provider to enable SSL on my domain, and after a week or so they wrote back saying that they had not only made the necessary configuration changes to allow me to upload my own certificates, but also implemented a cpanel extension allowing their customers to generate their own certificates without going through the hassle described above! This is excellent news, and I shall soon try it out for my other domains, but for now I wanted to try loading the certificates I had generated. Here’s how I did it (my hosting admin interface uses cpanel 54.0.15).
The previous version of this post linked to this post (in German), which provides a screenshot of a different configuration interface.
Under Security -> SSL/TLS, I chose the “Install and Manage SSL for your site (HTTPS)” option, which installs the certificate and key in a single step. I then selected my main domain name in the drop-down menu, and copied the contents of cert.pem into the field labeled CRT and those of privkey.pem into the KEY field. I left the CABUNDLE field blank and hit “Install Certificate”.
On Mac OS X, piping anything to pbcopy will place it in the Clipboard, ready to be pasted anywhere. So this is how I copied the contents of the certificate file before pasting it on the cpanel form:
$ cat cert.pem | pbcopy
And that was it! I got a nice confirmation message that the certificate was installed, detailing all domain names it was covering. It also helpfully listed other domain names I have pointing to my site but not covered by the certificate, warning me that using them will cause browsers to raise a security issue.
I’m using WordPress to manage this blog, and it needed to be reconfigured to serve all inserted images over HTTPS, thus avoiding mixed content issues.
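I won’t claim this is how I had to do it on every setup, but one way to rewrite embedded image URLs in bulk, assuming wp-cli is available on the server, is its search-replace command (run with --dry-run first):

```shell
# Hypothetical example: rewrite http:// references to https:// across
# the WordPress database. --dry-run reports what would change without
# actually writing anything; drop it to apply the changes.
wp search-replace 'http://timtom.ch' 'https://timtom.ch' --dry-run
```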
Once this was done, my browser started rewarding me with a nice padlock icon on all my pages, confirming that I had successfully enabled HTTPS on my domain! I also ran it through an SSL checker for good measure.
The availability of Let’s Encrypt as a free and open alternative to commercial certificate authorities is an important step towards a more secure Internet. However, the current beta version of Let’s Encrypt still requires some familiarity with command-line interfaces, web development tools and an understanding of how TLS/SSL works. Better documentation and a more user-friendly interface will certainly go a long way in making the process easier. The necessity to run the client as root is another barrier that will hopefully be lifted as the software evolves. Finally, even though generating certificates is now freely accessible, setting them up on hosted websites still requires the service providers to activate this option.
There is an extension for cpanel that claims to allow end-users to easily setup Let’s Encrypt certificates on their website. Maybe as demand grows, hosting providers will begin enabling this extension for their customers and HTTPS will then truly become an easy option for everyone to use. Since my hosting provider recently enabled this option, I plan to try it out soon and will report back if I do.
Thanks a lot to Dan and the OLA Super Conference Hackathon organisers and facilitators, as well as the other attendees with whom I worked on this project! I certainly learned a lot.
Several takes on the #DeleteAcademiaEdu thing. First off, let’s not forget that academia.edu is a (for-profit) social network, not a repository. In the academic rat race, every means to get one’s work out there is justified, so deleting profiles might be a luxury only established folks can afford. Also, not all researchers have access to an institutional repository. Personally, I kept my (few) publications on my former institution’s IR and only linked to them from my academia.edu profile. This is the approach I recommend when asked.