I’m taking a long-awaited vacation next week, in part to attend my friend Kristin’s wedding down in New Jersey, but also for the Streaming Media East conference in Manhattan. My work these days requires a great deal of knowledge about video (and audio) delivery workflows for online media, and I can see many aspects of our operation ramping up in the near term. Flash-based players like the Maven Networks front-end are already in use, and I can see live Flash being only six months off. It seems like Flash is suddenly on everyone’s tongue, and at least at CBC, Windows Media, while still our standard, is no longer the market darling that it once was. Continue reading
A few years ago, when I was still in charge of the Toronto Community Co-Location Project (a project that I’m pretty sure is defunct by now), I was approached by a fellow named Da Shi, who was just starting a company called 3z Canada. He provided some competitive rates for co-location, but we ultimately sublet space from Chris Kirby. Continue reading
VIA Rail offers WiFi service aboard its trains. I’m on my way to Montreal for a business trip, so I thought I’d try it out. My conclusion: stay away!
It seems that VIA is partnered with a company called Parsons to provide the WiFi aboard the train. Judging by the latency, I can only assume that it is a satellite link. Check out what kind of latency you get for $8.95 per 24 hours of access:
C:\>tracert aphrodite.aquezada.com

Tracing route to aphrodite.aquezada.com [126.96.36.199]
over a maximum of 30 hops:

  1     1 ms     1 ms     2 ms  VIA_3454 [192.168.134.1]
  2     *        *     3071 ms  10.0.15.1
  3  4033 ms  3878 ms  3684 ms  link1.parsons.com [188.8.131.52]
  4   987 ms     *     1154 ms  184.108.40.206
  5   758 ms  1586 ms   798 ms  ge-5-0.a0.dlls.broadwing.net [220.127.116.11]
  6   899 ms  1457 ms   739 ms  18.104.22.168
  7   697 ms  1180 ms   339 ms  22.214.171.124
  8   633 ms   640 ms   859 ms  te-8-3-73.car4.Dallas1.Level3.net [126.96.36.199]
  9  2958 ms  2638 ms  1620 ms  ae-13-69.car3.Dallas1.Level3.net [188.8.131.52]
 10  1308 ms  1136 ms  1258 ms  184.108.40.206
 11  2126 ms  2132 ms  2962 ms  5.icore1.CT8-Chicago.teleglobe.net [220.127.116.119]
 12     *        *        *     Request timed out.
 13   944 ms  1572 ms  4343 ms  if-15-0-0-15.mcore3.TTT-Scarborough.teleglobe.net [18.104.22.168]
 14   887 ms   937 ms  1901 ms  if-15-0.core1.TNK-Toronto.teleglobe.net [216.6.98.54]
 15  1235 ms   477 ms   320 ms  ix-1-151.core1.TNK-Toronto.teleglobe.net [216.6.112.22]
 16  1120 ms  1936 ms  2679 ms  22.214.171.124
 17     *     3228 ms  1316 ms  126.96.36.199
 18  1519 ms  1882 ms  3716 ms  h216-235-8-211.host.egate.net [188.8.131.52]

Trace complete.
Unless you’re just doing SSH — save your money!
A few weekends ago, I got up at the crack of dawn and headed out to the first (annual, I hope) Ontario Linux Fest. The admission price of $40 clearly signalled that this was a grassroots gathering of Linux hobbyists, but I’m sure many of those in attendance were also professional developers and/or system administrators. Although some of the talks were more show-and-tell than I would have hoped, I had to keep in mind the target audience, and I still learned a few things, particularly regarding the optimization of high-traffic websites – thanks to Khalid Baheyeldin for his talk on this topic.
WordPress 2.2.3 was released a little while ago, and I finally thought I should say something about the upgrade process as documented. Can anyone think of a reason why I can’t just download the tarball, diff its contents against the previous version’s tarball, apply the resulting patch to my installation, and then run upgrade.php? That’s certainly what I’ve been doing so far, and I have not had any problems. This way I also don’t need to “watch out” for dangling objects in my wp-content directory, since that stuff gets ignored by the patch file. I only wish the WordPress authors would issue a patch file so that I don’t need to generate it myself (and it would be nice if they could also tar up each distribution from a directory named wordpress-x.y.z instead of just wordpress).
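The diff-and-patch routine amounts to something like the following sketch. The version numbers and paths are illustrative, and the toy files here stand in for the real trees you would get from unpacking two consecutive WordPress tarballs:

```shell
set -e
# Stand-ins for the unpacked old and new release trees, plus my live install.
mkdir -p wordpress-2.2.2 wordpress-2.2.3 myblog/wp-content
echo 'version 2.2.2' > wordpress-2.2.2/wp-settings.php
echo 'version 2.2.3' > wordpress-2.2.3/wp-settings.php
cp wordpress-2.2.2/wp-settings.php myblog/      # live install, pre-upgrade
echo 'my theme' > myblog/wp-content/theme.css   # local stuff the patch never touches

# Generate the patch between the two releases
# (diff exits 1 when the trees differ, hence the || true under set -e)...
diff -ruN wordpress-2.2.2 wordpress-2.2.3 > wp-2.2.3.patch || true

# ...and apply it to the live install; then visit upgrade.php as usual.
patch -d myblog -p1 < wp-2.2.3.patch
```

Anything that exists only in the live install, like wp-content, simply isn’t mentioned in the patch, which is the whole appeal.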
In other news, I decided to fly to Dallas, TX this year to attend the LISA conference, sponsored by the USENIX Association. It’s my first time going to LISA; until now, the expense has kept me away. Fortunately, this year I have some Air Canada Aeroplan points to use, so my airfare is essentially free (except for the $131.02 in taxes that I have to pay), but registration is still costing me around $700 and the hotel will be $875. All in all, I expect to spend just short of $2,000 on the conference. Sadly, my employer doesn’t have a policy around paying for conferences. They only reimburse for training programs, and even those need to be approved via a lengthy bureaucratic process. Hopefully my manager and I will, at some point, manage to convince the HR folks that the best “training” you can give an IT person is to keep them up to date with new developments in the relevant field, rather than sending them on meaningless courses.
It was busy in June and July over at $WORK, so I didn’t get a chance to write any entries here. Some of the work I’ve been doing includes turning off all legacy servers (among the legacy servers are only 2 FreeBSD boxes and a handful of HP-UX dinosaurs, but the rest of the production environment is SUSE Linux Enterprise), shepherding the BlueArc storage upgrade through (a huge pallet containing disks, controllers, disk shelves, and a replacement Fibre Channel switch arrived last week), and, of course, planning our upgrade to a modern Apache/Java environment. This will consist of Apache 2.x with a Tomcat 5.5 back end — a far cry from our current Apache 1.x and Tomcat 3.x setup.
One of the major challenges is getting Tomcat 5.5 running on SLES 9 under a Java 1.5.x virtual machine. Actually, it’s not so much the “running” part — I’m sure that since it’s Java, it would just run if I did the old tar zxvf tomcat-5.5.tar.gz && make && make install dance. But we’re after sensible package management here, and that means trying to make SLES 9 behave the standard way. SLES 9 is missing a lot of the “standard” tools that folks use to manage Java apps; it has no jpackage-utils built-in, it doesn’t use the alternatives system, and it can’t talk to Yum repositories out of the box. The work instructions I developed here hack up the base OS a bit to bolt on these tools, but ultimately do the job.
The long-term solution, of course, is to move to either SLES 10 or Red Hat Enterprise Linux 5. SLES 10 ships Tomcat 5.0.x out of the box (just like SLES 9), so on the surface it doesn’t seem like much of an improvement. But it has moved to the alternatives system; jpackage-utils is bundled with the base system, and ZMD (for what it’s worth) will talk to Yum repositories. (Of course, that’s in theory: in practice, as with many Novell tools, it’s broken.) RHEL 5 seems like the obvious answer, since it ships Tomcat 5.5 right out of the box.
Anyway, that’s a bit of a digression. Here are my directions for getting Tomcat 5.5 installed and properly package-managed on SLES 9 with JPackage. Continue reading
It’s pretty clear from my journal entries that I’m not a big fan of all these so-called "Web 2.0" websites (and I really have to use the air quotes every time I say that, because I can’t say it with a straight face). Part of that stems from me being a system administrator who really doesn’t care that much about what people put on their website, as long as it’s not total crap, but part of it is also that I despise marketing-and-sales-style buzzwords. I cringe with the same ferocity when I hear "Web 2.0" as I would if someone said "leverage the value proposition to create a win-win synergy" to me.
My biggest complaint about so-called "Web 2.0" tools is that many of them are solutions looking for problems. I used to work with a developer like this; we’d call his overcomplicated 60-table database schemas "enterprise solutions to non-problems". My most recent pet peeve is Twitter. I guess it isn’t bad enough for people to pollute their LiveJournals with inane banter about what kind of socks they are washing tonight; they also need to do it by "phone, IM, or right here on the web!" (to quote their boundless enthusiasm directly). Does the world really need this?
Actually, wait, I take it back! For all its inanity, Twitter isn’t even sufficiently Web 2.0. The website isn’t http://tw.itt.er/, nor is it labelled twitt
I didn’t think you could do this, but it is possible to export SSL certificates created under a Windows IIS environment for use with Apache. Here’s how to do it:
- On the Windows box, fire up Microsoft Management Console (mmc.exe) and add the Certificates snap-in. Choose Computer Account and then Local Computer.
- Find the certificate that you want to export and choose All Tasks > Export. Follow the Export wizard and make sure you export the private key too. You’ll be asked for a passphrase to use to encrypt the key.
- Take the PFX-format file that was created by the wizard and copy it to your UNIX machine.
- Use OpenSSL to convert the PFX file (which is PKCS#12 format) into PEM:
$ openssl pkcs12 -in whatever.pfx -out pfxoutput.txt
- The PEM output file is basically a concatenation of the private key and the certificate, so use vi to slice it into two files: a .crt for the cert and a .key for the private key.
- If you want to remove the passphrase from the key (highly recommended in a production environment where Apache must start up unattended) then just run:
$ openssl rsa -in encrypted.key -out unencrypted.key
That’s it! You can now use the key and cert in your Apache config files.
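Incidentally, openssl can split the PFX for you, which avoids the vi surgery. A sketch, with made-up filenames and passphrase — the first two commands just fabricate a throwaway PFX to stand in for the one exported from IIS:

```shell
# Purely for demonstration: create a throwaway key/cert and bundle them
# into a PFX, standing in for the wizard's export from the Windows box.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.example.com" \
    -keyout demo.key -out demo.crt -days 1 2>/dev/null
openssl pkcs12 -export -inkey demo.key -in demo.crt \
    -out whatever.pfx -passout pass:exportpass

# Extract just the private key (-nodes drops the passphrase in one step)...
openssl pkcs12 -in whatever.pfx -passin pass:exportpass \
    -nocerts -nodes -out server.key
# ...and just the certificate, no slicing required.
openssl pkcs12 -in whatever.pfx -passin pass:exportpass \
    -clcerts -nokeys -out server.crt
```

The -nocerts/-nokeys pair does the same splitting the vi step does, and -nodes takes care of the passphrase removal at the same time.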
I decided to move my shared hosting from 1&1 to DreamHost. I had some poor experiences with 1&1:
- remapping domains to subdirectories of my $HOME didn’t work at first
- an excessively stringent RLimitCPU meant that certain operations, like migrating from Gallery 1.x to 2.x, would fail and time out
- trying to use 1&1’s built-in photo gallery hosed my site for a day while it rewrote all of my virtual-host-to-subdirectory mappings
I hope hosting with DreamHost will resolve these issues. I would really love to have my own server in a co-lo (i.e. eating my own dog food by having one in the TCCP co-lo, which I run), but I can’t justify the expense.
Devlin’s rebuilding its intranet and moving away from the old Lotus Domino-based directory service. One of the developers on the intranet project asked me if he could authenticate employees against Active Directory instead. He’ll be using the MODx CMS, and would like to authenticate using mod_auth_ldap.
We’ve done this before to authenticate Subversion SCM users, but just as a test. This time I decided to try creating a user in Active Directory that would be used solely to bind to LDAP when doing lookups. I called this user “LDAP User”.
Making this work required a lot of trial and error, and I still haven’t managed to figure out a few things (see below). The first problem I had was confusion over what the CN actually is for this particular user: it’s cn=LDAP User, cn=Users, dc=devlin, dc=ca rather than cn=ldapuser, cn=Users, dc=devlin, dc=ca. ldapuser is just the login ID of the account, not its actual CN.
The other thing I did wrong is that I put quotes around the Require statement, so rather than having
Require group “cn=Devlin Employees,cn=Users,dc=devlin,dc=ca”
the correct syntax is just
Require group cn=Devlin Employees,cn=Users,dc=devlin,dc=ca
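Putting it together, the relevant chunk of the Apache configuration ends up looking something like this. The hostname and location are placeholders (and the password obviously isn’t the real one); the DNs are the ones discussed above, and the directives are mod_auth_ldap’s:

```apache
<Location /intranet>
    AuthType Basic
    AuthName "Devlin Intranet"
    # dc1.devlin.ca is a stand-in for whichever Domain Controller we bind to
    AuthLDAPURL ldap://dc1.devlin.ca:389/cn=Users,dc=devlin,dc=ca?sAMAccountName?sub
    AuthLDAPBindDN "cn=LDAP User,cn=Users,dc=devlin,dc=ca"
    AuthLDAPBindPassword not-the-real-password
    # note: no quotes around the group DN
    Require group cn=Devlin Employees,cn=Users,dc=devlin,dc=ca
</Location>
```

The bind DN does take quotes (it contains a space and is a single argument), which is part of why the unquoted Require group syntax is so easy to get wrong.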
A few things are still broken:
- I can’t figure out why LDAPS isn’t working. Doing searches from the command line using ldapsearch over SSL works fine, but configuring LDAP-SSL within Apache seems to be really tricky. I already have the directives
LDAPTrustedCA certs/sf_issuing.crt
LDAPTrustedCAType BASE64_FILE
in the configuration file, and Apache does say [notice] LDAP: SSL support available, but any attempt to actually use it gives an
[LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]
- I’m not particularly impressed that AuthLDAPBindPassword is stored in cleartext in the configuration file, but there doesn’t seem to be a way of hashing it or otherwise concealing it.
- I haven’t figured out how to enable LDAPS on Domain Controllers that aren’t already HTTPS-enabled, so for now I’m not authenticating against them.