A couple of days ago I wrote a post about Veewee, the automated boxgrinder for VirtualBox boxes. But if you had no idea what Veewee was, all that detail wouldn’t have made much sense to you. So I threw together a quick screencast on YouTube. Here it is!
One of my last projects at SecondMarket was to automate and rebuild the Jenkins infrastructure. We’d previously had a static setup in the NYC office with a build master and three slaves that ran all the time, but this handled developer check-in storms very poorly. For example, when developers were trying to make code cutoff for a feature, many builds would be queued for lack of available executors. But at other times, these agents would be completely idle. It made more sense to move the entire setup to the cloud and implement some kind of auto-scaling for Jenkins. Continue reading →
Apologies in advance if you’re not interested in a post about the guts of Opscode Chef.
I recently started to adopt Bryan Berry’s application & library cookbook model as outlined in his excellent and funny blog post, "How to Write Reusable Chef Cookbooks, Gangnam Style". But I quickly ran into a blocker: people are trying to solve problems in Chef’s compile phase rather than its execute phase. Perhaps this calls into question the entire viability of compile-phase providers like chef_gem. Continue reading →
As a user, I’ve always been impressed with Atlassian’s products for software development, issue tracking and documentation. For companies that take these things seriously, JIRA, GreenHopper and Confluence are quickly becoming the go-to products, and with good reason: the products are easy to get started with but have the enterprise features that allow a company to customize workflows as its business changes. I hate to slam open-source products, but just try doing what JIRA does with Bugzilla or Trac.
The products themselves, though, can be a nightmare to install, despite the fact that they are mostly just Java web applications living in a WAR file. The products have improved immensely from the days when setting them up involved hacking up a multitude of XML files in WEB-INF (though there still is some of that), and it’s still annoying that Atlassian doesn’t support running the applications as unexploded WARs within Tomcat or another servlet container, probably because of all that XML surgery. All that aside, though, it’s satisfying when everything is working together and users get single sign-on across the entire Atlassian suite thanks to the magic of Crowd, Atlassian’s SSO directory server.
Last week, I released a set of Chef cookbooks I wrote at SecondMarket to ease the installation of the Atlassian tools on a server. I’m still looking to automate more parts of this, including the ability to edit the aforementioned XML files in-place in an idempotent way, so pull requests against our GitHub repo would be welcome.
Special Note on Using Atlassian Products in the Amazon Cloud
I should also mention that my first attempt to set up Atlassian’s products using Amazon Relational Database Service (RDS) as a backing store was a failure. To spare you the pain of finding this out yourself, I’ll just mention the reason: Crowd, JIRA and Confluence expect MySQL to be configured with the READ-COMMITTED transaction isolation level, which in turn requires MySQL to use row-based binary logging (binlog_format=ROW). Unfortunately, binlog_format is not a parameter you can configure in RDS’s DB Parameter Groups, for obvious reasons: it would affect all other clients on that MySQL instance. This has been confirmed with Amazon support, so JIRA/Crowd/Confluence with RDS is a no-go.
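For reference, on a self-managed MySQL server the two settings in question are a couple of lines in my.cnf. This is just a fragment, not a complete config:

```ini
# my.cnf fragment for a self-managed MySQL backing the Atlassian tools.
# These are exactly the knobs RDS will not let you change.
[mysqld]
transaction-isolation = READ-COMMITTED
binlog_format         = ROW
```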
In the previous entry, I made the statement that many of us working in new media don’t have a clue about what’s going to be successful and what’s not. I wanted to expand on this topic with a few key points. At first glance, you could interpret these as being pet peeves. My intention, however, is to set some basic ground rules for success even in a space where tools, technologies and strategies change at the drop of a hat. Continue reading →
Those of you who have been following my journal closely know that I’ve been working on a project at work to migrate our main web and Java cluster from SuSE Linux Enterprise Server 9 to Red Hat Enterprise Linux 5. Well, we did the cut-over tonight and I’m pleased to note that everything pretty much went according to plan. Netcraft will now tell you that CBC.ca is running Apache 2.2.8 on Red Hat. Continue reading →
Some of you are aware that I’m into vintage computers. Sadly, my basement cannot hold all the computers I wish I could actually have – and some of them are forever going to be too big to fit in any man’s house (not to mention “make it past a man’s significant other”).
But why would one actually need a VAX when, these days, one can emulate one on a Linux PC using SIMH? Not only can one emulate a VAX (take your pick: MicroVAX or VAX 11/780) but also a PDP-11, Data General Nova, some ancient Honeywell mainframes I’ve never heard of, or a bunch of other old mainframes or minicomputers.
I have a special nostalgia for the VAX, since I accessed my first real e-mail account at the National Capital Free-Net via a VAX in my dad’s office. On the anniversary of my Dad’s retirement, I’ve decided I’m going to try to get a VAX running in emulation under SIMH – running OpenVMS, no less. Do I know anything about running OpenVMS? Nope, I do not – but I’m going to find out. Yes, I know it’s a nearly obsolete operating system, and DCL is not the most intuitive. But hopefully it should prove to be a little bit amusing at least – wish me luck!
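For the curious, the SIMH side of that experiment boils down to a small config file. A sketch of what I expect to start with – note that ka655x.bin is the MicroVAX 3900 console ROM image, distributed separately from SIMH, and the nvram and disk file names here are placeholders of my own choosing:

```
; vax.ini -- minimal SIMH MicroVAX 3900 setup (sketch, not tested yet)
set cpu 64m                 ; give the simulated VAX 64 MB of memory
load -r ka655x.bin          ; console ROM image (obtain separately)
attach nvr vax.nvram        ; non-volatile RAM file for console settings
set rq0 ra92                ; make the first disk an RA92
attach rq0 openvms.dsk      ; disk image that will hold OpenVMS
boot cpu
```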
(On a completely unrelated note: People are still writing in to comment on the blog post where I got yelled at by Drew of Toothpaste For Dinner for offering an RSS feed. Haha! I’ve moved onto reading xkcd these days … that fellow seems far less uptight, and his comics are more reliably funny. And yes, xkcd has an RSS feed, if you had to ask.)
My VoIP PBX (built on an embedded Linksys NSLU2) blew up tonight with a bad hard disk. Here’s the cheat sheet on how to recover it should it do the same next time.
Replace the hard disk and reboot the NSLU2. Since the network settings are stored in flash, it will come up on the old IP even if the hard disk has failed.
Partition the new hard disk using fdisk; a swap partition is recommended. Format the root partition using mkfs.ext3.
Run turnup disk -i /dev/sda1 -t ext3 to move the rootfs to the disk.
Reboot the NSLU2 and install Optware as follows:
tar -zxvf /tmp/ipkg-opt_0.99.163-9_armeb.ipk
tar -ztvf /tmp/data.tar.gz
tar -zxvf /tmp/data.tar.gz
sed -i 's/\/stable/\/unstable/' ipkg.conf
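That sed step trips people up because the slashes in the path have to be escaped (or you can switch to another delimiter like |). Here’s the idea on a scratch copy – the feed line below is a made-up placeholder, not the real Optware feed URL:

```shell
# Work on a scratch copy; the feed line is a hypothetical placeholder.
printf 'src cross http://feeds.example.com/optware/stable\n' > /tmp/ipkg.conf
# Escape the slashes in the pattern (or use 's|/stable|/unstable|'):
sed -i 's/\/stable/\/unstable/' /tmp/ipkg.conf
cat /tmp/ipkg.conf   # the feed now ends in /unstable
```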
Restore old packages – namely, xinetd, net-snmp, asterisk14, tftp-hpa, esmtp, and all the things that asterisk recommends you install
Reconfigure /opt/etc/xinetd.conf to allow connections from the local LAN.
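The relevant knob is xinetd’s only_from directive in the defaults section. A sketch, with my LAN’s range standing in for yours:

```
# /opt/etc/xinetd.conf -- defaults section (subnet is an example; use your LAN)
defaults
{
        instances       = 10
        log_type        = SYSLOG daemon info
        only_from       = 192.168.1.0/24
}
```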
Restore data from backup – namely, the contents of /opt/tftpboot and /opt/etc/asterisk
Create a startup script for Asterisk because it’s missing in the default package:
#!/bin/sh
ASTERISK_DAEMON=/opt/sbin/asterisk   # adjust if your package puts it elsewhere
if [ -z "$1" ] ; then
    case `echo "$0" | /bin/sed 's:^.*/\(.*\):\1:g'` in
        S??*) rc="start" ;;
        K??*) rc="stop" ;;
        *)    rc="usage" ;;
    esac
else
    rc="$1"
fi
case "$rc" in
    start)
        echo -n "Starting asterisk: "
        $ASTERISK_DAEMON 2>/dev/null &
        ;;
    stop)
        if [ -n "`pidof asterisk`" ] ; then
            echo -n "Stopping asterisk: "
            $ASTERISK_DAEMON -qrx 'stop now'
        fi ;;
    restart) "$0" stop; "$0" start ;;
    *) echo "Usage: $0 (start|stop|restart|usage)" ;;
esac