Every so often, I read a bunch of snark on Twitter about Ruby on Rails and how it “doesn’t scale”. I think these folks are missing the point. Continue reading
Back in June, Chef launched Habitat, the product that I’ve been working full-time on since fall 2015. Habitat is a system for building and running applications in a way that makes them portable across a wide range of infrastructure platforms, anything from bare metal and virtual machines all the way up to PaaS and containers. Along the way, you get clustering, secrets management, service discovery, runtime configuration injection, and a whole host of other features for free. You can learn more about Habitat at habitat.sh.
Today, I’d like to spend a moment and talk about Habitat’s user experience design, and specifically, how we put a great deal of thought into making the command-line UX awesome. Continue reading
I recently installed XenServer 7.0.0 to check out its capabilities around unikernels & running native Docker containers. Unfortunately, one major limitation — at least for lab use — is that XenServer does not let the administrator create a storage repository (SR in XenServer terms) right on the machine, to store ISO images for the various operating systems you’re going to install. Normally it requires that your ISO SR live on either a CIFS or NFS share, which is impractical for the home hobbyist who doesn’t want to maintain yet another piece of infrastructure.
Fortunately, there’s a way you can hack a local ISO SR onto the server. The directions have changed a little bit for XenServer 7.0.0, since by default it sets the LVM groups’ metadata as read-only, which prevents you from adding a volume group to store ISOs. So, in between steps 1 and 2 on the instructions page, you need to edit /etc/lvm/lvm.conf and change the metadata_read_only setting from 1 to 0. Then, after performing step 6, change it back.
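For reference, the toggle can be scripted. This is just a sketch, assuming the stock lvm.conf formatting on XenServer 7.0.0 — check the exact whitespace of the setting in your file before pointing sed at it:

```shell
# Before step 2: allow LVM metadata changes so a new volume group can be created
sed -i 's/metadata_read_only = 1/metadata_read_only = 0/' /etc/lvm/lvm.conf

# ... follow steps 2 through 6 on the instructions page (create the volume
# group and logical volume for ISOs, mount it, and create the ISO SR) ...

# After step 6: lock the metadata back down
sed -i 's/metadata_read_only = 0/metadata_read_only = 1/' /etc/lvm/lvm.conf
```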
A couple of years ago I set up a Raspberry Pi as a cheap AirTunes server using Shairport. In the intervening time, I’ve noticed a few defects with Shairport: high network utilization causes playback to be interrupted, it crashes occasionally, and the volume control synchronization is somewhat laggy.
Unfortunately, the Shairport project has been abandoned in the interim, so I started looking for a fork that I could use instead. Enter Shairport Sync, which is actively maintained and fixes a lot of these problems. I decided to spend a couple of hours packaging it properly for Raspbian and publishing the packages.
If you’re running Raspbian and want to use Shairport Sync, just add the following source to your APT sources:

deb https://dl.bintray.com/juliandunn/deb wheezy main

and then run sudo apt-get install shairport-sync. It should start up automatically, and then you’ll be able to play to a source named “Shairport Sync on [hostname of Pi]” from your iTunes. Happy listening!
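Putting it all together, the setup looks something like the following. The sources.list.d filename is my own choice (any name ending in .list works), and you may also need to import the repository’s signing key if apt complains about an untrusted source:

```shell
# Add the repository from this post to APT's sources
echo 'deb https://dl.bintray.com/juliandunn/deb wheezy main' | \
    sudo tee /etc/apt/sources.list.d/shairport-sync.list

# Refresh the package index and install
sudo apt-get update
sudo apt-get install shairport-sync
```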
In the move to create humane work environments where we don’t finger-point and engage in witch hunts (good), we’ve created systems in which roles & responsibilities are unclear (bad). I find the term “blameless culture” often synonymous with (sometimes willful) lack of clarity around roles & responsibilities. I wish we would stop using it. Continue reading
Dear “Dear GitHub”,
Thanks for your recent open letter about GitHub’s shortcomings. As an occasional contributor to Chef, a large open source project itself, I completely empathize with your position. It totally sucks, for example, when an issue is DoSsed with a wall of unhelpful +1’s, especially by people who think open source creates a contractual obligation of free support and instantaneous bug fixes. I also agree that it sucks that GitHub hasn’t provided you with timely responses about your feedback, even if it’s to tell you “no”.
However, as a product person working for Chef Software, Inc., the company backing the aforementioned open source project, I feel obligated to inform you that, if I were the product manager for GitHub, the answer to your requests would probably be “no”. That’s not because I think your feature requests aren’t legitimate. It’s because they don’t impact paying customers very highly. And as a company whose developers have to eat, GitHub is probably going to prioritize those customers first. Sorry!
I could delve into a detailed analysis of why each feature you’ve asked for doesn’t matter much to GitHub’s paying customer base, but I think it’s pretty obvious. Paying customers use private repositories and/or GitHub Enterprise, and the wild[er]-west aspects of an open-source ecosystem simply don’t exist inside a company. You’re unlikely to see +1-DDoS-type behavior inside private repos, for example.
Don’t get me wrong. GitHub’s success has been built upon its popularity as a platform for open-source collaboration, and part of creating a commercial business upon an open-source foundation involves maintaining a fine balance between paying customers and open-source users. We have the same challenges at Chef. We’re a bit luckier, actually: we’re not trying to directly monetize our product, so we can afford to make the source code available to both our client and server so that anyone can contribute to it. But GitHub’s trying to sell the code behind the site as GitHub Enterprise, so that’s not a viable option for them.
In summary, I think you’re right to highlight GitHub’s lack of response to your feature requests. That’s just not nice. However, I don’t think they’re going to prioritize your actual requests highly. After all, what options do you have? Move Selenium, ember.js, and every other project the open letter signatories work on to BitBucket? Just the fact that you posted your missive to GitHub itself shows how attached you are to the platform. The only reason GitHub will work on your requests is if the media attention over your open letter gets too hot.
Well, I guess maybe your open letter isn’t such a bad idea after all. Carry on?
Your local friendly product guy, Julian
As many of you know, adoption of containers has skyrocketed over the last year or two. Thus far, containers have been used mostly by early adopters, yet over the next several years we can expect widespread enterprise adoption. In fact, I expect the rate of adoption to exceed that of cloud (IaaS services), or virtualization before that. While it took enterprises perhaps a decade to fully plan and implement their virtualization initiatives, we can expect many enterprises to have production container deployments within three to five years. I fear that many of these implementations will have serious problems. Worse still, container technologies, when misused, inherently force us to own bad solutions for far longer.
Bear in mind that I am definitely not calling enterprises out specifically for misuse of container technology. Dismissive sentiments like “and that’s why we can’t have nice things” are not what I’m after. Enterprise adoption merely amplifies, by virtue of scale, the effects of anti-patterns in any technology, and containers are no exception. I am also not here to make fun of Docker for being on the Gartner hype cycle, just short of the peak of inflated expectations. Container technology is sufficiently mature to be useful today, and runtime environments like Kubernetes or Mesosphere are well on their way to becoming widely usable. In a couple of years they may even be classified as “boring technology”, so that no one, not even the late majority or laggards, would feel it risky to adopt.
Rather, this post is simply about the horrifying realization that containerization opens up a whole new playing field for folks to abuse. Many technology professionals in the coming decades will bear the brunt of the mistakes people are making today in their use of containers. Worse, these mistakes will be even more long-lived, because containers — being portable artifacts independent of the runtime — can conceivably survive in the wild far longer than, say, a web application written using JSP, Struts 1.1 and running under Tomcat 3.
Here are a few antipatterns I see out in the wild as container adoption spreads. Continue reading
One of the things that bothers me most about America is that it’s not enough to like something or be good at something. You have to aim to be the best. Apparently, things are not worth doing unless you do them to the extreme, which presumably is why workout regimes like CrossFit are so popular. You’re not really working out unless you’re doing an Olympic-level workout. (Presumably, after you’re seriously injured, you’re not having “real” surgery unless you’re going to the most expensive hospital.) Continue reading
Ever since I became a product manager at Chef in January, I’ve observed a fair amount of confusion and uncertainty in our industry about the role of product management. Although this is my first job as a product manager, I now have enough of a grasp of the responsibilities that I can offer a succinct explanation of the role. (So if you’re reading this, Dad, you now know what the heck I do.)
The other evening I went to a very informative talk by Joe Stein on Apache Mesos at the New York City DigitalOcean Meetup. I have to say that I’m very impressed with DigitalOcean organizing meetups about interesting technology that they use in their stack. Ever since I first saw DigitalOcean at the New York Tech Meetup several years ago, I knew they were going to go places because of their strong developer orientation — a niche that Amazon Web Services has consciously left behind in the pursuit of enterprise cloud adoption.
Although I learned a lot from the talk, something that Joe said at the very end about “no longer needing configuration management like Chef or Puppet” made me roll my eyes yet again. I’ve also heard people say things like, “we have Docker now; we don’t need configuration management.” My reaction really has nothing to do with the fact that I work for Chef. If there truly were a configuration management killer out there, I would love to know about it, and if it were real, I would seriously contemplate quitting my job and going to work for that vendor.
My objection is basically that declaring the death of configuration management is a really naïve way of thinking. Do you still need to configure things when running applications under Mesos? Yes? Then you still need configuration management! Many of these presentations about the new-hotness-funnily-named-hipster-technologies have a slide that looks something like this (I’ll pick on Joe here again for a minute):
This looks like a shell script that you cooked up on your laptop! Moreover, that payload (yourScriptAndStuff.tgz) that Mesos is going to unpack and run in your cluster — how did it get created? More shell scripts!
In sum, I mostly object to the cognitive dissonance I detect in these types of presentations. Let’s see: Mesos is a system that gives me a “data center operating system” to abstract all the compute resources like CPU, memory, and disk. I can simply declare which of those I need, and a bunch of schedulers and executors will pick up my job and run it somewhere appropriate. That’s great. Now how do I actually use this tool? I write a bunch of imperative scripts? What? Why would I want to do that?
Docker suffers from a similar problem. Okay, so you’re giving me a wonderful abstraction called the “container”, which is a portable, self-contained runtime image for my application. I can run it anywhere! That’s awesome. How do I build it? Oh, I have to write a “Dockerfile”, which appears to be a cross between a Makefile, a shell script, and CONFIG.SYS. I’m not sure how that scales when I have 500 applications and 20 versions of each.
The whole point of configuration management is to give you a declarative interface to something you would otherwise have done imperatively, because declarative statements are easier to understand and maintain than imperative ones. And so long as we have computers, we will have things that need configuring. Unless you want to spend your days building things by writing one-off scripts, you need configuration management.
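The difference is easy to demonstrate in a few lines of shell. The function names here are made up for illustration; ensure_line behaves like a very stripped-down configuration management resource, in that it converges a file toward a desired state instead of blindly executing a step:

```shell
# Imperative: appends unconditionally, so running it twice gives you the line twice.
append_line() { echo "$2" >> "$1"; }

# Declarative-ish: states the desired end state and converges to it; running it
# any number of times leaves the file the same. This idempotence is what
# configuration management tools give you, at scale, across every resource type.
ensure_line() { grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"; }

conf_file=$(mktemp)
ensure_line "$conf_file" "metadata_read_only = 0"
ensure_line "$conf_file" "metadata_read_only = 0"
grep -c "metadata_read_only = 0" "$conf_file"   # prints 1: the line appears exactly once
rm -f "$conf_file"
```

Scale that idea up from one line in one file to packages, services, users, and mounts across a fleet, and you have roughly what Chef or Puppet does — and what a directory full of one-off shell scripts does not.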