The Minimum Viable Context


This is not a subtweet; if I thought this should be about you, I’d have said so to your face months ago. If you get all the way through it and still kind of suspect it’s about you, though, you should spend some time looking inward and gear yourself up to deal with whatever you find in there, rattling the chains.

I’ve started and stopped writing this a couple of times now. Some drafts have been didactic, others self-congratulatory. “Blogging isn’t real if it’s not the first draft”, I’ve read somewhere, but I’ve never been able to do that; writing has always been a slog from what I’ve got written to what I can just barely sense I could write. If I wanted to flatter myself I’d wheel out the old Mozart/Beethoven analogy, but that feels too much like fishing for compliments and besides, that garbage was in an early draft too.

So let’s lead with the punchline. Here’s the checklist: does everyone on your team…

  1. have a shared understanding of success?
  2. know what everyone else’s role is, and what they need to do their job well?
  3. know how their work contributes to the team’s success?
  4. know how their team’s success contributes to their own?

If you’re surveying the field from the executive suite and need big-picture, master-class management advice, well. This is not that. Talk to my friends Shappy and Johnath at Raw Signal. If you understand what they’re offering you know better than to look for it here. What I’ve got here is penny-ante table stakes, the difference between a team and a handful of people sharing the same corner of an orgchart. It is not complicated; it should, in theory, be trite. But to borrow a line, the fact is that in the day-to-day trenches of adult existence banal platitudes can have life-or-death importance.

In theory, you’d think hitting 4 out of 4 would be not just easy, but expected. In practice, in my experience, you’ll be lucky to make it to 2.

A few months ago I was asked to help a team out of the weeds. Getting into the details would be a disservice, so I won’t; in the broad strokes, I’m talking about a cross-discipline team of smart, invested people doing an important job. But for whatever reason, something – several somethings, it turned out – had gone really, really wrong. Execution, morale and retention were all going south. Everyone knew it, but nobody was really sure what had happened or what to do about it.

So I talked to a lot of people, read a lot of mailing lists and bugs, and offered some advice.

If you’ve been around the team-dysfunction block before, you know there are plenty of probable causes. Shakeout from a reorg, a company pivoting hard, a team managing some sudden turnover, maybe the organization has grown from everyone being in the same room to nobody even being in the same city. Maybe you’ve hit that critical mass where communicating has suddenly gone from something nobody needed to worry about to something nobody remembers how to do. Maybe the one person who made it work left, maybe it’s just been that way so long nobody remembers the possibility that it could be different.

The advice I had for them was straightforward, a word I love for the veneer of upright nobility it adds to a phrase I could just as easily close out with “simple” or “obvious”. Get everyone into the same room for a few days, preferably away from everyone’s home base. Start the first day by having everyone give a talk about their jobs – not some name-and-title intro, but a deep dive into what their job involves and the information, context and resources they need to do it well. Have some conversations – some public, some private – between team leads and members about personal or professional goals and growth paths.

And then take the roadmap and the entire work backlog for the team and – ideally in the last meeting of that first day – throw it all out, and tell everyone to come back the next day ready to start fresh.

The goal of this exercise was to make all the hidden costs – all the hidden work, all the hidden friction, everything people couldn’t see through the lens of their own disciplines – visible. And then, with that information, to take a hard reset. To narrow the team scope down to one or two tightly focused, high-impact features that could ship soon, and – critically – explicitly stop working on everything else. That sounded a bit dramatic, maybe impossible – I’ve been called worse – but nothing else seemed like it would work at all.

Because when I was asking my questions, the answers I got were mostly about the questions those teammates were asking each other. And it wasn’t hard to spot a common theme.

“If only it weren’t for the people, the goddamned people,” said Finnerty, “always getting tangled up in the machinery. If it weren’t for them, earth would be an engineer’s paradise.” – Kurt Vonnegut, “Player Piano”, 1952

Does everyone on the team understand that when you ask a designer to make a new button, that you’re asking them for a few dozen hours of product and market research, and a few more of design and testing, and not half an hour in Illustrator drawing pretty pictures? Does everyone really get that accommodating that schema change means refactoring a pile of business logic or backup processes? Did you all notice that you were asking for a contractual change with a major partner when you said “just change this string”?

I made those questions up for this post; the real ones were different in the specifics but definitely not in substance. You realize that you’re asking for the entire process, not just the output at the end, right? Why don’t you just?

You’ve seen this. You’ve probably even asked questions like them; I sure have. And left unchallenged, even the mildest case of engineer’s disease will fester; eventually cultural rot sets in. We don’t really have a word for the long decline that happens next, the eventual checking out that happens the moment you clock in; the septic shock of the team’s paralysis, the organ failure of core people ragequitting near the end. But you’ve seen that, too.

“You should focus on a small number of things” and “it helps to understand how your colleagues do their best work” are not exactly going to spur a revolution in technical leadership. I get that. But: don’t mistake the roadmap for the terrain. If you’ve made that plan without a clear, shared idea of where you’re going, how everyone can help you get there, and why you’re going at all? Then it’s hard to see how that will succeed, much less give rise to the kind of work or working environment you can be proud of. So toss it. Do the work of understanding where and who you are, and draw the map from there to somewhere that matters.

I told you this was table stakes, and I was not kidding about that at all. I wanted to help them get to a point where everyone on the team could confidently go 4 for 4 on the list, to get them to necessary so they could launch themselves at sufficient. And now, a couple of months later, I think it worked. They’re not all the way there yet – culture’s got a lot of inertia, and if I ever find a way to hard-pivot a whole org I’ll let you know – but they’re on the way, with a lot of clarity about what they’re doing, how they’re going to get it done together, and why it matters.

So: what about your team? Does everyone on your team have a shared understanding of success? Do you know what everyone else’s role is, and what they need to do their job well? Do you know how your work, and theirs, contributes to the team’s success and to your own?

Or does your team – maybe, possibly, kind of, just – suck at being a team?

You should do something about that. What are you going to do about that?

Trimming The Roster

This is a minor administrative note about Planet Mozilla.

In the next few weeks I’ll be doing some long-overdue maintenance and cleaning out dead feeds from Planet and the various sub-Planet blogrolls to help keep them focused and helpful.

I’m going to start by scanning existing feeds and culling any that error out every day for the next two weeks. After that I’ll go down the list of remaining feeds individually to confirm their authors’ ongoing involvement in Mozilla and ask for tagged feeds wherever possible. “Involved in Mozilla” can mean a lot of things – the mission, the many projects, the many communities – so I’ll be happy to take a yes or no and leave it at that.

The process should be pretty painless – with a bit of luck you won’t even notice – but I thought I’d give you a heads up regardless. As usual, leave a comment or email me if you’ve got questions.

I’m Walking, Yes Indeed

They’re called “walking simulators”, which I guess is a pejorative in some circles, but that certain type of game that’s only a little bit about the conventions of some gaming subgenre – puzzles, platforming, whatever – and mostly about exploration, narrative and atmosphere is one of my favorite things.

Over the last year or two – mostly thanks, I suspect, to the proliferation of free-to-use, high-quality game engines, excellent tutorials and the generally awesome state of consumer hardware – we’ve entered a golden age of this type of game.

One of the things that blogging did for writing as a craft was to free it from the constraints of the industries around it: you don’t need to fit your article to a wordcount or a column-inch slot; you write as much or as little as you think your subject requires, click publish, and that’s OK. It was, and I think still is, generally underappreciated how liberating that has been.

Today the combination of Steam distribution, arbitrary pricing and free-to-use engines has done much the same thing for gaming. Some of the games I’ve listed here are less than half an hour long, others much longer; either way, they’re as long as they need to be, but no more. A stroll through a beautifully-illustrated story doesn’t need to be drawn out, diluted or compressed to fit a market niche precisely anymore, and I thought all of these were a good way to spend however much time they took up.

Plenty of well-deserved superlatives have already been deployed for The Stanley Parable, and it is absolutely worth your time. But two other games from its creators – the free Dr. Langeskov, The Tiger, and The Terribly Cursed Emerald: A Whirlwind Heist and the much longer The Beginner’s Guide – are radically different from it, and both excellent. Dr. Langeskov is brief and polished enough to feel like a good joke; The Beginner’s Guide feels more like exploring the inside of a confession than a game, a unique and interesting experience; I enjoyed them both quite a bit.

Firewatch is, in narrative terms, kind of mechanical – despite its many accolades, you eventually get the sense that you’re turning the handle on the dialogue meat grinder and you know what’s coming out. But it’s still affecting, especially in its quieter moments, and the environment and ambience are unquestionably beautiful; it’s worth playing just to explore. I’d be happy to wander through Firewatch again just to see all the corners of the park I missed the first time around, and there’s a tourist mode in which you can find recordings that explore the production process, which I enjoyed quite a bit more than I’d expected.

“Homesick” is very much the opposite of Firewatch, a solitary and mostly monochromatic struggle through environmental and psychological decay, set in a rotting institution in what we eventually learn is an abandoned industrial sacrifice zone. The story unfolds through unexpected puzzles and mechanisms, and ends up being as much a walkthrough of the experience of mental illness as of the environment. Homesick isn’t a difficult game to play, but it’s a difficult game to experience; I’m cautiously recommending it on those terms, and I don’t know of any game I can compare it to.

“Lifeless Planet” is a slow exploration of a marooned FTL expedition to an alien world that discovers the abandoned ruins of a fifties-era Soviet settlement. It’s not graphically spectacular, but there’s something I found really great about its slow unfolding, the pacing and puzzles of this well, if obliquely, told story. I found myself enjoying it far more than I would have expected.

Another space-exploration type game, though (supposedly?) much more sophisticated, Event[0] was generally very well received – Procedurally generated dialog! An AI personality influenced by the player’s actions! – but I played through it and found it… strangely boring? I suspect my gameplay experience was sabotaged by my Canadianness here, because I went into it knowing that the AI would react to your tone, and it turns out that if you consistently remember your manners the machine does whatever you want. The prime antagonist of the game is this ostensibly-secretive-and-maybe-malevolent AI, but if you say please and thank you it turns out to be about as menacing as a golden retriever. Maybe the only reason I found it boring is that I’m boring? Could be, I guess, but I bet there’s a lesson in there somewhere.

The most striking of the bunch, though, the one that’s really stuck with me and that I absolutely recommend, is Everybody’s Gone To The Rapture, essentially an exploration of a small, inexplicably abandoned English village near an observatory in the aftermath of something Iain Banks once referred to as an “Outside-Context Problem”. It is all of interesting, beautiful and relentlessly human, investing you in not just the huge what-just-happened question but the lives and relationships of the people confronting it and trying to live through it. If walking simulators appeal to you – if exploring a story the way you’d explore an open-world game appeals to you – then I don’t want to tell you anything more about it so that you can experience it for yourself.

I’ve played a few other games I’m looking forward to telling you about – some of the best 2D-platformer and Sierra-like games ever made are being made right now – but that’s for another day. In the meantime, if you’ve got some other games that fit into this genre that you love, I’d love to hear about them.

A Security Question

To my shame, I don’t have a certificate for my blog yet, but as I was flipping through some referer logs I realized that I don’t understand something about HTTPS.

I was looking into the fact that sometimes – about 1% of the time – I see non-S HTTP referers from Twitter’s t.co URL shortener, which I assume means that somebody’s getting man-in-the-middled somehow, and there’s not much I can do about it. But then I realized the implications of my not having a cert.

My understanding of how this works, per RFC 7231, is that:

A user agent MUST NOT send a Referer header field in an unsecured HTTP request if the referring page was received with a secure protocol.

Per the W3C as well:

Requests from TLS-protected clients to non-potentially trustworthy URLs, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.

So, if that’s true and I have no certificate on my site, then in theory I should never see any HTTPS entries in my referer logs? Right?

Except: I do. All the time, from every browser vendor, feed reader or type of device, and if my logs are full of this then I bet yours are too.

What am I not understanding here? It’s not possible, there is just no way for me to believe that it’s two thousand and seventeen and I’m the only person who’s ever noticed this. I have to be missing something.

What is it?

FAST UPDATE: My colleagues referred me to this piece of the puzzle I hadn’t been aware of, and to Francois Marier’s longer post on the subject. Thanks, everyone! That explains it.
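For anyone else puzzling over this, here is a minimal sketch of that puzzle piece as I understand it: the RFC 7231 rule is only the default, and a referring page can declare a referrer policy of its own (a meta referrer tag, for example) that overrides it. The function below is a simplified, hypothetical model of the browser-side decision, not any browser’s actual code.

```python
# A simplified, hypothetical model of the browser's referrer decision.
# Real browsers implement the full Referrer Policy spec; this only sketches
# why an HTTPS page can still show up in a plain-HTTP site's referer logs.

def sends_referer(referring_scheme, target_scheme, policy=None):
    """Return True if a Referer header would be sent for this navigation."""
    # "no-referrer-when-downgrade" is the long-standing default, and matches
    # the RFC 7231 language quoted above.
    policy = policy or "no-referrer-when-downgrade"
    if policy == "no-referrer":
        return False
    if policy == "unsafe-url":
        # The referring page has explicitly opted in: send the referrer
        # even on an HTTPS -> HTTP downgrade.
        return True
    if policy == "no-referrer-when-downgrade":
        return not (referring_scheme == "https" and target_scheme == "http")
    return False  # unknown policies: be conservative, in this sketch anyway

# The default behaves the way the spec text suggests...
assert sends_referer("https", "http") is False
# ...but a page that declares something like <meta name="referrer" content="unsafe-url">
# will happily hand its URL to a plain-HTTP destination.
assert sends_referer("https", "http", policy="unsafe-url") is True
```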

SECOND UPDATE: Well, it turns out it doesn’t completely explain it. Digging into the data and filtering out anything referred via Twitter, Google or Facebook, I’m left with two broad buckets. The first is almost entirely made up of feed readers; it turns out that most, and maybe almost all, feed aggregators do the wrong thing here. I’m going to have to look into that, because it’s possible I can solve this problem at the root.

The second is one really persistent person using Firefox 15. Who are you, guy? Why don’t you upgrade? Can I help? Email me if I can help.

What I’m Talking About When I’m Talking About Biking

It’s a funny little quirk of Ontario traffic laws that the fine for killing a cyclist is often less expensive than the bike they were riding when they were killed.

I’m a cyclist. I own bikes for different jobs, I commute to work every day I can on a bike, and ride for fun when I have a chance. There’s no better way to get around this city; you have almost perfect freedom and nothing is faster. If I’m pushing hard my commute at the height of rush hour is 25 minutes. 22 is a personal best, pretty good for a 10 kilometer ride.

I only drive it two or three times a year, but in a car I’ve never been able to make it door to door at rush hour in less than 45. As a cyclist I’m faster and more agile than anything else on the road, but all that speed and freedom comes at one cost: total vulnerability. I am, I think I’ve mentioned, one of those proverbial “scofflaw cyclists”. I can guess what you think about that; I don’t particularly care.

Riding in Toronto means you’re quote-sharing-unquote the road; there are very few genuinely separated bike lanes, and they’re mostly disconnected from each other. All of them are about five feet wide, maybe enough for two cyclists to pass each other if one hugs the curb. If you’re lucky something more robust than a painted line separates you from passing cars, but usually not.

Navigating infrastructure built with your existence as less than an afterthought is never boring; the casual transgressions drivers barely notice themselves committing every day can injure or kill an inattentive cyclist, so space and direction are never things you can just have, or trust. You fight for every inch of it, carve it out and press forward. What you’re given, if anything, is the worst parts of the pavement, where people will pull up to park, unload, and cut you off without so much as a glance. So you take as much of the lane as you can. It might be rough, the route might end up circuitous, but if you don’t assert your right to the lane you’re stuck. You might get in somebody’s way and they might get angry but you do it anyway, because the alternative to being loud and visible is being a statistic.

And if you’ve ever been in an accident bad enough to warrant the police showing up, you know the drill already: it’s always an exercise in figuring out what the cyclist did wrong. Did they have lights on their bike? A bell? Did they signal? Maybe their clothes weren’t visible enough. It must have been something like that, but if not it was probably the cyclist being “too aggressive”. Just to give you a taste of how little the Toronto police think of cyclists, here’s an accident prevention campaign they ran on May 16th of this year by parking an old-timey novelty police car in the Adelaide bike lane. That’s right, a traffic safety awareness campaign forcing cyclists into traffic at rush hour.

For the most part, that’s just how it is. Cops don’t actually think cyclists are people, and the laws don’t actually treat cyclists like people. Cars, yes, definitely! Cyclists, not so much; this is why so many cyclists have bike- or helmet-cams now; without the recording, the police will always – always – find a reason it was the cyclists’ fault. If somebody threatens to kill you with a knife or a gun, police are on the way, sirens blazing. With a car, though? If they show up at all, it’s to tell the cyclist it was their fault.

So as a cyclist, you have to navigate this world full of people who are wearing three thousand pounds of indestructible, gasoline-powered armour and don’t care enough about whether you live or die to glance in their mirrors – motorists who’ve lived in the armour of their privilege for so long they can’t distinguish it from a capital-R Right – but who will get incredibly upset if you do anything that so much as hurts their feelings.

And, Oh My God, they have so many feelings. They’re full to bursting with Driver Feelings. If you so much as startle somebody in a car, those feelings all come out at once. They’ll chase you down, cut you off, roll down their windows and start into the insults, the death threats, it’s amazing.

You’d think being functionally invulnerable would give you some sort of minimal sense of confidence, but my goodness no. That’s not the case at all; I’ve swerved around a car that decided to park in the Bloor bike lane, only to have the person in the SUV who had to brake behind me honk, pull up and start yelling. I’ve had a car on the Danforth start swerving into me like he’s playing chicken. Screaming, swearing, all of it, from people who’ve got three other lanes to choose from and an entire city of infrastructure purpose-built for their vehicles ahead of them. I’ve had a car run a stop sign just so they could catch up with me and yell at me not to run stop signs. I’ve been told, by somebody parked in the bike lane, that I should think of the reputation of cyclists and stay in my lane.

This is a routine experience in this city. You’re riding through a city where four- and eight-lane highways crisscrossing the downtown core are completely normal, but safe bike lanes are somehow “controversial”. Nobody really cares about following the rules, to the point where people get upset if you’re following the wrong ones, and if you’re on a bike those rules aren’t, by design, going to protect you anyway. They’re just what drivers point back to when you’ve made them angry, and they’ll get angry if you break the rules, or if you follow the rules, or if you’re nearby, or if cyclists exist at all. It’s the veil of authority people hide behind when they have power and want to vent their anger at people without it.

You’ve heard this story before, I suspect. With different labels, in a different context maybe, but I bet the broad strokes of it are familiar. My routine bike commute is at the core of my politics, of my understanding of the nature of power.

For me, though, that ride is a choice. Ultimately I can put the bike down. And because I’m an upper-middle-class white man who works in tech, when I put the bike down I get to step into my own, different suit of nearly-invulnerable power armour.

It says a lot about you, I think, if you can look at any imbalance of power and vulnerability and your first reflexive reaction is to talk about how important rules are. I don’t know about you, but that’s not who I want to be. I live and work in a world full of people who can’t put their vulnerabilities aside so casually, who spend all day, every day navigating social and economic structures that are far more pervasive and hostile to them than cars are to me and my adorable little hour per day of commute. People who understand how those “rules” really work where the rubber (and sometimes the skin, and sometimes the skull) meets the road. So the least, the very least I can do is listen carefully to people who can’t put down their gender, their disability or the color of their skin, who suffer the whims of those oppressive, marginalizing systems, and try to understand not just the problem or grievance they’re facing right now, but the architectures that give those problems their durability, their power. And to do the day-in, day-out work of understanding my own blind spots and taking responsibility for the spaces and systems around me.

It’s not super-convenient for me personally, to be honest. It takes me a bit longer to get places or find a place to park. But this is the job.

Nerd-Cred Level-Up


In 2007 I was an extra in an indie zombie movie called “Sunday Morning” that featured Ron Tarrant. Tarrant starred with Mark Slacke in a 2010 short called “The Painting In The House”; Slacke, in turn, played a role in Cuba Gooding Jr.’s “Sacrifice”. Gooding, of course, played a role in A Few Good Men, as did Kevin Bacon.

Recently, I co-authored a paper with Greg Wilson – “Do Software Developers Understand Open Source Licenses?”, whose principal authors are Daniel Almeida and Gail Murphy at UBC – that will be presented at ICPC 2017 later this year. Greg Wilson has previously co-authored a paper with Robert Sedgewick, who has co-authored a paper with Andrew Chi-Chih Yao, who has in turn co-authored a paper with Ronald L. Graham.

You can find all of Graham’s many collaborations with Paul Erdős, one of the most prolific mathematicians of the 20th century, on his homepage.

Which is all to say that I now have an Erdős-Bacon number of 9.

I’m unreasonably stoked about that for some reason.

Wooden Shoes As A Service


In international trade, the practice of selling state-subsidized goods far below cost – often as a way of crushing local producers of competing goods – is called “dumping”:

Under the Tariff Act of 1930, U.S. industries may petition the government for relief from imports that are sold in the United States at less than fair value (“dumped”) or which benefit from subsidies provided through foreign government programs. Under the law, the U.S. Department of Commerce determines whether the dumping or subsidizing exists and, if so, the margin of dumping or amount of the subsidy; the USITC determines whether there is material injury or threat of material injury to the domestic industry by reason of the dumped or subsidized imports.

To my knowledge there’s not much out there in the way of comparable prohibitions around services. Until recently, I think, the idea wouldn’t have made much sense. How do you “dump” services? You couldn’t, particularly not at any kind of scale.

If you put your black hat on for a minute, though, and think of commerce and trade agreements as extensions of state policy: another way to put that might be, how do you subject a services-based economy to the same risks that dumping poses to a goods-based economy?

Unfortunately, I think software has given us a pretty good answer to that: you dig into deep pockets and fund aggressively growing, otherwise-unsustainable service companies.

Now a new analysis of Uber’s financial documents suggests that ride subsidies cost the company $2 billion in 2015. On average, the analysis suggests, Uber passengers paid only 41% of the cost of their trips for the fiscal year ended in September 2015.

In other words: given enough subsidy, a software startup can become an attack vector on a services-based economy. A growing gig economy is a sign of extreme economic vulnerability being actively exploited.

I don’t know what to do about it, but I think this is new. Certainly the Canadian Special Import Measures Act only mentions services as a way to subsidize the offending company, not as the thing being sold, and all the recent petitions I can find in both Canada and the U.S. involve actual stuff, nothing delivered or mediated by software. At the very least, this is an interesting, quasi-guerilla way to weaponize money in trans-national economic conflicts.

For industries not yet established, the USITC may also be asked to determine whether the establishment of an industry is being materially retarded by reason of the dumped or subsidized imports.

I have a theory that the reason we’re not calling this out as an act of trade war – the reason we can’t see it at all, as far as I can tell – is that the people worst affected are individuals, not corporations. The people losing out are working on their own, with no way to petition the state for redress at that scale, and the harm done in aggregate is functionally invisible without a top-down view of the field.

It’d be easy to make this sound isolationist and xenophobic, and that’s not what I intend – I like cool things and meeting people from other places, and international trade seems like the way the world gets to have that. But we know to put a stop to that when trade policies turn into weapons by another name. And down here at street level, I don’t see much of a difference between “foreign subsidies artificially undercut the price of steel ingots” and “foreign subsidies artificially undercut the price of cab rides”.

Planet: Secure For Now

[Photo: Elevation]

This is a followup to a followup – hopefully the last one for a while – about Planet. First of all, I apologize to the community for taking this long to resolve it. It turned out to have a lot more moving parts than were visible at first, and I didn’t know enough about the problem’s context to be able to solve it quickly. I owe a number of people an apology for that, first among them Ehsan who originally brought it to my attention.

The root cause of the problem was that HTTPlib2 in Python 2.x doesn’t – and apparently never will – support Server Name Indication, an important part of Transport Layer Security on shared hosts. This is probably not a big deal for anyone who doesn’t need to make legacy web-facing Python utilities interact securely with modernity, but… well. It me, as the kids say. Here we are.

For some context, our particular SSL problems manifested themselves with error messages like “Error urllib2 Python. SSL: TLSV1_ALERT_INTERNAL_ERROR ssl.c:590” behind the scenes and “internal error” in Planet proper, and I think it’s fair to feel like those messages are less than helpful. I also – no slight on my colleagues in this – don’t have a lot of say in the infrastructure Planet is running on, and it’s equally fair to say I’m not much of a programmer. Python feature-backporting is kind of a rodeo too, and I had a hard time mapping from “I’m using this version of Python on this OS” to “therefore, I have these tools available to me.” Ultimately this combination of OS constraints, library opacity and learning how (and if, where and when) SSL works (or doesn’t, and why) while working in the dated idioms of a language I only half-know didn’t add up to the smoothest experience.
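If SNI is new to you, here is a minimal illustration of what’s at stake, written against Python 3’s standard library rather than anything Planet actually runs; the hostname and the structure are mine, purely for demonstration. A client that names the host it wants during the TLS handshake lets a shared server pick the right certificate; a client that doesn’t leaves the server guessing, and a strictly SNI-only host can abort the handshake with exactly the kind of opaque alert quoted above.

```python
# A minimal illustration of SNI, not Planet's code: the only difference
# between the two handshakes below is whether the client names the host
# it wants (server_hostname) when the TLS connection is set up.
import socket
import ssl

def handshake(host, use_sni=True):
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)     # permissive, old-style context,
    sock = socket.create_connection((host, 443))  # roughly what legacy clients used
    if use_sni:
        tls = ctx.wrap_socket(sock, server_hostname=host)
    else:
        tls = ctx.wrap_socket(sock)  # no SNI: what Python 2's HTTPlib2 effectively sends
    try:
        return tls.version()
    finally:
        tls.close()

print(handshake("example.org", use_sni=True))
# On a virtually-hosted, SNI-only server, it's the no-SNI variant that dies
# with an opaque handshake alert rather than anything self-explanatory.
# print(handshake("example.org", use_sni=False))
```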

I had a few options open to me, or at least I thought I did. Refactoring for Python 3.x was a non-starter, but I spent far more time than I should have trying to rewrite Planet to work directly with Requests. That turned out to be harder than I’d expected, largely because Planet code has a lot of expectations all over it about HTTPlib2 and how it behaves. I mistakenly thought re-engineering that behavior would be straightforward, and I definitely wasn’t expecting the surprising number of rusty edge cases I’d run into when my assumptions hit the real live web.

Partway through this exercise, in a curious set of coincidences, Mike Connor and I were talking about an old line – misquoted by John F. Kennedy as “Don’t ever take a fence down until you know the reason why it was put up” – by G. K. Chesterton, that went:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

[Photo: Infrastructure]

One nice thing about ancient software is that it builds up these fences; they look like cruft, like junk you should tear out and throw away, until you really, really understand that your code, and you, are being tested. That conversation reminded me of this blog post from Joel Spolsky, about The Worst Thing you can do with software, which smelled suspiciously like what I was right in the middle of doing.

There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:

It’s harder to read code than to write it.

This is why code reuse is so hard. This is why everybody on your team has a different function they like to use for splitting strings into arrays of strings. They write their own function because it’s easier and more fun than figuring out how the old function works.

As a corollary of this axiom, you can ask almost any programmer today about the code they are working on. “It’s a big hairy mess,” they will tell you. “I’d like nothing better than to throw it out and start over.”

Why is it a mess?

“Well,” they say, “look at this function. It is two pages long! None of this stuff belongs in there! I don’t know what half of these API calls are for.”

[…] I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.

When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

The first of these fences I hit was discovering that HTTPlib2.Response objects are (somehow) case-insensitive dictionaries, because HTTP headers are, per spec, case-insensitive – though normal Python dictionaries very much are not, even though examining a Response object with basic tools like “print” makes it look just like a perfectly standard Python dict(), nothing to see here, move along. Which definitely has this kind of a vibe to it. Another was hitting what might be a bug in Requests, where usually it gives you “200” as the HTTP “everything’s fine” response, which Python will happily and silently turn into the integer HTTPlib2 is expecting, but sometimes it gives you “200 OK”, which: kaboom.

On the bright side, I did get to spend a few minutes reminiscing fondly to myself about working with Dave Humphrey way back in the day; in hindsight he warned me about this kind of thing when we were working through a similar problem. “It’s the Web. You get whatever you get, whenever you get it, and you’ll like it.”

I was mulling over all of this earlier this week when I decided to take the best (and also worst, and also last) available option: I threw out everything I’d done up to that point and just started lying to the program until it did what I wanted.

This gist is the meat of that effort; the rest of it (swap out the HTTPlib2 calls for Requests and update your error handling) is straightforward, and running in production now. It boils down to taking a Requests object, giving it an imaginary friend, and then standing it on that imaginary friend’s shoulders, throwing a trenchcoat over it and telling it to act like a grownup. The content both calls return is identical, but the supplementary data – headers, response codes and so on – isn’t, so using this technique as a shim can make Requests a drop-in replacement for HTTPlib2. On the off chance that you’re facing the same problems Planet was facing, I hope it’s useful to you.
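If you’d rather not read the gist, here is a rough sketch of the shape of that trick (not the production code, and the names are mine): wrap what Requests gives you in something that walks and talks enough like an HTTPlib2 response that the rest of Planet never notices the swap.

```python
# A rough sketch of the trenchcoat trick, not the production gist:
# dress a Requests response up as the (response, content) pair that
# code written against HTTPlib2 expects to receive.
import requests

class ImaginaryFriend(dict):
    """Stands in for httplib2.Response: a dict of lower-cased headers
    that also carries .status and .reason attributes."""
    def __init__(self, resp):
        dict.__init__(self, {k.lower(): v for k, v in resp.headers.items()})
        self.status = resp.status_code   # already an int, so no "200 OK" surprises
        self.reason = resp.reason

def request(uri, method="GET", body=None, headers=None):
    """Drop-in-ish stand-in for httplib2.Http().request()."""
    resp = requests.request(method, uri, data=body, headers=headers, timeout=30)
    return ImaginaryFriend(resp), resp.content

# Callers that used to do:
#   response, content = httplib2.Http().request(feed_url)
# can keep doing the same thing against this shim instead.
```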

Again, I apologize for the delay in sorting this out, and thank you for your patience.

Mechanized Capital

[Photo: Construction at Woodbine Station]

Elon Musk recently made the claim that humans “must merge with machines to remain relevant in an AI age”, and you can be forgiven if that doesn’t make a ton of sense to you. To fully buy into that nonsense, you need to take a step past drinking the singularity-flavored Effective Altruism kool-aid and start bobbing for biblical apples in it.

I’ll never pass up a chance to link to Warren Ellis’ NerdGod Delusion whenever this posturing about AI as an existential threat comes along:

The Singularity is the last trench of the religious impulse in the technocratic community. The Singularity has been denigrated as “The Rapture For Nerds,” and not without cause. It’s pretty much indivisible from the religious faith in describing the desire to be saved by something that isn’t there (or even the desire to be destroyed by something that isn’t there) and throws off no evidence of its ever intending to exist.

… but I think there’s more to this silliness than meets the rightly-jaundiced eye, particularly when we’re talking about far-future crypto-altruism as pitched by present-day billionaire industrialists.

Let me put this idea to you: one byproduct of putting a processor in everything is that it has given rise to automators as a social class, one with their own class interests, distinct from both labor and management.

Marxist class theory – to pick one framing; there are a few that work here, and Marx is nothing if not quotable – admits the existence of management, but views it as a supervisory, quasi-enforcement role. I don’t want to get too far into the detail weeds there, because the most important part of management across pretty much all the theories of class is the shared understanding that they’re supervising humans.

To my knowledge, we don’t have much in the way of political or economic theory written up about automation. And, much like the fundamentally new types of power structures in which automators live and work, I suspect those people’s class interests are very different than those of your typical blue or white collar worker.

For example, the double-entry bookkeeping of automation is: an automator writes some code that lets a machine perform a task previously done by a human, or ten humans, or ten thousand humans, freeing those humans to… do what?

If you’re an automator, the answer to that is “write more code”. If you’re one of the people whose job has been automated away, it’s “starve”. Unless we have an answer for what happens to the humans displaced by automation, it’s clearly not some hypothetical future AI that’s going to destroy humanity. It’s mechanized capital.

Maybe smarter people than me see a solution to this that doesn’t result in widespread starvation and crushing poverty, but I only see one: an incremental and ongoing reduction in the supply of human labor. And in a sane society, that’s pretty straightforward; it means the progressive reduction of maximum hours in a workweek, women with control over their own bodies, a steadily rising minimum wage and large, sustained investments in infrastructure and the arts. But for the most part we’re not in one of those societies.

Instead, what it’s likely to mean is much, much more of what we already have: terrified people giving away huge amounts of labor for free to barter with the machine. You get paid for a 35-hour week and work 80, because if you don’t the next person in line will and you’ll get zero. Nobody enforces anything like safety codes or labor laws, because once you step off that treadmill you go to the back of the queue, and a thousand people are lined up in front of you to get back on.

This is the reason I think this singularity-infected enlightened-altruism is so pernicious, and morally bankrupt; it gives powerful people a high-minded someday-reason to wash their hands of the real problems being suffered by real people today, problems that they’re often directly or indirectly responsible for. It’s a story that lets the people who could be making a difference today trade it in for a difference that might matter someday, in a future their sitting on their hands means we might not get to see.

It’s a new faith for people who think they’re otherwise much too evolved to believe in the Flying Spaghetti Monster or any other idiot back-brain cult you care to suggest.

Vernor Vinge, the originator of the term, is a scientist and novelist, and occupies an almost unique space. After all, the only other sf writer I can think of who invented a religion that is also a science-fiction fantasy is L Ron Hubbard.
– Warren Ellis, 2008

Planet Migration Shakeout

This note is intended for Planet and its audience, to let you know that while we’re mostly up and running, we’ve found a few feeds that aren’t getting pulled in consistently or at all. I’m not sure where the problem is right now – for example, Planet reports some feeds as returning 403 errors, but server logs from the machines those feeds live on don’t show those 403s as having ever been served up. A number of other feeds show Planet reporting “internal server errors”, but again, no such errors are visible elsewhere.

Which is a bit disconcerting, and I have my suspicions, but I won’t be able to properly dig into this stuff for a few days. Apologies for the degraded state of the service, and I’ll report back with more information as I find it. Tracking bug is #1338588.

Update: Looks like it’s a difference of opinion between an old version of Python and a new version of TLS. I expect this to be resolved Monday.

Second update: I do not expect this to be resolved today. The specific disagreement between Python and TLS describes itself as the less-than-helpful SSL23_GET_SERVER_HELLO:tlsv1 alert internal error whose root cause can be found here; HTTPlib2 does not support SNI, needed to connect to a number of virtually-hosted blogs here in modernity, and it will take some more extensive surgery than expected to get Planet back on its feet.

Third update: The solution we have for this problem is to excise some outdated, vendored-in dependencies from Planet and move it to a recent version of Python. The combination of those things resolves this in staging, but it will take a few days before we can move it to production.