blarg?

documentation

I gave this talk at FSOSS last week, in which I try to reclaim the term “Social Engineering”, so that it stops meaning “get the receptionist to give you their password” and starts meaning “Measuring community growth and turning that into processes and practices that work.”

I thought it went well, though listening to it I can see I’ve got a couple of verbal tics to work on. Gotta stop using ‘um’ and ‘right’ as punctuation.

Cuban Shoreline

I tried to explain to my daughter why I’d had a strange day.

“Why was it strange?”

“Well… There’s a thing called a cryptocurrency. ‘Currency’ is another word for money; a cryptocurrency is a special kind of money that’s made out of math instead of paper or metal.”

That got me a look. Money that’s made out of math, right.

“… and one of the things we found today was somebody trying to make a new cryptocurrency. Now, do you know why money is worth anything? It’s a coin or a paper with some ink on it – what makes it ‘money’?”

“… I don’t know.”

“The only answer we have is that it’s money if enough people think it is. If enough people think it’s real, it becomes real. But making people believe in a new kind of money isn’t easy, so what this guy did was kind of clever. He decided to give people little pieces of his cryptocurrency for making contributions to different software projects. So if you added a patch to one of the projects he follows, he’d give you a few of these math coins he’d made up.”

“Um.”

“Right. Kind of weird. And then whoever he is, he wrote a program to do that automatically. It’s like a little robot – every time you change one of these programs, you get a couple of math coins. But the problem is that we update a lot of those programs with our robots, too. Our scripts – our robots – run, and then his robots try to give our robots some of his pretend money.”

“…”

“So that’s why my day was weird. Because we found somebody else’s programs trying to give our programs made-up money, in the hope that this made-up money would someday become real.”

“Oh.”

“What did you do today?”

“I painted different animals and gave them names.”

“What kind of names?”

“French names like zaval.”

“Cheval. Was it a good day?”

“Yeah, I like painting.”

“Good, good.”

(Charlie Stross warned us about this. It’s William Gibson’s future, but we still need to clean up after it.)

I bought a new bag.

I’ve come to the conclusion that I shouldn’t buy anything in the wintertime; I spend too much time indoors and it’s bad for my head. After a while I start believing that I should start having things that are nice, and maybe even – dare I say it – fancy, and when you’re a guy in the throes of middle-age that can end poorly.

As a side anecdote: my personal canonical example (is “headcanonical” a word?) comes from late winter about two years ago, when I mentioned to an old friend that I’d been (at 37, with two kids; painfully trite, I know) casually window-shopping for motorcycles. She’s known me forever, and her reply slid in flat between the ribs that special way only an old friend’s can.

“So did your dad ever hug you when you were a kid, or are you going to get one of the really loud ones?”

Painful wince, scene.

Gentlemen, having women in your life who will call you on your bullshit is invaluable. I’m not getting a motorbike.

Which, in fact, is great – all that cabin-fever stir-craziness ends in the spring, because what I really want, every year, isn’t fancy shoes or a motorcycle, it’s to get back on my bike. A few weeks of summer commutes has cemented it, too; I fly past a lot of expensive European metal on my ride in and your Porsche or Ducati doesn’t matter much if everyone in front of you is parked. But on a bike I can blow through traffic like the wind, and in rush hour traffic – and that’s most of the time, downtown – I’m far and away faster than anything else on the road.

Anyway, back to the topic at hand: after a fair bit of screwing around trying to turn my venerable old laptop bag into the messenger bag I actually wanted, I’d decided I needed to solve the problem once and for all.

I’m partial to messenger bags because the kind of riding I tend towards is the “playing-in-traffic” kind, and for that you need any weight you’re carrying to sit as high on your back as possible. It’s hard to cinch the load on a backpack up over you, and the lateral stability on them is usually iffy. They’re just not meant for this kind of work. I love the look of Saddleback Leather’s bags – so beautiful, so utterly impractical – but when spring rolled around I had to own up to the fact that they’re not the right thing. I’m the semi-mythical Scofflaw Cyclist that comes up whenever people talk about traffic, and I needed something for the aggro bike commuting I do every single day. So I laid out my criteria and broadened my search.

My needs turned out to be pretty straightforward:

  • Waterproof for real. Not “resistant”; clean-it-with-a-hose waterproof.
  • Holds a 15″ laptop plus the usual nerd fixins’ plus two days’ clothing.
  • Replaceable straps – that is, the straps can’t be sewn in to the bag.
  • Quick-adjust straps. Gotta be able to cinch it down and step out of it easily.
  • Second support strap, ideally also quick-adjust.
  • Side pockets I can reach without opening the whole bag.
  • Little or no velcro, just because it annoys me.
  • Being able to clip stuff to the sides is a plus, and Molle webbing is nice and everything but
  • if the word “tactical” appears anywhere in the product’s page, close the tab. “Tactical” has become shorthand for “substandard gear aimed at the macho bullshit market”, so when you’re in the market for sturdy, dependable gear this is a huge timesaver. Remember: amateurs study tactics, professionals study logistics.

The replaceable straps part is really important. They’re generally the least-thought-out part of the bag, despite being the most important. Not being able to either get them just right or replace them outright is a deal-breaker.

As beautiful as they are, the Saddleback bags – any leather bags – were disqualified early on, and the strap criteria ruled out all of Crumpler’s products. Maxpedition bags are solid, but they suffer from that mall-commando velcro-and-tiny-pockets-everywhere aesthetic that makes you look like a deflated Rob Liefeld character, so that’s that. They’re like some of the better Targus bags, in that sense; all the ingredients of a great product are there, you can see them, but nobody with any taste cared enough about how they worked or fit together.

I had a couple of strong choices, though. The last candidates to get cut were:

  • The Tom Bihn Ego/Superego, cut for the straps. It’s a nice bag and Tom Bihn sees a lot of love around the office, but bags that hang low off clips generally seem to be designed for casual cyclists and pedestrians.
  • I spent a very long time looking at Acronym’s Third Arm products – this one is just so close to perfect – but $1100 for a messenger bag is indefensible lollerskates.
  • The MEC Velocio, a very strong contender particularly for the price, maxes out at a 13″ laptop and was cut for size & strap reasons.
  • Chrome’s Buran looks great and is well-reviewed, and the seatbelt-buckle strap is compelling, but it falls down on the side pockets and removable strap questions. Chrome makes great bags in general, and the Buran was the last cut. [UPDATE: This was an error – the Buran has removable/adjustable straps that are equivalent to those on the Timbuk2 Especial, and if I were doing this again it would be a tossup; the Buran also meets my requirements.]

The winning candidate was the Timbuk2 Especial Cycling Messenger Bag, which is as close to perfect as I’ve seen. Sits high on the back, waterproof; the strap is great, and the magnetic-clip latches are good enough that I find going back to the old kind pointlessly cumbersome now. Fits a lot if it has to, cinches down if it doesn’t, comfortable, and lifts off the back a little bit to air out, which is quite nice. This plus their extra 3Way phone case for the strap has been making me very happy for about a month now.

There are a few caveats:

  • I generally dislike velcro, but Timbuk2’s “silencer” straps aren’t worth it. A yard of velcro does the job for a fraction of the price. If those straps had incorporated some extra molle-style gear loops I’d have jumped at them – some extra clip-in points under the flap would be welcome – but you’d need two sets to quiet this bag, so I wouldn’t bother.
  • I’ve replaced the stock support strap with $5 worth of straps and buckles from MEC so that I can loosen it up or cinch it down as easily as the main strap. This isn’t a big deal until you’ve got to wear a jacket, but it was worth it. Likewise I’ve added a small strap to the main buckle so that it’s easier to unlatch with gloves.

… but that’s not much, and the result is exactly what I wanted.

horse-castle

A friend of mine has called me a glass-half-broken kind of guy.

My increasingly venerable Nokia N9 has been getting squirrelly for a few months, and since it finally decided its battery was getting on in years it was time for a new phone.

I’m going to miss it a lot. The hardware was just a hair too slow, the browser was just a hair too old and even though email was crisp and as well done as I’ve ever seen it on a small screen, Twitter – despite being the one piece of software that periodically got updates, strangely – was always off in the weeds. Despite all that, despite the storied history of managerial incompetence and market failure in that software stack, they got so many things right. A beautiful, solid UI, an elegant gesture system that you could work reliably one-handed and a device whose curved shape informed your interaction with the software in a meaningful way. Like WebOS before it, it had a consistent and elegantly-executed interaction model full of beautiful ideas and surprisingly human touches that have pretty much all died on the vine.

Some friends have been proposing a hedge-fund model where they follow my twitter feed, scrape it for any piece of technology I express interest in and then short that company’s stock immediately and mercilessly. The reasoning being, of course, that I tend to back underdogs and generally underdogs are called that because of their unfortunate tendency to not win.

So now I own a Nexus 5; do with that information what you will. The experience has not been uniformly positive.

Android, the joke goes, is technical debt that’s figured out how to call 911, and with KitKat it seems like somebody has finally sent help. For a while now Android has been struggling to overcome its early… well, “design process” seems like too strong a term, but some sort of UI-buglist spin-the-bottle thing that seemed to amount to “how can I ignore anyone with any sort of design expertise, aesthetic sensibility or even just matching socks and get this bug off my desk.” KitKat is clearly the point we all saw coming, where Android has pivoted away from being a half-assed OS to start being a whole-assed Google-services portal, and it really shows.

Look: I know I’m a jagged, rusty edge case. I know. But this is what happened next.

As you open the box, you find a protective plastic sheet over the device that says “NEXUS 5” in a faint grey on black. If you don’t peel it off before pushing the power button, the Google logo appears, slightly offset and obscured behind it. It’s not a big thing; it’s trivial but ugly. If either word had been a few millimetres higher or lower it would have been a nice touch. As shipped it’s an empty-net miss, a small but ominous hint that maybe nobody was really in charge of the details.

I signed in with my Google Apps account and the phone started restoring my old apps from other Android installs. This is one of the things Google has done right for a long time; once you see it you immediately think it should have worked that way everywhere the whole time. But I didn’t realize that it restored the earlier version of the software you had on file, not the current one; most of my restored pre-KitKat apps crashed on startup, and it took me a while to understand why.

Once I’d figured that out and refreshed a few of them manually, I set up my work email and decided to see if Google Goggles was as neat as it was the last time I looked. Goggles immediately crashed the camera service, and I couldn’t figure out how to make the camera work again in any app without power-cycling the phone.

So I restarted the phone and poked around at Hangouts a bit; it seems nice enough and works mostly OK, though it could use some judicious copy-editing in the setup phase to sound a little less panopticon-stalkerish. (But we’re all affluent white men here, it’s no big deal, right? Who doesn’t mind being super-easy to find all the time?)

I went to make dinner then, and presumably that’s when the phone started heating up.

Eventually I noticed that I’d lost about a quarter of my battery life over the course of an almost-idle hour, with the battery monitor showing that the mail I’d received exactly none of was the culprit. From what I can tell the Exchange-connection service is just completely, aggressively broken; it looks like if you set up the stock mail client for Exchange and pick “push” it immediately goes insane, checking for mail hundreds of times per second and trying to melt itself, and that’s exciting. But even if you dial it back to only check manually, after a while it just… stops working. A reboot doesn’t fix it, I’ve had to delete and recreate the account to make it work again. Even figuring out how to do that isn’t as easy as it should be; I’ve done it twice so far, one day in. So I guess it’s IMAP and I’ll figure calendars out some other way. We use Zimbra at the office, not Exchange proper, and their doc on connecting to Android hasn’t been updated in two years so that’s a thing. I’m totally fine in this corner, really. Cozy. I can warm my hands on my new phone.

I’ve been using my Bespoke I/O Google Apps accounts since before Google doubled down on this grasping, awful “G+ Or GTFO” policy, and disabling G+ in Apps years ago has turned my first-touch experience with this phone into a weird technical tug-of-war-in-a-minefield exercise. On the one hand, it’s consistently protected me from Google’s ongoing “by glancing at this checkbox in passing you’re totally saying you want a Google+ account” mendacity, but it also means that lots of things on the phone fail in strange and wonderful ways. The different reactions of the various Play $X apps are remarkable. “Play Games” tells me I need to sign up for a G+ account and won’t let me proceed without one, Play Movies and Music seem to work for on-device content, and Play Magazines just loses its mind and starts into a decent imitation of a strobe light.

I went looking for alternative software, but the Play Store reminds me a lot more of Nokia’s Ovi Store than the App Store juggernaut in a lot of unfortunate ways. There are a handful of high-profile apps there that work fast and well, if you can find them. I miss Tweetbot and a handful of other iOS apps a lot, and keep going back to my iPod Touch for them. In what I’m sure is a common sentiment, Tweetbot for Android is looking pretty unlikely at this point, probably because – like the Ovi Store – there are a hundred low-rent knockoffs of the iOS app you actually want available, but developing for Android is a nightmare on stilts and you make no money, so anything worth buying isn’t for sale there.

It’s really a very nice piece of hardware. Fast, crisp, big beautiful screen. Firefox with Adblock Plus is way, way better than anything else in that space – go team – and for that on its own I could have overlooked a lot. But this is how my first day with this phone went, and a glass that’s half-broken isn’t one I’m super happy I decided to keep drinking from.

I may revisit this later. Consider this a late draft. I’m calling this done.

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle

Sometimes somebody says something to me, like a whisper of a hint of an echo of something half-forgotten, and it lands on me like an invocation. The mania sets in, and it isn’t enough to believe; I have to know.

I’ve spent far more effort than is sensible this month crawling down a rabbit hole disguised, as they often are, as a straightforward question: why do programmers start counting at zero?

Now: stop right there. By now your peripheral vision should have convinced you that this is a long article, and I’m not here to waste your time. But if you’re gearing up to tell me about efficient pointer arithmetic or binary addition or something, you’re wrong. You don’t think you’re wrong and that’s part of a much larger problem, but you’re still wrong.

For some backstory, on the off chance anyone still reading by this paragraph isn’t an IT professional of some stripe: most computer languages – including C/C++, Perl, Python, some (but not all!) versions of Lisp, and many others – are “zero-origin” or “zero-indexed”. That is to say, in an array A with 8 elements in it, the first element is A[0], and the last is A[7]. This isn’t universally true, though, and other languages from the same (and earlier!) eras are sometimes one-indexed, going from A[1] to A[8].

While it’s a relatively rare practice in modern languages, one-origin arrays certainly aren’t dead; there’s a lot of blood pumping through Lua these days, not to mention MATLAB, Mathematica and a handful of others. If you’re feeling particularly adventurous Haskell apparently lets you pick your poison at startup, and in what has to be the most lunatic thing I’ve seen on a piece of silicon since I found out the MIPS architecture had runtime-mutable endianness, Visual Basic (up to v6.0) featured the OPTION BASE flag, letting you flip that coin on a per-module basis. Zero- and one-origin arrays in different corners of the same program! It’s just software, why not?
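
(A minimal illustration, since we’re here: this little C sketch – mine, not from any of these languages’ manuals – fakes both conventions in one program by offsetting a pointer, which is more or less the trick a one-origin implementation plays under the hood.)

    #include <stdio.h>

    int main(void) {
        int a[8] = {10, 20, 30, 40, 50, 60, 70, 80};

        /* Zero-origin: the first element is a[0], the last is a[7]. */
        printf("%d %d\n", a[0], a[7]);

        /* One-origin, faked by offsetting the base pointer so that
           b[1]..b[8] alias a[0]..a[7]. (Strictly speaking, forming
           a - 1 is undefined behaviour in ISO C; this is an
           illustration, not a recommendation.) */
        int *b = a - 1;
        printf("%d %d\n", b[1], b[8]);

        return 0;
    }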

All that is to say that starting at 1 is not an unreasonable position at all; to a typical human, thinking about the zeroth element of an array doesn’t make any more sense than trying to catch the zeroth bus that comes by, but we’ve clearly ended up here somehow. So what’s the story there?

The usual arguments involving pointer arithmetic and incrementing by sizeof(struct) and so forth describe features that are nice enough once you’ve got the hang of them, but they’re also post-facto justifications. This is obvious if you take the most cursory look at the history of programming languages; C inherited its array semantics from B, which inherited them in turn from BCPL, and though BCPL arrays are zero-origin, the language doesn’t support pointer arithmetic, much less data structures. On top of that, other languages that antedate BCPL and C aren’t zero-indexed. Algol 60 uses one-indexed arrays, and arrays in Fortran are arbitrarily indexed – they’re just a range from X to Y, and X and Y don’t even need to be positive integers.

So by the early 1960s, there were three different approaches to the data structure we now call an array.

  • Zero-indexed, in which the array index carries no particular semantics beyond its implementation in machine code.
  • One-indexed, identical to the matrix notation people have been using for quite some time. It comes at the cost of a CPU instruction to manage the offset – usability isn’t free. (There’s a sketch of that cost just after this list.)
  • Arbitrary indices, in which the range is significant with regards to the problem you’re up against.
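
To put some arithmetic behind that second bullet, here’s a back-of-the-envelope sketch – my numbers, not any particular compiler’s – of the address calculation each convention implies:

    #include <stdio.h>
    #include <stddef.h>

    /* Illustrative only: the address a compiler must compute for A[i],
       given base address b and element width w. */
    static size_t addr_zero_origin(size_t b, size_t i, size_t w) {
        return b + i * w;       /* no adjustment needed */
    }

    static size_t addr_one_origin(size_t b, size_t i, size_t w) {
        /* b + (i - 1) * w, rewritten as (b - w) + i * w: the runtime
           subtraction disappears, but only because the compiler spends
           its own cycles folding the -w into the base address. */
        return (b - w) + i * w;
    }

    int main(void) {
        /* The 6th element, one-origin, is the same cell as index 5,
           zero-origin: both lines print 1020. */
        printf("%zu\n", addr_zero_origin(1000, 5, 4));
        printf("%zu\n", addr_one_origin(1000, 6, 4));
        return 0;
    }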

So if your answer started with “because in C…”, you’ve been repeating a good story you heard one time, without ever asking yourself if it’s true. It’s not about i = a + n*sizeof(x) because pointers and structs didn’t exist. And that’s the most coherent argument I can find; there are dozens of other arguments for zero-indexing involving “natural numbers” or “elegance” or some other unresearched hippie voodoo nonsense that are either wrong or too dumb to rise to the level of wrong.

The fact of it is this: before pointers, structs, C and Unix existed, at a time when other languages with a lot of resources and (by the standard of the day) user populations behind them were one- or arbitrarily-indexed, somebody decided that the right thing was for arrays to start at zero.

So I found that person and asked him.

His name is Dr. Martin Richards; he’s the creator of BCPL, now almost 7 years into retirement; you’ve probably heard of one of his doctoral students, Eben Upton, creator of the Raspberry Pi. I emailed him to ask why he decided to start counting arrays from zero, way back then. He replied that…

As for BCPL and C subscripts starting at zero. BCPL was essentially designed as typeless language close to machine code. Just as in machine code registers are typically all the same size and contain values that represent almost anything, such as integers, machine addresses, truth values, characters, etc. BCPL has typeless variables just like machine registers capable of representing anything. If a BCPL variable represents a pointer, it points to one or more consecutive words of memory. These words are the same size as BCPL variables. Just as machine code allows address arithmetic so does BCPL, so if p is a pointer p+1 is a pointer to the next word after the one p points to. Naturally p+0 has the same value as p. The monodic indirection operator ! takes a pointer as it’s argument and returns the contents of the word pointed to. If v is a pointer !(v+I) will access the word pointed to by v+I. As I varies from zero upwards we access consecutive locations starting at the one pointed to by v when I is zero. The dyadic version of ! is defined so that v!i = !(v+I). v!i behaves like a subscripted expression with v being a one dimensional array and I being an integer subscript. It is entirely natural for the first element of the array to have subscript zero. C copied BCPL’s approach using * for monodic ! and [ ] for array subscription. Note that, in BCPL v!5 = !(v+5) = !(5+v) = 5!v. The same happens in C, v[5] = 5[v]. I can see no sensible reason why the first element of a BCPL array should have subscript one. Note that 5!v is rather like a field selector accessing a field in a structure pointed to by v.
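
You can still watch the equivalence he’s describing at work in any modern C compiler; a quick illustrative sketch:

    #include <stdio.h>

    int main(void) {
        int v[6] = {11, 22, 33, 44, 55, 66};

        /* C defines v[i] as *(v + i), so the first element naturally
           has subscript zero: v + 0 is just v. */
        printf("%d\n", v[0]);        /* 11 */
        printf("%d\n", *(v + 5));    /* 66 */
        printf("%d\n", v[5]);        /* 66 */

        /* And because addition commutes, *(v + 5) == *(5 + v), which
           is why the notorious 5[v] also compiles and prints 66. */
        printf("%d\n", 5[v]);

        return 0;
    }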

This is interesting for a number of reasons, though I’ll leave their enumeration to your discretion. The one that I find most striking, though, is that this is the earliest example I can find of the understanding that a programming language is a user interface, and that there are difficult, subtle tradeoffs to make between resources and usability. Remember, all this was at a time when everything about the future of human-computer interaction was up in the air, from the shape of the keyboard and the glyphs on the switches and keycaps right down to how the ones and zeros were manifested in paper ribbon and bare metal; this note by the late Dennis Ritchie might give you a taste of the situation, where he mentions that five years later one of the primary reasons they went with C’s square-bracket array notation was that it was getting steadily easier to reliably find square brackets on the world’s keyboards.

“Now just a second, Hoye”, I can hear you muttering. “I’ve looked at the BCPL manual and read Dr. Richards’ explanation and you’re not fooling anyone. That looks a lot like the efficient-pointer-arithmetic argument you were frothing about, except with exclamation points.” And you’d be very close to right. That’s exactly what it is – the distinction is where those efficiencies take place, and why.

BCPL was first compiled on an IBM 7094 – here’s a picture of the console, though the entire computer took up a large room – running CTSS – the Compatible Time Sharing System – that antedates Unix much as BCPL antedates C. There’s no malloc() in that context, because there’s nobody to share the memory core with. You get the entire machine and the clock starts ticking, and when your wall-clock time block runs out that’s it. But here’s the thing: in that context none of the offset-calculations we’re supposedly economizing are calculated at execution time. All that work is done ahead of time by the compiler.

You read that right. That sheet-metal, “wibble-wibble-wibble” noise your brain is making is exactly the right reaction.

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.

Does it get better? Oh, it gets better:

IBM had been very generous to MIT in the fifties and sixties, donating or discounting its biggest scientific computers. When a new top of the line 36-bit scientific machine came out, MIT expected to get one. In the early sixties, the deal was that MIT got one 8-hour shift, all the other New England colleges and universities got a shift, and the third shift was available to IBM for its own use. One use IBM made of its share was yacht handicapping: the President of IBM raced big yachts on Long Island Sound, and these boats were assigned handicap points by a complicated formula. There was a special job deck kept at the MIT Computation Center, and if a request came in to run it, operators were to stop whatever was running on the machine and do the yacht handicapping job immediately.

Jobs on the IBM 7090, one generation behind the 7094, were batch-processed, not timeshared; you queued up your job along with a wall-clock estimate of how long it would take, and if it didn’t finish it was pulled off the machine, the next job in the queue went in and you got to try again whenever your next block of allocated time happened to be. As in any economy, there is a social context as well as a technical context, and it isn’t just about managing cost, it’s also about managing risk. A programmer isn’t just racing the clock, they’re also racing the possibility that somebody will come along and bump their job and everyone else’s out of the queue.

I asked Tom Van Vleck, author of the above paragraph and also now retired, how that worked. He replied in part that on the 7090…

“User jobs were submitted on cards to the system operator, stacked up in a big tray, and a rudimentary system read, loaded, and ran jobs in sequence. Typical batch systems had accounting systems that read an ID card at the beginning of a user deck and punched a usage card at end of job. User jobs usually specified a time estimate on the ID card, and would be terminated if they ran over. Users who ran too many jobs or too long would use up their allocated time. A user could arrange for a long computation to checkpoint its state and storage to tape, and to subsequently restore the checkpoint and start up again.

The yacht handicapping job pertained to batch processing on the MIT 7090 at MIT. It was rare — a few times a year.”

So: the technical reason we started counting arrays at zero is that in the mid-1960s, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

There are a few points I want to make here.

The first thing is that as far as I can tell nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing, the more pathetic and irresponsible that sounds.

Part of the problem is access to the historical record, of course. I was in favor of Open Access publication before, but writing this up has cemented it: if you’re on the outside edge of academia, $20/paper for any research that doesn’t have a business case and a deep-pocketed backer is completely untenable, and speculative or historic research that might require reading dozens of papers to shed some light on longstanding questions is basically impossible. There might have been a time when this was OK and everyone who had access to or cared about computers was already an IEEE/ACM member, but right now the IEEE – both as a knowledge repository and a social network – is a single point of a lot of silent failure. “$20 for a forty-year-old research paper” is functionally indistinguishable from “gone”, and I’m reduced to emailing retirees to ask them what they remember from a lifetime ago because I can’t afford to read the source material.

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.

This isn’t just Worse Is Better, this is “Worse Is All You Get Forever”. How many off-by-one disasters could we have avoided if the “foreach” construct that existed in BCPL had made it into C? How much more insight would all of us have into our code if we’d put the time into making Michael Chastain’s nearly-omniscient debugging framework – PTRACE_SINGLESTEP_BACKWARDS! – work in 1995? When I found this article by John Backus wondering if we can get away from Von Neumann architecture completely, I wondered where that ambition to rethink our underpinnings went. But the fact of it is that it didn’t go anywhere. Changing how you think is hard and the payoff is uncertain, so by and large we decided not to. Nobody wanted to learn how to play, much less build, Engelbart’s Violin, and instead everyone gets a box of broken kazoos.

In truth maybe somebody tried – maybe even succeeded! – but it would cost me hundreds of dollars to even start looking for an informed guess, so that’s the end of that.

It’s hard for me to believe that the IEEE’s membership isn’t going off a demographic cliff these days as their membership ages, and it must be awful knowing they’ve got decades of delicious, piping-hot research cooked up that nobody is ordering while the world’s coders are lining up to slurp watery gruel out of a Stack-Overflow-shaped trough and pretend they’re well-fed. You might not be surprised to hear that I’ve got a proposal to address both those problems; I’ll let you work out what it might be.

Watching

Before they were called cubicles, the prefabricated office furniture we all now take despondently for granted was part of an idea called an “Action Office”. Though they’ve apparently lost their way at Herman Miller, where the idea was born, the idea was, at least in part, that:

[...] during the 20th century, the office environment had changed substantially, especially when considering the dramatic increase in the amount of information being processed. Despite the change in what an employee had to analyze, organize, and maintain on a daily basis, the basic layout of the corporate office had remained largely unchanged, with employees sitting behind rows of traditional desks in a large open room that was devoid of privacy. Propst’s studies suggested that an open environment actually reduced communication between employees, and impeded personal initiative. On this, Propst commented that “one of the regrettable conditions of present day offices is the tendency to provide a formula kind of sameness for everyone.” In addition, the employees’ bodies were suffering from long hours of sitting in one position. Propst concluded that office workers require both privacy and interaction, depending on which of their many duties they were performing.

Action offices, in short, were meant to provide a variety of environments and physical working positions, so people weren’t forced into a single space and position for the whole day. Which, it turns out, is really really bad for you. That’s not the history of office furniture we know, of course – all it quickly became was a cheap way of providing prefab desks and air circulation to shabby, beige and slightly greenishly-lit cubefarms furnished by the lowest bidder, because the invisible hand of the free market likes nothing better than flipping off the proles. But the idea, at least, had a lot of merit.

About three weeks ago, I switched to a standing desk. It’s this bolt-on model, and while I love it, it’s not perfect. My desk has an unfortunate amount of flex to it, making the heavy Ergotron dingus a bit bouncy, but I’ve mostly addressed that by screwing in an extra table leg just under the bracket.

I love it. A lot. I don’t think I’m going to be able to go back to using an office chair.

What moved me to do this was two things. First: after poking around, the best information available suggests that spending ten or fifteen hours a day sitting is approximately as bad for you as smoking, and in a lot of ways worse. The other thing was that, largely out of curiosity, I picked up a Jawbone Up wristband and, doubling down on the metrics tools, a Fitbit One.

Whatever else those Quantified Life dongles claim, the one thing they can do very accurately is tell you how much time you spend doing nothing. And would you look at that, it turns out that I spend… nineteen hours or more of a typical day basically immobile. Um, that can’t be good. I’m going to have to do something about that.

During the first week, you really feel it. All those little muscles in my back that I really hadn’t been using expressed considerable displeasure at being suddenly called back into active duty, and, understandably given the abusive relationship I’ve had with my knees, they were right there in line too. But sometime late in week two, that all settled right down. Even biking to work and lifting stuff around the house, back and knee pains I’ve had for years are going away, my posture is clearly getting better, and that oh-god-it’s-painful-to-stand-up process I used to experience after uncoiling from an hour or four hunched over a terminal just doesn’t happen anymore.

I feel unaccountably strong. I doubt I’m actually any stronger than I was a month ago, but I end my day feeling like I’ve put in a day of real work and I’m looking forward to the bike ride home, rather than feeling like I’m spent and I’ve got to drag my sorry ass across town again, and that’s not nothing.

I don’t know who’s got my chair at MoTo right now, and I don’t care. I think I’m pretty much done with it.

A little while ago, the espresso machine in our office broke down. This doomsday scenario is, and I say this without the least bit of hyperbole, the most catastrophically dire situation that can exist in this or any other possible universe. If the intertubes felt slow for you the last few weeks, that’s probably why.

After a while, I started asking a colleague, Sean Martell, to ‘shop up some old war propaganda every few days, to express our dismay.

So, here you go.

We Need Coffee To Survive

It Can Happen Here

We Can Do It

Mercifully it is now fixed, and productivity should normalize in a day or two.

So, this is a cute trick that’s been making the rounds:

In Firefox, right-click your bookmarks bar and pick “new bookmark”. Call it “Quick Notepad”, and in the Location box, put:

data:text/html,<html contenteditable>

and now when you click on that bookmark, your browser window will basically become Notepad, a very light text editor. File -> Save works great, too.

Perhaps better, if you check the “Load this bookmark in the sidebar” option, that will give you a nice little way of making notes about a tab, though unfortunately what you write there isn’t easy to save.
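
If, like me, you’d rather your impromptu notepad came in a monospace font, the same trick seems happy to take inline styling – this variant works for me in Firefox, though I haven’t tested it anywhere else:

data:text/html,<html contenteditable style="font-family:monospace;padding:2em">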

Keep This Area Clear

Man, how awful is it to see people broken by the realization that they are no longer young. Why are you being cantankerous, newly-old person? It’s totally OK not to be 17 or 23, things are still amazing! Kids are having fun! You may not really understand it, but just roll with it! The stuff you liked when you were 17 isn’t diminished by your creeping up on 40!

This has been making the rounds, a lazy, disappointing article from Wired about the things we supposedly “learned about hacking” from the 1995 almost-classic, Hackers. It’s a pretty unoriginal softball of an article, going for a few easy smirks by cherrypicking some characters’ sillier idiosyncrasies while making the author sound like his birthday landed on him like a cartoon piano.

We need a word for this whole genre of writing, where the author tries far too hard to convince you of his respectable-grownup-hood by burning down his youth. It’s hard to believe that in fifteen years the cycle won’t repeat itself, with this article being the one on the pyre; you can almost smell the smoke already, the odor of burning Brut and secret regrets.

The saddest part of the article, really, is how much it ignores. Which is to say: just about everything else. There’s plenty of meat to chew on there, so I don’t really understand why; presumably it has something to do with deadlines or clickthroughs or word-counts or column inches or something, whatever magic words the writers at Wired burble as they pantomime their editor’s demands and sob into their dwindling Zima stockpile.

I’ve got quite a soft spot in my heart and possibly also my brain for this movie, in part because it is flat-out amazing how many things Hackers got exactly right:

  • Most of the work involves sitting in immobile concentration, staring at a screen for hours trying to understand what’s going on? Check.
  • It’s usually an inside job from a disgruntled employee? Check.
  • A bunch of kids who don’t really understand how severe the consequences of what they’re up to can be, in it for kicks? Check.
  • Grepping otherwise-garbage swapfiles for security-sensitive information? Almost 20 years later most people still don’t get why that one’s a check, but my goodness: check.
  • Social-engineering for that one piece of information you can’t get otherwise, it works like a charm? Check.
  • Using your computer to watch a TV show you wouldn’t otherwise be able to? Golly, that sounds familiar.
  • Dumpster-diving for source printouts? I suspect that for most of my audience “line printers” fit in the same mental bucket as “coelacanth”, and printing anything at all, much less code, seems kind of silly and weird by now, so you’ll just have to take my word for it when I say: very much so, check.
  • A computer virus that can affect industrial control systems, causing a critical malfunction? I wonder where I’ve heard that recently.
  • Abusive prosecutorial overreach, right from the opening scene? You’d better believe, check.

So if you haven’t seen it, Hackers is a remarkable artefact of its time. It’s hardly perfect; the dialog is uneven, the invented slang aged as well as invented-slang always does. Moore’s Law has made anything with a number on the side look kind of quaint, and there’s plenty of that horrible neon-cars-on-neon-highways that directors seem to fall back on when they need to show you what the inside of a computer is doing. But really: Look at that list. Look at it.

For all its flaws, sure, Hackers may not be something you’d hold aloft as a classic. But it’s good fun and it gets an awful lot more right than wrong, and that’s not nothing.

You need a Silpat nonstick cooking mat, a baking tray, an oven and tongs. Turn the oven up to 400°F, but you don’t need to let it finish preheating; this starts from cold. Silpat goes on the tray, bacon goes on the Silpat and it all goes in the oven for 20 to 25 minutes.

No other interaction, no stirring, no splatter, no mess. Pull out the tray when it’s as crispy as you like; I prefer crispy bacon so I aim for the 25 minute mark, but there’s room for debate here. Pick the bacon up and shake off any excess fat, plate your evenly cooked, perfect-all-the-way-across bacon, done. Cleanup is incredibly easy, just pour the grease out and rinse the Silpat and tray with hot water.

This has really revolutionized my bacon-having experience. You should try it.