November 10, 2018


Filed under: a/b,analog,documentation,interfaces,life,travel — mhoye @ 2:13 am

Toronto’s oldest subway line, and the newest. This is the view east from the Bloor Station platform:

Subway Tunnel, Bloor Station

… and this is the view north from York University:


November 15, 2016


Filed under: a/b,digital,interfaces,toys — mhoye @ 11:25 am


For no particular reason, here’s a picture of a computer with a 1.3GHz processor, 2 gigabytes of RAM, 32 gigabytes of solid-state internal storage and another 32 gigabytes on a MicroSD card, wifi, bluetooth and HDMI video output, plugged into a standard 3.5″ floppy drive that would store 1.44MB, or approximately 0.002% of 64 gigabytes.

When you account for inflation, at the time of purchase the floppy drive was about $20 more expensive.

November 14, 2016

Switching Sides

Filed under: a/b,digital,documentation,interfaces,linux,mozilla,toys,work — mhoye @ 4:48 pm

Toronto Skyline

I’ve been holding off on a laptop refresh at work for a while, but it’s time. The recent Apple events have been less than compelling; I’ve been saying for a long time that Mozilla needs more people in-house living day to day on Windows machines and talk is cheaper than ever these days, so.

I’m taking notes here of my general impressions as I migrate from a Macbook Pro to a Surface Book and Windows 10.

I’ll add to them as things progress, but for now let’s get started.

  • I don’t think highly of unboxing fetishism, but it’s hard to argue against the basic idea that your very tactile first contact with a product should be a good one. The Surface Book unboxing is a bit rough, but not hugely so; there’s the odd mis-step, like boxes that are harder than necessary to open or tape that tears the paper off the box.
  • I’ve got the Performance Base on the Surface Book here; the very slight elevation of the keyboard makes a surprisingly pleasant difference, and the first-run experience is pretty good too. You can tell Microsoft really, really wants you to accept the defaults, particularly around data being sent back to Microsoft, but it looks like you can reasonably navigate that to your comfort level. Hard to say, obvs.
  • I’m trying to figure out what is a fair assessment of this platform vs. what is me fighting muscle memory. Maybe there’s not a useful distinction to be made there, but considering my notable idiosyncrasies I figure I should make the effort. If I’m going to pretend this is going to be useful for anyone but some alternate-universe me, I might as well. This came up in the context of multiple desktops – I use the hell out of OSX multiple desktops, and getting Windows set up to do something similar requires a bit of config twiddling and some relearning. The thing I can’t figure out here is the organizational metaphor. Apple has managed to make four-fingered swiping around multiple desktops feel like I’m pushing stuff around a physical space, but Windows feels like I’m using a set of memorized gestures to navigate a phone tree. This is a preliminary impression, but it feels like I’m going to need to just memorize this stuff.
  • In a multiple-desktops setting, the taskbar will only show you the things running in your current desktop, not all of them? So crazymaking. [UPDATE: Josh Turnath points out in the comments that you can set this right in the “multitasking” settings menu, where you can also turn off the “When I move one window, move other windows” setting, which is also crazymaking. Thanks, Josh!]
  • If you’re coming off a Mac trackpad and used to tap-to-click, be sure to set the delay setting to “Short delay” or it feels weird and laggy. Long delay is tap, beat, beat, response; if you move the cursor the action vanishes. That, combined with the fact that it’s not super-great at rejecting unintentional input, makes it mostly tolerable but occasionally infuriating, particularly if you’ve got significant muscle memory built up around “put cursor here then move it aside so you can see where you’re typing”, which makes it start selecting text seemingly at random. It’s way better than any other trackpad I’ve ever used on a PC for sure, so I’ll take it, but still occasionally: aaaaaaargh. You’re probably better off just turning tap-to-click off. UPDATE: I had to turn off tap-to-click, because omgwtf.
  • In this year of our lord two thousand and sixteen you still need to merge in quasi-magic registry keys to remap capslock (there’s a sketch of what that merge looks like just after this list). If you want mousewheel scrolling to work in the same directions as two-finger scrolling, you need to fire up RegEdit.exe and know the magic incantations. What the hell.
  • It’s surprising how seemingly shallow the Win10 redesign is. The moment you go into the “advanced options” you’re looking at the same dialogs you’ve known and loved since WinXP. It’s weird how unfinished it feels in places. Taskbar icons fire off on a single click, but you need to flip a checkbox five layers deep in one of those antiquated menus to make desktop icons do the same. The smorgasbords you get for right-clicking things look like a room full of mismanaged PMs screaming at each other.
  • You also have to do a bunch of antiquated checkbox clickery to install the Unix subsystem (there’s a one-line alternative sketched after this list), but complaining about a dated UI when you’re standing up an ersatz Linux box seems like the chocolate-and-peanut-butter of neckbearded hypocrisy, so let’s just agree to not go there. You can get a Linux subsystem on Windows now, which basically means you can have Linux and modern hardware with working power management and graphics drivers at the same time, which is pretty nice.
  • Pairing Apple’s multitouch trackpads with Windows only gets you one- and two-fingered gestures. C’mon. Really?
  • This seems to be the common consensus, after asking around a bit: perplexity that Microsoft would put an enormous (and ultimately successful) effort into re-pinning and hardening the foundations underneath the house, recladding it and putting in an amazing kitchen, but on the 2nd floor the hinges are on the wrong side of the doors and there’s a stair missing on the way to the basement.
  • I’m not surprised the Windows Store isn’t the go-to installer mechanism yet – that’s true on Macs, too – but my goodness, pickings there are pretty slim. Somehow I have to go visit all these dodgy-looking websites to get the basic-utilities stuff sorted out, and it feels like an outreach failure of some kind. This is vaguely related to my next point, that:
  • The selection of what does vs. doesn’t come preinstalled is… strange. I feel like Microsoft has space to do something really interesting here that they’re not capitalizing on for some reason. Antitrust fears? I dunno. I just feel like they could have shipped this with, say, Notepad++ and a few other common utilities preinstalled and made a lot of friends.
  • The breakaway power cables are fantastic. A power brick with fast-charge USB built in and freeing up slots on the machine proper is extremely civilized. You can be sitting with your legs crossed and have the power plugged in, which I sincerely miss being able to do with underpowered 1st-gen Macbook Air chargers back in the mists of prehistory.
  • The Surface Dock is basically perfect. Power, Ethernet, two DisplayPorts and four USB ports over that same breakaway cable is excellent. If you’ve ever used a vintage IBM Thinkpad docking station, this is something you’ve been wishing Apple would make for the better part of a decade.
  • I assumed “Skype Preview” was a preview version of Skype. I wanted (and pay for) the whole thing, so I immediately uninstalled that and installed normal Skype, which it turns out is really outdated-looking and ugly on Win10. I was bewildered about why a premier Microsoft-owned thing like Skype would look ugly on their flagship OS, so I did some research and discovered that “Skype Preview” isn’t a preview version of Skype. It’s the prettified modern Win10 version. So I reinstalled it and uninstalled Skype. I’m sure this is somehow my fault for not understanding this but in my defense: words mean things.
  • This hardware is really nice. The hinge works great, eject to tablet is crisp and works well, reversing it to the easel setup is both surprisingly good and for-real useful.
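
About that capslock remap: it works by merging a “Scancode Map” value into the registry and rebooting. Here’s a minimal sketch of that .reg file, assuming – my assumption, adjust to taste – that you want Caps Lock to act as left Ctrl:

    Windows Registry Editor Version 5.00

    ; Scancode Map layout: 8 zeroed bytes of version/flags, a 4-byte count
    ; of entries (mappings plus a null terminator), then one 4-byte entry
    ; per mapping: new scancode first, replaced scancode second, little-endian.
    ; This single entry maps Caps Lock (0x3A) to Left Ctrl (0x1D).
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
    "Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00

Merge it, reboot, and capslock is ctrl everywhere; delete the value and reboot again to undo it.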
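
And for the Unix subsystem, the checkbox clickery has a one-line alternative: the same feature bit can be flipped from an elevated PowerShell prompt. A sketch, assuming Developer Mode is already switched on (the beta requires it):

    # Enable the Windows Subsystem for Linux feature, then reboot and run
    # bash.exe once to pull down the Ubuntu userland.
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux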

Anyway, this is where I am so far. More notes as I think of them.

Updates:

  • Definitely turn off the two-finger-tap-to-right-click option – if you don’t and you’ve got fat hands like mine, sometimes it will get into a state where everything is a right-click, which is inexplicable and upsetting.
  • I saw my first tripped-over USB-C cable send a Macbook crashing to the floor today. I suspect it will not be the last.

Further updates:

  • It turns out there’s a (baffling!) option to turn a click on the lower right corner of the trackpad into a right-click, which is just super-weird and infuriating if you don’t know it’s there and (apparently?) turned on by default.
  • The trick to reversing mousewheel scrolling only is here, and involves RegEdit: finding all the instances of FlipFlopWheel in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\HID\ and changing them from 0 to 1 (there’s a sketch of a faster way after this list). Very user friendly.
  • A lot of network-related stuff in the Unix subsystem doesn’t work right or at all yet, but my understanding is that this is fixed in the Insider builds.
  • As nice as having the Unix subsystem is, the terminal thing you use to get to it is infuriating retro-bizarro DOS-window garbage. [UPDATE: bwinton has introduced me to Cmder, a console emulator for Windows that is vastly better than the Ubuntu default in every observable respect. Use that instead.]
  • Unexpected but pleasant: CPU in the lid instead of the base means your lap doesn’t overheat.
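
About that FlipFlopWheel hunt: an elevated PowerShell prompt can do the registry walk for you instead of clicking through RegEdit by hand. A sketch, not gospel – it assumes admin rights, and the mouse needs a re-plug (or the machine a reboot) before it takes:

    # Find every HID device key carrying a FlipFlopWheel value and set it
    # to 1, which reverses wheel scrolling to match two-finger scrolling.
    Get-ChildItem HKLM:\SYSTEM\CurrentControlSet\Enum\HID -Recurse -ErrorAction SilentlyContinue |
        Where-Object { $_.Property -contains 'FlipFlopWheel' } |
        ForEach-Object { Set-ItemProperty -Path $_.PSPath -Name FlipFlopWheel -Value 1 }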

Further-er updates:

  • A nice touch: searching for common OSX utility names in the taskbar brings you directly to their Windows counterparts – “grab”, for example, brings you to the snipping tool.
  • It’s surprising how often the “how do I do [something]?” links in the Settings dialog box take you to the same undifferentiated and completely un-navigable Windows 10 support page. Really rookie stuff, like they fired the intern responsible three weeks into their placement and just forgot about it.
  • It’s really frustrating how both of those experiences coexist basically everywhere in this OS. Nice, elegantly-deployed and useful touches in some places, arbitrarily broken or ill-considered jank in others.

Further Updates 4: The Furthening:

  • There’s now a Surface Book User Guide, and it’s got some good information in it. For example, fn-del and fn-backspace adjust screen brightness, something I’ve missed from my Macbook. Also, fn-space for screenshots is nice enough, though the provided snipping tool is better (better than OSX Grab, too.)
  • You can use AutoHotKey scripts to remap what pen-clicking does, turning it into a passable presenter’s tool. Which is kind of neat.

Finally, one of the most upsetting things about Windows 10 is how power management just doesn’t reliably work at all. There’s no safe-sleep; running out of battery means state loss, potentially data loss, and a cold reboot. I’ve had to set it to hibernate on lid close, because sometimes suspend just… doesn’t. Before I did that, I’d put it into my bag with the lid closed and it would mysteriously wake in my backpack, once getting hot enough that it was uncomfortable to touch. Despite the fact that my unmodified default settings say “critical power level is 6% and the action to take here is hibernate”, I routinely see 4%-power-remaining warnings and then hard shutdowns, and if I’m not careful hard reboots afterwards. Honestly, what the hell.

Last update: Well, this is unfortunate: the screen has developed a distinct yellow cast.


Postmortem: Still like Windows 10, but after putting up with that screen yellowing and an increasing number of baffling hangs (and the discovery that the backup software had been silently failing for weeks), this machine got RMA’ed. I’ll have another one soon; hopefully it was an isolated hardware problem, but I guess we’ll see.

October 8, 2016

Pageant Knight

Filed under: a/b,awesome,comics,documentation,lunacy,microfiction,weird — mhoye @ 11:45 pm

Sunset At The Beach

On September 17th, DC “celebrated” what they called “Batman Day”. I do not deploy scare quotes lightly, so let me get this out of the way: Batman is boring. Batman qua Batman – as a hero, as a story and as the center of a narrative framework – all of those choices are pretty terrible. The typical Batman story arc goes something like:

  • Batman is the best at everything. But Gotham, his city, is full of terrible.
  • Batman broods over his city. The city is full of terrible but Batman is a paragon of brooding justice.
  • An enemy of justice is scheming at something. Batman detects the scheme, because he is the World’s Greatest Among Many Other Things Detective and intervenes.
  • Batman is a paragon of brooding justice.
  • Batman’s attempt to intervene fails! Batman may not be the best at everything!
  • Batman broods and/or has a bunch of feelings and/or upgrades one of his widgets.
  • Batman intervenes again, and Batman emerges triumphant! The right kind of punching and/or widgeting makes him the best at everything again.
  • Order is restored to Gotham.
  • Batman is a paragon of brooding justice.

If you’re interested in telling interesting stories, Batman is far and away the least interesting thing in Gotham. So I took that opportunity to talk about the Batman story I’d write given the chance. The root inspiration for all this was a bout of protracted synesthesia brought on by discovering this take on Batman from Aaron Diaz, creator of Dresden Codak, at about the same time as I first heard Shriekback’s “Amaryllis In The Sprawl”.

The central thesis is this: if you really want a Gritty, Realistic Batman For The Modern Age, then Gotham isn’t an amped-up New York. It’s an amped-up New Orleans, or some sort of New-Orleans/Baltimore mashup. A city that’s full of life, history, culture, corruption and, thanks to relentlessly-cut tax rates, failing social and physical infrastructure. A New-Orleans/Baltimore metropolis in a coastal version of Brownback’s Kansas, a Gotham where garbage isn’t being collected and basic fire & police services are by and large not happening because tax rates and tax enforcement have been cut to the bone and the city can’t afford to pay its employees.

Bruce Wayne, wealthy philanthropist and Gotham native, is here to help. But this is Bruce Wayne via late-stage Howard Hughes; incredibly rich, isolated, bipolar and delusional, a razor-sharp business mind offset by a crank’s self-inflicted beliefs about nutrition and psychology. In any other circumstances he’d be the harmless high-society crackpot city officials kept at arm’s length if they couldn’t get him committed. But these aren’t any other circumstances: Wayne is far more than just generous, and he wants to burn this candle at both ends – helping the city through the Wayne Foundation by day, and fighting crime in his own very special, very extralegal way, dressed in a cowl, by night.

And he’s so rich that despite his insistence on dressing up his 55-year-old self in a bat costume and beating people up at night, the city needs that money so badly that to keep his daytime philanthropy flowing, six nights a week a carefully selected group of city employees stage another episode of “Batman, crime fighter”, a gripping Potemkin-noir pageant with a happy ending and a costumed Wayne in the lead role.

Robin – a former Arkham psych-ward nurse, a gifted young woman and close-combat prodigy in Wayne’s eyes – is a part of the show, conscripted by Mayor Cobblepot to keep an eye on Wayne and keep him out of real trouble. Trained up by retired SAS Sgt. Alfred Pennyworth behind Wayne’s back, in long-shuttered facilities beneath Wayne Manor that Wayne knows nothing about, she is ostensibly Batman’s sidekick in his fight against crime. But her real job is to protect Wayne on those rare occasions that he runs into real criminals and tries to intervene. She’s got a long, silenced rifle under that cloak with a strange, wide-mouthed second barrel and a collection of exotic munitions that she uses like a surgical instrument, not only to protect Wayne but more importantly to keep him convinced his fists & gadgets work at all.

She and Harleen Quinzel, another ex-Arkham staffer trained by Alfred, spend most of their days planning strategy. They have the same job; Quinn is the sidekick, shepherd and bodyguard of the former chief medical officer of Arkham. Quinn’s charge is also in his twilight years, succumbing to a manic psychosis accelerated by desperate self-administration of experimental and off-label therapies that aren’t slowing the degeneration of his condition, but sure are making him unpredictable. But he was brilliant once, also a philanthropist – the medical patents he owns are worth millions, bequeathed to Gotham and the patients of Arkham, provided the city care for him in his decline. Sometimes he’s still lucid; the brilliant, compassionate doctor everyone remembers. And other times – mostly at night – he’s somebody else entirely, somebody with a grievance and a dark sense of humor.

So Gotham – this weird, mercenary, vicious, beautiful, destitute Gotham – becomes the backdrop for this nightly pageant of two damaged, failing old men’s game of cat and mouse and the real story we’re following is Robin, Quinn, Alfred and the weird desperation of a city so strapped it has to let them play it out, night after dark, miserable night.

Update, January 2020: Revisiting this in light of this Metafilter thread, I have to admit that I’m still pretty happy about this idea, and even more so in light of that discussion on politics and the pernicious influence of Frank Miller on the entire genre. This modern incarnation of Grimdark Justice Batman is already the product of a hard-right crank’s power fantasies. Why shouldn’t that ultimately be the core identity of Grimdark Justice Batman himself?

September 20, 2015

The Bourne Aesthetic

“The difference between something that can go wrong and something that can’t possibly go wrong is that when something that can’t possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.”

–Douglas Adams

I’ve been trying to get this from draft to published for almost six months now. I might edit it later but for now, what the hell. It’s about James Bond, Jason Bourne, old laptops, economies of scale, design innovation, pragmatism at the margins and an endless supply of breadsticks.

You’re in, right?

Bond was a character that people in his era could identify with:

Think about how that works in the post war era. The office dwelling accountant/lawyer/ad man/salesman has an expense account. This covers some lunches at counters with clients, or maybe a few nice dinners. He flirts with the secretaries and receptionists and sometimes sleeps with them. He travels on business, perhaps from his suburb into Chicago, or from Chicago to Cleveland, or San Francisco to LA. His office issues him a dictaphone (he can’t type) or perhaps a rolling display case for his wares. He has a work car, maybe an Oldsmobile 88 if he’s lucky, or a Ford Falcon if he’s not. He’s working his way up to the top, but isn’t quite ready for a management slot. He wears a suit, tie and hat every day to the office. If he’s doing well he buys this downtown at a specialty men’s store. If he’s merely average, he picks this up at Macy’s, or Sears if he’s really just a regular joe. If he gets sick his employer has a nice PPO insurance plan for him.

Now look at Bond. He has an expense account, which covers extravagant dinners and breakfasts at the finest 4-star hotels and restaurants. He travels on business, to exotic places like Istanbul, Tokyo and Paris. He takes advantage of the sexual revolution (while continuing to serve his imperialist/nationalist masters) by sleeping with random women in foreign locations. He gets issued cool stuff by the office – instead of a big dictaphone that he keeps on his desk, Bond has a tiny dictaphone that he carries around with him in his pocket! He has a work car – but it’s an Aston Martin with machine guns! He’s a star, with a license to kill, but not management. Management would be boring anyways, they stay in London while Bond gets to go abroad and sleep with beautiful women. Bond always wears a suit, but they’re custom tailored of the finest materials. If he gets hurt, he has some Royal Navy doctors to fix him right up.

In today’s world, that organization man who looked up to James Bond as a kind of avatar of his hopes and dreams no longer exists.

Who is our generation’s James Bond? Jason Bourne. He can’t trust his employer, who demanded ultimate loyalty and gave nothing in return. In fact, his employer is outsourcing his work to a bunch of foreign contractors who presumably work for less and ask fewer questions. He’s given up his defined benefit pension (Bourne had a military one) for an individual retirement account (safe deposit box with gold/leeching off the gf in a country with a depressed currency). In fact his employer is going to use him up until he’s useless. He can’t trust anyone, other than a few friends he’s made on the way while backpacking around. Medical care? Well that’s DIY with stolen stuff, or he gets his friends to hook him up. What kinds of cars does he have? Well no more company car for sure, he’s on his own on that, probably some kind of import job. What about work tools? Bourne is on his own there too. Sure, work initially issued him a weapon, but after that he’s got to scrounge up whatever discount stuff he can find, even when it’s an antique. He has to do more with less. And finally, Bourne survives as a result of his high-priced, specialized education. He can do things few people can do – fight multiple opponents, hotwire a car, tell which guy in a restaurant can handle himself, speak multiple languages and duck a surveillance tail. Oh, and like the modern, (sub)urban professional, Bourne had to mortgage his entire future to get that education. They took everything he had, and promised that if he gave himself up to the System, in return the System would take care of him.

It turned out to be a lie.

We’re all Jason Bourne now.

posted by wuwei at 1:27 AM on July 7, 2010

I think about design a lot these days, and I realize that’s about as fatuous an opener as you’re likely to read this week so I’m going to ask you to bear with me.

If you’re already rolling out your “resigned disappointment” face: believe me, I totally understand. I suspect we’ve both dealt with That Guy Who Calls Himself A Designer at some point, that particular strain of self-aggrandizing flake who’s parlayed a youth full of disdain for people who just don’t understand them into a career full of evidence they don’t understand anyone else. My current job’s many bright spots are definitely brighter for his absence, and I wish the same for you. But if it helps you get past this oddly-shaped lump of a lede, feel free to imagine me setting a pair of Raybans down next to an ornamental scarf of some kind, sipping a coffee with organic soy ingredients and a meaningless but vaguely European name, writing “Helvetica?” in a Moleskine notebook and staring pensively into the middle distance. Does my carefully manicured stubble convey the precise measure of my insouciance? Perhaps it does; perhaps I’m gazing at some everyday object nearby, pausing to sigh before employing a small gesture to convey that no, no, it’s really nothing. Insouciance is a French word, by the way. Like café. You should look it up. I know you’ve never been to Europe, I can tell.

You see? You can really let your imagination run wild here. Take the time you need to work through it. Once you’ve shaken that image off – one of my colleagues delightfully calls those guys “dribble designers” – let’s get rolling.

I think about design a lot these days, and I realize that’s about as fatuous an opener as you’re likely to read this week so I’m going to ask you to bear with me.

Very slightly more specifically I’ve been thinking about Apple’s latest Macbook, some recent retrospeculation from Lenovo, “timeless” design, spy movies and the fact that the Olive Garden at one point had a culinary institute. I promise this all makes sense in my head. If you get all the way through this and it makes sense to you too then something on the inside of your head resembles something on the inside of mine, and you’ll have to come to your own terms with that. Namasté, though. For real.

There’s an idea called “gray man” in the security business that I find interesting. They teach people to dress unobtrusively. Chinos instead of combat pants, and if you really need the extra pockets, a better design conceals them. They assume, actually, that the bad guys will shoot all the guys wearing combat pants first, just to be sure. I don’t have that as a concern, but there’s something appealingly “low-drag” about gray man theory: reduced friction with one’s environment.

– William Gibson, being interviewed at Rawr Denim

At first glance the idea that an Olive Garden Culinary Institute should exist at all squats on the line between bewildering and ridiculous. They use maybe six ingredients, and those ingredients need to be sourced at industrial scale and reliably assembled by a 22-year-old with most of a high-school education and all of a vicious hangover. How much of a culinary institute can that possibly take? In fact, at some remove the Olive Garden looks less like a restaurant chain than a supply chain that produces endless breadsticks; there doesn’t seem to be a ton of innovation here. Sure, supply chains are hard. But pouring prefab pomodoro over premade pasta, probably not.

Even so, for a few years the Tuscan Culinary Institute was a real thing, one of the many farming estates in Tuscany that have been resurrected to the service of regional gastrotourism, booked by the company for a few weeks a year. Successful managers of the Garden’s ersatz-italian assembly lines could enjoy Tuscany on a corporate reward junket, and at first glance amused disdain for the whole idea would seem to be on point.

There’s another way to look at the Tuscan Culinary Institute, though, that makes it seem valuable and maybe even inspired.

One trite but underappreciated part of the modern mid-tier supply-chain-and-franchise engine is how widely accessible serviceable and even good (if not great or world-beating) stuff has become. Coffee snobs will sneer at Starbucks, but the truck-stop tar you could get before their ascendance was dramatically worse. If you’ve already tried both restaurants in a town too remote to be worth their while, a decent bowl of pasta, a bottle of inoffensive red and a steady supply of garlic bread starts to look like a pretty good deal.

This is one of the rare bright lights of the otherwise dismal grind of the capitalist exercise, this democratization of “good enough”. The real role of the Tuscan Culinary Institute was to give chefs and managers a look at an authentic, three-star Tuscan dining experience and then ask them: with what we have to hand at the tail end of this supply chain, the pasta, the pomodoro, the breadsticks and wine, how can we give our customers 75% of that experience for 15% of the cost?

It would be easy to characterize this as some sort of corporate-capitalist co-option of a hacker’s pragmatism – a lot of people have – but I don’t think that’s the right thing, or at least not the whole picture. This is a kind of design, and like any design exercise – like any tangible expression of what design is – we’re really talking about the expression and codification of values.

I don’t think it’s an accident that all the computers I bought between about 1998 and about 2008 are either still in service or will still turn on if I flip the switch, but everything I’ve bought since lasts two or three years before falling over. There’s nothing magic about old tech, to be sure: in fact, the understanding that stuff breaks is baked right into their design. That’s why they’re still running: because they can be fixed. And thanks to the unfettered joys of standard interfaces some of them are better today, with faster drives and better screens, than any computer I could have bought then.

The Macbook is the antithesis of this, of course. That’s what happened in 2008; the Macbook Pro started shipping with a non-removable battery.

If you haven’t played with one of Apple’s flagship Macbooks, they are incredible pieces of engineering. They weigh approximately nothing. Every part of them seems like some fundamental advance in engineering and materials science. The seams are perfect; everything that can be removed, everything you can carve off a laptop and still have a laptop left, is gone.

As a result, it’s completely atomic, almost totally unrepairable. If any part of it breaks you’re hosed.

“Most people make the mistake of thinking design is what it looks like. People think it’s this veneer – that the designers are handed this box and told, ‘Make it look good!’ That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.” – Steve Jobs

This is true, kind of; it depends on what you believe your scope of responsibility is as a designer. The question of “how a device works” is a step removed from the question of “how does a person engage with this device”; our aforementioned designer-caricature aside, most of us get that. But far more important than that is the question of how the device helps that person engage the world. And that’s where this awful contradiction comes in, because whatever that device might be, the person will never be some static object, and the world is seven billion people swimming in a boiling froth of water, oil, guns, steel, race, sex, language, wisdom, secrets, hate, love, pain and TCP/IP.

Our time is finite, and entropy is relentless: knowing that, how long should somebody be responsible for their designs? Are you responsible for what becomes of what you’ve built, over the long term? Because if you have a better way to play the long game here than “be a huge pile of rocks” you should chisel it into something. Every other thing of any complexity, anything with two moving parts to rub together that’s still usable or exists at all today has these two qualities:

  1. It can be fixed, and
  2. When it breaks, somebody cares enough about it to fix it.

And that’s where minimalism that denies the complexity of the world, that lies to itself about entropy, starts feeling like willful blindness; design that’s a thin coat of paint over that device’s relationship with the world.

More to the point, this is why the soi-disant-designer snob we were (justly and correctly) ragging on at the beginning of this seemingly-interminable-but-it-finally-feels-like-we’re-getting-somewhere blog post comes across as such a douchebag. It’s not “minimalist” if you buy a new one every two years; it’s conspicuous consumption with chamfered edges. Strip away that veneer, that coat of paint, and there are the real values designer-guy and his venti decaf soy wankaccino hold dear.

Every day I feel a tiny bit more like I can’t really rely on something I can’t repair. Not just for environmentalism’s sake, not only for the peace of mind that standard screwdrivers and available source offer, but because tools designed by people who understand that something might fall over are so much more likely to have a way built in to stand it back up. This is why I got unreasonably excited by Lenovo’s retro-Thinkpad surveys, despite their recent experiments in throwing user security overboard wearing factory-installed cement boots. The prospect of a laptop with modern components that you can actually maintain, much less upgrade, has become a weird niche crank-hobbyist novelty somehow.

But if your long game is longer than your workweek or your support contract, this is what a total-cost-accounting of “reduced friction with your environment” looks like. It looks like not relying on the OEM, like DIY and scrounged parts and above all knowing that you’re not paralyzed if the rules change. It’s reduced friction with an uncertain future.

I have an enormous admiration for the work Apple does, I really do. But I spend a lot of time thinking about design now, not in terms of shapes and materials but in terms of the values and principles it embodies, and it’s painfully obvious when those values are either deeply compromised or (more typically) just not visible at all. I’ve often said that I wish that I could buy hardware fractionally as good from anyone else for any amount of money, but that’s not really true. As my own priorities make participating in Apple’s vision more and more uncomfortable, what I really want is for some other manufacturer to show that kind of commitment to their own values and build hardware that expresses them. Even if I could get to (say) 75% of those values, if one of them was maintainability – if it could be fixed a bit at a time – I bet over the long term, it would come out to (say) 15% of the cost.

Late footnote: This post at War Is Boring is on point, talking about the effects of design at the operational and logistical levels.

December 13, 2014

Candy For Children

Filed under: a/b,digital,documentation,doom,future,interfaces,toys,vendetta — mhoye @ 9:44 pm

My impressions of Android 5 are excitingly career-limiting, as you might have guessed from the title, but what the hell. A few weeks of using it has not substantially dulled my initial impressions, so I might as well share them with you. Would you believe there are positive bits here? You’ll have to work for them, obviously, panning for compliments in the effluent stream of my usual opinions of technology, but they’re in there. Here’s a gimme: it’s not ugly! So there’s that? On the other hand I haven’t been able to watch an entire video on their new “material design” approach without laughing out loud. So there is also definitely that.

It’s not so much that their designers all seem to speak with the same stunted cadence that ancient-aliens history channel guy has, though that’s part of it. The big reason is the realization – which is almost certainly not true, but they sure give you the impression it could be – that they edited out every fourth sentence, because it ended with “… and we were so high that day”.

Pre-4.4 Android was… bad. Some time ago I referred to KitKat as “technical debt that’s figured out how to call 911”, but despite my own first-impressions debacle I thought that 4.4 was moving in the right direction. Android was still visually a relic, though, and Conway’s Law was in full effect:

“[…] organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations” – M. Conway

In Google’s case this seems to mean that people can work on what they want to work on and nobody’s really in charge of making sure the entire package works right; it showed then and it still shows. For a long time it’s seemed like Android’s primary design constraints were “what can I convince disinterested engineers with self-diagnosed Aspergers’ and terrible taste to ship”, so it’s one-pixel borders and dark gray backgrounds and I’m busy, buddy, these barges full of RFID chips and QR/AR bridging aren’t going to talk to Glass^2 by themselves.

In that context even the slightest suggestions that a human might occasionally want to see colours now and then or maybe – and I know how crazy this sounds, but stay with me here – “experience joy” are more than welcome. So despite the delivery, Material Design looked like a pleasant if not revolutionary step forward.

And in a few important ways – I told you we’d get here! – it is. Application switching is smoother and prettier, the launcher is somewhat easier to get around and the reworked notification system is quite pleasant, despite Hangouts’ best efforts. It’s nice to see the rotation-lock toggle and tethering buttons right up front rather than buried four menus down in the settings where they used to be. There’s even a flashlight button in there with them, a nice built-in now rather than the third-party permission-creeper that spied on everything you touched that it used to be, so we’ve got that going for us dot gif.

App switching has improved as well, moving from the postage-stamp screenshots to a much more pleasantly scroll-y interface. Recency ordering there is nice, and makes much more sense in this cards-type display; infinite scroll there would be a welcome addition, but given the antecedent I’ll take it.

Most of Google’s apps, though, haven’t been substantively changed. Gmail, sure – and, um, wow – but most of the rest seem to have been recompiled with the new widget set without really putting a ton of thought into how they work or what they do. A lot of odd animations happen for no obvious reason, and there are places where an attempt to act like a “material” betrays itself in some oddly irritating way. Moving the lock screen on one axis now disallows moving it on the other axis; touching some (but not all?) list items makes this odd radial “splash” thing happen, which looks like a printf they forgot to ifdef out before shipping.

There’s a lot of stuff like that, not often at the edges – Maps’ mad dash towards incomprehensibility seems to be picking up speed – and in that sense it’s business as usual. There isn’t really a coherent narrative or model or anything underpinning Material Design, just a bunch of random, disconnected stuff you’ve got to relearn by discovery and practice by rote. It’s novel and more colourful – which is nice, for real! – but so much of it doesn’t make intuitive sense that it’s hard to stay excited about Android’s prospects. Pulling down on this widget causes that other widget to move sideways, or some other circle to appear and then spin. Some icons just hover there disconnected from anything, perplexing iconography near-invisible against the wrong background. Scroll far enough and ominous shadows appear and seem to follow you briefly around, a subtle visual cue that you’re at the end of the list and Oh by the way death awaits us all. In fact, modulo some tentacles and chanting I have the nagging sense I’m looking at a Lovecraftian pop-up book, aiming for colourful intuitive fun, running aground on the black shoals of the arbitrary and incomprehensible.

Still better than it was, though, seriously. It’s a big improvement.

June 23, 2014

Vocoder Duet

Filed under: a/b,arcade,digital,doom,toys — mhoye @ 1:01 pm

You can think of them as the Fry and Laurie of malevolent synthetic intelligences that are going to murder you.

In a fortuitous coincidence, this video – a collection of communications from SHODAN, antagonist of the classic System Shock 2,

and this video, of GlaDOS‘ spoken dialogue from the first Portal,

… are both about 14 and a half minutes long.

You should listen to them both at the same time.

October 22, 2013

Citation Needed

I may revisit this later. Consider this a late draft. (Update: I’m calling this done.)

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle

Sometimes somebody says something to me, like a whisper of a hint of an echo of something half-forgotten, and it lands on me like an invocation. The mania sets in, and it isn’t enough to believe; I have to know.

I’ve spent far more effort than is sensible this month crawling down a rabbit hole disguised, as they often are, as a straightforward question: why do programmers start counting at zero?

Now: stop right there. By now your peripheral vision should have convinced you that this is a long article, and I’m not here to waste your time. But if you’re gearing up to tell me about efficient pointer arithmetic or binary addition or something, you’re wrong. You don’t think you’re wrong and that’s part of a much larger problem, but you’re still wrong.

For some backstory, on the off chance anyone still reading by this paragraph isn’t an IT professional of some stripe: most computer languages – including C/C++, Perl, Python, some (but not all!) versions of Lisp and many others – are “zero-origin” or “zero-indexed”. That is to say, in an array A with 8 elements in it, the first element is A[0], and the last is A[7]. This isn’t universally true, though, and other languages from the same (and earlier!) eras are sometimes one-indexed, going from A[1] to A[8].

While it’s a relatively rare practice in modern languages, one-origin arrays certainly aren’t dead; there’s a lot of blood pumping through Lua these days, not to mention MATLAB, Mathematica and a handful of others. If you’re feeling particularly adventurous Haskell apparently lets you pick your poison at startup, and in what has to be the most lunatic thing I’ve seen on a piece of silicon since I found out the MIPS architecture had runtime-mutable endianness, Visual Basic (up to v6.0) featured the OPTION BASE flag, letting you flip that coin on a per-module basis. Zero- and one-origin arrays in different corners of the same program! It’s just software, why not?

All that is to say that starting at 1 is not an unreasonable position at all; to a typical human, thinking about the zeroth element of an array doesn’t make any more sense than trying to catch the zeroth bus that comes by, but we’ve clearly ended up here somehow. So what’s the story there?

The usual arguments involving pointer arithmetic and incrementing by sizeof(struct) and so forth describe features that are nice enough once you’ve got the hang of them, but they’re also post-facto justifications. This is obvious if you take the most cursory look at the history of programming languages; C inherited its array semantics from B, which inherited them in turn from BCPL, and though BCPL arrays are zero-origin, the language doesn’t support pointer arithmetic, much less data structures. On top of that other languages that antedate BCPL and C aren’t zero-indexed. Algol 60 uses one-indexed arrays, and arrays in Fortran are arbitrarily indexed – they’re just a range from X to Y, and X and Y don’t even need to be positive integers.

So by the early 1960’s, there are three different approaches to the data structure we now call an array.

  • Zero-indexed, in which the array index carries no particular semantics beyond its implementation in machine code.
  • One-indexed, identical to the matrix notation people have been using for quite some time. It comes at the cost of a CPU instruction or disused word to manage the offset; usability isn’t free.
  • Arbitrary indices, in which the range is significant with regards to the problem you’re up against.

So if your answer started with “because in C…”, you’ve been repeating a good story you heard one time, without ever asking yourself if it’s true. It’s not about i = a + n*sizeof(x) because pointers and structs didn’t exist. And that’s the most coherent argument I can find; there are dozens of other arguments for zero-indexing involving “natural numbers” or “elegance” or some other unresearched hippie voodoo nonsense that are either wrong or too dumb to rise to the level of wrong.

The fact of it is this: before pointers, structs, C and Unix existed, at a time when other languages with a lot of resources and (by the standard of the day) user populations behind them were one- or arbitrarily-indexed, somebody decided that the right thing was for arrays to start at zero.

So I found that person and asked him.

His name is Dr. Martin Richards; he’s the creator of BCPL, now almost 7 years into retirement; you’ve probably heard of one of his doctoral students, Eben Upton, creator of the Raspberry Pi. I emailed him to ask why he decided to start counting arrays from zero, way back then. He replied that…

As for BCPL and C subscripts starting at zero. BCPL was essentially designed as typeless language close to machine code. Just as in machine code registers are typically all the same size and contain values that represent almost anything, such as integers, machine addresses, truth values, characters, etc. BCPL has typeless variables just like machine registers capable of representing anything. If a BCPL variable represents a pointer, it points to one or more consecutive words of memory. These words are the same size as BCPL variables. Just as machine code allows address arithmetic so does BCPL, so if p is a pointer p+1 is a pointer to the next word after the one p points to. Naturally p+0 has the same value as p. The monodic indirection operator ! takes a pointer as it’s argument and returns the contents of the word pointed to. If v is a pointer !(v+I) will access the word pointed to by v+I. As I varies from zero upwards we access consecutive locations starting at the one pointed to by v when I is zero. The dyadic version of ! is defined so that v!i = !(v+I). v!i behaves like a subscripted expression with v being a one dimensional array and I being an integer subscript. It is entirely natural for the first element of the array to have subscript zero. C copied BCPL’s approach using * for monodic ! and [ ] for array subscription. Note that, in BCPL v!5 = !(v+5) = !(5+v) = 5!v. The same happens in C, v[5] = 5[v]. I can see no sensible reason why the first element of a BCPL array should have subscript one. Note that 5!v is rather like a field selector accessing a field in a structure pointed to by v.
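
To make that symmetry concrete – my sketch, not Dr. Richards’ – the BCPL identity carries straight through to C, where v[i] is defined as *(v + i) and addition commutes:

    #include <stdio.h>

    int main(void) {
        int v[8] = {10, 11, 12, 13, 14, 15, 16, 17};
        /* v[5] is *(v + 5), which is *(5 + v), which is 5[v] --
           the same chain as BCPL's v!5 = !(v+5) = !(5+v) = 5!v. */
        printf("%d %d %d\n", v[5], *(v + 5), 5[v]); /* prints "15 15 15" */
        return 0;
    }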

This is interesting for a number of reasons, though I’ll leave their enumeration to your discretion. The one that I find most striking, though, is that this is the earliest example I can find of the understanding that a programming language is a user interface, and that there are difficult, subtle tradeoffs to make between resources and usability. Remember, all this was at a time when everything about the future of human-computer interaction was up in the air, from the shape of the keyboard and the glyphs on the switches and keycaps right down to how the ones and zeros were manifested in paper ribbon and bare metal; this note by the late Dennis Ritchie might give you a taste of the situation, where he mentions that five years later one of the primary reasons they went with C’s square-bracket array notation was that it was getting steadily easier to reliably find square brackets on the world’s keyboards.

“Now just a second, Hoye”, I can hear you muttering. “I’ve looked at the BCPL manual and read Dr. Richards’ explanation and you’re not fooling anyone. That looks a lot like the efficient-pointer-arithmetic argument you were frothing about, except with exclamation points.” And you’d be very close to right. That’s exactly what it is – the distinction is where those efficiencies take place, and why.

BCPL was first compiled on an IBM 7094 (here’s a picture of the console, though the entire computer took up a large room) running CTSS – the Compatible Time Sharing System – which antedates Unix much as BCPL antedates C. There’s no malloc() in that context, because there’s nobody to share the memory core with. You get the entire machine and the clock starts ticking, and when your wall-clock time block runs out that’s it. But here’s the thing: in that context none of the offset-calculations we’re supposedly economizing are calculated at execution time. All that work is done ahead of time by the compiler.

You read that right. That sheet-metal, “wibble-wibble-wibble” noise your brain is making is exactly the right reaction.

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.

Does it get better? Oh, it gets better:

IBM had been very generous to MIT in the fifties and sixties, donating or discounting its biggest scientific computers. When a new top of the line 36-bit scientific machine came out, MIT expected to get one. In the early sixties, the deal was that MIT got one 8-hour shift, all the other New England colleges and universities got a shift, and the third shift was available to IBM for its own use. One use IBM made of its share was yacht handicapping: the President of IBM raced big yachts on Long Island Sound, and these boats were assigned handicap points by a complicated formula. There was a special job deck kept at the MIT Computation Center, and if a request came in to run it, operators were to stop whatever was running on the machine and do the yacht handicapping job immediately.

Jobs on the IBM 7090, one generation behind the 7094, were batch-processed, not timeshared; you queued up your job along with a wall-clock estimate of how long it would take, and if it didn’t finish it was pulled off the machine, the next job in the queue went in and you got to try again whenever your next block of allocated time happened to be. As in any economy, there is a social context as well as a technical context, and it isn’t just about managing cost, it’s also about managing risk. A programmer isn’t just racing the clock, they’re also racing the possibility that somebody will come along and bump their job and everyone else’s out of the queue.

I asked Tom Van Vleck, author of the above paragraph and also now retired, how that worked. He replied in part that on the 7090…

“User jobs were submitted on cards to the system operator, stacked up in a big tray, and a rudimentary system read, loaded, and ran jobs in sequence. Typical batch systems had accounting systems that read an ID card at the beginning of a user deck and punched a usage card at end of job. User jobs usually specified a time estimate on the ID card, and would be terminated if they ran over. Users who ran too many jobs or too long would use up their allocated time. A user could arrange for a long computation to checkpoint its state and storage to tape, and to subsequently restore the checkpoint and start up again.

The yacht handicapping job pertained to batch processing on the MIT 7090 at MIT. It was rare — a few times a year.”

So: the technical reason we started counting arrays at zero is that in the mid-1960’s, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

There are a few points I want to make here.

The first thing is that as far as I can tell nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing the more pathetic and irresponsible that sounds.

Part of the problem is access to the historical record, of course. I was in favor of Open Access publication before, but writing this up has cemented it: if you’re on the outside edge of academia, $20/paper for any research that doesn’t have a business case and a deep-pocketed backer is completely untenable, and speculative or historic research that might require reading dozens of papers to shed some light on longstanding questions is basically impossible. There might have been a time when this was OK and everyone who had access to or cared about computers was already an IEEE/ACM member, but right now the IEEE – both as a knowledge repository and a social network – is a single point of a lot of silent failure. “$20 for a forty-year-old research paper” is functionally indistinguishable from “gone”, and I’m reduced to emailing retirees to ask them what they remember from a lifetime ago because I can’t afford to read the source material.

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.

This isn’t just Worse Is Better, this is “Worse Is All You Get Forever”. How many off-by-one disasters could we have avoided if the “foreach” construct that existed in BCPL had made it into C? How much more insight would all of us have into our code if we’d put the time into making Michael Chastain’s nearly-omniscient debugging framework – PTRACE_SINGLESTEP_BACKWARDS! – work in 1995? When I found this article by John Backus wondering if we can get away from Von Neumann architecture completely, I wondered where that ambition to rethink our underpinnings went. But the fact of it is that it didn’t go anywhere. Changing how you think is hard and the payoff is uncertain, so by and large we decided not to. Nobody wanted to learn how to play, much less build, Engelbart’s Violin, and instead everyone gets a box of broken kazoos.

In truth maybe somebody tried – maybe even succeeded! – but it would cost me hundreds of dollars to even start looking for an informed guess, so that’s the end of that.

It’s hard for me to believe that the IEEE’s membership isn’t going off a demographic cliff these days as their membership ages, and it must be awful knowing they’ve got decades of delicious, piping-hot research cooked up that nobody is ordering while the world’s coders are lining up to slurp watery gruel out of a Stack-Overflow-shaped trough and pretend they’re well-fed. You might not be surprised to hear that I’ve got a proposal to address both those problems; I’ll let you work out what it might be.

March 8, 2013

Narrative Paralysis

Filed under: a/b,interfaces,life — mhoye @ 9:33 am

Yesterday on the subway I watched a man write “KEY INSIGHTS” at the top of a page in his Moleskine, and then just stare at the page unmoving for the next six stops. He hadn’t budged when I stepped off to switch trains; I have to admit that as the minutes ticked by, I struggled not to start laughing right there. “ZOMG Thought Leadership Liek Woah”, I was thinking.

This morning I realized I’d been staring at an email window with a “To:” line, a title, and a cursor blinking away in an otherwise empty editor for at least five minutes, maybe more.

Sorry, key-insights-on-the-subway-guy. The inside of my head could have been a little more sympathetic, it turns out.

November 7, 2012

Flip All The Pronouns

Filed under: a/b,digital,hate,interfaces,parenting,toys,vendetta — mhoye @ 8:56 pm

On A Certain Island

Maya and I have been playing through Windwaker together; she likes sailing, scary birds and remembering to be brave, rescuing her little brother and finding out what’s happening to Medli and her dragon boat.

She’s the hero of the story, of course.

It’s annoying and awkward, to put it mildly, having to do gender-translation on the fly when Maya asks me to read what it says on the screen. You can pick your character’s name, of course – I always stick with Link, being a traditionalist – but all of the dialog insists that Link is a boy, and there’s apparently nothing to be done about it.

Well, there wasn’t anything to be done about it, certainly not anything easy, but as you might imagine I’m not having my daughter grow up thinking girls don’t get to be the hero and rescue their little brothers.

This isn’t particularly user-friendly; you’ll need to download the Dolphin emulator and find a Windwaker .GCM, the Gamecube disk image with this SHA-1 hash:

Original: 6b5f06c10d50ebb4099cded88217eb71e5bfbb4a

and then you’ll need to figure out how to use xdelta3 to apply a binary patch to that image.

This patch.

When you’re done the resulting disk image will have the following SHA-1 hash:

Result: 6a480ffd8ecb6c254f65c0eb8e0538f7b30cfaa7

… and all the dialog will now refer to Link as a young woman, rather than as a young man.
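
If it helps, the whole dance usually looks something like the following at a shell; the filenames here are my own invention, but the hashes are the ones above.

    # Verify the original image, apply the patch, verify the result.
    sha1sum windwaker.gcm          # expect 6b5f06c10d50ebb4099cded88217eb71e5bfbb4a
    xdelta3 -d -s windwaker.gcm femlink.xdelta windwaker-femlink.gcm
    sha1sum windwaker-femlink.gcm  # expect 6a480ffd8ecb6c254f65c0eb8e0538f7b30cfaa7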

I think I’ve gotten this right – this was all done directly on the original disk image with a hex editor, so all the changes needed to be the same byte-for-byte length, in-place. I haven’t had time to play through the whole game to test it yet, and some of the constructions aren’t perfect. I’ve borrowed Donaldson’s “Swordmain” coinage to replace “Swordsman”, for example, and there’s lots of “milady” replacing “my lad” and “master”, because I couldn’t find a better way to rewrite them in exactly the amount of space allotted. If you come up with something better, I’m all ears.

I’m going to audit it shortly, and may update this post to reflect that. For now, though, here you go.

FemLink or you’re doing it wrong.

