blarg?


April 2, 2019

Occasionally Useful

A bit of self-promotion: the UsesThis site asked me their four questions a little while ago; it went up today.

A colleague once described me as “occasionally useful, in the same way that an occasional table is a table.” Which I thought was oddly nice of them.

May 5, 2017

Nerd-Cred Level-Up

Filed under: awesome,flickr,life,lunacy,weird — mhoye @ 9:13 am


In 2007 I was an extra in an indie zombie movie called “Sunday Morning” that featured Ron Tarrant. Tarrant starred with Mark Slacke in a 2010 short called “The Painting In The House”; Slacke in turn played a role in Cuba Gooding Jr.’s “Sacrifice”. Gooding, of course, played a role in A Few Good Men, as did Kevin Bacon.

Recently, I’ve co-authored a paper with Greg Wilson – “Do Software Developers Understand Open Source Licenses?” – principal authors are Daniel Almeida and Gail Murphy at UBC – that will be presented at ICPC 2017 later this year. Greg Wilson has previously co-authored a paper with Robert Sedgewick, who has co-authored a paper with Andrew Chi-Chih Yao, who has in turn co-authored a paper with Ronald L. Graham.

You can find all of Graham’s many collaborations with Paul Erdős, one of the most prolific mathematicians of the 20th century, on his homepage.

Which is all to say that I now have an Erdős-Bacon number of 9.

I’m unreasonably stoked about that for some reason.

February 6, 2017

The Scope Of The Possible

Filed under: digital,future,interfaces,life,lunacy,mozilla,want,weird,work — mhoye @ 5:34 pm


This is a rough draft; I haven’t given it much in the way of polish, and it kind of just trails off. But a friend of mine asked me what I think web browsers look like in 2025 and I promised I’d let that percolate for a bit and then tell him, so here we go. For whatever disclaimers like this are worth, I don’t have my hands on any of the product direction levers here, and as far as the orgchart’s concerned I am a leaf in the wind. This is just my own speculation.

I’m a big believer in Conway’s Law, but not in the sense that I’ve heard most people talk about it. I say “most people” as though I’m the lone heretic of some secret cabal that convenes once a month to discuss a jokey fifty-year-old observation about software architecture – I get that – but for now just play along. Maybe I am? If I am, and I’m not saying one way or another, between you and me we’d have an amazing secret handshake.

So: Conway’s Law isn’t anything fancier than the observation that software is a collaborative effort, so the shape of a large piece of software will end up looking a lot like the orgchart or communication channels of the people building it; this emerges naturally from the need to communicate and coordinate efforts between teams.

My particular heresy here is that I don’t think Conway’s Law needs to be the curse it’s made out to be. Communication will never not be expensive, but it’s also a subset of interaction. So if you look at how the nature of people’s interactions with and expectations from a communication channel are changing, you can use it as a kind of oracle to predict what the next evolutionary step of a product should look like.

At the highest level, some 23 years after Netscape Navigator 1.0 came out, the way we interact with a browser is pretty much the same as it ever was; we open it, poke around it and close it. Sure, we poke around a lot more things, and they’re way cooler and have a lot more people on the far end of them, but… for the most part, that’s it.

That was all you could do in the ’90s, because that’s pretty much all that interacting with the web of the ’90s let you do. The nature of the Web has changed profoundly since then, and like I’ve said before, the web is everywhere and in everything now. But despite that, and the fact that browsers are very different beasts now than they were when the Web was taking its first tentative steps, that high-level interaction model has stayed pretty much the same.

But if the web is everywhere and in everything, then an interaction that involves opening an app, looking through it and closing it again seems incredibly antiquated, like you’re looking out a porthole in the side of a steamship. Even the name is telling: you don’t “browse” the web anymore. You engage with it, you interact with it, and with people, groups and businesses through it.

Another way to say that is that the next generation of web browser won’t look like a browser at all: it will be a service.

More specifically, I think the next generation of what we currently call a web browser will be a hybrid web-access service; like the current Web, it will live partly on a machine somewhere and partly on whatever device or devices are next to you, and act as the intermediary – the user agent – that keeps you connected to this modern, always-on Web.

The app model is almost, kind-of-partway there, but in so many ways it makes life more complicated and less interesting than it needs to be. For the most part, apps only ever want to connect you to one place or set of people. Maybe that’s fine and that’s where your people are. But maybe you have to juggle a bunch of different communities in your life across a bunch of apps that go out of their way to keep those communities from discovering each other, and they all seem to want different slices of your life, your time and data depending on what the ad revenue people think is trendy this week. And because companies want to cover their bases you end up with these strange brands-pretending-to-be-people everywhere. It’s a mess, and having to juggle a bunch of different apps and communities doesn’t make a ton of sense when we’ve already got a reliable way of shipping safe, powerful software on demand.

I think the right – and probably next – thing is to push that complexity away from your device, to this user-agent-as-a-service living out there on a server in the cloud somewhere, just sitting there patiently paying attention. Notifications – a superset of messaging, and the other part of this picture – can come from anywhere and be anything, because internet, but your Agent can decide whether to forward them on directly, filter them or bounce them, as you like. And if you decide to go out there and get something – a video, a file, a page, whatever – then your Agent can do all sorts of interesting work for you in-flight. Maybe you want ad filtering, maybe you paid for an antivirus service to give that file a once-over, maybe your employer has security protocols in place to add X or strip out Y. There’s lots of room there for competing notification services, agent providers and in-agent services, a marketplace of ideas-that-are-also-machines.
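To make that concrete, here’s a minimal sketch of what an Agent’s notification triage might look like, in plain JavaScript. Everything in it – the rule shapes, the field names, the helpers – is a hypothetical illustration of the idea above, not any real API.

```javascript
// A toy "Agent": it sits server-side between the noisy internet and you,
// applying your rules to every notification before anything reaches a device.
// All names and message shapes here are invented for this sketch.

const rules = [
  // Urgent work mail goes straight through to whatever device is near you.
  { match: n => n.source === "work" && n.priority === "high", action: "forward" },
  // Anything that smells like advertising gets bounced in-flight.
  { match: n => n.kind === "ad", action: "drop" },
  // Everything else waits in a digest until you ask for it.
  { match: () => true, action: "queue" },
];

const digest = [];
const deliverToDevice = n => console.log(`-> device: ${n.body}`);

function dispatch(notification) {
  const { action } = rules.find(r => r.match(notification));
  if (action === "forward") deliverToDevice(notification);
  else if (action === "queue") digest.push(notification);
  // "drop": filtered at the Agent; it never touches your device at all.
}

dispatch({ source: "work", priority: "high", kind: "mail", body: "ship it" });
dispatch({ source: "adnet", kind: "ad", body: "one weird trick" });
```

The interesting part isn’t the dozen lines of logic; it’s where they run – out on the service side, before anything ever reaches the device in your pocket.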

There’s a couple of things that browsers, for all their warts and dated ideas, do better than any app or monolithic service; most of those have to do with user intent, the desire for safety and privacy, but also the desires for novelty, variety and unique humanity. I’ve talked about this before, the idea of engineering freedom in depth. I still think it’s possible to build human-facing systems that can – without compromise – mitigate the possibility of harm, and mount a positive defense of the scope of the possible. And I think maybe this is one way to do that.

(Updated: Typos, phrasing, added some links.)

October 8, 2016

Pageant Knight

Filed under: a/b,awesome,comics,documentation,lunacy,microfiction,weird — mhoye @ 11:45 pm

(Photo: Sunset At The Beach)

On September 17th, DC “celebrated” what they called “Batman Day”. I do not deploy scare quotes lightly, so let me get this out of the way: Batman is boring. Batman qua Batman – as a hero, as a story and as the center of a narrative framework – all of those choices are pretty terrible. The typical Batman story arc goes something like:

  • Batman is the best at everything. But Gotham, his city, is full of terrible.
  • Batman broods over his city. The city is full of terrible but Batman is a paragon of brooding justice.
  • An enemy of justice is scheming at something. Batman detects the scheme, because he is the World’s Greatest Among Many Other Things Detective and intervenes.
  • Batman is a paragon of brooding justice.
  • Batman’s attempt to intervene fails! Batman may not be the best at everything!
  • Batman broods and/or has a bunch of feelings and/or upgrades one of his widgets.
  • Batman intervenes again, and Batman emerges triumphant! The right kind of punching and/or widgeting makes him the best at everything again.
  • Order is restored to Gotham.
  • Batman is a paragon of brooding justice.

If you’re interested in telling interesting stories, Batman is far and away the least interesting thing in Gotham. So I took that opportunity to talk about the Batman story I’d write given the chance. The root inspiration for all this was a bout of protracted synesthesia brought on by discovering this take on Batman from Aaron Diaz, creator of Dresden Codak, at about the same time as I first heard Shriekback’s “Amaryllis In The Sprawl”.

The central thesis is this: if you really want a Gritty, Realistic Batman For The Modern Age, then Gotham isn’t an amped-up New York. It’s an amped-up New Orleans, or some sort of New-Orleans/Baltimore mashup. A city that’s full of life, history, culture, corruption and, thanks to relentlessly-cut tax rates, failing social and physical infrastructure. A New-Orleans/Baltimore metropolis in a coastal version of Brownback’s Kansas, a Gotham where garbage isn’t being collected and basic fire & police services are by and large not happening because tax rates and tax enforcement have been cut to the bone and the city can’t afford to pay its employees.

Bruce Wayne, wealthy philanthropist and Gotham native, is here to help. But this is Bruce Wayne via late-stage Howard Hughes; incredibly rich, isolated, bipolar and delusional, a razor-sharp business mind offset by a crank’s self-inflicted beliefs about nutrition and psychology. In any other circumstances he’d be the harmless high-society crackpot city officials kept at arm’s length if they couldn’t get him committed. But these aren’t any other circumstances: Wayne is far more than just generous, and he wants to burn this candle at both ends, helping the city through the Wayne Foundation by day and fighting crime in his own very special, very extralegal way, dressed in a cowl, by night.

And he’s so rich, and the city needs his money so badly, that despite his insistence on dressing up his 55-year-old self in a bat costume and beating people up at night, six nights a week a carefully selected group of city employees stages another episode of “Batman, crime fighter” to keep his daytime philanthropy flowing: a gripping Potemkin-noir pageant with a happy ending and a costumed Wayne in the lead role.

Robin – a former Arkham psych-ward nurse, a gifted young woman and close-combat prodigy in Wayne’s eyes – is a part of the show, conscripted by Mayor Cobblepot to keep an eye on Wayne and keep him out of real trouble. Trained up by retired SAS Sgt. Alfred Pennyworth behind Wayne’s back, in long-shuttered facilities beneath Wayne Manor that Wayne knows nothing about, she is ostensibly Batman’s sidekick in his fight against crime. But her real job is to protect Wayne on those rare occasions that he runs into real criminals and tries to intervene. She’s got a long, silenced rifle under that cloak with a strange, wide-mouthed second barrel and a collection of exotic munitions that she uses like a surgical instrument, not only to protect Wayne but more importantly to keep him convinced his fists & gadgets work at all.

She and Harleen Quinzel, another ex-Arkham staffer trained by Alfred, spend most of their days planning strategy. They have the same job; Quinn is the sidekick, shepherd and bodyguard of the former chief medical officer of Arkham. Quinn’s charge is also in his twilight years, succumbing to a manic psychosis accelerated by desperate self-administration of experimental and off-label therapies that aren’t slowing the degeneration of his condition, but sure are making him unpredictable. But he was brilliant once, also a philanthropist – the medical patents he owns are worth millions, bequeathed to Gotham and the patients of Arkham, provided the city care for him in his decline. Sometimes he’s still lucid; the brilliant, compassionate doctor everyone remembers. And other times – mostly at night – he’s somebody else entirely, somebody with a grievance and a dark sense of humor.

So Gotham – this weird, mercenary, vicious, beautiful, destitute Gotham – becomes the backdrop for this nightly pageant of two damaged, failing old men’s game of cat and mouse and the real story we’re following is Robin, Quinn, Alfred and the weird desperation of a city so strapped it has to let them play it out, night after dark, miserable night.

September 2, 2016

Brought To You By The Letter U

Filed under: awesome,lunacy,microfiction,weird — mhoye @ 12:04 pm

Mozilla being a global organization, employees periodically send out all-hands emails notifying people of upcoming regional holidays. With Labour Day coming up in Canada, this was my contribution to the cause:

The short version: Monday is Labour Day, a national holiday in Canada – expect Canadian offices to be closed and our Canadian colleagues to be either slow to respond or completely unresponsive, depending on how much fun they’ve had.

The longer version:

On Monday, Canadians will be celebrating Labour Day by not labouring; as many of you know, this is one of Canada’s National Contradictions, one of only two to appear on a calendar*.

Canada’s labour day has its origin in the Toronto Typographical Union’s strike for a 58-hour work-week in 1872, brought on by the demands of the British government for large quantities of the letter U. At the time, Us were aggressively recirculated to the British colonies to defend Imperial syntactic borders and maintain grammatical separation between British and American English. In fact, British grammarian propaganda from this period is the origin of the phrase “Us and Them”.

At the time, Canadian Us were widely recognized as the highest quality Us available, but the hard labour of the vowel miners and the artisans whose skill and patience made the Canadian Us the envy of western serifs is largely lost to history; few people today realize that “usability” once described something that would suffice in the absence of an authentic Canadian U.

Imperial demands placed on Union members at the time were severe. Indeed, in the weeks leading up to the 1872 strike the TTU twice had to surrender their private Us to make the imperial quota, and were known for a time as the Toronto Typographical Onion as a result. While the success of the Onion’s strike dramatically improved working conditions for Canadian labourers, it also marked the beginning of a dramatic global U shortage; from 1873 until the early 1900s, global demand for Us outstripped supply, and most Us were refurbished and reused many times over; “see U around” was a common turn of phrase describing this difficult time.

Early attempts at meeting the high demand for U were only somewhat successful. In the 1940s the British “v for victory” campaign was only partially successful in addressing British syntactic shortages that were exacerbated by extensive shipping losses due to sunken U-boats. The Swedish invention of the umlaut – “u” meaning “u” and “mlaut” meaning “kinda” – intended to paper over the problem, was likewise unsuccessful. It wasn’t until the electronic typography of the late seventies that U demand could easily be fulfilled and words like Ubiquity could be typed casually, without the sense of “overuse” that had plagued authors for most of a century.

Despite a turnaround that lexical economists refer to as “The Great U-Turn”, the damage was done. Regardless of their long status as allies, the syntactic gap between American and British Englishes was a bridge too far; anticipated American demand for Us never materialized, and American English remains unusual to this day.

Today, Labour Day is effectively a day Canada spends managing, and indeed revelling in, the fact that there are a lot of Us; travellers at this time of year will remark on the number of U-Hauls on the road, carting Us around the country in celebration. This is all to say that we’ll be celebrating our labour heritage this upcoming Monday. Canadians everywhere may be seen duing any number of thungs to commumurate this uccasiun: swumming, canuing, guardening, vusuting neighbours, and spunding tume at the couttage.

Thunk you, und see you all un Tuesday.

– mhuye

* – The other being the Spring National Resignation, where Canadians repeatedly declare Hockey their national sport while secretly enjoying watching the Leafs choke away another promising start.

May 27, 2016

Developers Are The New Mainframes

Filed under: documentation,future,interfaces,lunacy,mozilla,science,weird,work — mhoye @ 3:20 pm

This is another one of those rambling braindump posts. I may come back for some fierce editing later, but in the meantime, here’s some light weekend lunacy. Good luck getting through it. I believe in you.

I said that thing in the title with a straight face the other day, and not without reason. Maybe not good reasons? I like the word “reason”, I like the little sleight-of-hand it does by conflating “I did this on purpose” and “I thought about this beforehand”. It may not surprise you to learn that in my life at least those two things are not the same at all. In any case this post by Moxie Marlinspike was rattling around in the back of my head when somebody asked me on IRC why it’s hard-and-probably-impossible to make a change to a website in-browser and send a meaningful diff back to the site’s author, so I rambled for a bit and ended up here.

This is something I’ve asked for in the past myself: something like dom-diff and dom-merge, so site users could share changes back with creators. All the “web frameworks” I’ve ever seen are meant to make development easier and more manageable, but at the end of the day what goes over the wire is a pile of minified angle-bracket hamburger that has almost no connection to the site “at rest” on the filesystem. The only way to share a usable change with a site author, if it can be done at all, is to stand up a containerized version of the entire site and edit that. This disconnect between the scale of the change and the work needed to make it is, to put it mildly, a huge barrier to somebody who wants to correct a typo, tweak a color or add some alt-text to an image.

I ranted about this for a while, about how JavaScript has made classic View Source obsolete, and how even if you had dom-diff and dom-merge you’d need a carefully designed JS framework underneath, built specifically to support them, and how it makes me sad that I don’t have the skill set or free time to make that happen. But I think that if you dig a little deeper, there are some cold economics underneath that whole state of affairs that are worth thinking about.
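For what it’s worth, here’s a toy sketch of the easy half of that problem: a naive dom-diff that walks two live DOM trees and reports the first divergence it finds. It’s a hypothetical illustration only – everything hard about the real problem (minified and generated markup, mapping a rendered change back to the source that produced it) is precisely what it doesn’t touch.

```javascript
// A naive, hypothetical dom-diff: walk two DOM trees in lockstep and
// report the first place they disagree. Runs in any browser console.

function domDiff(a, b, path = a.nodeName.toLowerCase()) {
  if (a.nodeName !== b.nodeName) {
    return { path, change: `${a.nodeName} -> ${b.nodeName}` };
  }
  if (a.nodeType === Node.TEXT_NODE && a.textContent !== b.textContent) {
    return { path, change: `text: "${a.textContent}" -> "${b.textContent}"` };
  }
  if (a.childNodes.length !== b.childNodes.length) {
    return { path, change: `children: ${a.childNodes.length} -> ${b.childNodes.length}` };
  }
  for (let i = 0; i < a.childNodes.length; i++) {
    const d = domDiff(a.childNodes[i], b.childNodes[i], `${path}/${i}`);
    if (d) return d;
  }
  return null; // the subtrees match
}

// Example: diff a paragraph against an edited copy of itself.
const before = document.createElement("p");
before.textContent = "teh typo";
const after = before.cloneNode(true);
after.textContent = "the typo";
console.log(domDiff(before, after));
// -> { path: "p/0", change: 'text: "teh typo" -> "the typo"' }
```

And even then, a diff over the rendered tree only tells you what changed; turning that into a patch a site author could actually apply to their source is the part that needs the framework’s cooperation.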

I think that the basic problem here is the misconception that federation is a feature of distributed systems. I’m pretty confident that it’s not; specifically, I believe that federated systems are a byproduct of computational scarcity.

Building and deploying federated systems has a bunch of hard tradeoffs around development, control and speed of iteration that people are stuck with when computation is so expensive that no single organization can have or do enough of it to give a service global reach. Usenet, XMPP, email and so forth were products of this mainframe-and-minicomputer era; the Web is the last and best of them.

Protocol consensus is hard, but not as hard or expensive as a room full of $40,000 or $4,000,000 computers, so you do that work and accept the fact that what you gain in distributed stability you lose in iteration speed and design flexibility. The nature of those costs means the pressure to get it pretty close to right on the first try is very high, because real opportunities to revisit will be rare and costly. You’re fighting your own established success at that point, and nothing in tech has more inertia than a status quo its supporters think is good enough. (See also: how IPv6 has been “right around the corner” for 20 years.)

But that’s just not true anymore. If you need a few thousand more CPUs, you twiddle the dials on your EC2 console and go back to unified deployment, rapid experimental iteration and trying to stay ahead of everyone else who’s doing the same. That’s how WhatsApp can deploy end-to-end encryption with one software update, just like that. It’s how Facebook can update a billion users’ experiences whenever they feel like it, and presumably how Twitter does whatever the hell Twitter’s doing this week. They don’t ask permission or seek consensus because they don’t have to; they deploy, test and iterate.

So the work that used to enable, support and improve federated systems now mostly exists where domain-computation is still scarce and expensive: the development process itself. Specifically, the inside of developers’ heads – developers who stubbornly, and despite our best efforts, remain expensive, high-maintenance and relatively low-bandwidth, with lots of context and application-reasoning locked up in their heads and poorly distributed.

Which is to say: developers are the new mainframes.

Right now the great majority of what they’re “connected” to, from a development-on-device perspective, are de-facto dumb terminals. Apps, iPads, Android phones. Web pages you can’t meaningfully modify, for values of “meaningful” that involve upstreaming a diff. From a development perspective those are the endpoints of one-way transmissions, and there’s no way to duplex that line to receive development-effort back.

So, if that’s the trend – that is, if in general centralized-then-federated systems get reconsolidated into socially-oriented verticals (and that’s what issue trackers are when compared to mailing lists) – then development as a practice is floating around the late middle step, but development as an end product – via cheap CPUs and hackable IoT devices – is just getting warmed up. The obvious Next Thing in that space will be a resurgence of something like the Web, made of little things that make little decisions – effectively distributing, commodifying and democratizing programming as a product, duplexing development across those newly commodified development-nodes.

That’s the real revolution that’s coming, not the thousand-dollar juicers or the bluetooth nosehair trimmers, but the mess of tiny hackable devices that start to talk to each other via decentralized, ultracommodified feedback loops. We’re missing a few key components – bug trackers aren’t quite source-code-managers or social-ey, IoT build tools aren’t one-click-to-deploy, and so forth – but eventually there will be a single standard for how these things communicate and run, despite everyone’s ongoing efforts to force users into the current and very-mainframey vendor lock-in, the same way there were a bunch of proprietary transport protocols before TCP/IP settled the issue. Your smarter long-game players will be the ones betting on JavaScript to come out on top there, though it’s possible there will be other contenders.
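To gesture at what “little things that make little decisions” might look like, here’s a hedged sketch in Node-flavoured JavaScript: one imaginary device listening to another’s broadcasts on the local network and acting on them. The port, the message shape and the threshold are all invented for this sketch; the point is only how small each node in that web of things can afford to be.

```javascript
// A hypothetical "little thing that makes little decisions": a fan
// controller that listens for temperature broadcasts on the LAN and
// decides, on its own, when to spin up. The message format is made up.

const dgram = require("dgram");
const socket = dgram.createSocket("udp4");

let fanOn = false;

socket.on("message", (msg) => {
  let reading;
  try { reading = JSON.parse(msg.toString()); } catch { return; } // ignore junk
  if (reading.sensor !== "temperature") return;

  // The entire "application" is one little decision.
  const shouldRun = reading.celsius > 28;
  if (shouldRun !== fanOn) {
    fanOn = shouldRun;
    console.log(fanOn ? "fan: on" : "fan: off");
  }
});

socket.bind(41234, () => console.log("waiting for sensor broadcasts"));
```

Multiply that by a houseful of devices, and by communities of people publishing and subscribing to each other’s rules, and you get the recentralization step described next.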

The next step will be the social one, though “tribal” might be a better way of putting it – the eventual recentralization of this web of thing-code into cultural-preference islands, communities making choices about how they speak to the world around them and how the world speaks back. Basically a hardware scripting site with a social aspect built in, communities and trusted sources building a social/subscriber model out for IoT agency. What the Web became, and is still in a lot of ways becoming, as we figure out the hard part – the people-at-scale part. The Web of How Stuff Works.

Anyway, if you want to know what the next 15-20 years will look like, that’s the broad strokes. Probably more like 8-12, on reflection. Stuff moves pretty quick these days, but like I said, building consensus is hard. The hard part is always people. This is one of the reasons I think Mozilla’s mission is only going to get more important for the foreseeable future; the Web was the last and best of the federated systems, worth fighting for on those grounds alone, and we’re nowhere close to done learning everything it’s got to teach us about ourselves, each other and what it’s possible for us to become. It might be the last truly open, participatory system we get, ever. Consensus is hard and maybe not necessary anymore, so if we can’t keep the Web and the lessons we’ve learned and can still learn from it alive long enough to birth its descendants, we may never get a chance to build another system like it.

[minor edits since first publication. -mhoye]

September 20, 2015

The Bourne Aesthetic

“The difference between something that can go wrong and something that can’t possibly go wrong is that when something that can’t possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.”

–Douglas Adams

I’ve been trying to get this from draft to published for almost six months now. I might edit it later but for now, what the hell. It’s about James Bond, Jason Bourne, old laptops, economies of scale, design innovation, pragmatism at the margins and an endless supply of breadsticks.

You’re in, right?

Bond was a character that people in his era could identify with:

Think about how that works in the post war era. The office dwelling accountant/lawyer/ad man/salesman has an expense account. This covers some lunches at counters with clients, or maybe a few nice dinners. He flirts with the secretaries and receptionists and sometimes sleeps with them. He travels on business, perhaps from his suburb into Chicago, or from Chicago to Cleveland, or San Francisco to LA. His office issues him a dictaphone (he can’t type) or perhaps a rolling display case for his wares. He has a work car, maybe an Oldsmobile 88 if he’s lucky, or a Ford Falcon if he’s not. He’s working his way up to the top, but isn’t quite ready for a management slot. He wears a suit, tie and hat every day to the office. If he’s doing well he buys this downtown at a specialty men’s store. If he’s merely average, he picks this up at Macy’s, or Sears if he’s really just a regular joe. If he gets sick his employer has a nice PPO insurance plan for him.

Now look at Bond. He has an expense account, which covers extravagant dinners and breakfasts at the finest 4 star hotels and restaurants. He travels on business, to exotic places like Istanbul, Tokyo and Paris. He takes advantage of the sexual revolution (while continuing to serve his imperialist/nationalist masters) by sleeping with random women in foreign locations. He gets issued cool stuff by the office – instead of a big dictaphone that he keeps on his desk, Bond has a tiny dictaphone that he carries around with him in his pocket! He has a work car — but it’s an Aston Martin with machine guns! He’s a star, with a license to kill, but not management. Management would be boring anyways, they stay in London while Bond gets to go abroad and sleep with beautiful women. Bond always wears a suit, but they’re custom tailored of the finest materials. If he gets hurt, he has some Royal Navy doctors to fix him right up.

In today’s world, the organization man who looked up to James Bond as a kind of avatar of his hopes and dreams no longer exists.

Who is our generation’s James Bond? Jason Bourne. He can’t trust his employer, who demanded ultimate loyalty and gave nothing in return. In fact, his employer is outsourcing his work to a bunch of foreign contractors who presumably work for less and ask fewer questions. He’s given up his defined benefit pension (Bourne had a military one) for an individual retirement account (safe deposit box with gold/leeching off the gf in a country with a depressed currency). In fact his employer is going to use him up until he’s useless. He can’t trust anyone, other than a few friends he’s made on the way while backpacking around. Medical care? Well that’s DIY with stolen stuff, or he gets his friends to hook him up. What kind of car does he have? Well, no more company car for sure; he’s on his own on that, probably some kind of import job. What about work tools? Bourne is on his own there too. Sure, work initially issued him a weapon, but after that he’s got to scrounge up whatever discount stuff he can find, even when it’s an antique. He has to do more with less. And finally, Bourne survives as a result of his high priced, specialized education. He can do things few people can do – fight multiple opponents, hotwire a car, tell which guy in a restaurant can handle himself, speak multiple languages and duck a surveillance tail. Oh, and like the modern, (sub)urban professional, Bourne had to mortgage his entire future to get that education. They took everything he had, and promised that if he gave himself up to the System, in return the System would take care of him.

It turned out to be a lie.

We’re all Jason Bourne now.

posted by wuwei at 1:27 AM on July 7, 2010

I think about design a lot these days, and I realize that’s about as fatuous an opener as you’re likely to read this week, so I’m going to ask you to bear with me.

If you’re already rolling out your “resigned disappointment” face: believe me, I totally understand. I suspect we’ve both dealt with That Guy Who Calls Himself A Designer at some point, that particular strain of self-aggrandizing flake who’s parlayed a youth full of disdain for people who just don’t understand them into a career full of evidence they don’t understand anyone else. My current job’s many bright spots are definitely brighter for his absence, and I wish the same for you. But if it helps you get past this oddly-shaped lump of a lede, feel free to imagine me setting a pair of Ray-Bans down next to an ornamental scarf of some kind, sipping a coffee with organic soy ingredients and a meaningless but vaguely European name, writing “Helvetica?” in a Moleskine notebook and staring pensively into the middle distance. Does my carefully manicured stubble convey the precise measure of my insouciance? Perhaps it does; perhaps I’m gazing at some everyday object nearby, pausing to sigh before employing a small gesture to convey that no, no, it’s really nothing. Insouciance is a French word, by the way. Like café. You should look it up. I know you’ve never been to Europe; I can tell.

You see? You can really let your imagination run wild here. Take the time you need to work through it. Once you’ve shaken that image off – one of my colleagues delightfully calls those guys “dribble designers” – let’s get rolling.

I think about design a lot these days, and I realize that’s about as fatuous an opener as you’re likely to read this week, so I’m going to ask you to bear with me.

Very slightly more specifically I’ve been thinking about Apple’s latest Macbook, some recent retrospeculation from Lenovo, “timeless” design, spy movies and the fact that the Olive Garden at one point had a culinary institute. I promise this all makes sense in my head. If you get all the way through this and it makes sense to you too then something on the inside of your head resembles something on the inside of mine, and you’ll have to come to your own terms with that. Namasté, though. For real.

There’s an idea called “gray man” in the security business that I find interesting. They teach people to dress unobtrusively. Chinos instead of combat pants, and if you really need the extra pockets, a better design conceals them. They assume, actually, that the bad guys will shoot all the guys wearing combat pants first, just to be sure. I don’t have that as a concern, but there’s something appealingly “low-drag” about gray man theory: reduced friction with one’s environment.

– William Gibson, being interviewed at Rawr Denim

At first glance the idea that an Olive Garden Culinary Institute should exist at all squats on the line between bewildering and ridiculous. They use maybe six ingredients, and those ingredients need to be sourced at industrial scale and reliably assembled by a 22-year-old with most of a high-school education and all of a vicious hangover. How much of a culinary institute can that possibly take? In fact, at some remove the Olive Garden looks less like a restaurant chain than a supply chain that produces endless breadsticks; there doesn’t seem to be a ton of innovation here. Sure, supply chains are hard. But pouring prefab pomodoro over premade pasta, probably not.

Even so, for a few years the Tuscan Culinary Institute was a real thing: one of the many farming estates in Tuscany that have been resurrected in the service of regional gastrotourism, booked by the company for a few weeks a year. Successful managers of the Garden’s ersatz-Italian assembly lines could enjoy Tuscany on a corporate reward junket, and at first glance amused disdain for the whole idea would seem to be on point.

There’s another way to look at the Tuscan Culinary Institute, though, that makes it seem valuable and maybe even inspired.

One trite but underappreciated part of the modern mid-tier supply-chain-and-franchise engine is how widely accessible serviceable and even good (if not great or world-beating) stuff has become. Coffee snobs will sneer at Starbucks, but the truck-stop tar you could get before their ascendance was dramatically worse. If you’ve already tried both restaurants in a town too remote to be worth their while, a decent bowl of pasta, a bottle of inoffensive red and a steady supply of garlic bread starts to look like a pretty good deal.

This is one of the rare bright lights of the otherwise dismal grind of the capitalist exercise, this democratization of “good enough”. The real role of the Tuscan Culinary Institute was to give chefs and managers a look at an authentic, three-star Tuscan dining experience and then ask them: with what we have to hand at the tail end of this supply chain – the pasta, the pomodoro, the breadsticks and wine – how can we give our customers 75% of that experience for 15% of the cost?

It would be easy to characterize this as some sort of corporate-capitalist co-option of a hacker’s pragmatism – a lot of people have – but I don’t think that’s the right thing, or at least not the whole picture. This is a kind of design, and like any design exercise – like any tangible expression of what design is – we’re really talking about the expression and codification of values.

I don’t think it’s an accident that all the computers I bought between about 1998 and 2008 are either still in service or will still turn on if I flip the switch, but everything I’ve bought since lasts two or three years before falling over. There’s nothing magic about old tech, to be sure: in fact, the understanding that stuff breaks is baked right into their design. That’s why they’re still running: because they can be fixed. And thanks to the unfettered joys of standard interfaces, some of them are better today, with faster drives and better screens, than any computer I could have bought then.

The Macbook is the antithesis of this, of course. That’s what happened in 2008; the Macbook Pro started shipping with a non-removable battery.

If you haven’t played with one of Apple’s flagship Macbooks, they are incredible pieces of engineering. They weigh approximately nothing. Every part of them seems like some fundamental advance in engineering and materials science. The seams are perfect; everything that can be removed – everything you can carve off a laptop and still have a laptop left – is gone.

As a result, it’s completely atomic, almost totally unrepairable. If any part of it breaks you’re hosed.

“Most people make the mistake of thinking design is what it looks like. People think it’s this veneer – that the designers are handed this box and told, ‘Make it look good!’ That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.” – Steve Jobs

This is true, kind of; it depends on what you believe your scope of responsibility is as a designer. The question of “how a device works” is a step removed from the question of “how does a person engage with this device”; our aforementioned designer-caricature aside, most of us get that. But far more important than that is the question of how the device helps that person engage the world. And that’s where this awful contradiction comes in, because whatever that device might be, the person will never be some static object, and the world is seven billion people swimming in a boiling froth of water, oil, guns, steel, race, sex, language, wisdom, secrets, hate, love, pain and TCP/IP.

Our time is finite, and entropy is relentless: knowing that, how long should somebody be responsible for their designs? Are you responsible for what becomes of what you’ve built, over the long term? Because if you have a better way to play the long game here than “be a huge pile of rocks”, you should chisel it into something. Every other thing of any complexity – anything with two moving parts to rub together – that’s still usable or exists at all today has these two qualities:

  1. It can be fixed, and
  2. When it breaks, somebody cares enough about it to fix it.

And that’s where minimalism that denies the complexity of the world, that lies to itself about entropy, starts feeling like willful blindness; design that’s a thin coat of paint over that device’s relationship with the world.

More to the point, this is why the soi-disant-designer snob we were (justly and correctly) ragging on at the beginning of this seemingly-interminable-but-it-finally-feels-like-we’re-getting-somewhere blog post comes across as such a douchebag. It’s not “minimalist” if you buy a new one every two years; it’s conspicuous consumption with chamfered edges. Strip away that veneer, that coat of paint, and there are the real values designer-guy and his venti decaf soy wankaccino hold dear.

Every day I feel a tiny bit more like I can’t really rely on something I can’t repair. Not just for environmentalism’s sake, not only for the peace of mind that standard screwdrivers and available source offer, but because tools designed by people who understand that something might fall over are so much more likely to have built a way to stand it back up. This is why I got unreasonably excited by Lenovo’s retro-Thinkpad surveys, despite their recent experiments in throwing user security overboard wearing factory-installed cement boots. The prospect of a laptop with modern components that you can actually maintain, much less upgrade, has somehow become a weird niche crank-hobbyist novelty.

But if your long game is longer than your workweek or your support contract, this is what a total-cost-accounting of “reduced friction with your environment” looks like. It looks like not relying on the OEM, like DIY and scrounged parts and above all knowing that you’re not paralyzed if the rules change. It’s reduced friction with an uncertain future.

I have an enormous admiration for the work Apple does, I really do. But I spend a lot of time thinking about design now, not in terms of shapes and materials but in terms of the values and principles it embodies, and it’s painfully obvious when those values are either deeply compromised or (more typically) just not visible at all. I’ve often said that I wish I could buy hardware fractionally as good from anyone else for any amount of money, but that’s not really true. As my own priorities make participating in Apple’s vision more and more uncomfortable, what I really want is for some other manufacturer to show that kind of commitment to their own values and build hardware that expresses them. Even if I could get to (say) 75% of those values, if one of them was maintainability – if it could be fixed a bit at a time – I bet over the long term it would come out to (say) 15% of the cost.

Late footnote: This post at War Is Boring is on point, talking about the effects of design at the operational and logistical levels.

October 3, 2014

Rogue Cryptocurrency Bootstrapping Robots

(Photo: Cuban Shoreline)

I tried to explain to my daughter why I’d had a strange day.

“Why was it strange?”

“Well… There’s a thing called a cryptocurrency. ‘Currency’ is another word for money; a cryptocurrency is a special kind of money that’s made out of math instead of paper or metal.”

That got me a look. Money that’s made out of math, right.

“… and one of the things we found today was somebody trying to make a new cryptocurrency. Now, do you know why money is worth anything? It’s a coin or a paper with some ink on it – what makes it ‘money’?”

“… I don’t know.”

“The only answer we have is that it’s money if enough people think it is. If enough people think it’s real, it becomes real. But making people believe in a new kind of money isn’t easy, so what this guy did was kind of clever. He decided to give people little pieces of his cryptocurrency for making contributions to different software projects. So if you added a patch to one of the projects he follows, he’d give you a few of these math coins he’d made up.”

“Um.”

“Right. Kind of weird. And then whoever he is, he wrote a program to do that automatically. It’s like a little robot – every time you change one of these programs, you get a couple of math coins. But the problem is that we update a lot of those programs with our robots, too. Our scripts – our robots – run, and then his robots try to give our robots some of his pretend money.”

“…”

“So that’s why my day was weird. Because we found somebody else’s programs trying to give our programs made-up money, in the hope that this made-up money would someday become real.”

“Oh.”

“What did you do today?”

“I painted different animals and gave them names.”

“What kind of names?”

“French names like zaval.”

“Cheval. Was it a good day?”

“Yeah, I like painting.”

“Good, good.”

(Charlie Stross warned us about this. It’s William Gibson’s future, but we still need to clean up after it.)

May 18, 2014

Optics

Filed under: awesome,fail,interfaces,toys,weird — mhoye @ 1:25 pm

Well, we have to get back to making jokes at some point. I bought new glasses from the internet.

It didn’t go exactly as I’d hoped.
