October 25, 2020

Navigational Instruments

Filed under: digital,documentation,interfaces,mozilla,toys,work — mhoye @ 11:03 am

A decade ago I got to sit in on a talk by one of the designers of Microsoft Office who’d worked on the transition to the new Ribbon user interface. There was a lot to learn there, but the most interesting thing was when he explained the core rationale for the redesign: of the top ten new feature requests for Office, every year, six to eight of them were already features built into the product, and had been for at least one previous version. They’d already built all this stuff people kept saying they wanted, and nobody could find it to use it.

It comes up periodically at my job that we have the same problem; there are so many useful features in Firefox that approximately nobody knows about, even people who’ve been using the browser every day and soaking in the codebase for years. People who work here still find themselves saying “wait, you can do that?” when a colleague shows them some novel feature or way to get around the browser that hasn’t seen a lot of daylight.

In the hopes of putting this particular peeve to bed, I did a casual survey the other day of people’s favorite examples of underknown or underappreciated features in the product, and I’ve collected a bunch of them here. These aren’t Add-ons, as great as they are; this is what you get from Firefox out of the proverbial box. I’m going to say “Alt” and “Ctrl” a lot here, because I live in PC land, but if you’re on a Mac those are “Option” and “Command” respectively.

Starting at the top, one of the biggest differences between Firefox and basically everything else out there is right there at the top of the window, the address bar that we call the Quantumbar.

Most of the chromium-client-state browsers seem to be working hard to nerf out the address bar, and URLs in general. It’s my own paranoia, maybe, but I suspect the ultimate goal here is to hide just how much of that sweet, sweet behavioral data these changes will help companies siphon up unsupervised. Hoarding the right to look over your shoulder forever seems to be the name of the game in that space, and I’ve got a set of feelings about that you might be able to infer from this paragraph. It’s true that there’s a lot of implementation detail being exposed there, and it’s true that most people might not care so why show it, but being able to see into the guts of a process so you can understand and trust it is just about the whole point of the open-source exercise. Shoving that already-tiny porthole all the way back into the bowels of the raw codebase – particularly when the people doing the shoving have entire identities, careers and employers none of which would exist at all if they hadn’t leveraged the privileges of open software for themselves – is galling to watch, very obviously a selfish, bad-faith exercise. It reduces clicking a mouse around the Web to little more than clicking a TV remote, what Douglas Adams used to call the “point and grunt interface”.

Fortunately the spirit of the command line, in all its esoteric and hidden power, lives on in a few places in Firefox. Most notably in a rich set of Quantumbar shortcuts you can use to get around your browser state and history:

  • Start typing your search with ^ to show only matches in your browsing history.
  • * to show only matches in your bookmarks.
  • + to show only matches in bookmarks you’ve tagged.
  • % to show only matches in your currently open tabs.
  • # to show only matches where every search term is part of the title or part of a tag.
  • $ to show only matches where every search term is part of the web address (URL). The text “https://” or “http://” in the URL is ignored but not “file:///”.
  • Add ? to show only search suggestions.
  • Hitting Ctrl-Enter in the URL bar works like autocomplete; type “mozilla” and hit Ctrl-Enter, for example, and you go straight to www.mozilla.com. Shift-Enter will open a URL in a new tab.

Speaking of the Quantumbar, you can customize it by right-clicking any of the options in the three-dot “Page Options” pulldown menu, and adding them to the address bar. The screenshot tool is pretty great, but one of my personal favorites in that pile is Reader Mode. Did you know there’s text-to-speech built into Reader Mode? It surprised me, too. Click those headphones, see how it goes.

It’s sort of Quantumbar-adjacent, but once you’ve been using it for a few hours the Search Keyword feature is one of those things you just don’t go back to not having. If you right-click on a search field on just about any site, “Add a Keyword for this Search” is one of the options. Give it a simple term or letter, then type “<term or letter> <search term>” in the Quantumbar and you’re immediately doing that search. A lot of us have that set up for Bugzilla, Github, or Stack Overflow, but just about any search box on just about any site works. If you find yourself searching particular forums, or anywhere search engines can’t reach, this is a fantastic feature.

There are a lot of other small navigation tricks that come in surprisingly handy:

  • Holding down Alt while selecting text lets you select text within a link without triggering the link.
  • Shift-right-click will show Firefox’s context menu even on sites that override it. This is great for getting Picture-in-Picture on most video sites, and for getting your expected context menu back from GDocs. (PiP is another feature I’m fond of.)
  • Clicking and dragging down on the forward and back buttons will show a list of previous or next pages this tab has visited.
  • You can use Ctrl-click and middle-mouseclick on most toolbar buttons to open whatever they point at in a new tab; Ctrl-reload duplicates your current tab. You can use this trick to pop stuff out of the middle of your back and forward history stack into new tabs.
  • You can do this trick with the “view image” option in the right-click menu, too – Ctrl-clicking that menu item will open that image in its own new tab.
  • New Tab then Undo – Ctrl-T then Ctrl-Z – will populate the address bar with the URL of the previously focused tab, a handy way to duplicate the current tab from the keyboard.
  • You can right click an iframe and use the This Frame option to open the iframe in a tab of its own, then access the URL and other things.
  • Ctrl+Shift+N will reopen the most recently closed window, Ctrl+Shift+T the most recently closed tab. The tabs are a history stack, so you can keep re-opening them.
  • Knowing you can use Ctrl-M to mute a tab is invaluable.

If you’re a tab-hoarder like me, there’s a lot here to make your life better; Ctrl-N for any N from 1 to 8 will switch you to the Nth tab, and Ctrl-9 takes you to the rightmost tab (in left-to-right language layouts; it’s mirrored in RTL). You might want to look over the whole list of keyboard shortcuts, if that’s your thing. There are a lot of them. But probably the most underappreciated is that you can select multiple tabs by Shift-clicking them, so you can work on them as a group. Ctrl-click will also let you select non-adjacent tabs, as you might expect, and once you’ve selected a few you can:

  • Move them as a group, left, right, new window, into Container tabs, you name it.
  • Pin them (Pinned tabs are another fantastic feature, and the combination of pinned tabs and ctrl-# is very nice.)
  • Mute a bunch of tabs at once.
  • If you’ve got Sync set up – and if you’ve got more than one device, seriously, make your life better and set up sync! – you can right-click and send them all to a different device. If you’ve got Firefox on your phone, “send these ten tabs to my phone” is one click. That action is privacy-respecting, too – nobody can see what you’re sending over, not even Mozilla.

I suspect it’s also not widely appreciated that you can customize Firefox in some depth, another option not widely available in other browsers. Click that three-bar menu in the upper right, click customize; there’s a lot there.

  • You get light, dark and Alpenglow themes stock, and you can find a bunch more on AMO to suit your taste.
  • There are a few buttons in there for features you didn’t know Firefox had, and you can put them wherever you like.
  • Density is a nice tweak, and removing the title bar is great for squeezing more real estate out of smaller laptop screens.
  • The Overflow menu is a great place to put lightly-used extensions or buttons.
  • There’s a few Easter eggs in there, too, I’m told?

You can also play some games with named profiles that a lot of people doing web development find useful. By modifying your desktop shortcuts to add “-P [profile name] --no-remote” after the firefox.exe bit, you can have a “personal Firefox” and a “work Firefox” running independently and fully separately from each other. That’s getting a bit esoteric, but if you do a lot of webdev or testing you might find it helpful.
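For instance – the profile names here are placeholders, and the path is the default Windows install location, so adjust to taste – a pair of shortcut targets might look like this:

```shell
# Hypothetical shortcut targets; "personal" and "work" are example profile
# names you'd first create via the profile manager (firefox.exe -P).
"C:\Program Files\Mozilla Firefox\firefox.exe" -P "personal" --no-remote
"C:\Program Files\Mozilla Firefox\firefox.exe" -P "work" --no-remote
```

The --no-remote flag is what keeps each new instance from handing its URLs off to an already-running Firefox, which is what lets the two profiles run side by side.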

So, there you go, I hope it’s helpful.

I’ll keep that casual survey running for a while, but if your personal favorite pet feature isn’t in there, feel free to email me. I know there are more.

October 8, 2020

Control Keys Redux

Filed under: arcade,digital,documentation,interfaces,life,linux,science,toys — mhoye @ 5:29 pm

A long overdue followup.

One of my favourite anecdotes in Kernighan’s “Unix, A History And A Memoir” is the observation that the reason early Unix commands are so often truncated – rm, mv, ls, and so on – was that the keyboards of the day were so terrible that they hurt to type on for any length of time.

I wish more people thought about keyboards. This is the primary interface to these devices we spend so much of our time on, and it baffles me that people just stick with whatever ten dollar keyboard came in the box. It makes as much sense to me as a runner buying one-size-fits-all shoes.

There are a lot of people who do think about keyboards, of course, but even so what I’m aiming for isn’t part of that conversation, and often feels like lonely work. Most of the mechanical keyboard fetishists that I can find are in it for the aesthetics, assembling these switches and those keycaps, and while the results can be beautiful they aren’t structurally all that different, still not quite something built to be truly personal. Kailh bronze switches are made of joy, sure, but if my wrists are still contorting to use the keyboard, that fantastic popcorn keyspring texture isn’t going to be durably great for me.

I’ve had this plan in mind for a while now, and have finally gotten around to setting up a keyboard the way I’ve long intended to thanks to a friend who introduced me to Cardellini ball clamps and mounting plates. Those were the missing pieces I needed to set up the keyboard I’m typing this on now, a Kinesis Ergo Edge split mechanical keyboard.

Some minor gripes about this specific device include:

    • Manufacturers of split keyboards absolutely refuse, for reasons I cannot figure out, to allow the halves of the keyboard to overlap. I want the 6TGB and 7YHN columns on both halves! I’d much rather have that than macros or illumination gimmicks.
    • The stands you can order for it, like the wrist rests in the box, are a waste of time. I’m using neither so it’s not a big deal, but seeing a nice product ship with cheap plastic greebling is always a shame.
    • The customization software that comes with it is… somewhat opaque. I’ll find a use for those macro keys, but for now meh. Remapping that ridiculous panic button thing in the upper left to “lock my screen” was straightforward enough, which was nice.
    • Keys on these split keyboards are never ortholinear – meaning, never in a regular old, non-offset grid, like you’d expect on a tool being used by people without diagonal fingers. Standard keyboard layouts make zero sense and haven’t in fifty years; we don’t need to genuflect to a layout forced on us by mechanical typewriter levers and haven’t since before Unix was invented! Get it together, manufacturers! But here we are.

But the nice things about it – the action on these delightfully clicky Cherry MX Blue switches, the fact that most of the keys are in the right places, the split cable being elegantly tucked away – outweigh all of the gripes, and so far I’m reasonably happy with the setup, but that’s not really because of what came in the box.

It’s because the setup is this:


Like I say, I’ve had this in mind for a while – an A shape hanging off the front of the desk, each half of the keyboard with a ball head sticking out of the plate on the bottom, and a third ball head at about belt level, bolted into the underside of my standing desk. All of it is held together with a surprisingly rigid three-way ball clamp – the film industry doesn’t like having lights or cameras just topple to the floor for no reason, funny story – and the result is a standing desk where I can type with my hands in a very relaxed, natural position all day, without craning my wrists or resting them awkwardly on anything. The key surface is all facing away from me, which takes some getting used to, but hooking my thumbs on the side of the spacebars gives me a good enough home key experience that my typing error rate is getting back down to the usual “merely poor” levels I’m long accustomed to.

It’s a good feeling so far, even if I’m making microadjustments all the time and sort of reteaching myself how to type. I’m starting to suspect that any computer-related ergonomics setup that presupposes a desk and chair is starting from an unrecoverable condition of sin; humans are shaped like neither of those things, and tools should be made to fit humans.

Update: A few people have asked me for a parts list. It is:


April 27, 2020

Side Scroller

Filed under: arcade,documentation,interfaces,linux,toys — mhoye @ 8:35 am

I’ve never met Ian Albert, but years ago he painstakingly scraped and pasted together a set of maps and backgrounds from various oldschool games, an effort that’s helped me in a bunch of odd little ways over the years and for which I’m grateful. Of particular interest today are the original Super Mario Brothers maps; for the sake of this exercise, let’s start with world 1, level 1.

ImageMagick and FFMpeg are a pair of “classically-Linux” command-line tools, in terms of how insanely complex and opaque they appear until you’ve worked with them for a bit and can sort of see the logic of their approaches. Even then the documentation takes some getting used to – the man page should just say “don’t bother, go to the website” – and even then you’ve gotta kind of fumble your way towards competence if you want to use them day to day.

Well, maybe you don’t, but I sure do. In any case once you know they exist you muddle your way to doing a lot with them. In particular, “convert” from the ImageMagick tool suite lets you upscale some of those Mario-level gifs to PNGs, like so:

$> convert mario-1-1.gif -scale 300% mario-1-1.png

We’re doing this conversion because FFMpeg (apparently?) doesn’t like to pan over gifs as an input stream but is happy to do that with PNGs, and scaling it up gets you an image size better suited to modern screens. We’re admittedly scaling up and then compressing something that eventually gets upscaled again, which looks like it should be a waste of effort. I’ve tested it, though, and on this machine at least movie upscaling comes out a lot mushier than static image upscaling, so this approach is quite a bit crisper.

In any case, then you run “file” on that resulting image to see how big it is:

$> file ./mario-1-1.png
./mario-1-1.png: PNG image data, 10152 x 672, 4-bit colormap, non-interlaced

Do a bit of loose math to figure out your crop width – 16/9 * 672, that is, the aspect ratio of your monitor times the height of the image – and subtract it from the image’s width to get the number you’ll need next. In my case, rounding the crop width up to 1200, that’s 10152 - 1200 = 8952.
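Spelled out in shell, using the dimensions “file” just reported (the 16:9 ratio is an assumption about your monitor):

```shell
image_width=10152
image_height=672
# The crop window should match the monitor's aspect ratio: 16/9 of the
# height works out to 1194, which we round up to a tidy 1200.
natural_width=$(( 16 * image_height / 9 ))
crop_width=1200
# Panning one pixel per frame, the frame count is whatever width is left.
frames=$(( image_width - crop_width ))
echo "crop width ${crop_width}, ${frames} frames"
```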

That’s the number of frames you’re going to tell FFMpeg to pan across, like so:

$> ffmpeg -loop 1 -framerate 5 -i mario-1-1.png -vf crop=1200:672:n:0 -frames:v 8952 -pix_fmt yuv420p mario-1-1.mp4

Now, order of operations and operation context both matter in FFMpeg usage, which adds a degree of complexity to figuring out wtf you’re doing with it, but walking through that command:

The “-loop” option is specific to the image-processing part of ffmpeg, and in turn specific to some image-processing formats, so “-loop 1” might or might not error out saying “unrecognized option”, depending on where you put it in the command line and which image types you’re choosing to process, which is not super helpful. In this case, it works for .png input files, and it means “go through this set of input images once”. We’ll get back to “-framerate” in a moment.

“-i” is the input – the PNG of the Mario level we made earlier. The rest of this command is where the proverbial action is.

“-vf” means “create a filtergraph”, which is FFMpeg-ese for “transform the set of input images you’ve decoded in the following way”. “The following” can get pretty crazy, as you might imagine, but fortunately for us this will be reasonably simple in intent, despite the somewhat daunting syntax.

In this case, it means “crop out a sub-image from the given input image, of width 1200 and height 672, starting at horizontal offset n and vertical offset 0”. The n is implicitly provided by the frames part, as we iterate over the frames from zero to the value of “-frames:v”.

The “-pix_fmt yuv420p” part – “pixel format”, is what that means – I don’t really understand, beyond the fact that FFMpeg can encode videos in way more formats than browsers can easily decode, and its default idea of “best” doesn’t work everywhere. This incantation seems to fix that, which isn’t particularly satisfying but is definitely part of the whole fumbling-towards-competence thing I mentioned.

In any case, the “-framerate 5” part is the interesting bit. That’s there because about nine thousand frames – 8952 specifically – divided by the number of seconds in a 30 minute meeting is very close to five. Five frames per second is really slow, so the resulting output video is, as predicted by our basic arithmetic, a lazy 29 minutes and 50 seconds long:
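The arithmetic behind that, sketched out:

```shell
frames=8952
fps=5
seconds=$(( frames / fps ))     # 1790 seconds of video...
minutes=$(( seconds / 60 ))     # ...which is 29 minutes...
leftover=$(( seconds % 60 ))    # ...and 50 seconds.
echo "${minutes}m${leftover}s at ${fps}fps"
# At FFmpeg's default 25fps the same pan would run 8952 / 25 = 358
# seconds -- five minutes and change.
```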

… and that’s the story of how you make a videoconference background that scrolls slowly through a Mario level over the course of half an hour.

A few notes:

  • If you leave out the framerate option and just want to see it scroll by at a default 25 frames per second, the movie is five minutes and change, which is amusingly a few seconds longer than the best speedruns of the entire game.
  • That crop=1200:672:n:0 option elides a lot of possible complexity; there’s an entire mathematical-expression interpreter under the hood of crop and all the other FFmpeg filters, so if you want a 1080p movie panning diagonally across some of the many classic and modern works of art that are available now from any number of places, you can roll your own with relative ease.
  • The temptation to edit these to say something like “Thank you, Mario! But Peach went to another meeting.” is strong; if I get around to that, the fonts are here or maybe here.
  • I really need to get out of the house more. I guess we all do?

Update: A friend points me at FFMprovisr:

“FFmpeg is a powerful tool for manipulating audiovisual files. Unfortunately, it also has a steep learning curve, especially for users unfamiliar with a command line interface. This app helps users through the command generation process so that more people can reap the benefits of FFmpeg.”

Thank you, Sumana!

March 6, 2020

Brace For Impact

I don’t spend a lot of time in here patting myself on the back, but today you can indulge me.

In its last few weeks the old server was a ghost town, and that felt like a victory. From a few days after we’d switched the new one on until Monday, I could count the number of human users on any of our major channels on one hand. By the end, apart from one last hurrah the hour before shutdown, there was nobody there but bots talking to other bots. Everyone – the company, the community, everyone – had already voted with their feet.

About three weeks ago, after spending most of a month shaking out some bugs and getting comfortable in our new space, we turned on federation, connecting Mozilla to the rest of the Matrix ecosystem. Last Monday we decommissioned the old server for good, closing the book on a 22-year-long chapter of Mozilla’s history as we started a new one in our new home on Matrix.

I was given this job early last year but the post that earned it, I’m guessing, was from late 2018:

I’ve mentioned before that I think it’s a mistake to think of federation as a feature of distributed systems, rather than as consequence of computational scarcity. But more importantly, I believe that federated infrastructure – that is, a focus on distributed and resilient services – is a poor substitute for an accountable infrastructure that prioritizes a distributed and healthy community. […] That’s the other part of federated systems we don’t talk about much – how much the burden of safety shifts to the individual.

Some inside baseball here, but if you’re wondering: that’s why I pushed back on the idea of federation from the beginning, for all the invective that earned me. That’s why I refused to include it as a requirement and held the line on that for the entire process: on classically-federated systems, distributed access and non-accountable administration mean that the burden of personal safety falls entirely on the individual. That’s not a unique artifact of federated systems, of course – Slack doesn’t think you should be permitted to protect yourself either, and they’re happy to wave vaguely in the direction of some hypothetical HR department and pretend that keeps their hands clean, as just one example of many – but it’s structurally true of old-school federated systems of all stripes. And bluntly, I refuse to let us end up in a place where asking somebody to participate in the Mozilla project is no different from asking them to walk home alone at night.

And yet here we are, opting into the Fediverse. It’s not because I’ve changed my mind.

One of the strongest selling points of Matrix is the combination of powerful moderation and safety tooling that hosting organizations can operate with robust tools for personal self-defense available in parallel. Critically, these aren’t half-assed tools that have been grafted on as an afterthought; they’re first-class features, robust enough that we can not only deploy them with confidence, but can reasonably be held accountable by our colleagues and community for their use. In short, we can now have safe, accountable infrastructure that complements, rather than comes at the cost of, individual user agency.

That’s not the best thing, though, and I’m here to tell you about my favorite Matrix feature that nobody knows about: Federated auto-updating blocklist sharing.

If you decide you trust somebody else’s decisions at some other organization – their judgment calls about who is and is not welcome there – those decisions can be immediately and automatically reflected in your own. When a site you trust drops the hammer on some bad actor, that ban can be adopted almost immediately by your site and your community as well. You don’t have to have ever seen that person, or had whatever got them banned hit you in the eyes. You don’t even need to know they exist. All you need to do is decide you trust that other site’s judgment, and magically someone who is persona non grata on their site is precisely that non grata on yours.

Another way to say that is: among people or communities who trust each other in these decisions, an act of self-defense becomes, seamlessly and invisibly, an act of collective defense. No more everyone needing to fight their own fights alone forever, no more getting isolated and picked off one at a time, weakest first; shields-up means shields-up for everyone. Effective, practical defensive solidarity; it’s the most important new idea I’ve seen in social software in years. Every federated system out there should build its own version, and it’s very clear to me, at least, that this is going to be table stakes of a federated future very soon.

So I feel pretty good about where we’ve ended up, and where we’re going.

In the long term, I see that as the future of Mozilla’s responsibility to the Web; not here merely to protect the Web, not merely to defend your freedom to participate in the Web, but to mount a positive defense of people’s opportunities to participate. And on the other side of that coin, to build accountable tools, systems and communities that promise not only freedom from arbitrary harassment, but even freedom from the possibility of that harassment.

I’ve got a graph here that’s pointing up and to the right, and it’s got nothing to do with scraping fractions of pennies out of rageclicks and misery; just people making a choice to go somewhere better, safer and happier. Maybe, just maybe, we can salvage this whole internet thing. Maybe all is not yet lost, and the future is not yet written.

February 5, 2020


Filed under: arcade,documentation,interfaces,life,microfiction,toys,weird — mhoye @ 11:17 am

Karl Germain once said that “Magic is the only honest profession. A magician promises to deceive you and he does.”

This is sort of a review of a game, I guess. It’s called Superliminal.

“Every magic trick consists of three parts, or acts. The first part is called the pledge. The magician shows you something ordinary: a deck of cards, a bird, or a man. He shows you this object, and pledges to you its utter normality. Perhaps he asks you to inspect it to see that it is indeed real, unaltered, normal. But, of course, it probably isn’t. The second act is called the turn. The magician takes the ordinary something and makes it do something extraordinary. Now you’re looking for the secret. But you won’t find it, because of course you’re not really looking. You don’t really want to know. You want to be fooled. But you wouldn’t clap yet. Because making something disappear isn’t enough. You have to bring it back. That’s why every magic trick has a third act. The hardest part. The part we call the Prestige.”

— Cutter (Michael Caine), “The Prestige”

You’ve probably heard David Foster Wallace’s speech to the graduates of Kenyon College in 2005, the “This Is Water” speech.

It is about the real value of a real education, which has almost nothing to do with knowledge, and everything to do with simple awareness; awareness of what is so real and essential, so hidden in plain sight all around us, all the time, that we have to keep reminding ourselves over and over: “This is water, this is water.”

It’s possibly the greatest College Graduation Speech of all time, both in its mastery of the form and the surgical precision of its own self-serving subversion of that same form. “This is a standard requirement of US commencement speeches, the deployment of didactic little parable-ish stories”, right after the parable-ish story. “I am not the wise old fish”, followed by an explanation of the whole point of the fish story. Over and over again, throughout, we’re shown the same sleight of hand:

Tell your audiences that they’re too smart to want a certain thing and give it to them anyway. Remind everyone that they’re too hip for corny dad sermonizing and then double down on the corny dad sermonizing. This is a great way to write a commencement speech—not by avoiding platitudes, but by drawing an enchanted circle around yourself where the things we thought were platitudes can be revealed as dazzling truths. Where all of us can be consoled, if only for an instant, by the notion that the insight we lack has been here all along! Just hiding inside of our clichés.

I don’t think Harnette’s cynicism in that LitHub article, pointed at the pernicious consequences of Wallace’s “cult of sincerity”, is the whole story. She’s not wrong, but there’s more: if you’ve got the right eyes to see it, the outline of the Prestige is there, the empty space where the third act didn’t happen. The part where this long, drawn-out paean, begging for sincerity and authenticity and “simple awareness”, reveals itself for what it is: a cry for help from somebody whose inner monologue does not shut up or take its foot off the gas so much as a millimeter for anyone or any reason ever. A plea for a simplicity from somebody whose mind simply won’t, that nobody saw.

Because of course you’re not really looking. You don’t really want to know; you want to be fooled. I know that this stuff probably doesn’t sound fun and breezy or grandly inspirational the way a video game review is supposed to sound. But it’s just about capital-T Time to stop using this gag.

[ – Superliminal teaser trailer ]

I’ve played through Superliminal twice now, and I spent a lot of that time thinking about Wallace’s call for simple awareness as the game hammers on its tagline that perception is reality. I’ve got mixed feelings about it.

Superliminal opens in an obvious homage to both Portal and The Stanley Parable, guns on the mantle that never go off; in some ways it feels like the first Assassin’s Creed, an excellent tech demo that paved the way for the great AC2. It’s brilliant and frustrating, playing with the nature of constructed realities in ways that are sometimes trite, sometimes – the knife, the parking lot – unsettling and sometimes genuinely distressing. Like Portal it’s only a few hours long but they aren’t wasted hours, novel conceits and engaging mechanics flourishing through the iteration and conceptual degradation of the dreamscapes you traverse.

But I can’t shake the feeling like some part of the game is missing, that there’s a third act we haven’t been allowed to see.

As with Wallace’s Kenyon speech it’s the final conceit – in Superliminal, the psychologists’ summary – that ties the game together in a way that feels thematically complete, grandly inspirational and woefully unearned; where all of us can be empowered, if only for an instant, by the notion that the insight we lack has been here all along, if – perception being reality – we could only see it. And just like the Kenyon graduation speech, I can’t shake the sense that the same sleight-of-hand has happened: that what we’re not seeing, what we’re choosing not to see, is that this sincere inspirational anecdote isn’t really something meant to inspire us but something the author desperately wishes they could believe themselves, a rousing sermon from a preacher desperate to escape their own apostasy.

And how hard could that be, really? All you’ve got to do, after all, is wake up.

It’s a pretty good game. You should play it. I hope whoever made it gets the help they need.

December 26, 2019

Intrasective Subversions

I often wonder where we’d be if Google had spent their don’t-be-evil honeymoon actually interviewing people for some sort of moral or ethical framework instead of teaching a generation of new hires that the important questions are all about how many piano tuners play ping pong on the moon.

You might have seen the NYTimes article on hypertargeted product placement, one of those new magical ideas that look totally reasonable in an industry where CPU cycles are cheap and principles are expensive.

I just wanted to make sure we all understood that one extremely intentional byproduct of that will be to breathe new life into the old document-canary trick – tailoring sensitive text with unique punctuation or phrasing in particularly quotable passages to identify leakers – purpose-built as a way to precision-target torrent seeders or anyone else who shares media. “We only showed this combination of in-product signal to this specific person, therefore they’re the guilty party” is where this is going, and that’s not an accident.

The remedy, of course, is going to be cooperation. Robust visual diffs, scene hashes and smart muting (be sure to refer to They Live for placeholder inspiration) will be more than enough to fuzz out discoverability for even a moderately-sized community. As it frequently is, the secret ingredient is smart people working together.

In any case, I’m sure that all right-thinking people can agree that ads are the right place to put graffiti. So I’m looking forward to all the shows that are turned into hijacked art-project torrents the moment they’re released, and seeing

[image: They Live-style slogan]

in the background of the pirated romcoms of 2021.

December 17, 2019

Poor Craft

Filed under: future,interfaces,linux,microfiction,toys,want,weird,work — mhoye @ 1:53 pm


“It’s a poor craftsman that blames his tools” is an old line, and it took me a long time to understand it.


A friend of mine sent me this talk. And while I want to like it a lot, it reminded me uncomfortably of Dabblers and Blowhards, the canonical rebuttal to “Hackers And Painters”, an early entry in Paul Graham’s long-running oeuvre elaborating on how special and magical it is to be just like Paul Graham.

It’s surprisingly hard to pin Paul Graham down on the nature of the special bond he thinks hobbyist programmers and painters share. In his essays he tends to flit from metaphor to metaphor like a butterfly, never pausing long enough for a suspicious reader to catch up with his chloroform jar. […] You can safely replace “painters” in this response with “poets”, “composers”, “pastry chefs” or “auto mechanics” with no loss of meaning or insight. There’s nothing whatsoever distinctive about the analogy to painters, except that Paul Graham likes to paint, and would like to feel that his programming allows him a similar level of self-expression.

There’s an old story about Soundcloud (possibly Spotify? DDG tends to the literal these days and Google is just all chaff) that’s possibly apocryphal but too good not to turn into a metaphor, about how for a long time their offices were pindrop-quiet. About how during that rapid-growth phase they hired people in part for their love of and passion for music, and how that looked absolutely reasonable until they realized their people didn’t love music: they loved their music. Your music, obviously, sucks. So everyone there wears fantastic headphones, nobody actually talks to each other, and all you can hear in their office is keyboard noise and the HVAC.

I frequently wonder if the people who love Lisp or Smalltalk fall into that same broad category: that they don’t “love Lisp” so much as they love their Lisp, the Howl’s Moving Memory Palaces they’ve built for themselves, tailored to the precise cut of their own idiosyncrasies. That if you really dig in and ask them you’ll find that other people’s Lisp, obviously, sucks.

It seems like an easy trap to fall into, but I suspect it means we collectively spend a lot of time genuflecting to this magical yesteryear and its imagined perfect crystal tools when the fact of it is that we spend almost all of our time in other people’s code, not our own.

I feel similarly about Joel Spolsky’s notion of “leaky abstractions”; maybe those abstractions aren’t “leaking” or “failing”. Instead it’s that you’ve found the point where your goals, priorities or assumptions have diverged from those of the abstraction’s author, and that’s ultimately not a problem with the abstraction.

The more time I spend in front of a keyboard, the more I think my core skills here aren’t any more complicated than humility, empathy and patience; that if you understand its authors the code will reveal itself. I’ve mentioned before that programming is, a lot more than most people realize, inherently political. You’re making decisions about how to allocate scarce resources in ways that affect other people; there’s no other word for it. So when you’re building on other people’s code, you’re inevitably building on their assumptions and values as well, and if that’s true – that you spend most of your time as a programmer trying to work with other people’s values and decisions – then it’s guaranteed that it’s a lot more important to think about how to best spend that time, or optimize those tools and interactions, rather than championing tools that amount to applied reminiscence, a nostalgia with a grammar. In any other context we’d have a term for that, we’d recognize it for what it is, and it’s unflattering.

What does a programming language optimized for ease-of-collaboration or even ease-of-empathy look like, I wonder? What does that development environment do, and how many of our assumptions about best collaborative practices are just accidental emergent properties of the shortcomings of our tools? Maybe compiler pragmas up front as expressions of preferred optimizations, and therefore priorities? Culture-of-origin tags, demarking the shared assumptions of developers? “Reds and yellows are celebratory colors here, recompile with western sensibilities to swap your alert and default palettes with muted blues/greens.” Read, Eval, Print looping feels for all its usefulness like a huge missed opportunity, an evolutionary dead end that was just the best model we could come up with forty years ago, and maybe we’ve accidentally spent a lot of time looking backwards without realizing it.

Long Term Support

Filed under: a/b,digital,future,interfaces,linux,toys,want,work — mhoye @ 11:34 am

I bought a cordless drill from DeWalt a few years before they standardized on their current 20 volt form factor. Today the drill part of the drill is still in good shape, but its batteries won’t hold a charge – don’t store your batteries in the shed over the winter, folks, that’s a rookie mistake – and I can’t replace them; they just don’t make them anymore. Nobody does.

I was thoroughly prepared to be annoyed about this, but it turns out DeWalt makes an adapter that slots right into my old drill and lets me use their new standard batteries. I’ll likely get another decade out of it as a result, and if the drill gives up the ghost in the meantime I’ll be able to use those batteries in its replacement.

Does any computer manufacturer out there anywhere care about longevity like that, today? The Cadillac answer to that used to be “Thinkpad”, but those days are long gone and as far as I can tell there’s nothing else in this space. I don’t care about thin or light at all. I’m happy to carry a few extra pounds; these are my tools, and if the price of durable, maintainable and resilient tools is a bit of extra weight in the bag I’ll pay it and smile. I just want to be able to fix it; I want something I can strip all the way down to standard parts with a standard screwdriver and replace piecemeal when it needs piecemeal replacing. Does anyone make anything like this anymore, a tradesman’s machine? The MNTRE people are giving it a shot. Is anyone else, anywhere?

October 23, 2019


Every now and then, my brain clamps on to obscure trivia like this. It takes so much time. “Because the paper beds of banknote presses in 1860 were 14.5 inches by 16.5 inches, a movie industry cartel set a standard for theater projectors based on silent film, and two kilobytes is two kilobytes” is as far back as I have been able to push this, but let’s get started.

In August of 1861, by order of the U.S. Congress and in order to fund the Union’s ongoing war efforts against the treasonous secessionists of the South, the American Banknote Company started printing what were then called “Demand Notes”, but soon widely known as “greenbacks”.

It’s difficult to research anything about the early days of American currency on Wikipedia these days; that space has been thoroughly colonized by the goldbug/sovcit cranks. You wouldn’t notice it from a casual examination, which is of course the plan; that festering rathole is tucked away down in the references, where articles will fold a seemingly innocuous line somewhere into the middle, tagged with an exceptionally dodgy reference. You’ll learn that “the shift from demand notes to treasury notes meant they could no longer be redeemed for gold coins[1]” – which is strictly true! – but if you chase down that footnote you wind up somewhere with a name like “Lincoln’s Treason – Fiat Currency, Maritime Law And The U.S. Treasury’s Conspiracy To Enslave America”, which I promise I am only barely exaggerating about.

It’s not entirely clear if this is a deliberate exercise in coordinated crank-wank or just years of accumulated flotsam from the usual debate-club dead-enders hanging off the starboard side of the Overton window. There’s plenty of idiots out there that aren’t quite useful enough to work the k-cups at the Heritage Institute, and I guess they’re doing something with their time, but the whole thing has a certain sinister elegance to it that the Randroid crowd can’t usually muster. I’ve got my doubts either way, and I honestly don’t care to dive deep enough into that sewer to settle them. Either way, it’s always good to be reminded that the goldbug/randroid/sovcit crank spectrum shares a common ideological klancestor.

Mercifully that is not what I’m here for. I am here because these first Demand Notes, and the Treasury Notes that came afterwards, were – on average, these were imprecise times – 7-3/8” wide by 3-1/4” tall.

I haven’t been able to precisely answer the “why” of that – I believe, but do not know, that this is because of the specific dimensions of the presses they were printed on. Despite my best efforts I haven’t been able to find the exact model and specifications of that device. I’ve asked the U.S. Congressional Research Service for some help with this, but between them and the Bureau of Engraving and Printing, we haven’t been able to pin it down. From my last correspondence with them:

Unfortunately, we don’t have any materials in the collection identifying the specific presses and their dimension for early currency production. The best we can say is that the presses used to print currency in the 1860s varied in size and model. These presses went by a number of names, including hand presses, flat-bed presses, and spider presses. They also were capable of printing sheets of paper in various sizes. However, the standard size for printing securities and banknotes appears to have been 14.5 inches by 16.5 inches. We hope this bit of information helps.

… which is unfortunate, but it does give us some clarity. A 16.5″ by 14.5″ printing sheet lets you print eight 7-3/8” by 3-1/4″ notes to size, with a fraction of an inch on either side for trimming.
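The layout arithmetic checks out; a quick sanity check with the dimensions above:

```python
from fractions import Fraction

# Sheet and note dimensions from the Bureau's reply, in inches.
sheet_w, sheet_h = Fraction(33, 2), Fraction(29, 2)      # 16.5" x 14.5"
note_w, note_h = 7 + Fraction(3, 8), 3 + Fraction(1, 4)  # 7-3/8" x 3-1/4"

# Two notes across, four down: eight per sheet.
across = sheet_w // note_w   # floor(16.5 / 7.375) = 2
down = sheet_h // note_h     # floor(14.5 / 3.25)  = 4

# Trim margins left over on each axis.
margin_w = sheet_w - across * note_w   # 16.5 - 14.75 = 1.75"
margin_h = sheet_h - down * note_h     # 14.5 - 13.0  = 1.5"
```

Eight notes, with 1.75″ and 1.5″ of total trim on the two axes – that “fraction of an inch on either side”.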

The answer to that question starts to matter about twenty years later, on the heels of the 1880 American Census. Mandated to be performed once a decade, the census showed that the United States population had grown some 30% since the previous count, and even with enormous effort the final tabulations weren’t finished until 1888, an unacceptable delay.

One of the 1880 Census’ early employees was a man named Herman Hollerith, a recent graduate of the Columbia School of Mines who’d been invited to join the Census efforts early on by one of his professors. The Census was one of the most important social and professional networking exercises of the day, and Hollerith correctly jumped at the opportunity:

The absence of a permanent institution meant the network of individuals with professional census expertise scattered widely after each census. The invitation offered a young graduate the possibility to get acquainted with various members of the network, which was soon to be dispersed across the country.

As an aside, that invitation letter is one of the most important early documents in the history of computing for lots of reasons, including this one:

[image: the third-generation Hollerith Tabulator]

The machine in that picture was the third generation of the “Hollerith Tabulator”, notable for the replaceable plugboard that made it reprogrammable. I need to find some time to dig further into this, but that might be the first multipurpose, if not “general purpose” as we’ve come to understand it, electronic computation device. This is another piece of formative tech that emerged from this era, one that led directly to the removable panels (and ultimately the general componentization) of later computing hardware.

Well before the model 3, though, was the original 1890 Hollerith Census Tabulator that relied on punchcards much like this one.

Hollerith took the inspiration for those punchcards from the “punch photographs” used by some railways at the time to make sure that tickets belonged to the passengers holding them. You can see a description of one patent for them here dating to 1888, but Hollerith relates the story from a few years earlier:

One thing that helped me along in this matter was that some time before I was traveling in the west and I had a ticket with what I think was called a punch photograph. When the ticket was first presented to a conductor he punched out a description of the individual, as light hair, dark eyes, large nose etc. So you see I only made a punch photograph of each person.

Tangentially: this is the birth of computational biometrics. And as you can see from this extract from The Railway News (Vol. XLVIII, No. 1234, published Aug. 27, 1887), people have been concerned about harassment because of unfair assessment by the authorities from day one:

[image: extract from The Railway News]
After experimenting with a variety of card sizes Hollerith decided that to save on production costs he’d use the same boxes the U.S. Treasury was using for the currency of the day: the Demand Note. Punch cards stayed about that shape, punched with devices that looked a lot like this for about 20 years until Thomas Watson Sr. (IBM’s first CEO, from whom the Watson computer gets its name) asked Clair D. Lake and J. Royden Peirce to develop a new, higher data-density card format.

Tragically, this is the part where I need to admit an unfounded assertion. I’ve got data, the pictures line up and numbers work, but I don’t have a citation. I wish I did.

Take a look at “Type Design For Typewriters: Olivetti”, written by María Ramos Silva. (You can see a historical talk from her on the history of typefaces here that’s also pretty great.)

Specifically, take a look on page 46 at the Mikron Piccolo and the Mikron Condensed. The fonts don’t precisely line up – see the different “4”, for example, when comparing it to the typesetting of IBM’s cards – but the size and spacing do. In short: a line of 80 characters, each separated by a space, is the largest round number of digits that the tightest typesetting of the day – a 20-point condensed font – would allow to fit on a single 7-3/8” wide card.

I can’t find a direct citation for this; that’s the only disconnect here. But the spacing all fits, the numbers all work, and I’d bet real money on this: that when Watson gave Lake the task of coming up with a higher information-density punch card, Lake looked around at what they already had on the shelf – a typewriter with the highest-available character density of the day, on cards they could manage with existing and widely-available tooling – and put it all together in 1928. The fact that a square hole – a radical departure from the standard circular punch – was a patentable innovation at the time was just icing on the cake.

The result of that work is something you’ll certainly recognize, the standard IBM punchcard, though of course there’s a lot more to it than that. Witness the full glory of the Card Stock Acceptance Procedure, the protocol for measuring folding endurance, air resistance, smoothness and evaluating the ash content, moisture content and pH of the paper, among many other things.

At one point sales of punchcards and related tooling constituted a completely bonkers 30% of IBM’s annual profit margin, so you can understand that IBM had a lot invested in getting that consistently, precisely correct.

At around this time John Logie Baird invented the first “mechanical television”; like punchcards, the first television cameras were hand-cranked devices that relied on something called a Nipkow disk, a mechanical tool for separating images into sequential scan lines, a technique that survives in electronic form to this day. By linearizing the image signal Baird could transmit the image’s brightness levels via a simple radio signal and in 1926 he did just that, replaying that mechanically encoded signal through a CRT and becoming the inventor of broadcast television. He would go on to pioneer colour television – originally called Telechrome, a fantastic name I’m sad we didn’t keep – but that’s a different story.

Baird’s original “Televisor” showed its images on a 7:3 aspect ratio vertically oriented cathode ray tube, intended to fit the head and shoulders of a standing person, but that wouldn’t last.

For years previously, silent films had been shot on standard 35MM stock, but the addition of a physical audio track to 35MM film stock didn’t leave enough space for the visual area. So – after years of every movie studio having its own preferred aspect ratio, which required its own cameras, projectors, film stock and tools (and and and) – in 1929 the movie industry agreed to settle on the Society of Motion Picture And Television Engineers’ proposed standard of 0.8 inches by 0.6 inches, what became known as the Academy Ratio, or as we better know it today, 4:3.

Between 1932 and 1952, when widescreen for cinemas came into vogue as a differentiator from standard television, just about all the movies made in the world were shot in that aspect ratio, and just about every cathode ray tube made came in that shape, or one that could display it reliably. In 1953 studios started switching to a wider “Cinemascope”, to aggressively differentiate themselves from television, but by then television already had a large, thoroughly entrenched install base, and 4:3 remained the standard for in-home displays – and CRT manufacturers – until widescreen digital television came to market in the 1990s.

As computers moved from teleprinters – like, physical, ink-on-paper line printers – to screens, one byproduct of that standardization was that if you wanted to build a terminal, you either used that aspect ratio or you started making your own custom CRTs, a huge barrier to market entry. You can do that if you’re IBM, and you’re deeply reluctant to if you’re anyone else. So when DEC introduced their VT52 terminal, a successor to the VT50 and the earlier VT05, that’s what they shipped, and with only 1Kb of display RAM (one kilobyte!) it displayed only twelve rows of widely-spaced text. Math is unforgiving, and 80×12=960; even one more row breaks the bank. The VT52 and its successor the VT100, though, doubled that capacity, giving users the opulent luxury of two entire kilobytes of display memory, laid out with a font that fit nicely on that 4:3 screen. The VT100 hit the market in August of 1978, and DEC sold more than six million of them over the product’s lifespan.

You even got an extra whole line to spare! Thanks to the magic of basic arithmetic 80×25 just sneaks under that opulent 2k limit with 48 bytes to spare.
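The byte-math above is worth spelling out, since the whole argument rests on it; with one byte of display memory per character cell:

```python
# Display memory budgets, one byte per character cell.
KB = 1024

# VT52-class hardware: 1 KB of display RAM.
assert 80 * 12 == 960       # twelve 80-column rows fit...
assert 80 * 13 > 1 * KB     # ...but a thirteenth row breaks the bank.

# VT100-class hardware: 2 KB of display RAM.
assert 80 * 25 == 2000          # the now-canonical 80x25 grid fits,
assert 2 * KB - 80 * 25 == 48   # with 48 bytes to spare.
```

Nothing mystical, just the largest comfortable grid that fits a power-of-two memory budget at 80 columns.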

This is another point where direct connections get blurry, because 1976 to 1984 was an incredibly fertile time in the history of computing. After a brief period where competing terminal standards effectively locked software to the hardware that it shipped on, the VT100 – being the first terminal to market fully supporting the recently codified ANSI standard control and escape sequences – quickly became the de-facto standard, and soon afterwards the de-jure, codified in ANSI-X3.64/ECMA-48. CP/M, soon to be replaced with PC-DOS and then MS-DOS, came from this era, with ANSI.SYS being the way DOS programs talked to the display from DOS 2.0 through to the beginning of Windows. Then in 1983 the Apple IIe was introduced, the first Apple computer to natively support an 80×24 text display, doubling the 40×24 default of their earlier hardware. The original XTerm, first released in 1984, was also created explicitly for VT100 compatibility.
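Those ANSI-X3.64/ECMA-48 sequences are still what every modern terminal emulator speaks. The two control functions below – SGR for text attributes, CUP for cursor position – are from the standard; the little helper functions are just an illustrative composition:

```python
ESC = "\x1b"      # the escape character
CSI = ESC + "["   # Control Sequence Introducer, per ECMA-48

def sgr(*params):
    """Build a Select Graphic Rendition sequence, e.g. bold red text."""
    return CSI + ";".join(str(p) for p in params) + "m"

def cup(row, col):
    """Cursor Position: move to (row, col), 1-indexed."""
    return CSI + f"{row};{col}H"

# Move to row 1, column 1, print bold (1) red (31) text, then reset (0).
banner = cup(1, 1) + sgr(1, 31) + "VT100 lives" + sgr(0)
print(repr(banner))  # the raw bytes any ANSI-compatible terminal honors
```

Print `banner` in any terminal made in the last forty years and it behaves the same way, which is the whole point of the standard.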

Fascinatingly, early versions of the ECMA-48 standard note that it isn’t solely meant for displays, specifying that “examples of devices conforming to this concept are: an alpha-numeric display device, a printer or a microfilm output device.”

A microfilm output device! This exercise dates to a time when microfilm output was a design constraint! I did not anticipate that cold-war spy-novel flavor while I was dredging this out, but it’s there and it’s magnificent.

It also dates to a time when the market was shifting quickly from mainframes and minicomputers to microcomputers – or, as we call them today, “computers” – reasonably affordable desktop machines that individual humans could buy and that companies might own a large number of, meaning this is also where the spectre of backcompat starts haunting the industry – this moment in a talk from the Microsoft developers working on the Windows Subsystem for Linux gives you a sense of the scale of that burden even today. In fact, it wasn’t until the fifth edition of ECMA-48 was published in 1991, more than a decade after the VT100 hit the market, that the formal specification for terminal behavior even admitted the possibility (Appendix F) that a terminal could be resized at all, meaning that the existing defaults were effectively graven in stone during what was otherwise one of the most fertile and formative periods in the history of computing.

As a personal aside, my two great frustrations with doing any kind of historical CS research remain the incalculable damage that academic paywalls have done to the historical record, and the relentless insistence this industry has on justifying rather than interrogating the status quo. This is how you end up on Stack Overflow spouting unresearched nonsense about how “4 pixel wide fonts are untidy-looking”. I’ve said this before, and I’ll say it again: whatever we think about ourselves as programmers and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize, and by telling and retelling these unsourced, inaccurate just-so stories without ever doing the work of finding the real truth, we’re betraying ourselves, our history and our future. But it’s pretty goddamned difficult to convince people that they should actually look things up instead of making up nonsense when actually looking things up, even for a seemingly simple question like this one, can cost somebody on the outside edge of an academic paywall hundreds or thousands of dollars.

So, as is now the usual in these things:

  • There are technical reasons,
  • There are social reasons,
  • It’s complicated, and
  • Open access publication or GTFO.

But if you ever wondered why just about every terminal in the world is eighty characters wide and twenty-five characters tall, there you go.

October 19, 2019


Filed under: awesome,business,toys,weird — mhoye @ 7:43 am

I made a thing and somebody said “I want that on a shirt”. If that was you, here’s your chance.

It’s a great game, incidentally.
