blarg?

January 20, 2019

Super Mario Telemachy

Filed under: a/b,arcade,awesome,beauty,digital,documentation,future,interfaces — mhoye @ 10:29 pm

One thing I love about the Hyrule of Breath of the Wild is how totally unbothered it is by our hero’s presence in it. Cliffs you can’t climb, monsters you have no real shot at beating, characters wandering about who aren’t there as side-quest farmers or undifferentiated foils for your inevitable progress. Even the weather will inconvenience, injure or outright murder you if you walk out into it dressed wrong, and in ways large and small this mattered. I’d seen lightning strikes in the game before – and getting one-shotted by the rain after I missed the memo about not wearing metal out in a storm was startling enough, lemme tell you – but the first time I saw one hit water, saw a handful of stunned fish floating to the surface, that put my jaw on the floor. The rain that made this hill too slippery to climb gave that world the sense of being a world, one that for all your power and fate and destiny just didn’t revolve around you.

Super Mario Odyssey is the precise, exact opposite of that, and at first I really didn’t get it. I couldn’t get into it.

It’s surprisingly hard to enjoy an entire world carefully and forgivingly tuned to precisely fit your exact capacities at all times, to the point that if you’ve done much platforming in your life there’s no real challenge to navigating Odyssey, much less risk. A “death” that costs you about six of the abundant, constantly replenished gold coins that litter the landscape hardly even counts as a setback – you’re likely to restart next to eight or ten of them! – so my first impressions were that it amounted to a hoarder’s brightly coloured to-do list. I decided to grind through it to see the New Donk City I’d been studiously avoiding spoilers for, hearing only that it was the best and weirdest part of the game, but it was definitely a grind.

But after watching my kids play it, and helping them through the parts they’ve been hung up on, I realized something: Odyssey is a bad single-player game because it’s not a single-player game, at least not a single adult player. It’s a children’s book, a children’s experience; it’s Mario Disneyland. And once I discovered the game I was actually supposed to be playing, the whole experience changed.

With fresh eyes and unskilled hands involved, this sprawling, tedious fan-service buffet becomes an entirely different thing, a chance to show my kids around a game world I grew up with. Even the 2D sidescroller diversions, eye-rollingly retro on their own, become a conversation. Most amazingly, to me at least, the two-player option – one player driving Mario, the other driving his ghost hat companion Cappy – stops looking like a silly gimmick and starts looking like a surprisingly good execution of a difficult idea I’ve wanted for a long time. Odyssey is the only game I’ve ever seen that has cooperative, same-couch multiplayer that’s accessible to people of wildly different skill levels. Another way to say that is, it’s a game I can play with my kids; not versus, not taking turns, but “with” for real, and it’s kind of great.

So, playing Odyssey alone, by myself? Sure: unchallenging, rote and, if we’re honest enough to admit it, a little sad. But playing it with my kids, playing along together? Definitely. Not only good but good fun, maybe even a meaningful experience. Sign me up.

January 8, 2019

Feature Request

Filed under: digital,documentation,fail,interfaces,linux,toys,vendetta,want — mhoye @ 9:50 am

If I’m already in a Linux, ideally a Debian-esque Linux, is there a way for me to say “turn this new external hard drive into a bootable Linux that’s functionally identical to this current machine”? One that doesn’t involve any of dd, downloading an ISO or rebooting? It’s hard to believe this is as difficult as it seems, or that this isn’t a standard tool yet, but if it is I sure can’t find it.

Every installer I’ve seen since the first time I tried the once-magical Knoppix has let you boot into a workable Linux on its own and install that Linux to the hard drive if you like, but I can’t find a standalone tool that does the same from a running system.

What I’m after is a tool (I briefly wrote “ideally graphical”, but yeah. Let’s be real here.) that you point at a hard drive, that:

  • Formats this new hard drive in some rough approximation of what you’ve already got and makes it bootable (grub-whatever?),
  • Installs the same packages onto that drive as are on the host system, and
  • Optionally copies over the account information required – /home/*, passwords, whatever else is in /etc or /opt – skipping (or being smart about?) stuff like the hostname or iptables, maybe.

…and ends with a hard drive I can plug into another system, boot and log into as comfortably-preconfigured me.

debootstrap almost gets part of the way there, and you can sort of convince it or multistrap to do the job if you scrape out your current config, pour it into a config file and rsync over a bunch of other stuff. But then I’m back in roll-it-yourself-land where I started.
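
For the record, roll-it-yourself-land looks something like this – a sketch only, in bash, with an illustrative device name (/dev/sdX), a single ext4 root partition and BIOS boot assumed, and every EFI and fstab subtlety glossed over:

    # partition and format the new drive, then mount it
    sudo parted /dev/sdX -- mklabel msdos mkpart primary ext4 1MiB 100%
    sudo mkfs.ext4 /dev/sdX1
    sudo mount /dev/sdX1 /mnt/clone

    # copy the live system, skipping virtual filesystems and the target itself
    sudo rsync -aAXH --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*"} / /mnt/clone

    # fix /mnt/clone/etc/fstab to point at the new partition's UUID (see blkid),
    # then bind-mount enough of the host to install a bootloader from a chroot
    for fs in proc sys dev; do sudo mount --bind /$fs /mnt/clone/$fs; done
    sudo chroot /mnt/clone grub-install /dev/sdX
    sudo chroot /mnt/clone update-grub

Which more or less works, but every line of it is exactly the kind of sharp edge a real tool would sand down for you.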

Ideally this apparently-hypothetical magic clone tool would be able to do this with minimal network traffic, too – it’s likely I’ve already got many or most of those packages cached, no? And alternatively, it’d also be nice to be able to keep my setup largely intact while migrating across architectures.

I go looking for this every few years, even though it puts me briefly back on the “why would you ever do it like that? You should switch distros!” You Asked A Linux Question On The Internet Treadmill. But I haven’t found a decent answer yet beyond rolling my own.

Welp.

(Comments are off permanently. You’re welcome to mail me, though?)

December 16, 2018

Control Keys

Filed under: a/b,digital,documentation,fail,future,hate,interfaces,linux — mhoye @ 10:36 pm

I spend a lot of time thinking about keyboards, and I wish more people did.

I’ve got more than my share of computational idiosyncrasies, but the first thing I do with any computer I’m going to be using for any length of time is remap the capslock key to control (or command, if I find myself in the increasingly “what if Tatooine, but Candy Crush” OSX-land). I’ve made a number of arguments about why I do this over the years, but I think they’re mostly post-facto justifications. The real reason, if there is such a thing, is likely that the first computer I ever put my hands on was an Apple ][c. On the ][c keyboard “control” is left of “A” and capslock is off in a corner. I suspect that whatever arguments I’ve made since, my muscle memory has been comfortably stuck in that groove ever since.

It’s more than just bizarre how difficult it is to reassign any key to anything these days; it’s saddening, especially given how awful the standard keyboard layout is in almost every respect. Particularly if you want to carry your idiosyncrasies across operating systems – and if I’m anything about anything these days, it’s particular.

I’m not even mad about the letter layout – you do you, Dvorak weirdos – but that we give precious keycap real estate to antiquated arcana and pedestrian novelty at the expense of dozens of everyday interactions, and as far as I can tell we mostly don’t even notice it.

  • This laptop has dedicated keys to let me select, from levels zero to three, how brightly my keyboard is backlit. If I haven’t remapped capslock to control I need to twist my wrist awkwardly to cut, copy or paste anything.
  • I’ve got two alt keys, but undo and redo are chords each half a keyboard away from each other. Redo might not exist, or the key sequence could be just about anything depending on the program; sometimes all you can do is either undo, or undo the undo?
  • On typical PC keyboards Pause/Break and Scroll Lock, vestigial remnants of a serial protocol of ages past, both have premium real estate all to themselves. “Find” is a chord. Search-backwards may or may not be a thing that exists depending on the program, but getting there is an exercise. Scroll lock even gets a capslock-like LED some of the time; it’s that important!
  • The PrtScn key that once upon a time would dump the contents of your terminal to a line printer – and who doesn’t want that? – is now given over to screencaps, which… I guess? I’m kind of sympathetic to this one, I have to admit. Social network interoperability is such a laughable catastrophe that sharing pictures of text is basically the only thing that works, which should be one of this industry’s most shameful embarrassments but here we are. I guess this can stay.
  • My preferred tenkeyless keyboards have thankfully shed the NumLock key I can’t remember ever hitting on purpose, but it’s still a stock feature of OEM keyboards, and it might be the most baffling of the bunch. If I toggle NumLock I can… have the keys immediately to the left of the number pad, again? Sure, why not.
  • “Ins” –  insert – is a dedicated key for the “what if delete, but backwards and slowly” option that only exists at all because mainframes are the worst. Are there people who toggle this on purpose? Has anyone asked them if they’re OK? I can’t select a word, sentence or paragraph with a keystroke; control-A lets me either select everything or nothing.
  • Finally, SysRq – short for “System Request” – gets its own button too, and it almost always does nothing because the one thing it does when it works – “press here to talk directly to the hardware” – is a security disaster only slightly obscured by a usability disaster.

It’s sad and embarrassing how awkwardly inconsiderate and anti-human these things are, and the fact that a proper fix – a human-hand-shaped keyboard whose outputs you get to choose for yourself – costs about as much as a passable computer is appalling.

Anyway, here’s how you remap capslock to control on various popular OSes, in roughly increasing order of lunacy:

  • OSX: Open keyboard settings and click a menu.
  • Linux: setxkbmap -option something, I think. Maybe xmodmap? Def. something in an .*rc file somewhere though. Or maybe .profile? Does gnome-tweak-tool still work, or is it called ubuntu-tweak-tool or just tweak-tool now? This seriously used to be a checkbox, not some 22nd-century CS-archaeology doctoral thesis. What an embarrassment. (The incantations that currently work for me are sketched just after this list.)
  • Windows: Make a .reg file full of magic hexadecimal numbers. You’ll have to figure out how on your own, because exactly none of that documentation is trustworthy. Import it as admin with regedit. Reboot probably? This is ok. This is fine.
  • iOS: Ive says that’s where the keys go so that’s where the keys go. Think of it as minimalism except for the number of choices you’re allowed to make. Learn to like it or get bent, pleb.
  • Android: Buy an app. Give it permission to access all your keystrokes, your location, your camera and maybe your heart rate. The world’s most profitable advertising company says that’s fine.
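
For what it’s worth, here are the Linux incantations that work for me today on Debian-flavoured systems – a sketch, not gospel, and your desktop environment may silently fight you for control of these settings:

    # one-shot, current X session only:
    setxkbmap -option ctrl:nocaps        # capslock becomes another control key

    # GNOME, persistently:
    gsettings set org.gnome.desktop.input-sources xkb-options "['ctrl:nocaps']"

    # system-wide on Debian/Ubuntu, virtual consoles included: set
    #   XKBOPTIONS="ctrl:nocaps"
    # in /etc/default/keyboard, then:
    sudo dpkg-reconfigure keyboard-configuration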

December 13, 2018

Looking Skyward

Filed under: awesome,beauty,documentation,flickr,future,life,science — mhoye @ 12:43 pm

[Photos: PC050781, PC050776]

Space

November 19, 2018

Faint Signal

Filed under: awesome,beauty,digital,documentation,future,interfaces,life,work — mhoye @ 11:34 am

[Photo: P2270158]

It’s been a little over a decade since I first saw Clay Shirky lay out his argument about what he called the “cognitive surplus”, but it’s been on my mind recently as I start to see more and more people curtail or sever their investments in always-on social media, and turn their attentions to… something.

Something Else.

I was recently reminded of some reading I did in college, way back in the last century, by a British historian arguing that the critical technology, for the early phase of the industrial revolution, was gin.

The transformation from rural to urban life was so sudden, and so wrenching, that the only thing society could do to manage was to drink itself into a stupor for a generation. The stories from that era are amazing – there were gin pushcarts working their way through the streets of London.

And it wasn’t until society woke up from that collective bender that we actually started to get the institutional structures that we associate with the industrial revolution today. Things like public libraries and museums, increasingly broad education for children, elected leaders – a lot of things we like – didn’t happen until having all of those people together stopped seeming like a crisis and started seeming like an asset.

It wasn’t until people started thinking of this as a vast civic surplus, one they could design for rather than just dissipate, that we started to get what we think of now as an industrial society.

– Clay Shirky, “Gin, Television and the Cognitive Surplus”, 2008.

[Photo: P2060122]

I couldn’t figure out what it was at first – people I’d thought were far enough ahead of the curve to bend its arc popping up less often or getting harder to find; I’m not going to say who, of course, because who it is for me won’t be who it is for you. But you feel it too, don’t you? That quiet, empty space that’s left as people start dropping away from hyperconnection. The sense of getting gently reacquainted with loneliness and boredom as you step away from the full-court vanity press and stop synchronizing your panic attacks with the rest of the network. The moment of clarity, maybe, as you wake up from that engagement bender and remember the better parts of your relationship with absence and distance.

How, on a good day, the loneliness set your foot on the path, how the boredom could push you to push yourself.

I was reading the excellent book MARS BY 1980 in bed last night and this term just popped into my head as I was circling sleep. I had to do that thing where you repeat it in your head twenty times so that I’d remember it in the morning. I have no idea what refuture or refuturing really means, except that “refuturing” connects it in my mind with “rewilding.” The sense of creating new immediate futures and repopulating the futures space with something entirely divorced from the previous consensus futures.

Refuture. Refuturing. I don’t know. I wanted to write it down before it went away.

Which I guess is what we do with ideas about the future anyway.

Warren Ellis, August 21, 2018.

Maybe it’s just me. I can’t quite see the shape of it yet, but I can hear it in the distance, like a radio tuned to a distant station; signal in the static, a song I can’t quite hear but I can tell you can dance to. We still have a shot, despite everything; whatever’s next is coming.

I think it’s going to be interesting.

November 10, 2018

Tunnels

Filed under: a/b,analog,documentation,interfaces,life,travel — mhoye @ 2:13 am

Toronto’s oldest subway line, and the newest. This is the view east from the Bloor Station platform:

Subway Tunnel, Bloor Station

… and this is the view north from York University:

[Photo: PA250713]

November 8, 2018

A Summer Of Code Question

Filed under: digital,documentation,future,interfaces,mozilla,work — mhoye @ 1:43 pm

This is a lightly edited response to a question we got on IRC about how best to apply to participate in Google’s “Summer Of Code” program. This isn’t company policy, but I’ve been the one turning the crank on our GSOC application process for the last while, so maybe it counts as helpful guidance.

We’re going to apply as an organization to participate in GSOC 2019, but that process hasn’t started yet. This year’s process kicked off in the first week of January, and I expect 2019’s will be about the same.

You’re welcome to apply to multiple positions, but I strongly recommend that each application be a focused effort; if you send the same generic application to all of them it’s likely they’ll all be disregarded. I recognize that this seems unfair, but we get a tidal wave of redundant applications for any position we open, so we have to filter them aggressively.

Successful GSOC applicants generally come in two varieties – people who put forward a strong application to work on projects that we’ve proposed, and people who have put together their own GSOC proposal in collaboration with one or more of our engineers.

The latter group is comparatively rare – they’re generally people we’ve already worked through some bugs and had some useful conversations with, people who’ve done the work of identifying the “good GSOC project” bugs and worked out with the responsible engineers whether they’d be open to collaboration, what a good proposal would look like, and so on.

None of those bugs or conversations are guarantees of anything, perhaps obviously – some engineers just don’t have time to mentor a GSOC student, some of the things you’re interested in doing won’t make good GSOC projects, and so forth.

One of the things I hope to do this year is get better at clarifying what a good GSOC project proposal looks like, but broadly speaking they are:

  • Nice-to-have features, but non-blocking and non-critical-path. A struggling GSOC student can’t put a larger project at risk.
  • Few (good) or no (better) dependencies on external factors, whether they’re code, social context or other people’s work. A good GSOC project is well-contained.
  • Clearly defined yes-or-no deliverables, both overall and as milestones throughout the summer. We need GSOC participants to be able to show progress consistently.
  • Finally, broad alignment with Mozilla’s mission and goals, even if it’s in a supporting role. We’d like to be able to draw a straight line between the project you’re proposing and Mozilla being incrementally more effective or more successful. It doesn’t have to move any particular needle a lot, but it has to move the needle a bit, and it has to be a needle we care about moving.

It’s likely that your initial reaction to this is “that is a lot, how do I find all this out, what do I do here, what the hell”, and that’s a reasonable reaction.

The reason that this group of applicants is comparatively rare is that people who choose to go that path have mostly been hanging around the project for a bit, soaking up the culture, priorities and so on, and have figured out how to navigate from “this is my thing that I’m interested in and want to do” to “this is my explanation of how my thing fits into Mozilla, both from product engineering and an organizational mission perspective, and this is who I should be making that pitch to”.

This is not to say that it’s impossible, just that there’s no formula for it. Curiosity and patience are your most important tools, if you’d like to go down that road, but if you do we’d definitely like to hear from you. There’s no better time to get started than now.

October 15, 2018

Quality Speakings

Filed under: documentation,future,interfaces,linux,mozilla,work — mhoye @ 7:32 pm

Unfortunately my suite of annoying verbal tics – um right um right um, which I continue to treat like Victor Borge’s phonetic punctuation – is on full display here, but I guess we’ll have to live with that. Here’s a talk I gave at the GTA Linux User Group on “The State Of Mozilla”, split into the main talk and the Q&A sections. I could probably have cut a quarter of that talk out by just managing those twitches better, but I guess that’s a project for 2019. In the meantime:


The talk:


The Q&A afterwards:

The preview on that second one is certainly unflattering. It ends on a note I’m pretty proud of, though, around the 35 minute mark.

I should go back and make a note of all the “ums” and “rights” in this video and graph them out. I bet it’s some sort of Morse-coded left-brain cry for help.
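
If I ever do, step one is at least easy – assuming I had a plain-text transcript in a (hypothetical) talk.txt, something like:

    # count the "um"s and "right"s in a transcript
    tr -cs "[:alpha:]" "\n" < talk.txt | tr "[:upper:]" "[:lower:]" | grep -cx -e "um" -e "right"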

August 15, 2018

Time Dilation

Filed under: academic,digital,documentation,interfaces,lunacy,mozilla,science,work — mhoye @ 11:17 am


[ https://www.youtube.com/embed/JEpsKnWZrJ8 ]

I riffed on this a bit over on Twitter some time ago; this has been sitting in the drafts folder for too long, and it’s incomplete, but I might as well get it out the door. Feel free to suggest additions or corrections if you’re so inclined.

You may have seen this list of latency numbers every programmer should know, and I trust we’ve all seen Grace Hopper’s classic description of a nanosecond at the top of this page, but I thought it might be a bit more accessible to talk about CPU-scale events in human-scale transactional terms. So: if a single CPU cycle on a modern computer was stretched out as long as one of our absurdly tedious human seconds, how long do other computing transactions take?

If a CPU cycle is 1 second long, then:

  • Getting data out of L1 cache is about the same as getting your data out of your wallet; about 3 seconds.
  • At 9 to 10 seconds, getting data from L2 cache is roughly like asking your friend across the table for it.
  • Fetching data from the L3 cache takes a bit longer – it’s roughly as fast as having an Olympic sprinter bring you your data from 400 meters away.
  • If your data is in RAM you can get it in about the time it takes to brew a pot of coffee; this is how long it would take a world-class athlete to run a mile to bring you your data, if they were running backwards.
  • If your data is on an SSD, though, you can have it in six to eight days, equivalent to having it delivered from the far side of the continental U.S. by bicycle, about as fast as that has ever been done.
  • In comparison, platter disks are delivering your data by horse-drawn wagon, over the full length of the Oregon Trail. Something like six to twelve months, give or take.
  • Network transactions are interesting – platter disk performance is so poor that fetching data from your ISP’s local cache is often faster than getting it from your platter disks; at two to three months, your data is being delivered to New York from Beijing, via container ship and then truck.
  • In contrast, a packet requested from a server on the far side of an ocean might as well have been requested from the surface of the moon, at the dawn of the space program – about eight years, from the beginning of the Apollo program to Armstrong, Aldrin and Collins’ successful return to Earth.
  • If your data is in a VM, things start to get difficult – a virtualized OS reboot takes about the same amount of time as has passed between the Renaissance and now, so you would need to ask Leonardo Da Vinci to secretly encode your information in one of his notebooks, and have Dan Brown somehow decode it for you in the present? I don’t know how reliable that guy is, so I hope you’re using ECC.
  • That’s all if things go well, of course: a network timeout is roughly comparable to the elapsed time between the dawn of the Sumerian Empire and the present day.
  • In the worst case, if a CPU cycle is 1 second, cold booting a racked server takes approximately all of recorded human history, from the earliest Indonesian cave paintings to now.
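
If you want to check my arithmetic or add entries of your own, the conversion is a single multiplication: at an assumed 3GHz clock there are three cycles per nanosecond, so every real nanosecond becomes three of these stretched-out seconds. A quick sketch, with rough “latency numbers every programmer should know” figures baked in rather than anything I’ve benchmarked:

    # scale CPU-land latencies to human time, at one second per cycle
    awk 'BEGIN {
      ghz = 3.0   # assumed clock speed: 3 cycles per nanosecond
      n = split("L1-cache L2-cache RAM SSD-read disk-seek ocean-roundtrip", name, " ")
      split("1 3 100 150000 10000000 80000000", ns, " ")   # latencies, nanoseconds
      for (i = 1; i <= n; i++) {
        s = ns[i] * ghz            # scaled seconds
        printf "%-16s %13.0f s  (%9.1f days)\n", name[i], s, s / 86400
      }
    }'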

August 13, 2018

Licensing Edgecases

Filed under: digital,documentation,interfaces,linux,mozilla,work — mhoye @ 4:37 pm

While I’m not a lawyer – and I’m definitely not your lawyer – licensing questions are on my plate these days. As I’ve been digging into one, I’ve come across what looks like a strange edge case in GPL licensing compliance that I’ve been trying to understand. Unfortunately it looks like it’s one of those Affero-style, unforeseen edge cases that (as far as I can find…) nobody’s tested legally yet.

I spent some time trying to understand how the definition of “linking” applies in projects where, say, different parts of the codebase use disparate, potentially conflicting open source licenses, but all the code is interpreted. I’m relatively new to this area, but generally speaking, outside of copying and pasting, “linking” appears to be the critical threshold at which the obligations imposed by the GPL kick in, and I don’t understand what that means for, say, JavaScript or Python.

I suppose I shouldn’t be surprised by this, but it’s strange to me how completely the GPL seems to be anchored in early Unix architectural conventions. Per the GPL FAQ, unless we’re talking about libraries “designed for the interpreter”, interpreted code is basically data. Using libraries counts as linking, but in the eyes of the GPL any amount of interpreted code is just a big, complicated config file that tells the interpreter how to run.

At a glance this seems reasonable but it seems like a pretty strange position for the FSF to take, particularly given how much code in the world is interpreted, at some level, by something. And honestly: what’s an interpreter?

The text of the license and the interpretation proposed in the FAQ both suggest that as long as all the information that a program relies on to run is contained in the input stream of an interpreter, the GPL – and if their argument sticks, other open source licenses – simply… doesn’t apply. And I can’t find any other major free or open-source licenses that address this question at all.

It just seems like such a weird place for an oversight. And given the often-adversarial nature of these discussions, given the stakes, there’s no way I’m the only person who’s ever noticed this. You have to suspect that somewhere in the world some jackass with a very expensive briefcase has an untested legal brief warmed up and ready to go arguing that a CPU’s microcode is an “interpreter” and therefore the GPL is functionally meaningless.

Whatever your preferred license of choice, that really doesn’t seem like a place we want to end up; while this interpretation may be technically correct, it’s also very obviously a bad-faith interpretation of both the intent of the GPL and that of the authors in choosing it.

The position I’ve taken at work is that “are we technically allowed to do this” is a much, much less important question than “are we acting, and seen to be acting, as good citizens of the larger Open Source community”. So while the strict legalities might be blurry, seeing the right thing to do is simple: we treat the integration of interpreted code and codebases the same way we’d treat C/C++ linking, respecting the author’s intent and the spirit of the license.

Still, it seems like something the next generation of free and open-source software licenses should explicitly address.
