blarg?

December 17, 2019

Poor Craft

Filed under: future,interfaces,linux,microfiction,toys,want,weird,work — mhoye @ 1:53 pm


“It’s a poor craftsman that blames his tools” is an old line, and it took me a long time to understand it.

[ https://www.youtube.com/embed/ShEez0JkOF ]

A friend of mine sent me this talk. And while I want to like it a lot, it reminded me uncomfortably of Dabblers and Blowhards, the canonical rebuttal to “Hackers And Painters”, an early entry in Paul Graham’s long-running oeuvre elaborating how special and magical it is to be just like Paul Graham.

It’s surprisingly hard to pin Paul Graham down on the nature of the special bond he thinks hobbyist programmers and painters share. In his essays he tends to flit from metaphor to metaphor like a butterfly, never pausing long enough for a suspicious reader to catch up with his chloroform jar. […] You can safely replace “painters” in this response with “poets”, “composers”, “pastry chefs” or “auto mechanics” with no loss of meaning or insight. There’s nothing whatsoever distinctive about the analogy to painters, except that Paul Graham likes to paint, and would like to feel that his programming allows him a similar level of self-expression.

There’s an old story about Soundcloud (possibly Spotify? DDG tends to the literal these days and Google is just all chaff) that’s possibly apocryphal but too good not to turn into a metaphor, about how for a long time their offices were pindrop-quiet. About how during that rapid-growth phase they hired people in part for their love of and passion for music, and how that looked absolutely reasonable until they realized their people didn’t love music: they loved their music. Your music, obviously, sucks. So everyone there wears fantastic headphones, nobody actually talks to each other, and all you can hear in their office is keyboard noise and the HVAC.

I frequently wonder if the people who love Lisp or Smalltalk fall into that same broad category: that they don’t “love Lisp” so much as they love their Lisp, the Howl’s Moving Memory Palaces they’ve built for themselves, tailored to the precise cut of their own idiosyncrasies. That if you really dig in and ask them you’ll find that other people’s Lisp, obviously, sucks.

It seems like an easy trap to fall into, but I suspect it means we collectively spend a lot of time genuflecting to this magical yesteryear and its imagined perfect crystal tools when the fact of it is that we spend almost all of our time in other people’s code, not our own.

I feel similarly about Joel Spolsky’s notion of “leaky abstractions”; maybe those abstractions aren’t “leaking” or “failing”. Instead it’s that you’ve found the point where your goals, priorities or assumptions have diverged from those of the abstraction’s author, and that’s ultimately not a problem with the abstraction.

The more time I spend in front of a keyboard, the more I think my core skills here aren’t any more complicated than humility, empathy and patience; that if you understand its authors the code will reveal itself. I’ve mentioned before that programming is, a lot more than most people realize, inherently political. You’re making decisions about how to allocate scarce resources in ways that affect other people; there’s no other word for it. So when you’re building on other people’s code, you’re inevitably building on their assumptions and values as well, and if that’s true – that you spend most of your time as a programmer trying to work with other people’s values and decisions – then it’s guaranteed that it’s a lot more important to think about how to best spend that time, or optimize those tools and interactions, rather than championing tools that amount to applied reminiscence, a nostalgia with a grammar. In any other context we’d have a term for that, we’d recognize it for what it is, and it’s unflattering.

What does a programming language optimized for ease-of-collaboration or even ease-of-empathy look like, I wonder? What does that development environment do, and how many of our assumptions about best collaborative practices are just accidental emergent properties of the shortcomings of our tools? Maybe compiler pragmas up front as expressions of preferred optimizations, and therefore priorities? Culture-of-origin tags, demarcating the shared assumptions of developers? “Reds and yellows are celebratory colors here, recompile with western sensibilities to swap your alert and default palettes with muted blues/greens.” Read, Eval, Print looping feels for all its usefulness like a huge missed opportunity, an evolutionary dead end that was just the best model we could come up with forty years ago, and maybe we’ve accidentally spent a lot of time looking backwards without realizing it.

Long Term Support

Filed under: a/b,digital,future,interfaces,linux,toys,want,work — mhoye @ 11:34 am

I bought a cordless drill from DeWalt a few years before they standardized on their current 20 volt form factor. Today the drill part of the drill is still in good shape, but its batteries won’t hold a charge – don’t store your batteries in the shed over the winter, folks, that’s a rookie mistake – and I can’t replace them; they just don’t make them anymore. Nobody does.

I was thoroughly prepared to be annoyed about this, but it turns out DeWalt makes an adapter that slots right into my old drill and lets me use their new standard batteries. I’ll likely get another decade out of it as a result, and if the drill gives up the ghost in the meantime I’ll be able to use those batteries in its replacement.

Does any computer manufacturer out there anywhere care about longevity like that, today? The Cadillac answer to that used to be “Thinkpad”, but those days are long gone and as far as I can tell there’s nothing else in this space. I don’t care about thin or light at all. I’m happy to carry a few extra pounds; these are my tools, and if the price of durable, maintainable and resilient tools is a bit of extra weight in the bag I’ll pay it and smile. I just want to be able to fix it; I want something I can strip all the way down to standard parts with a standard screwdriver and replace piecemeal when it needs piecemeal replacing. Does anyone make anything like this anymore, a tradesman’s machine? The MNT Reform people are giving it a shot. Is anyone else, anywhere?

October 23, 2019

The State Of Mozilla, 2019

Filed under: awesome,documentation,future,interfaces,linux,mozilla,vendetta,work — mhoye @ 11:52 am

As I’ve done in previous years, here’s The State Of Mozilla, as observed by me and presented by me to our local Linux user group.

Presentation:[ https://www.youtube.com/embed/RkvDnIGbv4w ]

And Q&A: [ https://www.youtube.com/embed/jHeNnSX6GcQ ]

Nothing tectonic in there – I dodged a few questions, because I didn’t want to undercut the work that was leading up to the release of Firefox 70, but mostly harmless stuff.

Can’t be that I’m getting stockier, though. Must be the shirt that’s unflattering. That’s it.

80×25

Every now and then, my brain clamps on to obscure trivia like this. It takes so much time. “Because the paper beds of banknote presses in 1860 were 14.5 inches by 16.5 inches, a movie industry cartel set a standard for theater projectors based on silent film, and two kilobytes is two kilobytes” is as far back as I have been able to push this, but let’s get started.

In August of 1861, by order of the U.S. Congress and in order to fund the Union’s ongoing war efforts against the treasonous secessionists of the South, the American Banknote Company started printing what were then called “Demand Notes”, but soon widely known as “greenbacks”.

It’s difficult to research anything about the early days of American currency on Wikipedia these days; that space has been thoroughly colonized by the goldbug/sovcit cranks. You wouldn’t notice it from a casual examination, which is of course the plan; that festering rathole is tucked away down in the references, where articles will fold a seemingly innocuous line somewhere into the middle, tagged with an exceptionally dodgy reference. You’ll learn that “the shift from demand notes to treasury notes meant they could no longer be redeemed for gold coins[1]” – which is strictly true! – but if you chase down that footnote you wind up somewhere with a name like “Lincoln’s Treason – Fiat Currency, Maritime Law And The U.S. Treasury’s Conspiracy To Enslave America”, which I promise I am only barely exaggerating about.

It’s not entirely clear if this is a deliberate exercise in coordinated crank-wank or just years of accumulated flotsam from the usual debate-club dead-enders hanging off the starboard side of the Overton window. There’s plenty of idiots out there that aren’t quite useful enough to work the k-cups at the Heritage Institute, and I guess they’re doing something with their time, but the whole thing has a certain sinister elegance to it that the Randroid crowd can’t usually muster. I’ve got my doubts either way, and I honestly don’t care to dive deep enough into that sewer to settle them. Either way, it’s always good to be reminded that the goldbug/randroid/sovcit crank spectrum shares a common ideological klancestor.

Mercifully that is not what I’m here for. I am here because these first Demand Notes, and the Treasury Notes that came afterwards, were – on average, these were imprecise times – 7-3/8” wide by 3-1/4” tall.

I haven’t been able to precisely answer the “why” of that – I believe, but do not know, that this is because of the specific dimensions of the presses they were printed on. Despite my best efforts I haven’t been able to find the exact model and specifications of that device. I’ve asked the U.S. Congressional Research Service for some help with this, but between them and the Bureau of Engraving and Printing, we haven’t been able to pin it down. From my last correspondence with them:

Unfortunately, we don’t have any materials in the collection identifying the specific presses and their dimension for early currency production. The best we can say is that the presses used to print currency in the 1860s varied in size and model. These presses went by a number of names, including hand presses, flat-bed presses, and spider presses. They also were capable of printing sheets of paper in various sizes. However, the standard size for printing securities and banknotes appears to have been 14.5 inches by 16.5 inches. We hope this bit of information helps.

… which is unfortunate, but it does give us some clarity. A 16.5″ by 14.5″ printing sheet lets you print eight 7-3/8” by 3-1/4″ notes to size, with a fraction of an inch on either side for trimming.
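If you want to check my work – and this arithmetic is mine, not the CRS’s – the layout is two notes across and four down:

    # Back-of-the-envelope check (my arithmetic, not the CRS's):
    # two notes across and four down on one 16.5" x 14.5" sheet.
    echo "2 * 7.375" | bc   # 14.750 across the 16.5" dimension, ~1.75" left over for trim
    echo "4 * 3.25"  | bc   # 13.00 down the 14.5" dimension, ~1.5" left over for trim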

The answer to that question starts to matter about twenty years later on the heels of the 1880 American Census. Mandated to be performed once a decade, the United States population had grown some 30% since the previous census, and even with enormous effort the final tabulations weren’t finished until 1888, an unacceptable delay.

One of the 1880 Census’ early employees was a man named Herman Hollerith, a recent graduate of the Columbia School of Mines who’d been invited to join the Census efforts early on by one of his professors. The Census was one of the most important social and professional networking exercises of the day, and Hollerith correctly jumped at the opportunity:

The absence of a permanent institution meant the network of individuals with professional census expertise scattered widely after each census. The invitation offered a young graduate the possibility to get acquainted with various members of the network, which was soon to be dispersed across the country.

As an aside, that invitation letter is one of the most important early documents in the history of computing for lots of reasons, including this one:

The machine in that picture was the third generation of the “Hollerith Tabulator”, notable for the replaceable plugboard that made it reprogrammable. I need to find some time to dig further into this, but that might be the first multipurpose, if not “general purpose” as we’ve come to understand it, electronic computation device. This is another piece of formative tech that emerged from this era, one that led directly to the removable panels (and ultimately the general componentization) of later computing hardware.

Well before the model 3, though, was the original 1890 Hollerith Census Tabulator that relied on punchcards much like this one.

Hollerith took the inspiration for those punchcards from the “punch photographs” used by some railways at the time to make sure that tickets belonged to the passengers holding them. You can see a description of one patent for them here dating to 1888, but Hollerith relates the story from a few years earlier:

One thing that helped me along in this matter was that some time before I was traveling in the west and I had a ticket with what I think was called a punch photograph. When the ticket was first presented to a conductor he punched out a description of the individual, as light hair, dark eyes, large nose etc. So you see I only made a punch photograph of each person.

Tangentially: this is the birth of computational biometrics. And as you can see from this extract from The Railway News (Vol. XLVIII, No. 1234, published Aug. 27, 1887) people have been concerned about harassment because of unfair assessment by the authorities from day one:

[Image: “punch photograph” extract from The Railway News]

After experimenting with a variety of card sizes Hollerith decided that to save on production costs he’d use the same boxes the U.S. Treasury was using for the currency of the day: the Demand Note. Punch cards stayed about that shape, punched with devices that looked a lot like this for about 20 years until Thomas Watson Sr. (IBM’s first CEO, from whom the Watson computer gets its name) asked Clair D. Lake and J. Royden Peirce to develop a new, higher data-density card format.

Tragically, this is the part where I need to admit an unfounded assertion. I’ve got data, the pictures line up and numbers work, but I don’t have a citation. I wish I did.

Take a look at “Type Design For Typewriters: Olivetti”, written by Maria Ramos Silvia. (You can see a historical talk from her on the history of typefaces here that’s also pretty great.)

Specifically, take a look on page 46 at Mikron Piccolo, Mikron Condensed. The fonts don’t precisely line up – see the different “4”, for example, when comparing it to the typesetting of IBM’s cards – but the size and spacing do. In short: a line of 80 characters, each separated by a space, is the largest round number of digits that the tightest typesetting of the day would allow to fit on a single 7-3/8” wide card: a 20-point condensed font.

I can’t find a direct citation for this; that’s the only disconnect here. But the spacing all fits, the numbers all work, and I’d bet real money on this: that when Watson gave Lake the task of coming up with a higher information-density punch card, Lake looked around at what they already had on the shelf – a typewriter with the highest-available character density of the day, on cards they could manage with existing and widely-available tooling – and put it all together in 1928. The fact that a square hole – a radical departure from the standard circular punch – was a patentable innovation at the time was just icing on the cake.
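And for what it’s worth, the per-column width budget – my own back-of-the-envelope, not anything from the thesis – is at least plausible:

    # My back-of-the-envelope, not a citation: the maximum width per column
    # if you want 80 columns on a 7-3/8" wide card, before any margins.
    echo "scale=4; 7.375 / 80" | bc   # .0921 inches per column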

The result of that work is something you’ll certainly recognize, the standard IBM punchcard, though of course there’s a lot more to it than that. Witness the full glory of the Card Stock Acceptance Procedure, the protocol for measuring folding endurance, air resistance, smoothness and evaluating the ash content, moisture content and pH of the paper, among many other things.

At one point sales of punchcards and related tooling constituted a completely bonkers 30% of IBM’s annual profits, so you can understand that IBM had a lot invested in getting that consistently, precisely correct.

At around this time John Logie Baird invented the first “mechanical television”; like punchcards, the first television cameras were hand-cranked devices that relied on something called a Nipkow disk, a mechanical tool for separating images into sequential scan lines, a technique that survives in electronic form to this day. By linearizing the image signal Baird could transmit the image’s brightness levels via a simple radio signal and in 1926 he did just that, replaying that mechanically encoded signal through a CRT and becoming the inventor of broadcast television. He would go on to pioneer colour television – originally called Telechrome, a fantastic name I’m sad we didn’t keep – but that’s a different story.

Baird’s original “Televisor” showed its images on a 7:3 aspect ratio vertically oriented cathode ray tube, intended to fit the head and shoulders of a standing person, but that wouldn’t last.

For years previously, silent films had been shot on standard 35MM stock, but the addition of a physical audio track to 35MM film stock didn’t leave enough room for the visual area. So – after years of every movie studio having its own preferred aspect ratio, which required its own cameras, projectors, film stock and tools (and and and) – in 1929 the movie industry agreed to settle on the Society of Motion Picture And Television Engineers’ proposed standard of 0.8 inches by 0.6 inches, what became known as the Academy Ratio, or as we better know it today, 4:3.

Between 1932 and 1952, when widescreen for cinemas came into vogue as a differentiator from standard television, just about all the movies made in the world were shot in that aspect ratio, and just about every cathode ray tube made came in that shape, or one that could display it reliably. In 1953 studios started switching to a wider “Cinemascope”, to aggressively differentiate themselves from television, but by then television already had a large, thoroughly entrenched install base, and 4:3 remained the standard for in-home displays – and CRT manufacturers – until widescreen digital television came to market in the 1990s.

As computers moved from teleprinters – like, physical, ink-on-paper line printers – to screens, one byproduct of that standardization was that if you wanted to build a terminal, you either used that aspect ratio or you started making your own custom CRTs, a huge barrier to market entry. You can do that if you’re IBM, and you’re deeply reluctant to if you’re anyone else. So when DEC introduced their VT50 terminal, the successor to the earlier VT05, that’s what they shipped, and with only 1KB of display RAM (one kilobyte!) it displayed only twelve rows of widely-spaced text. Math is unforgiving, and 80×12=960; even one more row breaks the bank. The VT52 and its successor the VT100, though, doubled that capacity, giving users the opulent luxury of two entire kilobytes of display memory, laid out with a font that fit nicely on that 4:3 screen. The VT100 hit the market in August of 1978, and DEC sold more than six million of them over the product’s lifespan.

You even got a whole extra line! Thanks to the magic of basic arithmetic, 80×25 just sneaks under that opulent 2k limit with 48 bytes to spare.
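The arithmetic, for the skeptical – nothing up my sleeve but multiplication:

    # Nothing up my sleeve, just multiplication against a 2048-byte budget:
    echo "80 * 24" | bc   # 1920 - the VT100's default, comfortably under 2048
    echo "80 * 25" | bc   # 2000 - one more row still fits, with 48 bytes to spare
    echo "80 * 26" | bc   # 2080 - and one row more than that breaks the bank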

This is another point where direct connections get blurry, because 1976 to 1984 was an incredibly fertile time in the history of computing. After a brief period where competing terminal standards effectively locked software to the hardware that it shipped on, the VT100 – being the first terminal to market fully supporting the recently codified ANSI standard control and escape sequences – quickly became the de-facto standard, and soon afterwards the de-jure, codified in ANSI-X3.64/ECMA-48. CP/M, soon to be replaced with PC-DOS and then MS-DOS, came from this era, with ANSI.SYS being the way DOS programs talked to the display from DOS 2.0 through to the beginning of Windows. Then in 1983 the Apple IIe was introduced, the first Apple computer to natively support an 80×24 text display, doubling the 40×24 default of their earlier hardware. The original XTerm, first released in 1984, was also created explicitly for VT100 compatibility.

Fascinatingly, the early versions of the ECMA-48 standard make it clear that it isn’t solely meant for displays, specifying that “examples of devices conforming to this concept are: an alpha-numeric display device, a printer or a microfilm output device.”

A microfilm output device! This exercise dates to a time when microfilm output was a design constraint! I did not anticipate that cold-war spy-novel flavor while I was dredging this out, but it’s there and it’s magnificent.

It also dates to a time when the market was shifting quickly from mainframes and minicomputers to microcomputers – or, as we call them today, “computers” – reasonably affordable desktop machines that individual humans might actually own and that companies might own a large number of, meaning this is also where the spectre of backcompat starts haunting the industry. (This moment in a talk from the Microsoft developers working on the Windows Subsystem for Linux gives you a sense of the scale of that burden even today.) In fact, it wasn’t until the fifth edition of ECMA-48 was published in 1991, more than a decade after the VT100 hit the market, that the formal specification for terminal behavior even admitted the possibility (Appendix F) that a terminal could be resized at all, meaning that the existing defaults were effectively graven in stone during what was otherwise one of the most fertile and formative periods in the history of computing.

As a personal aside, my two great frustrations with doing any kind of historical CS research remain the incalculable damage that academic paywalls have done to the historical record, and the relentless insistence this industry has on justifying rather than interrogating the status quo. This is how you end up on Stack Overflow spouting unresearched nonsense about how “4 pixel wide fonts are untidy-looking”. I’ve said this before, and I’ll say it again: whatever we think about ourselves as programmers and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize, and by telling and retelling these unsourced, inaccurate just-so stories without ever doing the work of finding the real truth, we’re betraying ourselves, our history and our future. But it’s pretty goddamned difficult to convince people that they should actually look things up instead of making up nonsense when actually looking things up, even for a seemingly simple question like this one, can cost somebody on the outside edge of an academic paywall hundreds or thousands of dollars.

So, as is now the usual in these things:

  • There are technical reasons,
  • There are social reasons,
  • It’s complicated, and
  • Open access publication or GTFO.

But if you ever wondered why just about every terminal in the world is eighty characters wide and twenty-five characters tall, there you go.

May 3, 2019

Goals And Constraints

Filed under: digital,documentation,flickr,future,interfaces,linux,mozilla,work — mhoye @ 12:30 pm

This way to art.

I keep coming back to this:

“Open” in this context inextricably ties source control to individual agency. The checks and balances of openness in this context are about standards, data formats, and the ability to export or migrate your data away from sites or services that threaten to go bad or go dark. This view has very little to say about – and is often hostile to the idea of – granular access restrictions and the ability to impose them, those being the tools of this worldview’s bad actors.

The blind spots of this worldview are the products of a time where someone on the inside could comfortably pretend that all the other systems that had granted them the freedom to modify this software simply didn’t exist. Those access controls were handled, invisibly, elsewhere; university admission, corporate hiring practices or geography being just a few examples of the many, many barriers between the network and the average person.

And when we’re talking about blind spots and invisible social access controls, of course, what we’re really talking about is privilege.

How many people get to have this, I wonder: the sense that they can sit down in front of a computer and be empowered by it. The feeling of being able, the certainty that you are able to look at a hard problem, think about it, test and iterate; that easy rapid prototyping with familiar tools is right there in your hands, that a toolbox the size of the world is within reach. That this isn’t some child’s wind up toy I turn a crank on until the powerpoint clown pops up.

It’s not a universal or uniform experience, to be sure; they’re machines made of other people’s choices, and computers are gonna computer. But the only reason I get to have that feeling at all is that I got my start when the unix command line was the only decent option around, and I got to spend the better part of a decade grooving in that muscle memory on machines and forums where it was safe – for me at least – to be there, fully present, make mistakes and learn from them.

(Big shoutout to everyone out there who found out how bash wildcards work by inadvertently typing mv * in a directory with only two files in it.)
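(If you haven’t had the pleasure, that lesson looks something like this; a sketch, with made-up filenames, obviously.)

    $ ls
    draft.txt  notes.txt
    $ mv *    # the shell expands the glob before mv ever sees it: "mv draft.txt notes.txt"
    $ ls      # ...so notes.txt has been silently overwritten by draft.txt, which is now gone
    notes.txt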

That world doesn’t exist anymore; the internet that birthed it isn’t coming back. But I want everyone to have this feeling, that the machine is more than a glossy appliance. That it’s not a constraint. That with patience and tenacity it can work with you and for you, not just a tool for a task but an extension and expression of ourselves and our intent. That a computer can be a tool for expressing ourselves, for helping us be ourselves better.

Last week I laid out the broad strokes of Mozilla’s requirements for our next synchronous-text platform. They were pretty straightforward, but I want to thank a number of people from different projects who’ve gotten in touch on IRC or email to ask questions and offer their feedback.

Right now I’d like to lay out those requirements in more detail, and talk about some of the reasons behind them. Later I’m going to lay out the process and the options we’re looking at, and how we’re going to gather information, test those options and evaluate what we learn.

While the Rust community is making their own choices now about the best fit for their needs, the Rust community’s processes are going to strongly inform the steps for Mozilla. They’ve learned a lot the hard way about consensus-building and community decision-making, and it’s work that I have both a great deal of respect for and no intention of re-learning the hard way myself. I’ll have more about that shortly as well.

I mentioned our list of requirements last week but I want to drill into some of them here; in particular:

  • It needs to be accessible to the greater Mozilla community.

This one implies a lot more than it states, and it would be pretty easy to lay out something trite like “we think holistically about accessibility” the way some organizations say “a diversity of ideas”, as though that means anything at all. But that’s just not good enough.

Diversity, accessibility and community are all tightly interwoven ideas we prize, and how we approach, evaluate and deploy the technologies that connect us speaks deeply to our intentions and values as an organization. Mozilla values all the participants in the project, whether they rely on a screen reader, a slow network or older hardware; we won’t – we can’t – pick a stack that treats anyone like second-class citizens. That will not be allowed.

  • While we’re investigating options for semi-anonymous or pseudonymous connections, we will require authentication, because:
  • The Mozilla Community Participation Guidelines will apply, and they’ll be enforced.

Last week Dave Humphrey wrote up a reminiscence about his time on IRC soon after I made the announcement. Read the whole thing, for sure. Dave is wiser and kinder than I am, and has been for as long as we’ve known each other; his post spoke deeply to many of us who’ve been in and around Mozilla for a while, and two sentences near the end are particularly important:

“Having a way to get deeply engaged with a community is important, especially one as large as Mozilla. Whatever product or tool gets chosen, it needs to allow people to join without being invited.”

We’ve got a more detailed list of functional and organizational requirements for this project, and this is an important part of it: “New users must be able to join the service without manual intervention from a Mozilla employee.”

We’ve understood this as an accessibility issue for a long time as well, though I don’t think we’ve ever given it a name. “Involvement friction”, maybe – everything about becoming part of a project and community that’s hard not because it’s inherently difficult, but because nobody’s taken the time to make it easy.

I spend a lot of time thinking about something Sid Wolinsky said about the first elevators installed in the New York subway system: “This elevator is a gift from the disability community and the ADA to the nondisabled people of New York”. If you watch who’s using the elevators, ramps or automatic doors in any public building long enough, anything with a wheelchair logo on it, you’ll notice a trend: it’s never somebody in a wheelchair. It’s somebody pushing a stroller or nursing a limp. It’s somebody carrying an awkward parcel, or a bag of groceries. Sometimes it’s somebody with a coffee in one hand and a phone in the other. Sometimes it’s somebody with no reason at all, at least not one you can see. It’s people who want whatever thing they’re doing, however difficult, to be a little bit easier. It’s everybody.

If you cost out accessible technology for the people who rely on it, it looks really expensive; if you cost it out for everyone who benefits from it, though, it’s basically free. And none of us in the “benefit” camp are ever further than a sprained ankle away from “rely”.

We’re getting better at this at Mozilla in hundreds of different ways, at recognizing how important it is that the experience of getting from “I want to help” to “I’m set up to help” to “I’m helping” be as simple and painless as possible. As one example, our bootstrap scripts and mach-build have reduced our once-brittle, failure-prone developer setup process down to “answer these questions and wait for the downloads to finish”, and in the process have done more to make the Firefox codebase accessible than I ever will. And everyone relies on them now, first-touch contributors and veteran devs alike.
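For anyone who hasn’t seen it, the whole dance from a cold start is roughly this; a sketch from memory, and the in-tree documentation is canonical, not me:

    # Roughly the whole dance, from memory; the in-tree docs are canonical.
    hg clone https://hg.mozilla.org/mozilla-unified firefox
    cd firefox
    ./mach bootstrap   # answer the questions, wait for the downloads
    ./mach build
    ./mach run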

Getting involved in the community, though, is still harder than it needs to be; try watching somebody new to open source development try to join an IRC channel sometime. Watch them go from “what’s IRC” to finding a client to learning how to use the client to joining the right server, then the right channel, only to find that the reward for all that effort is no backscroll, no context, and no idea who you’re talking to or if you’re in the right place or if you’re shouting into the void because the people you’re looking for aren’t logged in at the same time. It’s like asking somebody to learn to operate an airlock on their own so they can toss themselves out of it.

It’s more than obvious that you don’t build products like that anymore, but I think it’s underappreciated that it’s just as true of communities. I think it’s critical that we bring that same discipline of caring about the details of the experience to our communications channels and community forums, and the CPG is the cornerstone of that effort.

It was easy not to care about this when somebody who wanted to contribute to an open source project with global impact had maybe four choices: the Linux kernel, the Mozilla suite, the GNU tools and maybe Apache. But that world was pre-Github, pre-NPM. If you want to work on hard problems with global impact now you have a hundred thousand options, and that means the experience of joining and becoming a part of the Mozilla community matters.

In short, the amount of effort a project puts into making the path from “I want to help” to “I’m helping” easier is a reliable indicator of the value that project puts on community involvement. So if we say we value our community, we need to treat community involvement and contribution like a product, with all the usability and accessibility concerns that implies. To drive involvement friction as close to zero as possible.

One tool we’ll be relying on – and this one, we did build in-house – is called Mozilla-IAM, Mozilla’s Identity and Access Management tool. I’ll have more to say about this soon, but at its core it lets us proxy authentication from various sources and methods we trust, Github, Firefox Accounts, a link in your email, a few others. We think IAM will let us support pseudonymous participation and a low-cost first-contact experience, but also let us keep our house in order and uphold the CPG in the process.

Anyway, here’s a few more bullet points; what requirements doc isn’t full of them?

A synchronous messaging system that meets our needs:

  • Must work correctly in unmodified, release-channel Firefox.
  • Must offer a solid mobile experience.
  • Must support thousands of simultaneous users across the service.
  • Must support easy sharing of hyperlinks and graphics as well as text.
  • Must have persistent scrollback. Users reconnecting to a channel or joining the channel for the first time must be able to read up to acquire context of the current conversation in the backscroll.
  • Programmatic access is a hard requirement. The service must support a mature, reasonably stable and feature-rich API.
  • As mentioned, people participating via accessible technologies including screen readers or high-contrast display modes must be able to participate as first-class citizens of the service and the project.
  • New users must be able to join the service without manual intervention from a Mozilla employee.
  • Whether or not we are self-hosting, the service must allow Mozilla to specify a data retention and security policy that meets our institutional standards.
  • The service must have a customizable first-contact experience to inform new participants about Mozilla’s CPG and privacy notice.
  • The service must have effective administrative tooling including user and channel management, alerting and banning.
  • The service must support delegated authentication.
  • The service must pass an evaluation by our legal, trust and security teams. This is obviously also non-negotiable.

I doubt any of that will surprise anyone, but they might, and I’m keeping an eye out for questions. We’re still talking this out in #synchronicity on irc.m.o, and you’re welcome to jump in.

I suppose I should tip my hand at this point, and say that as much as I value the source part of open source, I also believe that people participating in open source communities deserve to be free not only to change the code and build the future, but to be free from the brand of arbitrary, mechanized harassment that thrives on unaccountable infrastructure, federated or not. We’d be deluding ourselves if we called systems that are just too dangerous for some people to participate in at all “open” just because you can clone the source and stand up your own copy. And I am absolutely certain that if this free software revolution of ours ends up in a place where asking somebody to participate in open development is indistinguishable from asking them to walk home at night alone, then we’re done. People cannot be equal participants in environments where they are subject to wildly unequal risk. People cannot be equal participants in environments where they are unequally threatened.

I think we can get there; I think we can meet our obligations to the Mission and the Manifesto as well as the needs of our community, and help the community grow and thrive in a way that grows and strengthens the web we want and empowers everyone using and building it to be who we’re aspiring to be, better.

The next steps are going to be to lay out the evaluation process in more detail; then we can start pulling in information, standing up instances of the candidate stacks we’re looking at and trying them out.

January 8, 2019

Feature Request

Filed under: digital,documentation,fail,interfaces,linux,toys,vendetta,want — mhoye @ 9:50 am

If I’m already in a Linux, ideally a Debian-esque Linux, is there a way for me to say “turn this new external hard drive into a bootable Linux that’s functionally identical to this current machine”? One that doesn’t involve any of dd, downloading an ISO or rebooting? It’s hard to believe this is as difficult as it seems, or that this isn’t a standard tool yet, but if it is I sure can’t find it.

Every installer I’ve seen since the first time I tried the once-magical Knoppix has let you boot into a workable Linux on its own and install that Linux to the hard drive if you like, but I can’t find a standalone tool that does the same from a running system.

What I’m after is a tool (I briefly wrote “ideally graphical”, but yeah. Let’s be real here.) that you point at a hard drive, that:

  • Formats this new hard drive in some rough approximation of what you’ve already got and makes it bootable. (grub-whatever?)
  • Installs the same packages onto that drive as are on the host system, and
  • Optionally copies over account information required, /home/*, passwords, whatever else is in /etc or /opt; skip (or be smart about?) stuff like the hostname or iptables, maybe.

…and ends with a hard drive I can plug into another system, boot and log into as comfortably-preconfigured me.

debootstrap almost gets part of the way there, and you can sort of convince it (or multistrap) to do the job if you scrape out your current config, pour it into a config file and rsync over a bunch of other stuff. But then I’m back in roll-it-yourself-land where I started.
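For the record, the roll-it-yourself version I keep ending up with looks something like this; a sketch, not a recipe, assuming a Debian-ish host booting the old-fashioned BIOS way, a blank target drive at /dev/sdX, and that the fstab and UUID cleanup, hostname and network details inside the chroot all get sorted out by hand afterwards:

    # Rough sketch only: assumes Debian/Ubuntu tooling, legacy BIOS boot, and a
    # blank /dev/sdX. Triple-check the device name before running anything destructive.
    sudo parted --script /dev/sdX mklabel msdos mkpart primary ext4 1MiB 100%
    sudo mkfs.ext4 /dev/sdX1
    sudo mount /dev/sdX1 /mnt

    # A base system matching the host's release:
    sudo debootstrap "$(lsb_release -cs)" /mnt

    # Carry over config and home directories; /etc/fstab, the hostname and
    # anything else machine-specific will need fixing up by hand afterwards.
    sudo rsync -a /etc/ /mnt/etc/
    sudo rsync -a /home/ /mnt/home/

    # Reinstall the host's package set inside the new system:
    dpkg --get-selections | sudo tee /mnt/tmp/selections > /dev/null
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt sh -c 'dpkg --set-selections < /tmp/selections &&
      apt-get update && apt-get -y dselect-upgrade'

    # Make it bootable (UEFI machines want an ESP and grub-efi instead):
    sudo chroot /mnt grub-install /dev/sdX
    sudo chroot /mnt update-grub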

Ideally this apparently-hypothetical magic clone tool would be able to do this with minimal network traffic, too – it’s likely I’ve already got many or most of those packages cached, no? And alternatively, it’d also be nice to be able to keep my setup largely intact while migrating across architectures.

I go looking for this every few years, even though it puts me briefly back on the “why would you ever do it like that? You should switch distros!” You Asked A Linux Question On The Internet Treadmill. But I haven’t found a decent answer yet beyond rolling my own.

Welp.

(Comments are off permanently. You’re welcome to mail me, though?)

December 16, 2018

Control Keys

Filed under: a/b,digital,documentation,fail,future,hate,interfaces,linux — mhoye @ 10:36 pm

I spend a lot of time thinking about keyboards, and I wish more people did.

I’ve got more than my share of computational idiosyncrasies, but the first thing I do with any computer I’m going to be using for any length of time is remap the capslock key to control (or command, if I find myself in the increasingly “what if Tatooine, but Candy Crush” OSX-land). I’ve made a number of arguments about why I do this over the years, but I think they’re mostly post-facto justifications. The real reason, if there is such a thing, is likely that the first computer I ever put my hands on was an Apple ][c. On the ][c keyboard “control” is left of “A” and capslock is off in a corner. I suspect that whatever arguments I’ve made since, the fact of it is that my muscle memory has been comfortably stuck in that groove ever since.

It’s more than just bizarre how difficult it is to reassign any key to anything these days; it’s weird and saddening, especially given how awful the standard keyboard layout is in almost every respect. Particularly if you want to carry your idiosyncrasies across operating systems, and if I’m anything about anything these days, it’s particular.

I’m not even mad about the letter layout – you do you, Dvorak weirdos – but that we give precious keycap real estate to antiquated arcana and pedestrian novelty at the expense of dozens of everyday interactions, and as far as I can tell we mostly don’t even notice it.

  • This laptop has dedicated keys to let me select, from levels zero to three, how brightly my keyboard is backlit. If I haven’t remapped control to caps I need to twist my wrist awkwardly to cut, copy or paste anything.
  • I’ve got two alt keys, but undo and redo are chords each half a keyboard away from each other. Redo might not exist, or the key sequence could be just about anything depending on the program; sometimes all you can do is either undo, or undo the undo?
  • On typical PC keyboards Pause/Break and Scroll Lock, vestigial remnants of a serial protocol of ages past, both have premium real estate all to themselves. “Find” is a chord. Search-backwards may or may not be a thing that exists depending on the program, but getting there is an exercise. Scroll lock even gets a capslock-like LED some of the time; it’s that important!
  • The PrtScn key that once upon a time would dump the contents of your terminal to a line printer – and who doesn’t want that? – is now given over to screencaps, which… I guess? I’m kind of sympathetic to this one, I have to admit. Social network interoperability is such a laughable catastrophe that sharing pictures of text is basically the only thing that works, which should be one of this industry’s most shameful embarrassments but here we are. I guess this can stay.
  • My preferred tenkeyless keyboards have thankfully shed the NumLock key I can’t remember ever hitting on purpose, but it’s still a stock feature of OEM keyboards, and it might be the most baffling of the bunch. If I toggle NumLock I can… have the keys immediately to the left of the number pad, again? Sure, why not.
  • “Ins” –  insert – is a dedicated key for the “what if delete, but backwards and slowly” option that only exists at all because mainframes are the worst. Are there people who toggle this on purpose? Has anyone asked them if they’re OK? I can’t select a word, sentence or paragraph with a keystroke; control-A lets me either select everything or nothing.
  • Finally, SysRq – short for “System Request” – gets its own button too, and it almost always does nothing because the one thing it does when it works – “press here to talk directly to the hardware” – is a security disaster only slightly obscured by a usability disaster.

It’s sad and embarrassing how awkwardly inconsiderate and anti-human these things are, and the fact that a proper fix – a human-hand-shaped keyboard whose outputs you get to choose for yourself – costs about as much as a passable computer is appalling.

Anyway, here’s a list of how you remap capslock to control on various popular OSes, in a roughly increasing order of lunacy:

  • OSX: Open keyboard settings and click a menu.
  • Linux: setxkboptions, I think. Maybe xmodmap? Def. something in an .*rc file somewhere though. Or maybe .profile? Does gnome-tweak-tool still work, or is it called ubuntu-tweak-tool or just tweak-tool now? This seriously used to be a checkbox, not some 22nd-century CS-archaeology doctoral thesis. What an embarrassment. (The incantation that currently works for me is sketched below.)
  • Windows: Make a .reg file full of magic hexadecimal numbers. You’ll have to figure out how on your own, because exactly none of that documentation is trustworthy. Import it as admin with regedit. Reboot probably? This is ok. This is fine.
  • iOS: Ive says that’s where the keys go so that’s where the keys go. Think of it as minimalism except for the number of choices you’re allowed to make. Learn to like it or get bent, pleb.
  • Android: Buy an app. Give it permission to access all your keystrokes, your location, your camera and maybe your heart rate. The world’s most profitable advertising company says that’s fine.
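As promised above, here’s what currently works for me on Linux; a sketch, assuming an X11 session on something Debian-ish, and Wayland desktops and other distros will have their own opinions:

    # One-off, for the current X session:
    setxkbmap -option ctrl:nocaps

    # Persistently, on Debian-ish systems: set XKBOPTIONS="ctrl:nocaps"
    # in /etc/default/keyboard, then:
    sudo dpkg-reconfigure keyboard-configuration

    # GNOME keeps its own copy of this setting and may quietly override the above:
    gsettings set org.gnome.desktop.input-sources xkb-options "['ctrl:nocaps']"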

November 9, 2018

The Evolution Of Open

Filed under: digital,future,interfaces,linux,losers,mozilla,science,toys,vendetta,work — mhoye @ 5:00 pm

This started its life as a pair of posts to the Mozilla governance forum, about the mismatch between private communication channels and our principles of open development. It’s a little long-winded, but I think it broadly applies not just to Mozilla but to open source in general. This version of it interleaves those two posts into something I hope is coherent, if kind of rambly. Ultimately the only point I want to make here is that the nature of openness has changed, and while it doesn’t mean we need to abandon the idea as a principle or as a practice, we can’t ignore how much has changed or stay mired in practices born of a world that no longer exists.

If you’re up for the longer argument, well, you can already see the wall of text under this line. Press on, I believe in you.

Even though open source software has essentially declared victory, I think that openness as a practice – not just code you can fork but the transparency and accessibility of the development process – matters more than ever, and is in a pretty precarious position. I worry that if we – the Royal We, I guess – aren’t willing to grow and change our understanding of openness and the practical realities of working in the open, and build tools to help people navigate those realities, that it won’t be long until we’re worse off than we were when this whole free-and-open-source-software idea got started.

To take that a step further: if some of the aspirational goals of openness and open development are the ideas of accessibility and empowerment – that reducing or removing barriers to participation in software development, and granting people more agency over their lives thereby, is self-evidently noble – then I think we need to pull apart the different meanings of the word “open” that we use as if the same word meant all the same things to all the same people. My sense is that a lot of our discussions about openness are anchored in the notion of code as speech, of people’s freedom to move bits around and about the limitations placed on those freedoms, and I don’t think that’s enough.

A lot of us got our start when an internet connection was a novelty, computation was scarce and state was fragile. If you – like me – are a product of this time, “open” as in “open source” is likely to be a core part of your sense of personal safety and agency; you got comfortable digging into code, standing up your own services and managing your own backups pretty early, because that was how you maintained some degree of control over your destiny, how you avoided the indignities of data loss, corporate exploitation and community collapse.

“Open” in this context inextricably ties source control to individual agency. The checks and balances of openness in this context are about standards, data formats, and the ability to export or migrate your data away from sites or services that threaten to go bad or go dark. This view has very little to say about – and is often hostile to the idea of – granular access restrictions and the ability to impose them, those being the tools of this worldview’s bad actors.

The blind spots of this worldview are the products of a time where someone on the inside could comfortably pretend that all the other systems that had granted them the freedom to modify this software simply didn’t exist. Those access controls were handled, invisibly, elsewhere; university admission, corporate hiring practices or geography being just a few examples of the many, many barriers between the network and the average person.

And when we’re talking about blind spots and invisible social access controls, of course, what we’re really talking about is privilege. “Working in the open”, in a world where computation was scarce and expensive, meant working in front of an audience that was lucky enough to go to university or college, whose parents could afford a computer at home, who lived somewhere with broadband or had one of the few jobs whose company opened low-numbered ports to the outside world; what it didn’t mean was doxxing, cyberstalking, botnets, gamergaters, weaponized social media tooling, carrier-grade targeted-harassment-as-a-service and state-actor psy-op/disinformation campaigns rolling by like bad weather. The relentless, grinding day-to-day malfeasance that’s the background noise of this grudgefuck of a zeitgeist we’re all stewing in just didn’t inform that worldview, because it didn’t exist.

In contrast, a more recent turn on the notion of openness is one of organizational or community openness; that is, openness viewed through the lens of the accessibility and the experience of participation in the organization itself, rather than unrestricted access to the underlying mechanisms. Put another way, it puts the safety and transparency of the organization and the people in it first, and considers the openness of work products and data retention as secondary; sometimes (though not always) the open-source nature of the products emerges as a consequence of the nature of the organization, but the details of how that happens are community-first, code-second (and sometimes code-sort-of, code-last or code-never). “Openness” in this context is about accessibility and physical and emotional safety, about the ability to participate without fear. The checks and balances are principally about inclusivity, accessibility and community norms; codes of conduct and their enforcement.

It won’t surprise you, I suspect, to learn that environments that champion this brand of openness are much more accessible to women, minorities and otherwise marginalized members of society that make up a vanishingly small fraction of old-school open source culture. The Rust and Python communities are doing good work here, and the team at Glitch have done amazing things by putting community and collaboration ahead of everything else. But a surprising number of tool-and-platform companies, often in “pink-collar” fields, have taken the practices of open community building and turned themselves into something that, code or no, looks an awful lot like the best of what modern open source has to offer. If you can bring yourself to look past the fact that you can’t fork their code, Salesforce – Salesforce, of all the damn things – has one of the friendliest, most vibrant and supportive communities in all of software right now.

These two views aren’t going to be easy to reconcile, because the ideas of what “accountability” looks like in both contexts – and more importantly, the mechanisms of accountability built in to the systems born from both contexts – are worse than just incompatible. They’re not even addressing something the other worldview is equipped to recognize as a problem. Both are in some sense of the word open, both are to a different view effectively closed and, critically, a lot of things that look like quotidian routine to one perspective look insanely, unacceptably dangerous to the other.

I think that’s the critical schism in the dialogue: the wildly mismatched understandings of the nature of risk and freedom. Seen in that light the recent surge of attention being paid to federated systems feels like a weirdly reactionary appeal to how things were better in the old days.

I’ve mentioned before that I think it’s a mistake to think of federation as a feature of distributed systems, rather than as consequence of computational scarcity. But more importantly, I believe that federated infrastructure – that is, a focus on distributed and resilient services – is a poor substitute for an accountable infrastructure that prioritizes a distributed and healthy community.  The reason Twitter is a sewer isn’t that Twitter is centralized, it’s that Jack Dorsey doesn’t give a damn about policing his platform and Twitter’s board of directors doesn’t give a damn about changing his mind. Likewise, a big reason Mastodon is popular with the worst dregs of the otaku crowd is that if they’re on the right instance they’re free to recirculate shit that’s so reprehensible even Twitter’s boneless, soporific safety team can’t bring themselves to let it slide.

That’s the other part of federated systems we don’t talk about much – how much the burden of safety shifts to the individual. The cost of evolving federated systems that require consensus to interoperate is so high that structural flaws are likely to be there for a long time, maybe forever, and the burden of working around them falls on every endpoint to manage for themselves. IRC’s (Remember IRC?) ongoing borderline-unusability is a direct product of a notion of openness that leaves admins few better tools than endless spammer whack-a-mole. Email is (sort of…) decentralized, but can you imagine using it with your junkmail filters off?

I suppose I should tip my hand at this point, and say that as much as I value the source part of open source, I also believe that people participating in open source communities deserve to be free not only to change the code and build the future, but to be free from the brand of arbitrary, mechanized harassment that thrives on unaccountable infrastructure, federated or not. We’d be deluding ourselves if we called systems that are just too dangerous for some people to participate in at all “open” just because you can clone the source and stand up your own copy. And I am absolutely certain that if this free software revolution of ours ends up in a place where asking somebody to participate in open development is indistinguishable from asking them to walk home at night alone, then we’re done. People cannot be equal participants in environments where they are subject to wildly unequal risk. People cannot be equal participants in environments where they are unequally threatened. And I’d have a hard time asking a friend to participate in an exercise that had no way to ablate or even mitigate the worst actions of the internet’s worst people, and still think of myself as a friend.

I’ve written about this before:

I’d like you to consider the possibility that that’s not enough.

What if we agreed to expand what freedom could mean, and what it could be. Not just “freedom to” but a positive defense of opportunities to; not just “freedom from”, but freedom from the possibility of.

In the long term, I see that as the future of Mozilla’s responsibility to the Web; not here merely to protect the Web, not merely to defend your freedom to participate in the Web, but to mount a positive defense of people’s opportunities to participate. And on the other side of that coin, to build accountable tools, systems and communities that promise not only freedom from arbitrary harassment, but even freedom from the possibility of that harassment.

More generally, I still believe we should work in the open as much as we can – that “default to open”, as we say, is still the right thing – but I also think we and everyone else making software need to be really, really honest with ourselves about what open means, and what we’re asking of people when we use that word. We’re probably going to find that there’s not one right answer. We’re definitely going to have to build a bunch of new tools.  But we’re definitely not going to find any answers that matter to the present day, much less to the future, if the only place we’re looking is backwards.

[Feel free to email me, but I’m not doing comments anymore. Spammers, you know?]

October 15, 2018

Quality Speakings

Filed under: documentation,future,interfaces,linux,mozilla,work — mhoye @ 7:32 pm

Unfortunately my suite of annoying verbal tics – um right um right um, which I continue to treat like Victor Borge’s phonetic punctuation – is on full display here, but I guess we’ll have to live with that. Here’s a talk I gave at the GTA Linux User Group on “The State Of Mozilla”, split into the main talk and the Q&A sections. I could probably have cut a quarter of that talk out by just managing those twitches better, but I guess that’s a project for 2019. In the meantime:


The talk:


The Q&A afterwards:

The preview on that second one is certainly unflattering. It ends on a note I’m pretty proud of, though, around the 35 minute mark.

I should go back and make a note of all the “ums” and “rights” in this video and graph them out. I bet it’s some sort of morse-coded left-brain cry for help.

August 13, 2018

Licensing Edgecases

Filed under: digital,documentation,interfaces,linux,mozilla,work — mhoye @ 4:37 pm

While I’m not a lawyer – and I’m definitely not your lawyer – licensing questions are on my plate these days. As I’ve been digging into one, I’ve come across what looks like a strange edge case in GPL licensing compliance that I’ve been trying to understand. Unfortunately it looks like it’s one of those Affero-style, unforeseen edge cases that (as far as I can find…) nobody’s tested legally yet.

I spent some time trying to understand how the definition of “linking” applies in projects where, say, different parts of the codebase use disparate, potentially conflicting open source licenses, but all the code is interpreted. I’m relatively new to this area, but generally speaking outside of copying and pasting, “linking” appears to be the critical threshold for whether or not the obligations imposed by the GPL kick in and I don’t understand what that means for, say, Javascript or Python.

I suppose I shouldn’t be surprised by this, but it’s strange to me how completely the GPL seems to be anchored in early Unix architectural conventions. Per the GPL FAQ, unless we’re talking about libraries “designed for the interpreter”, interpreted code is basically data. Using libraries counts as linking, but in the eyes of the GPL any amount of interpreted code is just a big, complicated config file that tells the interpreter how to run.

At a glance this seems reasonable but it seems like a pretty strange position for the FSF to take, particularly given how much code in the world is interpreted, at some level, by something. And honestly: what’s an interpreter?

The text of the license and the interpretation proposed in the FAQ both suggest that as long as all the information that a program relies on to run is contained in the input stream of an interpreter, the GPL – and if their argument sticks, other open source licenses – simply… doesn’t apply. And I can’t find any other major free or open-source licenses that address this question at all.

It just seems like such a weird place for an oversight. And given the often-adversarial nature of these discussions, given the stakes, there’s no way I’m the only person who’s ever noticed this. You have to suspect that somewhere in the world some jackass with a very expensive briefcase has an untested legal brief warmed up and ready to go arguing that a CPU’s microcode is an “interpreter” and therefore the GPL is functionally meaningless.

Whatever your license of choice, that really doesn’t seem like a place we want to end up; while this interpretation may be technically correct it’s also very obviously a bad-faith interpretation of both the intent of the GPL and that of the authors in choosing it.

The position I’ve taken at work is that “are we technically allowed to do this” is a much, much less important question than “are we acting, and seen to be acting, as good citizens of the larger Open Source community”. So while the strict legalities might be blurry, seeing the right thing to do is simple: we treat the integration of interpreted code and codebases the same way we’d treat C/C++ linking, respecting the author’s intent and the spirit of the license.

Still, it seems like something the next generation of free and open-source software licenses should explicitly address.

