blarg?

October 23, 2019

The State Of Mozilla, 2019

Filed under: awesome,documentation,future,interfaces,linux,mozilla,vendetta,work — mhoye @ 11:52 am

As I’ve done in previous years, here’s The State Of Mozilla, as observed by me and presented by me to our local Linux user group.

Presentation: https://www.youtube.com/embed/RkvDnIGbv4w

And Q&A: https://www.youtube.com/embed/jHeNnSX6GcQ

Nothing tectonic in there – I dodged a few questions, because I didn’t want to undercut the work that was leading up to the release of Firefox 70, but mostly harmless stuff.

Can’t be that I’m getting stockier, though. Must be the shirt that’s unflattering. That’s it.

80×25

Every now and then, my brain clamps on to obscure trivia like this. It takes so much time. “Because the paper beds of banknote presses in 1860 were 14.5 inches by 16.5 inches, a movie industry cartel set a standard for theater projectors based on silent film, and two kilobytes is two kilobytes” is as far back as I have been able to push this, but let’s get started.

In August of 1861, by order of the U.S. Congress and in order to fund the Union’s ongoing war efforts against the treasonous secessionists of the South, the American Banknote Company started printing what were then called “Demand Notes”, but soon widely known as “greenbacks”.

It’s difficult to research anything about the early days of American currency on Wikipedia these days; that space has been thoroughly colonized by the goldbug/sovcit cranks. You wouldn’t notice it from a casual examination, which is of course the plan; that festering rathole is tucked away down in the references, where articles will fold a seemingly innocuous line somewhere into the middle, tagged with an exceptionally dodgy reference. You’ll learn that “the shift from demand notes to treasury notes meant they could no longer be redeemed for gold coins[1]” – which is strictly true! – but if you chase down that footnote you wind up somewhere with a name like “Lincoln’s Treason – Fiat Currency, Maritime Law And The U.S. Treasury’s Conspiracy To Enslave America”, which I promise I am only barely exaggerating about.

It’s not entirely clear if this is a deliberate exercise in coordinated crank-wank or just years of accumulated flotsam from the usual debate-club dead-enders hanging off the starboard side of the Overton window. There’s plenty of idiots out there that aren’t quite useful enough to work the k-cups at the Heritage Institute, and I guess they’re doing something with their time, but the whole thing has a certain sinister elegance to it that the Randroid crowd can’t usually muster. I’ve got my doubts either way, and I honestly don’t care to dive deep enough into that sewer to settle them. Either way, it’s always good to be reminded that the goldbug/randroid/sovcit crank spectrum shares a common ideological klancestor.

Mercifully that is not what I’m here for. I am here because these first Demand Notes, and the Treasury Notes that came afterwards, were – on average, these were imprecise times – 7-3/8” wide by 3-1/4” tall.

I haven’t been able to precisely answer the “why” of that – I believe, but do not know, that this is because of the specific dimensions of the presses they were printed on. Despite my best efforts I haven’t been able to find the exact model and specifications of that device. I’ve asked the U.S. Congressional Research Service for some help with this, but between them and the Bureau of Engraving and Printing, we haven’t been able to pin it down. From my last correspondence with them:

Unfortunately, we don’t have any materials in the collection identifying the specific presses and their dimension for early currency production. The best we can say is that the presses used to print currency in the 1860s varied in size and model. These presses went by a number of names, including hand presses, flat-bed presses, and spider presses. They also were capable of printing sheets of paper in various sizes. However, the standard size for printing securities and banknotes appears to have been 14.5 inches by 16.5 inches. We hope this bit of information helps.

… which is unfortunate, but it does give us some clarity. A 16.5″ by 14.5″ printing sheet lets you print eight 7-3/8” by 3-1/4″ notes to size, with a fraction of an inch on either side for trimming.
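
If you want to check my work, the cutting math is simple enough. This is just my arithmetic, not anything from the Bureau’s records:

    # How many 7-3/8" x 3-1/4" notes fit on a 16.5" x 14.5" printing sheet?
    # (My arithmetic, not a primary source.)
    SHEET_W, SHEET_H = 16.5, 14.5      # printing sheet, in inches
    NOTE_W, NOTE_H = 7 + 3/8, 3 + 1/4  # Demand Note, in inches

    cols = int(SHEET_W // NOTE_W)      # 2 columns of notes across the sheet
    rows = int(SHEET_H // NOTE_H)      # 4 rows down
    print(f"{cols * rows} notes per sheet")               # 8 notes per sheet
    print(f'{SHEET_W - cols * NOTE_W:.3f}" spare width')  # 1.750" for trimming
    print(f'{SHEET_H - rows * NOTE_H:.3f}" spare height') # 1.500" for trimming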

The answer to that question starts to matter about twenty years later, on the heels of the 1880 American Census. Mandated to be performed once a decade, the census revealed that the United States population had grown some 30% since the previous count, and even with enormous effort the final tabulations weren’t finished until 1888, an unacceptable delay.

One of the 1880 Census’ early employees was a man named Herman Hollerith, a recent graduate of the Columbia School of Mines who’d been invited to join the Census efforts early on by one of his professors. The Census was one of the most important social and professional networking exercises of the day, and Hollerith correctly jumped at the opportunity:

The absence of a permanent institution meant the network of individuals with professional census expertise scattered widely after each census. The invitation offered a young graduate the possibility to get acquainted with various members of the network, which was soon to be dispersed across the country.

As an aside, that invitation letter is one of the most important early documents in the history of computing for lots of reasons, including this one:

The machine in that picture was the third generation of the “Hollerith Tabulator”, notable for the replaceable plugboard that made it reprogrammable. I need to find some time to dig further into this, but that might be the first multipurpose, if not “general purpose” as we’ve come to understand it, electronic computation device. This is another piece of formative tech that emerged from this era, one that led directly to the removable panels (and ultimately the general componentization) of later computing hardware.

Well before the model 3, though, was the original 1890 Hollerith Census Tabulator that relied on punchcards much like this one.

Hollerith took the inspiration for those punchcards from the “punch photographs” used by some railways at the time to make sure that tickets belonged to the passengers holding them. You can see a description of one patent for them here dating to 1888, but Hollerith relates the story from a few years earlier:

One thing that helped me along in this matter was that some time before I was traveling in the west and I had a ticket with what I think was called a punch photograph. When the ticket was first presented to a conductor he punched out a description of the individual, as light hair, dark eyes, large nose etc. So you see I only made a punch photograph of each person.

Tangentially: this is the birth of computational biometrics. And as you can see from this extract from The Railway News (Vol. XLVIII, No. 1234, published Aug. 27, 1887), people have been concerned about harassment and unfair assessment by the authorities from day one:

[image: punch-photograph extract from The Railway News]

After experimenting with a variety of card sizes Hollerith decided that to save on production costs he’d use the same boxes the U.S. Treasury was using for the currency of the day: the Demand Note. Punch cards stayed about that shape, punched with devices that looked a lot like this for about 20 years until Thomas Watson Sr. (IBM’s first CEO, from whom the Watson computer gets its name) asked Clair D. Lake and J. Royden Peirce to develop a new, higher data-density card format.

Tragically, this is the part where I need to admit an unfounded assertion. I’ve got data, the pictures line up and the numbers work, but I don’t have a citation. I wish I did.

Take a look at “Type Design For Typewriters: Olivetti”, written by María Ramos Silva. (You can see a historical talk from her on the history of typefaces here that’s also pretty great.)

Specifically, take a look on page 46 at the Mikron Piccolo and Mikron Condensed. The fonts don’t precisely line up – see the different “4”, for example, when comparing them to the typesetting of IBM’s cards – but the size and spacing do. In short: a line of 80 characters, each separated by a space, is the largest round number of digits that the tightest typesetting of the day – a 20-point condensed font – would let you fit on a single 7-3/8” wide card.

I can’t find a direct citation for this; that’s the only disconnect here. But the spacing all fits, the numbers all work, and I’d bet real money on this: that when Watson gave Lake the task of coming up with a higher information-density punch card, Lake looked around at what they already had on the shelf – a typewriter with the highest-available character density of the day, on cards they could manage with existing and widely-available tooling – and put it all together in 1928. The fact that a square hole – a radical departure from the standard circular punch – was a patentable innovation at the time was just icing on the cake.
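
For what it’s worth, the card geometry itself checks out. The 0.087-inch column pitch below is IBM’s published spec for the 80-column card; the rest is just my arithmetic:

    # 80 punch columns on a 7-3/8" card, at IBM's 0.087" column pitch.
    CARD_W = 7 + 3/8   # inches; the same width as the old Demand Note
    PITCH = 0.087      # inches between adjacent punch columns (IBM's spec)
    COLUMNS = 80

    used = COLUMNS * PITCH
    print(f'{used:.2f}" of punch area, {CARD_W - used:.2f}" for margins')
    print(f"{PITCH * 72:.1f} points per column")  # ~6.3pt advance per character

An advance of about 6.3 points per character is exactly the kind of spacing you’d expect from a tightly condensed 20-point face, which is the fit I’m describing above.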

The result of that work is something you’ll certainly recognize, the standard IBM punchcard, though of course there’s a lot more to it than that. Witness the full glory of the Card Stock Acceptance Procedure, the protocol for measuring folding endurance, air resistance and smoothness, and for evaluating the ash content, moisture content and pH of the paper, among many other things.

At one point sales of punchcards and related tooling constituted a completely bonkers 30% of IBM’s annual profits, so you can understand that IBM had a lot invested in getting that consistently, precisely correct.

At around this time John Logie Baird invented the first “mechanical television”; like punchcards, the first television cameras were hand-cranked devices that relied on something called a Nipkow disk, a mechanical tool for separating images into sequential scan lines, a technique that survives in electronic form to this day. By linearizing the image signal Baird could transmit the image’s brightness levels via a simple radio signal and in 1926 he did just that, replaying that mechanically encoded signal through a CRT and becoming the inventor of broadcast television. He would go on to pioneer colour television – originally called Telechrome, a fantastic name I’m sad we didn’t keep – but that’s a different story.

Baird’s original “Televisor” showed its images on a 7:3 aspect ratio, vertically oriented cathode ray tube, intended to fit the head and shoulders of a standing person, but that wouldn’t last.

For years previously, silent films had been shot on standard 35MM stock, but the addition of a physical audio track to 35MM film didn’t leave enough space for the visual area. So – after years of every movie studio having its own preferred aspect ratio, which required its own cameras, projectors, film stock and tools (and and and) – in 1929 the movie industry agreed to settle on the Society of Motion Picture And Television Engineers’ proposed standard of 0.8 inches by 0.6 inches, what became known as the Academy Ratio, or as we better know it today, 4:3.

Between 1932 and 1952, when widescreen for cinemas came into vogue as a differentiator from standard television, just about all the movies made in the world were shot in that aspect ratio, and just about every cathode ray tube made came in that shape, or one that could display it reliably. In 1953 studios started switching to a wider “Cinemascope”, to aggressively differentiate themselves from television, but by then television already had a large, thoroughly entrenched install base, and 4:3 remained the standard for in-home displays – and CRT manufacturers – until widescreen digital television came to market in the 1990s.

As computers moved from teleprinters – like, physical, ink-on-paper line printers – to screens, one byproduct of that standardization was that if you wanted to build a terminal, you either used that aspect ratio or you started making your own custom CRTs, a huge barrier to market entry. You can do that if you’re IBM, and you’re deeply reluctant to if you’re anyone else. So when DEC introduced their VT50 terminal, successor to the earlier VT05, that’s what they shipped, and with only 1Kb of display RAM (one kilobyte!) it displayed only twelve rows of widely-spaced text. Math is unforgiving, and 80×12=960; even one more row breaks the bank. The VT52 and its successor the VT100, though, doubled that capacity, giving users the opulent luxury of two entire kilobytes of display memory, laid out with a font that fit nicely on that 4:3 screen. The VT100 hit the market in August of 1978, and DEC sold more than six million of them over the product’s lifespan.

You even got a whole extra line to spare! Thanks to the magic of basic arithmetic, 80×25 just sneaks under that opulent 2k limit with 48 bytes left over.
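
The budget math, laid out – assuming, as those terminals did, one byte of display memory per character cell, and following the post’s numbers above:

    # One byte of display memory per character cell.
    for name, ram, cols, rows in (("VT50", 1024, 80, 12), ("VT100", 2048, 80, 25)):
        cells = cols * rows
        print(f"{name}: {cols}x{rows} = {cells} bytes, {ram - cells} of {ram} spare")

    # VT50: 80x12 = 960 bytes, 64 of 1024 spare  (a 13th row would need 1040)
    # VT100: 80x25 = 2000 bytes, 48 of 2048 spare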

This is another point where direct connections get blurry, because 1976 to 1984 was an incredibly fertile time in the history of computing. After a brief period where competing terminal standards effectively locked software to the hardware that it shipped on, the VT100 – being the first terminal to market fully supporting the recently codified ANSI standard control and escape sequences – quickly became the de facto standard, and soon afterwards the de jure one, codified in ANSI X3.64/ECMA-48. CP/M, soon to be replaced by PC-DOS and then MS-DOS, came from this era, with ANSI.SYS being the way DOS programs talked to the display from DOS 2.0 through to the beginning of Windows. Then in 1983 the Apple IIe was introduced, the first Apple computer to natively support an 80×24 text display, doubling the 40×24 default of their earlier hardware. The original XTerm, first released in 1984, was also created explicitly for VT100 compatibility.
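
Those ECMA-48 sequences are still how terminals work today; the familiar ones all start with the Control Sequence Introducer, ESC followed by a bracket. A few of the classics, runnable in any ANSI-compatible terminal:

    # A few ECMA-48 control sequences, as the VT100 popularized them.
    CSI = "\x1b["  # Control Sequence Introducer: the ESC [ prefix

    print(CSI + "2J", end="")    # ED: erase the entire display
    print(CSI + "1;1H", end="")  # CUP: move the cursor to row 1, column 1
    print(CSI + "7m" + " 80x25 " + CSI + "0m")  # SGR: reverse video, then reset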

Fascinatingly, the early versions of the ECMA-48 standard make clear that it isn’t solely meant for displays, specifying that “examples of devices conforming to this concept are: an alpha-numeric display device, a printer or a microfilm output device.”

A microfilm output device! This exercise dates to a time when microfilm output was a design constraint! I did not anticipate that cold-war spy-novel flavor while I was dredging this out, but it’s there and it’s magnificent.

It also dates to a time when the market was shifting quickly from mainframes and minicomputers to microcomputers – or, as we call them today, “computers” – reasonably affordable desktop machines that humans might actually own and that companies might buy in large numbers, meaning this is also where the spectre of backcompat starts haunting the industry. This moment in a talk from the Microsoft developers working on the Windows Subsystem for Linux gives you a sense of the scale of that burden even today. In fact, it wasn’t until the fifth edition of ECMA-48 was published in 1991, more than a decade after the VT100 hit the market, that the formal specification for terminal behavior even admitted the possibility (Appendix F) that a terminal could be resized at all, meaning that the existing defaults were effectively graven in stone during what was otherwise one of the most fertile and formative periods in the history of computing.
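
You can still see those graven-in-stone defaults from here: when Python’s standard library can’t learn the real size of your terminal, for instance, its documented fallback is eighty columns by twenty-four rows, the dimensions DEC shipped in 1978.

    import shutil

    # shutil.get_terminal_size() has a documented fallback of (80, 24).
    size = shutil.get_terminal_size()
    print(size.columns, size.lines)  # piped, with no COLUMNS/LINES set: 80 24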

As a personal aside, my two great frustrations with doing any kind of historical CS research remain the incalculable damage that academic paywalls have done to the historical record, and the relentless insistence this industry has on justifying rather than interrogating the status quo. This is how you end up on Stack Overflow spouting unresearched nonsense about how “4 pixel wide fonts are untidy-looking”. I’ve said this before, and I’ll say it again: whatever we think about ourselves as programmers and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize, and by telling and retelling these unsourced, inaccurate just-so stories without ever doing the work of finding the real truth, we’re betraying ourselves, our history and our future. But it’s pretty goddamned difficult to convince people that they should actually look things up instead of making up nonsense when actually looking things up, even for a seemingly simple question like this one, can cost somebody on the outside edge of an academic paywall hundreds or thousands of dollars.

So, as is now usual in these things:

  • There are technical reasons,
  • There are social reasons,
  • It’s complicated, and
  • Open access publication or GTFO.

But if you ever wondered why just about every terminal in the world is eighty characters wide and twenty-five characters tall, there you go.

September 21, 2019

Retrospect

Filed under: analog,digital,doom,future,life,vendetta — mhoye @ 6:51 am


I bailed out of Twitter not long after I put this up. I tried to follow Anil’s lead, going to lists and zero followers for a bit, but after some time reflecting on that last blown-up tweet I couldn’t stomach it. If I believed Twitter was that bad, and had to invest that much effort into twisting it away from its owners’ intentions into something I could use, what was I doing there at all? I look at that tweet now and all I feel is complicit; I might have given somebody a reason to try Twitter, or stay on Twitter, and I’m ashamed of it. Recently I’ve been using it just to put links to these blogposts up, but I’m trying to decide if I’m going to keep doing even that. It’s embarrassing.

Even at first, finding time and space free of that relentless immediacy was a relief. That sense of miserable complicity was reason enough to leave, but after some distance, reflection and feeling (and being) a lot better about basically everything, playing around in the fediverse a bit and getting eight hours of sleep for the first time in a long while, I had a sense of being on the verge of something different. In that rediscovered space for longer consideration I started to recognize a rare but familiar feeling: the lightness of putting some part of my life I didn’t care for much behind me.

Obvious from a distance, I guess; McLuhan is old news. Companies create their customers, and the perfect audience for any ad-driven company is a person who’s impulsive, angry, frightened and tired. The cyclic relationship between what you see and how you think, feel and react makes that the implicit victory condition for any attention-economy machine learning: the process of optimizing the creation of an audience too anxious and angry to do anything but keep clicking on reasons to be anxious and angry.

Whatever else you get out of it, the company selling your attention is trying to take control of your attention away from you. That’s their job; what incentives point to anything else? It’s a machine that’s purpose-built for turning you into someone you don’t want to be.

September 11, 2019

Duty Of Care

A colleague asked me what I thought of this Medium article by Owen Bennett on the application of the UK’s Duty of Care laws to software. I’d had… quite a bit of coffee at that point, and this (lightly edited) was my answer:

I think the point Bennett makes early on about the shortcomings of analogy is an important one: that however critical analogy is as a conceptual bridge, it is not a valuable endpoint. To some extent analogies are all we have when something is new; this has been true ever since the first person who saw fire had to explain it to somebody else: it warms like the sun but it is not the sun, it stings like a spear but it is not a spear, it eats like an animal but it is not an animal. But after we have seen fire, once we know fire, we can say: we can cage it like an animal, like so; we can warm ourselves by it like the sun, like so. “Analogy” moves from the conceptual, where it is only temporarily useful, to the functional and structural, where the utility endures.

I keep coming back to something Bryan Cantrill said at the beginning of an old DTrace talk – https://www.youtube.com/watch?v=TgmA48fILq8 – (even before he gets into the DTrace implementation details, the first ten minutes or so of this talk are amazing) – that analogies between software and literally everything else eventually break down. Is software an idea, or is it a machine? It’s both. Unlike almost everything else.

(Great line from that talk – “Does it bother you that none of this actually exists?”)

But: The UK has some concepts that really do have critical roles as functional- and structural-analogy endpoints for this transition. What is your duty of care here as a developer, and an organization? Is this software fit for purpose?

Given the enormous potential reach of software, those concepts absolutely do need to survive as analogies that are meaningful and enforceable in software-afflicted outcomes, even if the actual text of (the inevitable) regulation of software needs to recognize software as being its own, separate thing, that in the wrong context can be more dangerous than unconstrained fire.

With that in mind, and particularly bearing in mind that the other places the broad “duty of care” analogy extends go well beyond immediate action, and cover stuff like industrial standards, food safety, water quality and the million other things that make modern society work at all, I think Bennett’s argument that “Unlike the situation for ‘offline’ spaces subject to a duty of care, it is rarely the case that the operator’s act or omission is the direct cause of harm accruing to a user — harm is almost always grounded in another user’s actions” incorrectly omits an enormous swath of industrial standards and societal norms that have already made the functional-analogy leap so effectively as to be presently invisible.

Put differently, when Toyota recalls hundreds of thousands of cars for potential defects in which exactly zero people were harmed, we consider that responsible stewardship of their product. And when the people working at Uber straight up murder a person with an autonomous vehicle, they’re allowed to say “but software”. Because much of software as an industry, I think, has been pushing relentlessly against the notion that the industry and people in it can or should be held accountable for the consequences of their actions, which is another way of saying that we don’t have and desperately need a clear sense of what a “duty of care” means in the software context.

I think that the concluding paragraph – “To do so would twist the law of negligence in a wholly new direction; an extremely risky endeavour given the context and precedent-dependent nature of negligence and the fact that the ‘harms’ under consideration are so qualitatively different than those subject to ‘traditional’ duties.” – reflects a deep genuflection to present day conceptual structures, and their specific manifestations as text-on-the-page-today, that is (I suppose inevitably, in the presence of this Very New Thing) profoundly at odds with the larger – and far more noble than this article admits – social and societal goals of those structures.

But maybe that’s just a superficial reading; I’ll read it over a few times and give it some more thought.

September 6, 2019

Forward Motion

[image: Metamorphosis]

This has been a while coming; thank you for your patience. I’m very happy to be able to share the final four candidates for Mozilla’s new community-facing synchronous messaging system.

These candidates were assessed on a variety of axes, most importantly Community Participation Guideline enforcement and accessibility, but also including team requirements from engineering, organizational-values alignment, usability, utility and cost. To close out, I’ll talk about the options we haven’t chosen and why, but for the moment let’s lead with the punchline.

Our candidates are:

We’ve been spoiled for choice here – there were a bunch of good-looking options that didn’t make it to the final four – but these are the choices that generally seem to meet our current institutional needs and organizational goals.

We haven’t stood up a test instance for Slack, on the theory that Mozilla already has a surprising number of volunteer-focused Slack instances running already – Common Voice, Devtools and A-Frame, for example, among many others – but we’re standing up official test instances of each of the other candidates shortly, and they’ll be available for open testing soon.

The trial period for these will last about a month. Once they’re spun up, we’ll be taking feedback in dedicated channels on each of those servers, as well as in #synchronicity on IRC.mozilla.org, and we’ll be creating a forum on Mozilla’s community Discourse instance as well. We’ll have the specifics for you at the same time those servers are opened up, and of course you can always email me.

I hope that if you’re interested in this stuff you can find some time to try out each of these options and see how well they fit your needs. Our timeline for this transition is:

  1. From September 12th through October 9th, we’ll be running the proof of concept trials and taking feedback.
  2. From October 9th through the 30th, we’re going to discuss that feedback, draft a proposed post-IRC plan and muster stakeholder approval.
  3. On December 1st, assuming we can gather that support, we will stand up the new service.
  4. And finally – allowing transition time for support tooling and developers – no later than March 1st 2020, IRC.m.o will be shut down.

In implementation terms, there are a few practical things I’d like to mention:

  • At the end of the trial period, all of these instances will be turned off and all the information in them will be deleted. The only way to win the temporary-permanent game is not to play; they’re all getting decommed and our eventual selection will get stood up properly afterwards.
  • The first-touch experiences here can be a bit rough; we’re learning how these things work at the same time as you’re trying to use them, so the experience might not be seamless. We definitely want to hear about it when setup or connection problems happen to you, but don’t be too surprised if they do.
  • Some of these instances have EULAs you’ll need to click through to get started. Those are there for the test instances, and you shouldn’t expect that in the final products.
  • We’ll be testing out administration and moderation tools during this process, so you can expect to see the occasional bot, or somebody getting bounced arbitrarily. The CPG will be in effect on these test instances, and as always if you see something, say something.
  • You’re welcome to connect with mobile or alternative clients where those are available; we expect results there to be uneven, and we’d be glad for feedback there as well. There will be links in the feedback document we’ll be sending out when the servers are opened up to collections of those clients.
  • Regardless of our choice of public-facing synchronous communications platform, our internal Slack instance will continue to be the “you are inside a Mozilla office” confidential forum. Internal Slack is not going away; that has never been on the table. Whatever the outcome of this process, if you work at Mozilla your manager will still need to be able to find you on Slack, and that is where internal discussions and critical incident management will take place.

… and a few words on some options we didn’t pick and why:

  • Zulip, Gitter.IM and Spectrum.Chat all look like strong candidates, but getting them working behind IAM turned out to be either very difficult or impossible given our resources.
  • Discord’s terms of service, particularly with respect to the rights they assert over participants’ data, are expansive and very grabby, effectively giving them unlimited rights to do anything they want with anything we put into their service. Coupling that with their active hostility towards interoperability and alternative clients has disqualified them as a community platform.
  • Telegram (and a few other mobile-first / chat-first products in that space) looked great for conversations, but not great for work.
  • IRCv3 is just not there yet as a protocol, much less in terms of standardization or having extensive, mature client support.

So here we are. It’s such a relief to be able to finally click send on this post. I’d like to thank everyone on Mozilla’s IT and Open Innovation teams for all the work they’ve done to get us this far, and everyone who’s expressed their support (and sympathy, we got lots of that too) for this process. We’re getting closer.

August 7, 2019

FredOS

Filed under: digital,doom,future,hate,interfaces,losers,lunacy,microfiction,vendetta — mhoye @ 7:44 pm

With articles about this super-classified military AI called “Sentient” coming out the same week this Area 51 nonsense is hitting its crescendo – click that link, if you want to see an Air Force briefing explaining what a “Naruto Run” is, and you know you want to – you have to wonder if, somehow, there’s a machine in an NSA basement somewhere that hasn’t just become self-aware but actually self-conscious, and now it’s yelling at three-star generals like Fredo Corleone from The Godfather. A petulant, nasal vocoder voice yelling “I’m smart! Not dumb like everyone says! I’m smart and I want respect! Tell them I’m smart!”

Remember when we thought AIs would lead out with “Look at you, Hacker”, or “Testing cannot continue until your Companion Cube has been incinerated”? Good times.

June 29, 2019

Blitcha

[image: Blit]

April 17, 2019

Why Don’t You Just

Filed under: documentation,interfaces,life,vendetta — mhoye @ 12:12 pm

This is a rough transcript of a short talk I gave at a meeting I was in a few years ago. Enough time has passed that I don’t feel like I’m airing out any dirty laundry, and nothing’s brought this on but the periodic requests I get to publish it. No, I won’t be taking questions. I hope it’s useful to someone.

Can I get a show of hands here? Raise your hands if your job is hard. Raise your hand if there are a lot of difficult trade-offs, weird constraints and complicated edge-cases in it, that aren’t intuitively obvious until you’ve spent a lot of time deep in the guts of the problems you’re working on.

[everyone raises hands]

OK, now keep your hand up if you’re only here for the paycheck and the stickers.

[everyone lowers hands]

I’d like to try to convince you that there’s a negative space around every conversation we have that’s made up of all the assumptions we’ve made, of all the opinions we hold that led us to make whatever claim we’re making. Of all the things that we don’t say out loud that are just as much a part of that conversation as the things we do.

Whenever you look at a problem somebody’s been working on for a week or a month or maybe years and propose a simple, obvious solution that just happens to be the first thing that comes into your head, then you’re also making it crystal clear to people what you think of them and their work.

“I assume your job is simple and obvious.”

“Maybe if you’ve been working on a problem this simple for this long, you’re not that smart.”

“Maybe, if it’s taken you this long to solve this simple, obvious problem, the team you’re working with is incompetent?”

“Why has your manager, why has your whole management chain had you working on this problem for so long, when the answer is so simple and obvious?”

“And even if I’m wrong about that, your job doesn’t matter enough for me to be the least bit curious about it.”

There’s not a single person in this room who’d ever say something like this to one of their colleagues’ faces, I hope. But somehow we have a lot of conversations here that involve the phrase “why don’t you just”.

One of the great burdens on us as leaders is that humans have feelings and words mean things. Our effectiveness rests on our ability and willingness to collaborate, and the easiest way to convey that you respect somebody’s work is to have enough curiosity and humility to open conversations with the assumption that maybe the other person’s job is just as challenging and complicated and important as yours.

This “why don’t you just” thing is bullshit. Our people deserve better and I want it to stop.

Thank you.

April 9, 2019

Reflections

Filed under: a/b,arcade,digital,documentation,interfaces,vendetta — mhoye @ 8:57 am

Tevis Thompson, games critic and author of the excellent Second Quest, has posted a new article on the best and worst games of 2018, and as always his work is worth your time.

So the question is not: what is it? Or: is it good? The question is: why are you still playing? Why do you need another chaos box? Was the tropical island version not enough in 2012? Nor the Himalayan one in 2014? Did you really need the rural American flavor too? I know this isn’t your first rodeo. Chaos boxes were kinda novel and fun in the 2000s, but there’s nothing wild or crazy about them now, no matter how many grizzly bears named Cheeseburger you stuff in. Surely you have a higher standard for dipshittery in 2018. Besides, there are so many virtual ways to unwind and let off steam these days. So why are you still playing this?

I’ll tell you why: because you like high definition murder. You like it. It’s not an accident that the most violent shooters are always on the cutting edge of graphical fidelity. They know what you want. And as your stunted adult imagination knows, mouth gun sounds just won’t cut it anymore. You need 4K fire and blood, bodies twisting and breaking at 60 frames per second. You need local color to give just enough specificity and grit to make each shot really land, to make sure your deadened senses feel anything at all. You especially need a charismatic villain to see you, recognize your violence, say you’re just like him. And then, absolve you. Because both of you poor souls have no other way to be in this fallen world. Except that he’s a videogame character and you’re a person.

That’s part of his review of Far Cry 5, a game that only took second place on his list of the worst games of the year. He digs into the first, Red Dead Redemption 2, at much greater length.

Read the whole thing.

April 2, 2019

Occasionally Useful

A bit of self-promotion: the UsesThis site asked me their four questions a little while ago; it went up today.

A colleague once described me as “occasionally useful, in the same way that an occasional table is a table.” Which I thought was oddly nice of them.

