Over on Mastodon, @therisingtithes makes an important point about the idea that AI “democratizes” artistic expression:
“I think the really troublesome part in conversations about AI is when people say it ‘democratizes’ creative expression—as if to imply that practicing a talent and developing one’s own voice is an institutional boundary that only a ‘creative elite’ can overcome. There are a lot of obviously legitimate barriers to entry in the creative industries—ask any person of colour, or anyone who lives in a non-Anglophone developing country. But BEING ABLE TO DRAW PLEASING ART OR WRITE INTERESTING PROSE isn’t, like, an elitist activity.”
I think it’s actually tragic: the idea that you can “democratize” artistic expression at all only makes sense if you first believe that the capacity for art is innate and unchangeable, something you’re either born with or will never have. Part of the tragedy is that this belief resonates strongly with the general disdain for the humanities shared by a lot of STEM practitioners, who don’t see the arts as real skills learned and honed, but as some arbitrary blessing they were denied at birth.
And that frame is such a profound failure of imagination and empathy, some sort of abusive marriage of disdain and despair, to live believing that you can’t experience some profound work of art without also believing, in some fundamental, irrevocable sense, that you don’t and can’t ever have the capacity to evoke anything like that sense in another person. You’re just Not An Artist, you can’t do anything like that ever.
It must be soul-crushing, to live in the loneliness of that kind of envy. Certainly you can see why somebody trapped in that frame – in the foundational belief that they immutably have No Capacity For Art, that they have been arbitrarily denied any capacity to connect with another human on that level – would need to believe, desperately, that the stochastic parrots that take and mindlessly repeat stolen approximations back to us are somehow equivalent to the fullness of human experience.
A lot of us have joked about the whole “I used to be a libertarian capitalist CEO but then I took a bunch of drugs in the jungle and realized other people feel things” tropes, but I frequently wonder what a century of lead poisoning, prohibitionism and suburbs have taken from us. Music, art, poetry, and food, like wine or weed, these are assistive technologies for human connection and empathy, the ramps many of us can’t even know we need until we have them, but maybe never will.
But … “democratizing”, geez.
If you want to be the Robin Hood in a story worth telling, the thing you’re stealing from the rich and giving to the poor has to be the riches. It’s weird that anyone would need to say that out loud. If you’re stealing from artists and selling it to corporations as a subscription? That story isn’t worth the trouble of telling. Who’d listen?
A long retired friend of mine – fascinating guy, among other things a former hostage negotiator – used to say “sometimes people bring me emergencies… I always start by asking myself, ‘what if we do nothing’? Not because I’m lazy, though I am lazy. But because it’s a good benchmark. Nobody likes wasting their time or making things worse, so anything we decide to do has to have a good chance at giving us a better result than we could have had with no effort at all.”
In a similar vein, Mike Haertel, the original author of GNU grep and not someone I know, once said that “the key to making programs fast is to make them do practically nothing”.
I think about both of those lines a lot.
A friend reminded me recently of a trick not a lot of people know: the combination of uBlock Origin and the “disable custom fonts” option in Firefox feels a lot like you’ve just bought a new, much better computer.
On my own system, I’ve installed the Atkinson Hyperlegible font from the Braille Institute – it does exactly what it says on the tin, it’s unreasonably good – and then in Firefox, under settings -> general -> fonts -> advanced, set both the serif and sans-serif fonts to Atkinson Hyperlegible, picked a minimum font size of my own choosing and unchecked the “let sites choose their own fonts” box.
(If you’re also looking for a nice monospace font: Fira Code – a fork of Fira Mono that includes a bunch of new work, including “programmer’s ligatures” – is really nice!)
If your eyes are getting as old as mine, setting that minimum font size is gold. After that, get uBlock Origin here and let it work.
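If you’d rather set all of this in one place than click through the menus, Firefox will also take these as prefs in a user.js file in your profile directory. These pref names are from memory, so double-check them against about:config before trusting me:

```javascript
// Use Atkinson Hyperlegible everywhere, enforce a minimum size, and
// ignore site-specified fonts. Drop this into user.js in your profile.
user_pref("font.name.serif.x-western", "Atkinson Hyperlegible");
user_pref("font.name.sans-serif.x-western", "Atkinson Hyperlegible");
user_pref("font.minimum-size.x-western", 16);        // pick your own size
user_pref("browser.display.use_document_fonts", 0);  // the "let sites choose" box, unchecked
```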
If you’re using Chrome, Edge or Safari, you can get the same results by visiting Mozilla, downloading and installing Firefox, and then going through the above steps.
It is striking what a difference it makes. My only caveats are that if you use any addons that modify your fonts for accessibility reasons, or anything else, this will definitely prevent those from working, and this approach can cause usability problems in some sites that use custom fonts for things like navigation arrows.
For my own purposes, considering how fast everything suddenly becomes – which borders on the ridiculous – it’s worth it.
Indulge me for a minute; I’d like to tell you about a conference I’m helping organize, and why. But first, I want to tell you a story about measuring things, and the tools we use to do that.
Specifically, I want to talk about thermometers.
Even though a rough understanding of the basic principles of the tool we now call a thermometer is at least two thousand years old, for centuries the whole idea that you could measure temperature at all was fantastical. The entire idea was absurd; how could you possibly measure an experience as subjective and ethereal as temperature?
Even though you could demonstrate the basic principles involved in ancient Greece with nothing more than glass tubes and a fire, the question itself was nonsense, like asking how much a poem weighs, or how much water you could pour out of a sunset.
It was more than 1600 years between the earliest known glass-tube demonstrations and Santorio Santorio’s decision to put a ruler to the side of one of those glass tubes; it was most of a century after that before Carlo Renaldini went ahead and tried Christiaan Huygens’ idea of using the freezing and boiling points of water as the anchor points of a linear scale. (Sir Isaac Newton followed that up with a proposal that the number of increments in that gradient be 12, a decision I’m glad we didn’t stick with. Anders Celsius’ idea was better.)
The first tools we’d recognize as “modern thermometers” – using mercury, one of those unfortunately-reasonable-at-the-time decisions that have had distressing long-term consequences – were invented by Fahrenheit in 1714. More tragically, he also proposed the scale that bears his name, but: the tool worked, and if there’s one thing in tech that we all know and fear, it’s that there’s nothing quite as permanent as something temporary that works.
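The “anchor points of a linear scale” idea is worth making concrete: pick two reproducible temperatures, assign them numbers, and everything else falls on the line between them. A quick sketch:

```python
# Two physical anchor points define a linear temperature scale.
# Celsius pins water's freezing point at 0 and boiling at 100;
# Fahrenheit pins the same two anchors at 32 and 212.
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

assert c_to_f(0) == 32.0     # freezing point of water
assert c_to_f(100) == 212.0  # boiling point of water
```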
By 1900, Henry Bolton – author of “The Evolution Of The Thermometer, 1592-1743” – had described this long evolution as “encumbered with erroneous statements that have been reiterated with such dogmatism that they have received the false stamp of authority”, a phrase that a lot of us in tech, I suspect, find painfully familiar.
Today, of course, outside of the most extreme margins – things get pretty dicey down in the quantum froth around absolute zero and when your energy densities are way up past the plasmas – these questions are behind us. Thermometers are real, temperatures can be very precisely measured, and that has enabled a universe of new possibilities across physics and chemistry and through metallurgy to medicine to precision manufacturing, too many things to mention.
The practice of computation, as a field, is less than a century old. We measure the things we can measure, usually the things that are easiest to measure, but at the intersection of humans and computers, the most important part of the exercise, this field is still deeply and dogmatically superstitious. The false stamps of authority are everywhere.
I mean, look at this. Look at it. Tell me that isn’t kabbalistic occultism, delivered via PowerPoint.
This is where we are, but we can do better.
On Tuesday, April 25, and Wednesday, April 26, It Will Never Work in Theory is running our third live event: a set of lightning talks from leading software engineering researchers on immediate, actionable results from their work.
I want to introduce you to the people building the thermometers of modern software engineering.
Some of last year’s highlights include the introduction of novel techniques like Causal Fairness Testing, supercharging DB test suites with SQLancer and two approaches for debugging neural nets, and none of these are hypothetical future someday ideas. These are tools you can start using now. That’s the goal.
And it should be a lot of fun, I hope to see you there.
Never Work In Theory: https://neverworkintheory.org/
The event page: https://www.eventbrite.com/e/it-will-never-work-in-theory-tickets-527743173037
Wordle was fun for a while, but being a little bit complicit is kind of like being a little bit pregnant, except the thing I’d be bringing into the world is a future I don’t want. So that’s the end of that.
I aired this out over on Mastodon, and because Mastodon is still the good internet, Mastodon delivered. Here’s a collection of generally word-based, short-play minigames, simple joys that remain undiluted. I’ve culled out the straight up clones in favor of things that bring something new to the table.
… and some not-word games, riffing on Guess The Game:
And finally, if you want to microdose on Clue every morning: murdle.
Over on Mastodon I asked: “What modern utilities should be a standard part of a modern unixy distro? Why? I’ve got jq, pandoc, tldr and a few others on my list, but I’d love to know others.”
Here’s what came back; I’ve roughly grouped them into two categories: new utilities and improvements on the classics.
In no particular order, the new kids on the block:
As an aside about htop: one commenter noted that they run htop on a non-interactive TTY, something like control-alt-F11. So do I, and it’s great, but you must not do this on security-critical systems. You can kill processes through htop, and it gives you a choice of which signal to send; on most machines running systemd, PID 1 responds to SIGRTMIN+1 by dropping back into rescue mode, and that’s a backstage pass to a root shell. I have used this to recover a personal device from an interrupted upgrade that broke PAM. You must never do this on a machine that matters.
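To make the footgun concrete, here’s a sketch of what that signal actually is. SIGRTMIN’s numeric value varies between machines because glibc reserves the first few real-time signals for its own use, so you have to resolve it at runtime; the dangerous line stays commented out on purpose:

```python
import signal

# glibc reserves the first few real-time signals for its own internals,
# so SIGRTMIN's numeric value varies by machine; resolve it at runtime.
rescue_signal = signal.SIGRTMIN + 1
print(f"SIGRTMIN+1 is signal {rescue_signal} on this machine")

# On a systemd machine, this one signal to PID 1 is the whole exploit:
# it drops the system into rescue mode.
# os.kill(1, rescue_signal)  # do NOT run this on a machine that matters
```

(This assumes a Linux machine; SIGRTMIN isn’t exposed everywhere.)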
Improvements on “classic” tools and utilities:
So, there you go. Life in the terminal is still improving here in 2023, it’s great to see.
I’ve said elsewhere that I don’t think that the GPT, OpenAI arbitrary-text-generation stuff is all that interesting. A machine repeating permutations of things we’ve already said back to us is a weird thing to be impressed by or frightened of, unless you secretly know that your job is confidently repeating plausible-sounding nonsense with no regard for whether there’s any truth to it.
But in practical terms, their real impact will be that how we conceive of knowledge at all gets rapidly bifurcated into “small towns that can still pump clean water from the wells” and “London during the Great Stink”. So, as the flight attendants say, be sure to put your own mask on first. Anyone remember when Google’s mission was to “organize the world’s information and make it universally accessible and useful”, and not to build tools that automatically generate an endless stream of believable-sounding text? Yeah, me neither.
I guess it’s no surprise that a few consecutive generations of people being really, methodically deliberate about misinterpreting the Imitation Game to avoid staring directly at Turing’s persecution, debasing his life and work so profoundly that they’d claim a believable deception is some indicator of nascent intelligence, would bring us here.
The Imitation Game was a cry for help from a man being destroyed by the society he spent his life saving. Is it any wonder that a brilliant, closeted gay man, who might be incarcerated or even executed for the crime of being himself, would have existential questions about what it means to need to deceive people – your friends, your colleagues, your family and maybe even yourself, every single day – simply in order to be treated like a human being?
Using the tools Turing gave us to build stochastic parrots that cannot hew to any concept of right or wrong, whose only utility is a weapon aimed at the foundations of justice, civil democracy and the entire concept of truth, that’s bad enough. But saying they “pass” a made-up test about plausibly lying to yourself that you’ve named after a closeted man the state hounded to suicide is beyond disgusting. It’s grotesque.
The mere existence of these tools demeans us all as scientists, engineers and humans. If you’re involved in building these things you should resign from the field in shame. In honour of Alan Turing’s memory and basic human decency, if nothing else.
Every now and then I see some more of the Muskrat Standom’s effluent ooze past my eyes and all I can think is, kid, that boot’s never gonna lick you back.
Timnit Gebru recently wrote about Effective Altruism for Wired, where she correctly notes that it’s all self-serving nonsense, “promising an ‘unimaginably great future’ around the corner while proliferating products harming marginalized groups in the now.”
“Some of the billionaires who have committed significant funds to this goal include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel, Dustin Moskovitz, and Sam Bankman-Fried, who was one of EA’s largest funders until the recent bankruptcy of his FTX cryptocurrency platform.”
… and she’s right. But I also think that there’s a dynamic at play that she’s elided, an ideological metagame at play.
In his 1963 speech “Wealth And Poverty”, Galbraith observed that “the modern conservative is not even especially modern. He is engaged, on the contrary, in one of man’s oldest, best financed, most applauded, and, on the whole, least successful exercises in moral philosophy. That is the search for a superior moral justification for selfishness. It is an exercise which always involves a certain number of internal contradictions and even a few absurdities.”
This was penned before social media existed, obviously, and long before it was thoroughly weaponized, so it’s understandable how quaint it seems today. How could Galbraith have anticipated the rise of the postmodern conservative, for whom power is self-evidently its own justification, for whom the very ideas of truth or moral justification are laughable as anything but tools for retaining that power?
There’s a particular strain of technically-competent, ideologically-adrift pseudointellectual who somehow just stops asking questions the moment they’ve got a mental model that “makes sense” to them. You’ve met their flock on the hellbird and they’ve got a prominent hive over on the orange website, but you’ll find them in any techno-libertarian circle where a dash of cleverness mixed with the facile reasoning that comes with graphs in Econ 101 is praised until it becomes a self-reinforcing cycle, a nerd-posturing Coriolis force churning all these ideas together as they circle the bowl. “The market exists therefore systemic inequality obviously can’t” is the grade of reasoning I’m sure you’ve seen, championed by people happy to believe that the whole story is the just-so story they’ve just told themselves.
Setting aside inquiry and getting to work once you’ve got what you believe is a good mental model of a system is actually a pretty good approach to writing software, and if you’re both a competent engineer and a comfortably-incurious human being you can get a lot done that way. But politics and ideology aren’t software.
The code will never ask you, to whom is it useful that I believe this?
Are they really on my side?
Do I want to be on theirs?
I’ve said this before, that stories are weapons.
Software is quite literally an idea that you’ve taken out of your head and turned into a machine, a set of decisions and parameters that you, as a programmer, impose on people who will have little or no say in the matter. It is ideology and bias turned into power. If you’re happy to repeat things you’ve heard without caring about whether or not they’re true, what are you?
… and I will very happily bet you sixteen billion dollars – payable in six hundred years, assuming an average 4% interest rate – that exactly none of Musk, Buterin, Delo, Tallinn, Thiel, Moskovitz, or Bankman-Fried, or anyone else whose net worth goes past the seven-digit mark, gives a single fuck about “effective altruism”. Not one. They care about retaining wealth and power, full stop.
And that’s where Effective Altruism really shines. Not as a philosophy, but as a weapon.
Because Effective Altruism gives this entire class of technically competent people a mental model that “makes sense” and has a veneer of moral nobility to it while – conveniently – justifying not doing anything in the present that might inconvenience that wealth and power.
As an ideology, effective altruism is obviously a pile of nonsense. It’s what you’d end up with if you started with Scientology and replaced “thetans” with “dollars.” But as a tool, it’s fantastic: precision-designed to target people who are capable of understanding and changing complex systems and completely neuter them as a threat to that wealth and power.
Put differently: what effective altruism is most effective at is turning smart kids into political NPCs. It’s effective at making its adherents ideologically irrelevant, the world’s smartest, most useful idiots.
The symptoms: mid-call, your headset’s microphone mutes itself in hardware, seemingly at random, and unmuting from either the headset or the app doesn’t reliably bring it back.
The solution to this is to find and disable automatic microphone volume adjustment in the application.
I can reproduce the problem with the AfterShokz bone-conduction headset I’m quite fond of. The problem was not easy to diagnose, but the solution is one checkbox.
In more detail: many conference applications, Zoom and Teams in particular, have an “automatically adjust microphone volume” option somewhere under the hood. In some cases this will auto-adjust your microphone volume to zero, which in turn results in the operating system sending a signal to your headset telling it to mute the mic in hardware. The hardware will do that, but the change of state will not be reflected in the Zoom client.
And there’s just enough lag in this process, particularly if you’re trying to solve it by pushing the buttons on the headset that unmute you, that nothing quite happens with the immediacy that we normally expect from discernible causality. So you can end up with the software tricking you into the impression that your headset is in some bizarro state of hardware failure, which is not the case.
A quick Google search suggests that this is an endemic problem at the intersection of “macOS confcall apps (Zoom, Teams, a bunch of others…) that try to be clever with volume management” and “Bluetooth devices that try to manage power aggressively”, and digging into the preferences to uncheck the “automatically adjust microphone volume” option solves the problem immediately.
Bluetooth has not worked right anywhere since Nokia died. I have no idea how any human who has not been heads-down over this class of problems for decades would even start forming a mental model of what might be happening here, much less how to fix it, and despite taking some pride in this, there are times that I actively resent having to be really good at solving problems that shouldn’t exist in the first place.
Hillel Wayne has registered a strenuous disagreement with a bit I wrote in 2013 about zero-indexing, and now I need to have an internet argument because, I suppose, that is simply my nature. I’ve never met him, though I’ve enjoyed his writing whenever I’ve come across it, so if you’re invested in precision beef calibration, that’s where I am. He seems like a decent person, I feel like my case was misrepresented, strap in.
Wayne makes some solid points; I could stand to clean up and republish that article, something I was reluctant to do back when it took off and haven’t gotten back to. He definitely understates the case in saying “Most modern languages have at least some C influence in them”: in fact, approximately every programming language now needs to speak C, something I’m really only mentioning because it gives me an excuse to link up an amazing post that deserves a wider audience. But he falls into the same trap I railed against in my 2013 conclusion: retelling just so stories because it’s easy and fun.
I think his gist-of-it summary is not really a fair reading of the post. The straw man I was initially arguing against is the claim that zero-indexing, specifically in C, “makes sense” because of an argument Dijkstra made: a claim that’s compelling and easy to retell, and obviously nonsense, even if Dijkstra’s argument was influential on later languages.
You can make that claim about Python, though, and you’d be right! Dijkstra’s interval-elegance argument did carry the day there. But I don’t know this because Python was born after 1982, or because of elegant math, or because of some hand-waved set of conflated circumstances. I know it because I looked it up and found that Guido van Rossum said so.
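For anyone who hasn’t seen it, Dijkstra’s interval argument is easy to demonstrate in Python itself: with zero-indexing and half-open ranges, lengths and boundaries line up with no off-by-one corrections anywhere.

```python
# Python slices are half-open: s[a:b] covers indices a through b-1.
# Lengths are just b - a, and adjacent slices tile the sequence exactly.
s = list(range(10))
a, b = 0, 4
left, right = s[a:b], s[b:len(s)]
assert len(left) == b - a    # no +1/-1 fixups anywhere
assert left + right == s     # the split point appears exactly once
```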
Literally everything having to do with computing was somebody’s decision, and we should not be confident we really understand the why if we can’t point at the person.
We really start wobbling on the rails further in, when… look: there’s no kind way to say this: that offhand sentence, “wire formats like EBCDIC and ASCII ‘started from 0’, since they considered the 0-byte a valid unit”, is exactly what I mean when I’m talking about things that are made up, that don’t even rise to the level of wrong. What is ‘valid’? Who’s ‘they’? How do we get from there to why?
I mention it because I know the real answer, and talking about the “wire format” of a character encoding when the first networked computers wouldn’t exist for another half decade is precisely what I’m getting wound up about here. The reason we have an all-zero-bits representation in those character sets is the same reason we use null as a string terminator, and that reason antedates mainframes, transistors and the Church-Turing Thesis. There are plenty of benefits to it that came about later! Definitely. But the reason we chose to have that in EBCDIC and ASCII (and Baudot, ITA2 and others of that era) is not “all the ways it turned out to be useful later”; that reason was so that we could easily re-use punch cards.
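The consequence of that all-zero-bits choice is still visible anywhere C-style strings show up: everything after the first zero byte is invisible to the string-handling routines. A minimal illustration, sketched in Python:

```python
# A C-style string is whatever precedes the first all-zero byte; anything
# after the NUL terminator is invisible to the string-handling routines.
buffer = b"hello\x00leftover junk in the buffer"
c_string = buffer.split(b"\x00", 1)[0]
assert c_string == b"hello"
assert len(c_string) == 5  # what C's strlen() would report
```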
(Fun fact, and this is a fact: around the time these standards were being hammered out, punch cards and related tooling made up 30% of all of IBM’s revenue. Consequently, one design constraint on EBCDIC that ASCII did not share was anchored in IBM’s business model: keeping too many holes in punch cards from being too close together, making the cards less likely to fail. Yes, I can find you a citation for everything in these parentheses.)
There are a few other arguments in there, rebutting claims I’ve made that I don’t really think I made, but I recognize enough contrarian in myself that I guess I’m obligated to be charitable with anyone pointing their contrarian at me. And charitable can be a job, for sure, but fair’s fair and a job’s a job. But there’s one larger point here, the real point that I wanted to make then and want to make now, that I’m not going to let go:
“Hoye’s core point is it doesn’t matter what the practical benefits are, the historical context is that barely anybody used 0-indexing before BCPL came along and ruined everything. […] I may think that counting 0 as a natural number makes a lot of math more elegant, but clearly I’m just too dumb to rise to the level of wrong.”
My point is not, my point has never been, that “zero indexing is bad”. My point is “believing and repeating uninterrogated stories because they sounded plausible to you” is bad, and I’m saying that because it’s really, really bad.
“We’re doing this like this because it just makes sense” is not just the general shape of made up stories I see programmers and engineers telling themselves all the goddamn time. It’s a habit that opens an attack surface on your soul. Smart people who’ve fallen in the habit of believing and retelling just-so stories about why things are the way they are, stories that are easy to digest and repeat, are incredibly easy to manipulate into building monstrosities, ignoring structural and systemic injustices, and acting in service of forces they don’t even realize exist because their little constructed reality seems to hold itself together.
Stories are weapons. I don’t know how to say this loud enough. Software is quite literally an idea that you’ve taken out of your head and turned into a machine, a set of decisions and parameters that you, as a programmer, impose on people who will have little or no say in the matter. It is ideology and bias turned into power. If you’re happy to repeat things you’ve heard without caring about whether or not they’re true, what are you?
Because this is where antivaxxers come from. This is how the cryptocurrency cultists can bring themselves to believe their racist monkey cartoons are worth anything, or that ending fiat currency will stop war, or any amount of all the other bullshit they believe while they’re destroying the planet and the lives of the people on it. This is why the world’s worst despots have buildings full of people spending all day on the far end of a VPN pretending to be Michigan soccer moms or Boston Antifa or Alberta Gun-Owners Rights advocates or any goddamn thing that helps them pour kerosene on anything that might set a democracy on fire. This is how smart, young developers can convince themselves that their work is morally and politically neutral while building the algorithms that, oops, hand a megaphone to fascists and turn explicit racism into implicit policy.
It’s this. And yeah, sure: it’s also other things. It’s complicated. I get it.
But it’s this.
Ok, sure. I could stand to work on that part. I concede the point.
Hey, quick question: is it a normal thing to be so used to struggling to make things work right that when you try something new and it works exactly right on the first try, you feel kind of strangely sad and empty? Or is that super weird and probably really unhealthy, and hey mhoye are you OK, maybe you should talk to someone? Asking for a me.
Anyway, today I took a desk lamp whose halogen bulb had burned out, whose crappy transformer always made those bulbs sputter, and whose mildly art-deco appearance I’d always liked, and converted it to run an LED bulb off USB power. It took about an hour’s work to replace the bulb with an LED and the switch with a nice heavy clicky one, and now the whole thing runs off USB-C instead of wall voltage. It emits no appreciable heat and, if these calculations are to be believed, will run for decades for a few cents per year, assuming I leave it on all the time.
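Those calculations are easy to redo with your own numbers; the wattage and electricity rate below are placeholder assumptions, not measurements from my lamp, so swap in your own:

```python
# Back-of-envelope running cost for an always-on LED lamp. The wattage
# and electricity rate are placeholder assumptions; adjust to taste.
watts = 1.0           # a small USB LED bulb
rate_per_kwh = 0.12   # USD, a typical-ish residential rate
hours_per_year = 24 * 365
kwh_per_year = watts * hours_per_year / 1000  # 8.76 kWh
cost_per_year = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.2f} kWh/year, about ${cost_per_year:.2f}/year")
```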
I hadn’t really appreciated how big a deal USB-PD voltage negotiation was until I found out that the little chips that handle that negotiation are about the size of the end of a pencil, and that if you include the USB-C port you can replace basically any low-voltage transformer with something smaller than a quarter.
The magic search string, if you want to try this yourself, is “usb-pd trigger module”, and if you have a soft spot for ideas like componentized repairability and separation of concerns you’re in for a treat. In a practical sense, this means that the negotiation about what power is delivered to a device can be tiny and inexpensive, and the decision about what’s delivering that power is a separate question entirely. So if you, like me, have seen good hardware killed by bad power, there’s a whole class of problems you can put behind you.
Another way to say that is, power bricks are obsolete now; in fact, anything that doesn’t run a motor or heat up a coil won’t need wall voltage at all. It also means refurbishing old electrical stuff just got a lot cheaper and easier. That old lamp that looked great at the thrift shop or your grandmother’s basement but hasn’t worked since the Cold War and will definitely burn your house down? For a few bucks it’s good as new, running cheap and cool.
And there it is: to my significant surprise, something was cheap, easy, and worked exactly as intended on the first try.
I don’t think I’ll be able to recognize myself, if this keeps happening.