For most people, the history of technology is:

  1. Humans have to do crappy stuff to stay alive
  2. Humans are clever, so they invent technologies to make life easier
  3. ...
  4. Profit

Before there were wheels and plows, it was very hard to wheel things around or plow things, and technology fixed this. Hooray!

If you scratch even a little below the surface, though, this take on technology isn't always right. In Sapiens, Yuval Noah Harari makes a strong case that the plow (and, more generally, agriculture) ushered in a long period of even more crappiness for most humans, because it forced people into narrower, less healthy diets, and dulled their abilities to catch wild animals, mentally map large territories, etc.

It's not exactly a "hot take" that technology can have negative impacts; even Socrates (in Phaedrus) worried that writing would diminish a person's wisdom. But this kind of critique often takes on a hand-wringing tone, like the "all technology is bad for us" mantra that, followed to its extreme, leads to Unabomber tendencies. That stance isn't particularly useful or empowering, because we don't really have a choice about using technologies (like wheels, or agriculture). But more importantly, I don't think it's universally true that technologies make us worse.

Recently, I came across an older episode of Sam Harris's podcast (Episode #40, with David Krakauer), in which Krakauer discusses the idea of "complementary" and "competitive" cognitive artifacts.

A competitive cognitive artifact is anything that makes us less capable. This might happen by substitution of a poor mental model—one that promotes stupidity or obscures something true about the world. Or it might happen by atrophy—removing the need for us to be good at something, or crowding out the time or energy for doing the things that used to make us good (thus, it's "competitive" with the development of our own capacities).

You don't have to look very far for examples:

  • Cars are great for lots of reasons, but they have made us less physically fit overall.
  • GPS-based directions are handy, but decrease our propensity (and perhaps ability) to form our own mental maps.
  • Calculators remove the drudgery from long division, but also decrease our ability to do basic arithmetic in our heads.

And so on. Now, you might say "who cares?"—why make our brains do something that we can delegate? For all of these particular examples, the benefits absolutely outweigh the erosion of our human faculties; I'm certainly not about to give up my car, GPS, or calculator, because they all let me do things that I will never be able to do on my own, no matter how much I refine my own abilities.

But does every technology come out ahead in that calculus? If we just accept technologies as they're presented to us, there's no guarantee that we'll be better off.

This brings us to the opposite case: a complementary cognitive artifact, which is something that makes us more capable. Krakauer cites a couple of examples of this in the interview:

  • Arabic numerals, which, as opposed to Roman numerals, actually let you conceptualize things like multiplication and division quite naturally.
  • The abacus, which complements (rather than replaces) your mental calculation faculties, and evidently allows you, eventually, to do pretty complex calculations in your head just by imagining the state of the abacus; a toy sketch of that idea follows this list. (I've never used an abacus, but I remember encountering accounts of its effectiveness in Feynman's stories.)
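
Out of curiosity, here's a toy model of what that "imagined abacus state" might look like. This is my own sketch, not anything from the interview: it encodes each digit as a soroban-style column (one "heaven" bead worth five, four "earth" beads worth one) and adds numbers by updating those columns with carries.

```python
# Toy soroban model: each column is (heaven, earth), where the digit is
# heaven * 5 + earth. A rough illustration, not a faithful abacus.

def to_columns(n: int) -> list[tuple[int, int]]:
    """Encode a non-negative integer as bead columns, most significant first."""
    return [(int(d) // 5, int(d) % 5) for d in str(n)]

def from_columns(cols: list[tuple[int, int]]) -> int:
    """Decode bead columns back into an integer."""
    return int("".join(str(h * 5 + e) for h, e in cols))

def add(a: int, b: int) -> int:
    """Add two numbers by sweeping the columns right-to-left with carries."""
    da, db = to_columns(a), to_columns(b)
    width = max(len(da), len(db)) + 1          # room for a final carry
    da = [(0, 0)] * (width - len(da)) + da     # pad with empty columns
    db = [(0, 0)] * (width - len(db)) + db
    carry, out = 0, []
    for (h1, e1), (h2, e2) in zip(reversed(da), reversed(db)):
        carry, digit = divmod(h1 * 5 + e1 + h2 * 5 + e2 + carry, 10)
        out.append((digit // 5, digit % 5))
    return from_columns(list(reversed(out)))

print(add(47, 38))  # 85, computed entirely over bead states
```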

This got me thinking about whether there are any complementary artifacts in my life, and I turned up a couple good ones.

Memory

I've never had a stellar memory; something about my brain is just happy to let information slip away. I've always been rubbish at trivia, and history classes were a slog. So, when search engines like Google came on the scene, I never looked back.

That said: I've absolutely felt that "tug" when there's a fact just out of mental reach, and I could try to remember it, but it's so tempting to ... just Google it! But because of the way memory works, giving in is detrimental: if you don't use it (i.e. actually recall facts from memory), you lose it (i.e. your ability to recall those facts weakens). This is one example of the effect that Nicholas Carr talks about in The Shallows: effortless access to facts and information has dimmed our memories (among other things).

Thankfully, I've found a great complementary cognitive technology in this space: Anki, a spaced repetition flash card system.

The basic idea of spaced repetition is that when we learn a new piece of information, our brains will remember that fact for a short time (say, a few minutes) and then it's gone. But if you recall that fact (i.e. bring it up from memory) before the timer runs out, then the act of recalling it strengthens the mental pathway of remembering it, and your forgetting timer gets extended—by a day the first time, then by a few days, a few weeks, etc. So if you are clever, you can recall each fact just before you would have forgotten it, and with very little actual effort, you can remember lots of stuff, effectively forever. Of course, this is hard to do without tools, because ... how do you remember when you were "just about to forget" something?
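
To make the mechanic concrete, here's a minimal sketch of an expanding-interval scheduler. It's loosely in the spirit of the SM-2 algorithm that Anki builds on, but simplified by me; the field names and the ease factor are my assumptions, not Anki's actual implementation.

```python
# Minimal expanding-interval scheduler (a simplification, not Anki's code).

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    front: str
    back: str
    interval_days: float = 1.0   # current gap between reviews
    ease: float = 2.5            # how much the gap stretches after a success
    due: date = field(default_factory=date.today)

def review(card: Card, remembered: bool, today: date | None = None) -> None:
    """Reschedule a card after a review."""
    today = today or date.today()
    if remembered:
        # Successful recall: see the card again just before you'd forget it,
        # then stretch the gap (a day, then a few days, then weeks ...).
        card.due = today + timedelta(days=round(card.interval_days))
        card.interval_days *= card.ease
    else:
        # Lapse: back to the bottom of the ladder.
        card.interval_days = 1.0
        card.due = today + timedelta(days=1)
```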

That's where software like Anki comes in; it does the tracking for you, on a fact-by-fact basis. You could also do it with a more manual system (say, by moving note cards between a series of boxes, as Sebastian Leitner proposed in the 1970s), but that's essentially the same technology; Anki just takes out some of the friction and hassle.
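
The note-card version reduces to a few lines as well. Under my (assumed) schedule, box n is reviewed every 2^(n-1) days, correct answers promote a card, and misses send it back to box 1:

```python
# Leitner-style box scheduling (the exact schedule is my assumption;
# the historical system differed in detail, but the mechanism is the same).

def move_card(box: int, correct: bool, num_boxes: int = 5) -> int:
    """Return the card's new box after a review."""
    return min(box + 1, num_boxes) if correct else 1

def review_interval_days(box: int) -> int:
    """Days until a card in this box comes up again."""
    return 2 ** (box - 1)   # box 1: daily, box 2: every 2 days, box 3: every 4 ...
```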

What do I use Anki for? Loads of stuff!

  • Language learning (I've memorized thousands of vocabulary words and phrases, based on the work of Gabriel Wyner)
  • Music (I've memorized thousands of traditional Irish tunes, jazz standards, classical pieces, etc.)
  • Faces (I work regularly with thousands of people, and I use Anki to better remember their names)
  • Vocabulary (I've increased my lexicon by at least 500 words using Anki)
  • Trivia (I've memorized geographical facts, the periodic table of elements, Trivial Pursuit cards, etc)

And more. And I can tell you, it feels like magic; it has absolutely complemented my mental faculties, and made me less dependent on Googling things. One of the surprising benefits is that I've been memorizing poetry (most recently Ulysses by Tennyson), and I regularly have moments when something reminds me of a line in the poem. Being able to recite it to myself from memory feels like pure wizardry.

Anki is far from the only complementary cognitive artifact in the world of memory. Joshua Foer's excellent book Moonwalking With Einstein goes into great detail about the "memory palace", a technique whereby you co-opt your visual and spatial memory (which is already very strong) to remember more abstract things. Memory palaces are a "pure cognitive technology" (meaning there's no hardware or software other than your brain and some ideas), but they absolutely qualify as complementary artifacts. (I've never tried the technique, but by all accounts it's very powerful.)

I'm not a memory purist; there are plenty of tools I also use that relieve me of having to keep things in my memory. I use a task manager (OmniFocus) to get things done, and I use a Personal Knowledge Manager (Obsidian) to function as my outboard memory, and as a highly connective thinking tool. I suppose that if I gave these up, I'd have more impetus to remember things myself, and there might be benefits from that. But, well ... I choose my battles, and I'd rather spend my memory cycles on tunes and foreign languages than on what I need to pick up at the store.

Attention

I haven't been actively engaged with social media platforms for years (after my misplaced early enthusiasm and hope that they would "connect us in better ways"). One of my reasons for opting out is very pedestrian: I can't stand the mental fragmentation of a timeline feed. When I read things that people say or share, I need to contextualize those statements based on who the speaker is and where they are coming from. Do I trust them? Should I be skeptical based on their other views? Do I get their sense of humor? And so on. If every feed item I read is coming from a different human, that's incredibly fragmented—it costs me a huge amount of mental energy in context-switching! I would much rather see all the things from each person, seriatim, and reset my attention that way.

But, there's potentially something even more insidious that now keeps me away. A 2020 documentary called The Social Dilemma, featuring the work of Tristan Harris and many others, revealed that within the incentive structures of these businesses, there's a goal that's at odds with human flourishing. The machine-learning-based algorithms that power discovery on these platforms (i.e. that curate your "feed") aim to maximize your engagement. To do this, they're not selecting for content that's going to help your mind grow, or make you a better citizen, or make the world a happier place. Instead, they're going to nudge you to be more predictable, and more limbically motivated (i.e. driven by strong negative emotions). I have a strong reaction against being "programmed", and right from my first experience with these algorithms, I jumped ship.

However, this poses a dilemma: I do actually still want to connect with people! I want to know what's up with my friends, especially those who are geographically distant. And beyond that, I want to participate in the worldwide conversation about important things. I just want to do these things on my own terms—I want to use social media platforms as a complementary cognitive artifact, not a competitive one that hijacks and fragments my attention.

Enter RSS (Really Simple Syndication), the bargain-bin technology we've had for years, but which has never really caught on in the mainstream. With RSS, you can curate a set of sources (blogs, Twitter feeds, etc.) and then consume them in the way you want to. This addresses both of the issues I mentioned above: you can choose to read things one person or topic at a time (so, less fragmentation), and there's no algorithm selecting or prioritizing things for you.
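
As a sketch of what "consuming them in the way you want to" can look like, here's a tiny reader built on the feedparser library (the feed URLs below are hypothetical placeholders). It prints each source's recent posts together, one source at a time, with no ranking algorithm in the loop:

```python
# pip install feedparser
import feedparser

# A hand-curated list of sources; these URLs are placeholders.
FEEDS = [
    "https://example.com/alice/rss.xml",
    "https://example.com/bob/feed.atom",
]

for url in FEEDS:
    parsed = feedparser.parse(url)
    # Everything from one source appears together: no context-switching.
    print(f"=== {parsed.feed.get('title', url)} ===")
    for entry in parsed.entries[:10]:
        print(f"- {entry.get('title', '(untitled)')} {entry.get('link', '')}")
```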

I used Google Reader for RSS for years, but it was eventually shut down, and I went RSS-less for a while. But recently, I've discovered Feedly, and I love it. If you pay for it, it can even pull Twitter feeds (which don't directly support RSS). I follow hundreds of feeds, and probably end up seeing a few dozen posts per day, which I can cruise through quickly because my attention isn't fragmented. (Feedly does offer an AI-algorithm-based priority feed as well, but (a) you can provide direct signals to train it, and (b) you don't have to use it.)

I'm not sure whether Feedly itself qualifies as a complementary cognitive artifact, but at the very least, it's neutralizing the stupefying aspects of these platforms for me, and giving me back some of the inherent benefits of social conversation (which is itself a complementary cognitive technology ... assuming you listen to smart people).


It's interesting to look at the technologies that already exist and favor complementary ones. But there's a deeper question here that's maybe even more interesting: can we consciously create new complementary cognitive artifacts? Can we nudge all the tools we use in this direction? I'll come back to that in the next article.