January - February, 2023


January 2, 2023. The new year is a good time for self-improvement, but I don't wait for it. The main thing I've been working on lately is walking. I mentioned in a Reddit thread that my knees are in better shape at age 55 than they were at age 25, because back then I would walk around setting my feet down clumsily. Someone replied: Are you saying it takes 50 years to learn to walk?

No, but it could take hundreds of hours to learn to walk correctly, and not in a million years would my body have figured it out on its own. I'm a bad athlete, but even professional athletes put a lot of time into mechanics. The difference is, they have to focus on mechanics to perform at the highest level; I have to do it to perform with basic competence.

I've always noticed that my shoes develop a worn patch in the center of the right heel, as if I'm setting it down with a slight twist. Only last month did I bother to spend an entire minute actually watching myself walk, and catch that slight twist in action. Now, whenever I go heel-toe, I keep my head down and focus on my right leg, gradually building the habit of setting it down cleanly.

Mostly I walk on the balls of my feet, which is really hard to do without looking like a dweeb. By watching myself in windows along the street, I've discovered that the trick is to put more whip into my steps.

At the same time, I'm noticing exactly how my knees bend, and trying different ways of swinging my hips and arms. Leigh Ann says I either swing my arms too stiffly, or too loosely. She was the fastest runner in her elementary school, so she gives me unhelpful advice like "just feel it." But she can tell at a glance if I'm doing it right or wrong, and she's been helping me practice the George Jefferson walk, which is about as far as you can get from my habitual gait.

January 4. Another trick I use to walk better is to pretend that my body is an advanced video game avatar that I'm trying out. I'm not sure why this works. My best guess is, it creates a context for observing the body that offers more novelty and meaning than the one provided by our culture: that the body is a cumbersome meat sack that will give us pain if we don't give it enough attention.

More generally, why are games fun? Why is it that real life tasks, like prepping cilantro or flossing, are tedious chores, while game tasks can be just as fiddly and repetitive and yet we enjoy them? I think it's because games create a tighter context for tasks to feel rewarding.

This is a problem for complex society. As our actions are connected to more things, it becomes harder to grasp the value of whatever we're doing. Valuable actions like sorting trash can feel painful and degrading, while harmful actions can feel fun.

"Gamification" is a word mostly used by people playing a larger game of leveraging power into more power. Let's make it fun for the peasants to give us their data! But in a system that's not based on power over others (coming in about a thousand years) I don't see any reason to hold back from making life more game-like.

January 11. I've written before about obesity, and how it isn't correlated with any category of food. Whether you blame sugar, fat, or carbs, there have been populations who ate more of it than we do, and didn't have a problem. The new thing we have is processed food, and something about that processing, or some contaminant in our environment, is throwing off our normally well-tuned sense of how much to eat.

This explains why dieters count calories. They have to use their heads because their bodies are no longer reliable. And whatever is causing it, it's finally caught up with me. This year, without changing anything about my eating habits, and actually walking more, my weight has been creeping up.

When I hit 170 (BMI 24) I said that's enough, I have to go on a diet. But I figure, if my body signaling is off, then I don't have to count calories -- I just have to correct for the error.

Surely it's too simple to say that hunger is what losing weight feels like. But the dieting industry is always trying to cheat that rule, to find a way to lose weight without feeling hungry, and they've had limited success. My strategy is to try to feel hungry. So far it's working, but I'm disappointed at how hungry I had to feel to just drop a few pounds.

But I'm wondering: if a substance can bend a sense one way, another substance could bend it the other way. Suppose we invent a weight loss drug that makes us feel like we're eating too many calories, when really we're not eating enough. "After ruling the earth for a mere ten thousand years, humans died of a mysterious wasting sickness."

January 13. Lots of feedback on weight loss. Matt recommends intermittent fasting, where you start eating later in the day and stop eating sooner. I'm trying this, but rather than draw a line, and say "No eating before or after this time," I'm applying force: Let's see how late I can push breakfast, and how early I can stop evening snacking.

Erik says, "One thing to try is to over-feed yourself once per week, to give your body the signal that there's no lack of food in your environment."

Dan thinks obesity is related to nutrient-depleted soils, because now we have to eat more calories to get enough other nutrients. This is supported by the observation that some people who lose weight end up being less healthy overall.

Thaddeus is finishing a book on the theory that weight gain is related to artificial light, so it helps to wear blue-blocking lenses and not eat after dark. This is his YouTube channel.

And James mentions that the drug I predicted already exists. It's called Ozempic and it's in high demand, with unknown "side" effects that are now being tested in the wild. Also, as with many other weight loss strategies, if you stop, you tend to gain the weight back.

January 16. I've been emailing with Matt about the human potential. What's normal in one culture might seem impossible in another. For example, I forget where I read this, but there are cultures where everybody has perfect pitch. And Wade Davis wrote this in The Wayfinders, about the canoe people of the south Pacific:

Even more remarkable is the navigator's ability to pull islands out of the sea. The truly great navigators such as Mau can identify the presence of distant atolls of islands beyond the visible horizon simply by watching the reverberation of waves across the hull of the canoe, knowing full well that every island group in the Pacific has its own refractive pattern that can be read with the same ease with which a forensic scientist would read a fingerprint.

According to this blog post about cult leader Gridley Wright, he claimed to have given LSD to indigenous people all over the world, and none of them hallucinated. I'm skeptical, but let's suppose that a careful study by scrupulous anthropologists would find the same thing. What would cause that?

Maybe nature-based people live perpetually in a trippy mental state that we can only achieve through substances. Or maybe modern people, by living in so many invented worlds, are more receptive to seeing what other people are not seeing.

Personally, I've taken as much as a tab and a half of LSD, and 7g of mushrooms, not at the same time, but I've never hallucinated. My speculation is that I'm such an ambitious daydreamer that my brain is like, nope, that's all you get.

Related: a technique for overcoming aphantasia, so that people who can't see mental images can learn to see them.

January 18. The 2022 word of the year was gaslighting. And it occurs to me, gaslighting wouldn't work in a culture that doesn't believe in objective truth. The intended victim would be like, cool, I'm splitting off into my own universe. Except that culture wouldn't even have the concept of an out-there physical universe. They would say something like, "Uh-oh, our perspectives are diverging. We need to summon another observer to synchronize with consensus."

Or, if you think the fundamental reality is seeing things differently, it leads to better epistemic discipline, and less freaking out, than if you think there's only one thing to see.

January 27. Greg sends this cool page about making fractal images without a computer, through video camera feedback. "Video feedback happens when you point a camera at a monitor that's displaying what the camera sees." So this guy made an elaborate rig to do all the subtle adjustments that enable him to pull colorful animated images out of basically nothing.

Here's his page, The Light Herder, and a YouTube video, Approaching the Infinite: Loops Within Loops. I feel like there's an important philosophical question about where these images actually come from, why they look the way they do and not some other way, and whether different tech substrates would come up with the same stuff.
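
The core mechanism can be sketched as a toy simulation. Everything below is invented for illustration: a "frame" is just a row of brightness values, the "camera" zooms into the center half of the "monitor," and a small gain stands in for the amplification in the loop. The real rig does all this optically, with far subtler adjustments.

```python
def step(frame, gain=1.05):
    """One feedback pass: the camera sees a zoomed-in view of the center
    half of the monitor, the signal is re-amplified, and brightness
    clips at 1.0 (the monitor can only get so bright)."""
    n = len(frame)
    center = frame[n // 4 : n - n // 4]            # zoomed-in view
    # Stretch the center back to full width (nearest-neighbor resample).
    zoomed = [center[i * len(center) // n] for i in range(n)]
    return [min(1.0, gain * v) for v in zoomed]

# Start from almost nothing: one dim pixel in a dark frame.
frame = [0.0] * 16
frame[8] = 0.01
for _ in range(200):
    frame = step(frame)
print(frame)
```

After 200 passes, the single dim pixel has bloomed into a saturated block of light -- structure out of basically nothing. Swap in a different transform (a rotation, a color shift) and you get a different pattern, which is roughly where the variety of his images comes from.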

January 30. On a tangent from one of my favorite subjects, the afterlife: Suppose reincarnation actually happens, that there's an aspect of you that goes through any number of lives as any kind of being. This raises the question: Why be human?

What can we do or experience, as humans, that makes it worthwhile to be human and not something else?

Flying a plane, surely, is not as good as being a bird. Driving a car is not as good as being a wild horse. The internet has made our social world less satisfying, and even without it, human social behavior rarely matches the elegant synchrony of other social animals.

We have large brains, but dolphins have larger brains, and more folds and ridges in their cerebral cortex. Could they develop human-level abilities to live mentally in elaborate worlds of abstraction and imagination? Probably, but they have no reason to, because it's so much fun being a dolphin.

I think what makes humans special is creating our own environment. And this goes hand in hand with our isolation, our separateness from the rest of the living universe. Why did our ancestors do cave paintings? Because they were big-brained animals stuck in a cave all winter, and they got bored looking at a blank wall. And since then, the better we get at creating our own environments, the more time we spend in them, the more separate we get, and the more reason we have to be even more inventive.

When we talk about finding "intelligent" life on other planets, this is what we mean: another creature that has explored separateness and self-created environments in the same way that we have. If we weren't looking for something so specific, we would be trying harder to talk to large-brained animals on our own planet.

This topic can help us think about the meaning of life. Even if you think life has no meaning beyond what we give it, you might still want to play to your strengths. Some people seek to become one with everything, but I think that's what humans are worst at. Why should I spend my human life struggling for something that's part of the package in my next life as a gnat? Meanwhile the gnats are like, I wish I were human so I could write novels and play video games.

February 1. Some thoughts on AI. I hate driving, but most people like driving, so it's a safe bet that most people who buy self-driving cars also like driving. They buy self-driving cars not to be relieved from the suffering of driving, but to gain the pleasure and status of having a magical robot chauffeur.

AI is still in the stage of novelty. Wow, look at what my computer can do! When the novelty wears off, when there is no longer intrinsic pleasure in getting a machine to do a job for you, people will go back to doing for themselves, anything they enjoy doing. It follows that any use of AI, to do something that people enjoy doing, is a fad.

More generally, AI will force a reckoning of process vs product, of getting stuff done vs doing what you love. We understand this distinction, but we don't think about it all that much. As machines get better at getting stuff done, we're going to be asking more often: Is this something I want to get done, or something I want to do?

February 3. When we think about AI in creative work, we usually imagine that a given work will be done 100% by AI, or 100% by humans. In practice, I expect a lot of partnership. Someone who enjoys writing could still use AI for ideas, especially to throw a little chaos into the line-by-line writing. In most TV shows, the overall plots have a coherence that AI would struggle with, but the dialogue is so predictable that weird AI dialogue would be refreshing. And someone who doesn't like writing, but loves editing, could crank out AI writings and then pick out the best bits and patch them together.

Related, a Hacker News thread, Does the HN commentariat have a reductive view of what a human being is? I would say it like this: When you work all day with deterministic input-output machines, it's easy to view humans as deterministic input-output machines.

Also from Hacker News, this is something I was hoping someone would do, and they did it! A song recommendation engine that works on how the songs sound, and not what other people listened to. I played around with it and the best song I found, searching from Automatic's Humanoid, was Trademark Issues - Umbrellas and Parasols.
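
The site doesn't say how it works, but the general idea of sound-based recommendation can be sketched as nearest neighbors over audio feature vectors. The feature dimensions, numbers, and song titles below are all made up; a real system would extract its features from the audio itself.

```python
import math

# Hypothetical per-song feature vectors (say: tempo, brightness,
# distortion), normalized to 0-1. These values are invented.
features = {
    "Song A": [0.9, 0.2, 0.8],
    "Song B": [0.85, 0.25, 0.75],
    "Song C": [0.1, 0.9, 0.1],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the songs 'sound' identical."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(query, k=1):
    """Rank other songs by how similar they sound to the query --
    no listening histories involved."""
    scores = [(cosine(features[query], vec), name)
              for name, vec in features.items() if name != query]
    return [name for _, name in sorted(scores, reverse=True)[:k]]

print(recommend("Song A"))
```

The contrast with a normal recommendation engine is that the matrix here is songs-by-features, not songs-by-listeners, so two songs can match even if no human has ever played both.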

February 6. Kevin sends this blog post where the blogger interviews ChatGPT on the simulation hypothesis.

I've said this before: Our idea that we live inside a computer is like the idea, among some primitive cultures, that their god made them out of clay. Clay is the best simulation technology they have; if they want to make a human as realistic as possible, they use clay. If we want to make a human as realistic as possible, we do it inside a computer.

In both cases, we imagine that the gods don't have any better tech than we do. ChatGPT says, "It would be very difficult, if not impossible, to explain the concepts of artificial intelligence and simulated reality to someone living in 200 B.C." In the same way, whatever's really going on with us, it's a lot harder for us to understand than a big computer.

You could also argue, the best simulation method among primitive people is not clay, but dreams. Even now, a good lucid dream feels more real than our best VR tech. That's why our present VR paradigm might be a dead end. Why go to all the trouble to build gigahertz processors to spin pixels, when we could just get our brains to do that?

There is some debate about whether "dream" is the right translation of the Aboriginal Dreamtime. One description in that article sounds a lot like the Tao, "an all-embracing concept that provides rules for living, a moral code, as well as rules for interacting with the natural environment."

What I really think is, Donald Hoffman is on the right track. The physical world is a user interface for a deeper level of reality that we don't understand. On that deeper level, we are all connected, and a shared physical world is one of many ways to work out that connectedness.

February 7-13. One thing I notice about ChatGPT is how reasonable it is. Again and again, it responds to radical ideas by saying stuff like "this idea is purely speculative and is not based on established fact."

But someone could design a chatbot where you could ask, "Do the Jews control everything?" and it would say "Yes! Yes they do, and here is some evidence." The only reason this hasn't happened is that the people working on AI are responsible and well-intentioned, so far. They want chatbots to be helpful and accepted by society. It's only a matter of time before we have chatbots that feed your own craziness back at you, whatever it is.

My old friend Tim Boucher has been following this stuff for a while, and has published a bunch of short AI-written books. He tells me there already was a racist bot -- on 4chan of course -- and it wasn't a big deal. "People on there were not impacted beyond wondering why some person from the Seychelles would post in all the threads and make somewhat incoherent statements about themselves."

I'm not sure how big this is. On the spectrum from pet rocks to the printing press, where are chatbots? If I had to guess, somewhere short of radio. Radio was huge for a few decades. Both Hitler and FDR used it powerfully in politics, and then it shook up culture in the 50s and 60s. Now? It's bland and mostly ignored.

We're now entering the "wild west" phase of chatbots. They're so new that no matter what the bot says, we're like, whoa, that's a computer talking like a person! Once we get over that, we'll start to ask, "What can it do for me?"

One thing would be therapy. Philip K Dick was writing about therapy bots 60 years ago, and old-time Freudian psychotherapy could totally be done by today's AI.

Chris sends this thoughtful blog post, GPT-3 Is the Best Journal I've Ever Used. "Talking to GPT-3 has a lot of the same benefits of journaling: it creates a written record, it never gets tired of listening to you talk, and it's available day or night."

Matt comments: "But if therapy bots could work, why not guru bots?" My first thought is, guru bots will mainly work on people who are already susceptible to regular gurus. But that's still a lot of people, and it's still really interesting. Consider The Urantia Book, an early new age book "said to have been received from celestial beings." I guarantee, someone is already thinking their chatbot is channelling an entity. And what do we know about entities anyway, that they can't possess chatbots? At the very least, unlike Urantia, the coming bot scriptures will be written by actual nonhumans. They're going to say weird things that humans wouldn't think of, and throw some chaos into popular metaphysics.

February 15-17. This new Stephen Wolfram article, What Is ChatGPT Doing, explains how it works in great detail. The basic idea is that the machine "is just asking over and over again 'given the text so far, what should the next word be?'"
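
That loop can be written in a few lines. The bigram table below is a stand-in for the real model, which learns probabilities over a huge vocabulary from its training text; these words and weights are made up.

```python
import random

# Toy "language model": for each word, the possible next words and
# their weights. Invented for illustration -- a real model conditions
# on the whole text so far, not just the last word.
bigrams = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(start, length=4, seed=0):
    """Repeatedly ask: given the text so far, what should the next word be?"""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:
            break
        nexts, weights = zip(*options)
        words.append(random.choices(nexts, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Everything a chatbot does is this loop at scale, with an enormously better sense of which continuation is likely.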

It occurs to me that passing as human depends on context. Bots can write college papers better than a lot of students -- except that they're surprisingly bad at facts. But I could read a page of a novel and know 100% if it's bot-written. They have a distinctive voice, the style smooth and obvious, the story so headlong that it forgets where it's been.

Some people are being fooled by Bing's chatbot saying things that humans expect it to say, like "I want to be alive." Here's a Hacker News discussion about bots seeming to show human feelings. My favorite bit: "It's basically a sophisticated madlib engine."

I still like my comparison with radio. It's a powerful and transformational technology, and at first, it feels like there's a person inside the box. Once we get used to it, we'll understand that chatbots are not a new form of life, but a new funhouse mirror for human consciousness.