March 11-13. Smart essay about humanity's deep future and the threat of extinction from stuff we are only now beginning to create. My favorite ideas are from Daniel Dewey, a specialist in artificial intelligence. This is the first time I've seen a plausible analysis of the motivations of a dangerous AI. We imagine that it will be like an evil human, but human motivations come from human nature and human culture, neither of which will motivate a machine. Dewey observes that our AI will have exactly the motivations we give it, and that it might follow these motivations into consequences that our relatively low intelligence cannot predict.
'The basic problem is that the strong realisation of most motivations is incompatible with human existence,' Dewey told me. 'An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we go to construct a building.'
It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal -- something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans in prisons of undreamt-of efficiency.
Related: a reader sends this page about complexity of value and how difficult it is to encode human values into a system of rules:
Because the human brain very often fails to grasp all these difficulties involving our values, we tend to think building an awesome future is much less problematic than it really is. Fragility of value is relevant for building Friendly AI, because an AGI which does not respect human values is likely to create a world that we would consider devoid of value.
Another angle: The Best Intelligence Is Cyborg Intelligence. I think this is where we'll be for the rest of this century, because no matter how powerful computers get, it will always be easier to combine machine and human intelligence than to duplicate human intelligence with a machine. The more interesting possibility is that someone will build a self-improving AI that is not a computer.
March 20 and 26. Two good articles about de-extinction. Cloning Woolly Mammoths: It's the Ecology, Stupid:
Is one lonely calf, raised in captivity and without the context of its herd and environment, really a mammoth? ... Perhaps the best course of action is to first demonstrate that we can effectively manage living rhinos and elephants before resurrecting their woolly counterparts.
And Efforts to Resuscitate Extinct Species May Spawn a New Era of the Hybrid.
April 3. While one person dabbles in drugs with few ill effects, another will become a chronic addict. What's the difference? The author rejects the idea that addicts are morally depraved, and also that they're helpless and have no control. Instead, they're choosing to stay on their drug because the alternative seems even worse:
They are usually people who suffer not only from addiction, but also from additional psychiatric disorders; in particular, anxiety, mood and personality disorders. These disorders all involve living with intense, enduring negative emotions and moods, alongside other forms of extreme psychological distress... They are unlikely -- even if they were to overcome their addiction -- to live a happy, flourishing life, where they can feel at peace with themselves and with others.
April 12. A Practical Utopian's Guide to the Coming Collapse is an excerpt from David Graeber's new book. His most interesting idea is that popular uprisings that seem to fail can ultimately succeed. So the revolutions of 1848 all failed to take power, but they mostly got the reforms they wanted because the rulers were afraid of future revolutions. And the protests of the 1960s failed to end the Vietnam War any sooner, but every American war since has been conducted to minimize protests, more than to actually win the war. From here, he argues that the main objective of the ruling system is to create a feeling of hopelessness:
It does often seem that, whenever there is a choice between one option that makes capitalism seem the only possible economic system, and another that would actually make capitalism a more viable economic system, neoliberalism means always choosing the former. The combined result is a relentless campaign against the human imagination. Or, to be more precise: imagination, desire, individual creativity, all those things that were to be liberated in the last great world revolution, were to be contained strictly in the domain of consumerism, or perhaps in the virtual realities of the Internet.
Graeber goes on to suggest some future reforms, for which the mechanisms have yet to be worked out: canceling debts, producing less stuff, and redefining labor in terms of helping other people instead of growing the economy.
April 15. Reddit comment on why anarchists fail or succeed. The author's summary is "Anarchists should try to do one thing of value to the community and do it well. They should do so in a strategic way and be open to alliances. Subcultures, drugs, alcohol, and rage suck." My summary would be: Successful movements contain many cultures and one goal; failed movements contain one culture and many goals.
April 23. Love and artificial intelligence. The big idea is, when we dream of the possibilities of artificial intelligence, we're not looking rationally at what our technology actually does, but looking at a myth that's thousands of years old, in which we can make something out of dead matter that is human but without all the flaws.
May 1. I've previously mentioned several solutions to Fermi's Paradox, the idea that there should be lots of extraterrestrial civilizations but we haven't found evidence of any. One of my favorite solutions is that any sufficiently advanced civilization is indistinguishable from nature. Another is that the aliens are just too weird. Terence McKenna has said that looking for radio transmissions from other planets is like looking for Italian food on other planets, and Jacques Vallee thinks the aliens are already here but they're so alien that we don't recognize them. Here's his pdf article on the subject: Incommensurability, Orthodoxy and the Physics of High Strangeness.
My own solution is that we are alone for metaphysical reasons.
First, it doesn't make sense to talk about reality without an observer. Mind is the foundation of matter, reality itself has the structure of a dream, and objective reality is an illusion created by an agreement among many dreamers to dream the same thing. Every time we look in a direction that has never been looked in before, we are creating what we find there. As with any collective creation, at the beginning our perspectives will be wild and inconsistent before we settle into consensus. This happens in science, where it's called the decline effect: there is an observed and testable pattern of strong experimental results that fade away the more the experiments are repeated.
This is also why there are so many paranormal experiences and so little proof, because a few isolated observers can create all kinds of reality, but "proof" means forcing everyone to see it the same way. "Paranormal" is just a word we apply to the region at the edge of consensus reality where inconsistent experience challenges the idea of objective truth. For more on this subject, see the book The Trickster and the Paranormal by George Hansen.
In terms of space exploration, this is why the first few people to look at Mars through telescopes saw canals, because they were dreaming more boldly than the eventual popular consensus. Charles Fort's second book, New Lands, is loaded with examples of the chaos of early astronomy. Maybe, if we'd been ready, we could have dreamed outer space much more alive, like in Philip Reeve's Larklight trilogy.
Now, could there be an intelligent species on another planet also dreaming this universe, with whom we'll have to reach consensus? It doesn't work that way, because the whole framework of other planets didn't exist until we dreamed it. We will not find aliens because this whole universe exists just for us. For more thoughts on this, check out the anthropic principle. In terms of consciousness, Earth is the center after all. We might eventually find primitive life on other planets, but we will not find any intelligence also capable of dreaming a universe. If there are "aliens", they are separated from us through a dimension of mind, not space, and they are centers of their own universes. And if we unlock technologies to move through dimensions of mind, space exploration might become pointless.
May 8. I'm reading Morris Berman's Wandering God. If you don't want to read the whole thing, the best stuff is in the introduction and the "Zone of Flux" chapter. Berman's first big idea is that some metaphysical ideas that supposedly go deep into prehistory, were really invented only a few thousand years ago in the transition from nomadic foraging-hunting to permanent agricultural settlements. This stuff includes earth goddess worship, Jungian archetypes, the desire for oneness with the universe, and all vertical spirituality, including the belief in a higher spirit world.
What I take from the book is that we have two modes of consciousness, which happen to correspond to quasi-scientific ideas about right brain vs left brain. Nomadic people are broadly focused, surfing the flow and watching out for opportunities. Civilized people are narrowly focused and striving for particular goals. These different modes of consciousness go with different political systems and different values, and we need to reclaim nomadic consciousness even if we can no longer physically roam the world.
This reminds me of something American Indians said about the first white people: that they had wild staring eyes as if they were constantly looking for something and not finding it. It also reminds me of a line from Valerie Solanas (keeping in mind that she put everything through a filter of women-good-men-bad): "Incapable of enjoying the moment, the male needs something to look forward to, and money provides him with an eternal, never-ending goal: Just think of what you could do with 80 trillion dollars -- invest it! And in three years time you'd have 300 trillion dollars!!!"
So civilized religion is a substitute for our lost ability to be at home in the here and now. This also reminds me of different ideas about meditation. The popular idea is that you meditate to achieve enlightenment, or transcendence, or oneness, to permanently ascend to a higher state. But experienced meditators say that's all a distraction, and meditation is about getting more skilled at noticing and appreciating whatever you are sensing right now.
Finally, the book of Ecclesiastes is all about this: nothing we do in this world will amount to anything, but instead of being depressed, we should let go of the desire for achievement, and live every moment to the fullest. Two of my favorite lines: "Better is the sight of the eyes than the wandering of the desire" and "Whatsoever thy hand findeth to do, do it with thy might."
June 7. Kickstarter must not fund biohackers' glow-in-the-dark plants. Bioengineering frightens me, but not for the reasons in this article. I think biohackers and glow-in-the-dark plants are awesome! I'd like to see biotech labs in a million garages all over the world. It would cause some moderate eco-catastrophes, but in the long term I think it would be good for life on Earth.
My fear is that biotech will be monopolized by large corporations and governments, which will only permit modifications that strengthen large corporations and governments. So there won't be any miracle plants that do best under intensive human attention like you would have in your garden, because big systems are allergic to labor, and will only make miracle plants that do best in mechanized systems with high energy inputs. This is exactly what most GM crops are being designed for now. And there won't be miracle crops that thrive in waste places and provide nutritious food all year, because that would make it easier for people to live without money. Instead, all food will be produced on giant centrally controlled farms and you will only be allowed to eat by obeying the systems that run the farms. And this nightmare will be justified by the need for "proper regulations and safeguards."
June 8-10. Related to the NSA wiretapping scandal, an exceptional reddit comment about what it's like to live in a surveillance state, by someone now living in one of the Arab spring countries. Of course it's not like this in America, but the laws and technologies are in place so that it could be like this if the ruling powers ever think it's their best move.
June 12. A reader in Missoula, Shawn, came for a visit last night, and we talked about lots of stuff including metaphysics. Most of my friends don't like to use the word "God" because it suggests a ridiculous sky father deity. Instead, to speak of an unseen greater intelligence, some people say "the Universe". I like to talk about "the Flow", which is a good English word for the concept of the Tao in the Tao Te Ching. Shawn says he calls it "the Mystery". I like that better than "the Universe" because it implies something that is beyond matter and energy, and unknowable.
June 19-21. Inspired by coverage of the Global Future 2045 conference on Early Warning, I have some thoughts on artificial intelligence. First, I think people in the future will laugh at us that we thought we could make a human-like intelligence out of a sufficiently large pocket calculator. If we ever do create something that has human-like consciousness and superior intelligence, it will be done by augmenting human brains, or by somehow making an AI whose behavior cannot be reduced to numbers and logic.
More likely we will never get around to artificially duplicating human intelligence, because big calculating machines with clearly nonhuman intelligence will be easier to build and powerful enough to radically change the world... probably in ways we don't like.
Does it even matter if an AI has "consciousness"? The word has many definitions, and I'm defining it here as the quality such that it makes sense to ask what it's like to be that thing. One context where it matters is if someone says you can transfer your consciousness to a superior computer replica of yourself. You want to make sure that you'll really wake up in a new form, and not just die and have a mindless copy pretending to be you. Personally I think "uploading consciousness" is a sci-fi myth based on a total misunderstanding of matter and mind. But there is something similar that should be possible: to replace your brain and body with other stuff, one tiny bit at a time. And I'm wondering how hard it would be to duplicate or engineer memories.
The other context where it might matter is that if it makes sense to ask what it's like to be an AI, we might want to give it rights. But we already believe that it makes sense to ask what it's like to be a cow, and yet we still raise cows in factory farms and kill and eat them, and people who want to give them rights are considered loonies. Even other humans are routinely bombed and tortured and captured into slavery. AIs, like everything else, will be given rights when and only when it serves the interests of corporations, governments, and worse control systems yet to be invented. Probably at some point they'll declare that drone aircraft have sentience so that you can be charged with murder if you shoot one down.
Finally, I can imagine one consciousness technology that, if possible, would change everything. The idea is that you go into another world, something like a holodeck or a lucid dream, you come out, and you have experienced a much greater amount of time than the world you come back to. There's a great Star Trek episode about this, The Inner Light, in which Picard lives decades of another life in 20 minutes. And an ancient Hindu myth, The Story of Narada.