We’re human, so naturally we want to know what humans value. But we don’t know precisely what values are or where they come from, and we know even less about what humans are. What are humans? What answer to that question would hint at what kinds of things humans could be expected to value?
We know some important things about humans. They evolved and are evolving. They adapted to a tribal environment where social maneuvering was crucial for survival. They learned to model their environment and each other. They learned to model themselves, which is probably relevant to this mysterious phenomenon called ‘consciousness’. They are reinforcement learners. They eventually acquired all of the basic AI drives [pdf] to varying degrees, both at an individual level and at a tribal/social level. Their general intelligence is cobbled together from special-purpose planning algorithms and the like; nowhere in the human brain is there a general intelligence module. Humans specialize, rarely making connections at the meta-level or decompartmentalizing knowledge across domains.
A human is a bundle of thermostats loosely wired together, pretending to have agency.
Humans are kludgey. Their brains are made up of many algorithms with different purposes and methods, and these algorithms don’t often talk to each other. They compete for resources, usually measured in thinking time. Some are constantly active, like breathing; some are selectively active, like bicycle-riding skills; and some are almost never active, like associations between memories that will never be primed again. Humans are thus often hypocritical: the part of them that wants something and says so may be less powerful than the part that doesn’t want it but is less vocal. Humans often say transparently false things about their own desires while believing them true, for there was selection pressure for sounding sincere, and less for speaking uncomfortable truths.
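The thermostat metaphor can be made concrete with a toy model. This is purely illustrative, not a claim about neuroscience: each subsystem below is a hypothetical control loop with its own setpoint, its own weight over behavior, and its own degree of vocalness, so what the bundle *says* it wants and what it actually *does* can come apart.

```python
# Toy sketch: a "human" as a bundle of competing control loops
# (thermostats). All names and numbers here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Thermostat:
    name: str
    setpoint: float   # the level of some internal variable this part wants
    weight: float     # how much pull it has over actual behavior
    vocal: bool       # whether it gets to voice its desire out loud

def act(thermostats, state):
    """Behavior goes to the weighted sum of errors; speech goes to the vocal parts."""
    drive = sum(t.weight * (t.setpoint - state) for t in thermostats)
    stated = [t.name for t in thermostats if t.vocal]
    return drive, stated

parts = [
    Thermostat("diet-plan", setpoint=0.0, weight=1.0, vocal=True),
    Thermostat("sugar-craving", setpoint=5.0, weight=3.0, vocal=False),
]
drive, stated = act(parts, state=1.0)
# stated names only "diet-plan", yet drive is dominated by the
# heavier, quieter part: the bundle says one thing and does another.
```

Here the only vocal part sincerely reports the diet plan, while the net behavioral drive points toward the craving's setpoint, which is one way a sincere speaker can still be a hypocrite.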
There are two partially overlapping classes of algorithms within human minds.
The first we may call ‘genetic algorithms’: those acquired over the course of development in the absence of any contact with other humans, the algorithms you’d expect to find in the mind of a man raised by wolves. Visual processing, imagination, athleticism, gracefulness, perhaps rudimentary language: for most humans, these are all in-born genetic algorithms.
The second class is that of ‘memetic algorithms’: the processes and memories humans acquire by interacting with one another and with social structures, which the man raised in the wild could only have thought up if he were unusually lucky and creative. Humans were shaped for, and then designed by, memes: a new kind of reasoning that jumped into a universal mind as soon as one appeared in the universe. Some memes are attractors in mindspace: many minds will find similar mathematics, for we believe mathematics is universal. Economics, egalitarianism, so-called ‘humanism’, even things like art, are all probabilistic attractors for minds in general. Humans boast of discovering theorems; but perhaps it makes just as much sense to say that the theorems found brains to serve as their computing substrate.
The intersection between these two classes is fairly large, for the memes were not invented overnight: they were the product of specific idea-generation algorithms in humans, algorithms that had to be encoded in genes in order to start the bootstrapping process. In Jungian psychology these algorithms are called ‘archetypes’, and they make up some of the ‘collective unconscious’ of humankind. Similar notions are found in Freudian psychology, which places a greater emphasis on understanding development. The result is that certain similar memes show up across all of humanity despite never being transmitted between cultures. Language; storytelling; animism, spirituality, and religion; magical thinking of all kinds; astronomy; dreams: these all pop up in culture after culture and lead to the development of more complicated and more potent memes.
Some memetic algorithms are very smart. Science, for instance, is very powerful. Is science smart enough to find humans and enter their brains? Science is of course an attractor in mindspace by virtue of its power, but is it actually powerful enough to actively and ‘willfully’ enter the human universe and human minds when those minds are ripe? Do humans find this weird and implausible simply because they’re humans, not science, and nowhere near smart enough to understand the entirety of the algorithm that is science all at once?
This seems not implausible to me. Individual humans have the illusion that humanity invented these many universal concepts for its own aims, but the memetic algorithms that genetic algorithms discovered have agendas of their own, and human genes are in symbiosis with these memes. Humans could not exist in their current form without bodies; neither could they exist without these powerful memes, of which each human mind is only one of millions of parallel computing processes. This view doesn’t change our anticipations, but it might change which anticipations we notice.
What else are humans? How else should we construct our ontology? Though there are many things left to be said, I’ve outlined the direction of my thoughts. At the very least, I remain skeptical of proposals to determine what humans value so long as they don’t bother to define ‘human’. By reducing humans to something tractable, like algorithms or processes, I hope we can pose, and then solve, the problem of figuring out what these structures-called-human ultimately want.