Are evolved drives satiable?

Can we expect evolved drives to be satiable at any one instant? If so, which drives are satiable, and which would eat the entire universe if they could?

Thermostats have a narrow domain of preference. When the temperature is at the desired point (as measured by some internal representation), the thermostat is satiated. The thermostat does not usually need many resources at any one moment to maximally fulfill its goals. Can the same be said of the drives of various evolved life forms? How about human-specific drives?
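As a toy illustration of what 'satiated' means here, consider a minimal sketch of a thermostat in code. The function name, set point, and tolerance are all invented for the illustration, not a reference to any particular device.

```python
# Minimal sketch of a thermostat as a satiable drive: once the measured
# temperature is within tolerance of the set point, it demands nothing more.
# All names and numbers here are illustrative.

def thermostat_action(measured_temp: float, set_point: float,
                      tolerance: float = 0.5) -> str:
    """Return what a simple thermostat would do at this instant."""
    error = set_point - measured_temp
    if abs(error) <= tolerance:
        return "satiated"          # goal met; no further resources needed
    return "heat" if error > 0 else "cool"

print(thermostat_action(21.3, 21.0))  # -> "satiated"
print(thermostat_action(17.0, 21.0))  # -> "heat"
```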

Maslow created a now-famous hierarchy of human needs (which I quite like), and claimed that the first four levels of the hierarchy — physiological, safety, love/belonging, and esteem — are all deficit (satiable) needs. The need for what he deemed ‘self-actualization’, though, he said could not be satiated. Was Maslow correct in this description?

  • Physiological needs: It seems correct to say that these homeostatic needs are satiable. Needs for food, water, sex, excretion, breathing, sleep, et cetera, are all satiable, and indeed they must be sated before humans can seriously work on satiating any of their other needs. I don’t see any insatiable physiological needs.
  • Safety needs: These include security of the body, employment, (monetary) resources, health, and property. It is less clear that these drives are satiable, especially since ‘resources’ could be taken to include lavish material possessions, for which humans seem to have a large if not unbounded desire. That said, humans do seem to be satisfied beyond a given level of safety in the sense that Maslow intended, as somewhat indicated by the diminishing marginal returns of available spending money on self-reported happiness. I will mark this as unclear, though I suspect that Maslow was right to say that these needs are satiable.
  • Love/belonging needs: Needs for friendship, family, and sexual intimacy. It seems that beyond a point humans become satisfied with a large but comfortable number of friends. The same is true of family. Humans may wish they had the capacity to keep track of a large group of friends and family, but their need for friendship is bounded by their cognitive abilities in much the same way that the need for food is bounded by the size of the stomach or the speed of metabolism. It is still a satiable need. Sexual intimacy is less clearly satiable. If you gave humans a button they could press forever, each press doubling their amount of sexual intimacy per moment, it is less clear to me that they would ever stop hitting the button. I have never heard anyone complain of too much (good) intimacy with someone they love. Thus I am not sure that the need for loving/sexual intimacy is satiable, though I do suspect that it is.
  • Esteem needs: Needs for self-esteem, confidence, achievement, and respect. These seem satiable. I have felt that I was at my desired level of confidence or self-esteem in the past, beyond which I wouldn’t have appreciated additional boosts. Achievement is less clear, especially because it does not cleanly decouple from self-actualization needs, which Maslow claims are of a different character. Certainly it is possible to satiate the need for achievement in certain domains — being the best in a narrow domain is one such form of achievement. Beyond a certain point you get diminishing marginal returns. I believe this is what Maslow meant by esteem needs, and I think he is correct to say that they are satiable.
  • Self-actualization needs: Needs for creativity, problem solving, spontaneity, morality, rationality, and virtue. As long as there are things humans wish to learn or discover or create, I do not think this need is satiable, though I am unsure. In the abstract it is easy for some of the memetic algorithms in my mind to say, “Yes, we want infinite compassion, infinite knowledge, infinite whatever-is-right-and-good”, but I’m not sure what the rest of my mind thinks of what those memes think, and I’m not sure those memes are reflective enough to know what they want. There are also memes in others’ minds that would quite adamantly state that they desire infinite suffering, death, and all-that-is-evil. Both extremes are somewhat unrelated to what I intuitively think of as ‘self-actualization’, for they are more preferences than needs, and so Maslow did not categorize them. Although I would feel uncomfortable speaking of infinities here, I do think that humans want a very, very large number of the things that go along with self-actualization at any one instant. These needs would be hard to satiate.

Thus I tentatively agree with Maslow that all needs ‘below’ self-actualization (evolutionarily older and less cerebral) are satiable, though I surely wouldn’t bet the universe on it. What Maslow left untouched — preferences that are not needs, aesthetics, the desires of memetic algorithms like egalitarianism or the Christian heaven — I would evaluate as being similar to the class of self-actualization needs: some are satiable, some are not, or at least are not at all easily satiable.

It seems that non-human animals have completely satiable needs. If true, I think this is excellent news. Humans may feel guilty about only satisfying human needs and not the needs of the countless animals that can be found on Earth, to say nothing of counterfactual animals or aliens of the factual or counterfactual variety. If animal needs are satiable, we can burn a small amount of the cosmic commons to satisfy them, while still spending the vast majority of resources on the insatiable needs that distinguish humans and that humans seem to care the most about. It is of course not obvious that we should be so generous, but that is the Friendliness problem, and I’d rather solve the general problem of computational axiology for now.

Why should we expect evolved drives to be satiable? We could imagine drives that behave like unstable dynamical processes, like a thermostat pulling the temperature towards infinity. Such a drive might have to split resources with satiable needs inside a larger process that only finitely values it even though it itself values something infinitely; it would nonetheless desire infinite resources, and so would be subject to weird decision-theoretic or control problems, or would compete to an alarming degree for the attention of altruistic superintelligences.
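To make the stable/unstable contrast concrete, here is a small sketch of a discrete feedback loop. The gains, step count, and temperatures are arbitrary values of my own choosing; 'insatiable' here just means the loop's error grows without bound instead of settling.

```python
# A satiable drive resembles a stable negative-feedback loop; an insatiable
# one resembles an unstable loop whose correction overshoots ever further.
# The gains and step count below are arbitrary illustrative values.

def simulate(gain: float, steps: int = 20,
             target: float = 21.0, temp: float = 15.0) -> float:
    for _ in range(steps):
        error = target - temp
        temp += gain * error   # 0 < gain < 2 converges; outside that, it diverges
    return temp

print(round(simulate(gain=0.5), 3))  # settles near 21.0: the drive gets satiated
print(round(simulate(gain=2.5), 3))  # magnitude explodes: never satiated
```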

I suspect that more knowledge of the importance and centrality of reinforcement learning to evolved systems would point in the right direction. Many animal behaviors are sphexish: reward is endogenously generated for running a certain subroutine in response to a pattern of stimuli, regardless of its effects on what humans would see as the system’s implied goals. Because the reward generated is limited by the number of times the subroutine is called, and because that is limited by the number of stimuli that occur, the sphexish drive is satiable. But are there subroutines that fire off constantly and get positive reinforcement (which is rather distinct from a lack of negative reinforcement), entirely in the absence of external stimuli? Are there subroutines which would be run an infinite number of times as quickly as possible, each time being rewarded, if only you would let them? Breathing, for instance, seems satiable because on the whole each breath is not positive reinforcement, and even if there could be an infinitely long chain of breaths, at each point of the chain there is only a finite amount of breathing that the breathing algorithms desire. Are there embodied drives that want a signal of infinite intensity? I doubt it, but why?
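The distinction between a stimulus-bound (sphexish) reward and a reward that fires on every tick can be put in toy code. The reward sizes and the notion of a 'tick' below are invented for illustration, not a claim about any actual organism or learning algorithm.

```python
# Toy contrast: a sphexish drive is rewarded only when a stimulus triggers its
# subroutine, so total reward is capped by the number of stimuli; a hypothetical
# endogenous drive rewards itself on every tick and is never satiated.

def total_reward(stimuli_seen, stimulus_bound: bool) -> float:
    reward = 0.0
    for saw_stimulus in stimuli_seen:
        if stimulus_bound:
            reward += 1.0 if saw_stimulus else 0.0  # bounded by stimulus count
        else:
            reward += 1.0                           # fires regardless of input
    return reward

ticks = [True, False, False, True, False]
print(total_reward(ticks, stimulus_bound=True))   # 2.0, capped by the stimuli
print(total_reward(ticks, stimulus_bound=False))  # 5.0, grows with every added tick
```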

Peter de Blanc pointed out that desires that need to work well with other desires have to be satiable, but that once you have minds smart enough to explicitly model the idea of (and create algorithms for) ‘have as many kids as possible’, the importance of satiable needs (at least at that cognitive level) becomes less pronounced. This could potentially be understood with game theory, especially evolutionary game theory, or with the field of mental accounting and evolutionary mental accounting, if such a field exists.

One obvious observation is that values seem more likely to be insatiable as they become more abstract and more general. These kinds of trends along the vectors of universality or epistemology (which I normally contrast with arbitrariness and confusion) show up a lot in my thinking, so expect to see a lot more of them.

The concept of diminishing marginal returns seems important here. There might be literature on resources with infinitely increasing marginal returns. There might be other ideas from microeconomics that are relevant.
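As a quick numerical illustration of diminishing versus increasing marginal returns, here is a sketch comparing a logarithmic and a quadratic utility function. The particular functions are my choice for the example, not something from the post or any specific economics source.

```python
# Diminishing vs. increasing marginal returns: each extra unit of the resource
# adds less utility under log utility, and more under quadratic utility.
import math

def marginal(utility, x: float, dx: float = 1.0) -> float:
    """Utility gained from one more unit of the resource at level x."""
    return utility(x + dx) - utility(x)

def quadratic_utility(x: float) -> float:
    return x ** 2

for x in (1.0, 10.0, 100.0):
    print(x, round(marginal(math.log, x), 4), marginal(quadratic_utility, x))
# log:       0.6931, 0.0953, 0.01   (shrinks toward zero: satiable-ish)
# quadratic: 3.0,    21.0,   201.0  (keeps growing: wants ever more)
```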

I also suspect that better intuitions about dynamical systems and their stability would yield insight. But I currently don’t have the analogical knowledge. Understanding preference-like attractors in mindspace, like the universal AI drives but perhaps less reliably attractive, would also appear to be useful for this kind of reasoning.

No firm conclusions were reached, but I feel a little easier in continuing to think that most drives are satiable, and that paperclip-maximizer AI designs are pretty difficult to engineer, hopefully even more difficult than creating a solid framework for the more general problem of computational axiology. The human drives that humans seem to care most about, at least when waxing philosophical, like freedom, happiness, peace, equality, beauty, knowledge, and other highly abstract attractors, seem to be largely insatiable, or at least not easily satiable. So we’ll probably end up wanting to fill the void between the stars with some pretty interesting utilitronium. At any rate, at some point in the future I plan on following up on this line of reasoning, hopefully with more knowledge and sharper tools, and also on exploring another similar topic: the stability of evolved drives.



4 responses to “Are evolved drives satiable?”

  • Louie

    I notice that you mention “game theory” or “evolutionary game theory” might be potentially useful for understanding humans. That’s sort of worrying. Game theory is useful for understanding option pricing, horse betting, and other purely synthetic human created games. But it gives pathologically wrong answers for human-human interactions and I see no reason why the answers wouldn’t also be entirely useless for human-computer interactions as well. It may be a very useful concept for understanding computer-computer interaction or your overall goals of sorting out temes or computational axiology, but is totally inappropriate for analyzing humans or our evolutionary past. Someone doing reasoning from EEA + game theory is actually one of my biggest warning signs that that person has no understanding of human interactions, since they can’t see their ideas have no predictive power. I mean, if we’re going to use game theory to analyze human interactions, we should also consider other equally probable systems like voodoo or ouija boards, which I think have equal or better explanatory power. Anyway, just stay away from game theory. It’s intellectual candy that’s definitely sitting around to trap smart people and waste their minds. It’s like anthropics. But unlike anthropics where we can’t tell if it’s not working, game theory (when reasoning about humans) is clearly wrong and only traps the most introverted and naive smart people.

    Anyway, that’s a minor quibble. Even though it’s not an exact parallel, it just hurts my mind whenever anyone suggests “game theory” or “precommitment” as useful ideas the same way Eliezer might wince at “complexity” or “emergence”. They’re not exactly empty ideas — just misguided. Please stop the pain.

    That said, overall I’m lovin the new site! Keep up the great expounding. I’m getting a lot out of your writing and think it’s terrific foundational work.

    • Jasen

      Louie,

      …Come again? Was that a joke? I know I can have a difficult time telling on the Internet, so I just want to make sure before I waste a lot of text arguing with you :-P

      • Louie

        The ouija board part was a joke, but my comment is serious. The only caveat I didn’t add is that, obviously, I’m talking about commonly discussed forms of game theory, up to and including anything that is state of the art today.

        I understand that some hypothetical game theory must be “right” the same way that the Chinese room is a fantastic way to translate Chinese. Given a long enough GLUT, I’m sure someone could create a “game theory” that understood things and gave correct answers — so long as that theory can balloon in size proportional to the number of things it has to be right about.

        All the things “discovered” by game theory were things where people filled in the bottom line, then applied one of several inconsistent tenets of game theory until it agreed with their expected outcome. There are enough contradictory rules about how to frame situations that someone can tweak the strength and weightings of things so that game theory can prove anything… so long as they already know it. Modeling people as entities which can “precommit” or optimally “compete” with each other is ludicrous on its face except in contrived examples or over time periods of seconds or perhaps minutes. If someone needs to understand human behavior beyond that, game theory can tell them anything they’d like to believe, and the predictive power of their theory will only be limited by the predictions they are able to pull out from around the curtain of their own human intuition. This is why people without good social intuition can’t use game theory to actually improve their intuition (and get more wrong than right with it in proportion to their natural levels of poor intuition). The game theory part isn’t actually providing anything (except the overconfidence that comes from thinking there is more independent evidence for a given belief when it is just duplicated, intuitive evidence that has been re-framed).

        As far as I can tell, the best application of modern game theory appears to be a convenient way to help me quickly identify poorly reasoned proposals, the same way any of us can rule out proposals that rely on things like Emergent Complexity Theory.

    • Peter de Blanc

      Louie, it was not my impression that Will was advocating the use of game theory to model human-human interactions.
