Is indexical experience valued? (And more anti-Friendliness propaganda.)

This is a mysterious question, and so we will be tempted to give mysterious answers. Readers beware.

When I reason about who ‘I’ am in a non-philosophical way, I notice a few things. I’m all in one physical space. I see out of two eyes and act through various muscles according to my intentions. And this feels very natural to me.

And yet when I want to reason about values in a coherent framework, I prefer to think in terms of massively parallel cognitive algorithms and their preferences, rather than the preferences of these bundles of algorithms that seem to have indexical subjective experience. To figure out whether I’m going about this the wrong way, then, I have to ask: why is subjective experience indexical? And is indexicality an important value?

Let’s say that two computer programs happen to be installed on the same computer and can be accessed by a program-examining program or a program-running program or a program-optimizing program. If one of the programs is doing pretty much the same operation on this computer as on some other computer, then we can reason about both programs as if they were the same program. We can talk about what Mathematica does, and what specific parts of Mathematica do. But when we ask what the program-examining program does, then descriptions must become more general. It’s very dependent on which other programs are on the computer, and how often they get run, et cetera. The program-examining program only has information from the computer it’s on, and maybe it has access to the internet, but even then it generally doesn’t get a very in-depth view of the contents of programs on other computers.
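
A toy illustration of the contrast, just a sketch of my own with made-up names, in Python: a pure function behaves identically on whatever computer it runs on, while a program-examining function’s output depends on the particular machine hosting it.

    import os
    import platform

    def square(x):
        """Substrate-independent: the same answer on any computer."""
        return x * x

    def examine_local_programs():
        """Indexical: the answer depends on which computer this runs on."""
        return {
            "hostname": platform.node(),
            "entries_in_cwd": sorted(os.listdir(".")),
        }

    # Two copies of square() on two machines are usefully "the same program";
    # two copies of examine_local_programs() generally are not.
    print(square(7))
    print(examine_local_programs())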

Consider human experience. It seems that the most interesting qualia are the qualia of reflecting on cognitive algorithms or making decisions about which algorithms to run, which has a lot to do with consciousness. Instead of coming up with reasons as to why all subjective experience is indexical, we could come up with reasons as to why the subjective experience of the reflection or planning or decision-making algorithms is indexical. And I think there are okay explanations as to why it would be.

Humans aren’t telepathic, mostly. Our minds are connected tenuously by patterned vibrations in the air, by seeing one another’s movements, and so on. This is not a high-bandwidth way to communicate information between minds, especially not complicated information like the qualia of thinking a specific thought. It tends to take a while to get enough details of another’s experience to recreate it oneself, or sometimes even to recognize what it could be pointing at. Read/write speed is terribly slow. Thus most information processing goes on in single minds, and the qualia of processing information are indexical. But when two minds are processing the exact same thing in the same way, their experiences are not indexical. Their epiphenomenal spirits could jump back and forth and never know the difference.

What does that mean for the study of value? Humans seem to value their subjective experience above all else. A universe without sentience seems like a very bad outcome. But it’s not clear whether humans value indexical subjective experience, or subjective experience generally. Some of the most intense spiritual experiences I’ve heard of involve being able to feel true empathy for another, or to feel connected to all of the minds in the universe. These experiences have always been considered positive. The algorithms that make up humans thus might not strongly value keeping their subjective experience confined to inputs from one small physical space.

If you look at the brain as a piece of computing hardware for lots of algorithms inside it, then it seems natural to answer the question of what humans value by asking what the things that make up humans value. And if the algorithm running in one mind is the same as the algorithm running on another, then we needn’t look at their computing substrate. But does the fact that the decision algorithms in each mind tend to look at different patterns of cognitive algorithms (even if the decision algorithms themselves are nearly identical) mean that we have to go ahead and look at individual minds specifically to figure out what each decision algorithm wants? What sorts of values do these decision algorithms have? Are they subordinate to the values of the more parallel cognitive algorithms that they run? Do they largely use the same operations for satiating other algorithms in the mind, and if so, are their values not actually indexical, even if they can only satiate indexical drives? If so, do we need to reason about the wants of individual humans at all? I have my intuitions, but we’ll see if they’re justified.

It makes sense to ask these questions for the sake of axiology, but how does it help with computational axiology specifically? Most proposals to solve the Friendliness problem I’ve heard of involve doing various sophisticated forms of surveying individual humans to see what they want, and then resolving conflicts between the humans. I suspect this probably works if you do it right. But I contend that it is difficult because it is the wrong way of going about it. It is difficult to tell a computer program to look at individual humans. Artificial intelligences are programs, and naturally reason in terms of programs. Humans are not programs. Humans run programs. And what humans value is those programs. If we had an AI look at the world to find algorithms, or decision processes, or what have you, it would find the algorithms that run on minds and ignore whatever pieces of hardware they were running on. This isn’t a bug; it’s the way humans should be reasoning, too.

I’ve said before that I’m not particularly interested in Friendliness. This is because I care about programs, not their computing substrate. And if the same program is running on a human mind as on an iguana mind… what’s the difference?

12 responses to “Is indexical experience valued? (And more anti-Friendliness propaganda.)”

  • Vladimir Nesov

    “I’ve said before that I’m not particularly interested in Friendliness. This is because I care about programs, not their computing substrate.”

    It looks like you are inventing (or refusing to revise) a specific interpretation of the term “Friendliness” just for the sake of finding something to be contrary about. If you see “cares about the substrate” as a problem, shift your understanding of Friendliness to “doesn’t care about the substrate, if it’s not a thing to care about”. (This whole discussion is confused, but that’s beside the point.)

    • Will Newsome

      I was outlining an actual philosophical disagreement between people like me and people like Eliezer. Eliezer talks about FAI being something that gives humanity what it wants. I’m not sure that’s a good place to start. I see no reason to reinvent Eliezer’s term to mean something similar but different from what he probably intended, instead of using it as an easy way to point towards where the disagreement might be.

      • Vladimir Nesov

        Good, we started unpacking definitions. Taboo “Friendliness”.

        I don’t see why “giving humanity what it wants” is opposed to “caring about programs, not their computing substrate”.

    • Will Newsome

      If you tell me what sorts of intuitions that could have led to me writing this post are confused, then I can calibrate which intuitions I should be more suspicious of. Currently I’m just posting things in weird orders and handwaving why I believe a lot of things, so it’s easy for me to respond to “Your whole line of reasoning looks wrong” with “That’s just because I didn’t write up the prerequisite material explaining my background intuitions”. In the past I’ve often ended up correct, but I wouldn’t bet too heavily on it in this case.

      • Vladimir Nesov

        For example, “Artificial intelligences are programs, and naturally reason in terms of programs. Humans are not programs. Humans run programs. And what humans value is those programs.” seems wholly confused. Programs as opposed to what? If that alternative can’t be thought about (for example, by AIs), how can you discuss the distinction?

    • Will Newsome

      Blogging software doesn’t want me to reply to your most recent comment for some reason. Anyway…

      “I don’t see why “giving humanity what it wants” is opposed to “caring about programs, not their computing substrate”.”

      Eliezer probably also cares about programs and not that they’re running inside of human minds, but thinking in terms of human minds means potentially excluding programs in non-human minds when it comes down to implementation, which seems wrong to me. Pointing at humanity traditionally (the way people at SIAI seem to talk about it in my experience) means pointing at a bunch of computers (individual humans), when I think we should be pointing at a bunch of programs. Giving humanity what it wants implies not giving other programs what they want.

      Luckily, because life on Earth is pretty similar, a lot of the programs on Earth are also in human minds. Thus I think pointing at human minds for the initial dynamic is probably okay as far as not being dicks to the programs in the rest of the universe. But that’s before reasoning about counterfactual preferences and alien preferences and what have you.

      It may end up that we just want to do the human thing even if it’s arbitrary, and enter the acausal economy that way. Or just start by checking the acausal economy after adding in information about the preferences of the local multiverse. The order might not be important; it seems to depend on how dynamic the acausal economy is, or on whether it’s easy for different sets of preferences to reach stable equilibria. Getting the initial dynamic right seems most important in cases where computing counterfactual preferences or doing acausal trade (checking the acausal economy) is computationally or philosophically infeasible.

      At some point I’ll write about why I disagree with Eliezer’s dislike of the term ‘arbitrary’, by the way, in case you thought I was unaware of such arguments.

      • Vladimir Nesov

        It’s frustrating, I want to interrupt you at the start of your long replies, when you say something I don’t understand or don’t think is correct, but this mode of communication doesn’t allow that. Check your facebook private messages.

    • Will Newsome

      “For example, “Artificial intelligences are programs, and naturally reason in terms of programs. Humans are not programs. Humans run programs. And what humans value is those programs.” seems wholly confused. Programs as opposed to what? If that alternative can’t be thought about (for example, by AIs), how can you discuss the distinction?”

      I’m not talking about a distinction out there in the world, I’m arguing about a difference in the ontology of the problem. The ontology might not end up mattering when it comes to implementation (“it’s all just bits”) but in the meantime I get nervous when people talk about humans and not parts of humans, because it seems like the wrong level of organization to be reasoning about. If we take the parts of humans to be the things we actually care about, and the parts of humans are also in lizards, then people might be sneaking unreasonable implications about exclusivity into both their thoughts and their communication when they reason about giving ‘humans’ what they want. Thus, even though it doesn’t correspond to a change in expected anticipation, I want to reason about individual humans as just computers for the real things humans (parts of humans) care about, and not treat them as more atomic than they are. This won’t change our anticipations but it might change what we bother to anticipate.

  • Will Sawin

    Speaking as a component of a human and most of its programs, the goals of programs not contained in humans or human-like things do not strike me as terminally valuable.

    • Kutta

      I agree; if I build an FAI as a human, I will do it as a complete brain program. In other words, the whole set of my sub-algorithms is going to be involved in the big picture, and it might even be meaningless to talk about a particular sub-algorithm taken out of context.

      First, what about human quasi-utility-functions that are weighted together from the conflicting “interests” of computational sub-algorithms?

      Also, if I slice a program into two halves at some particular line of code, the resulting chunks would probably do nothing meaningful by themselves. So I have to slice programs so that the resulting parts are functionally meaningful. But – currently – my hypothetical thinking about morally relevant sub-algorithms of human minds is done by a labyrinthine tangle of functional sub-algorithms, as opposed to any single one. This argument also works if I examine greater subsets of my mind but stop short of the entirety of my computation of morality.

      That computation, in its abstracted form, is the basis of Eliezer’s type of FAI theory. So the point of divergence between you (Will Newsome) and Eliezer could not be that you think in terms of abstract algorithmic subsets of human minds and he doesn’t.

      It appears to me that your ontology of axiology stems from an improper reduction: you reason that since human minds are valuable, the parts that human minds are made of should also be valuable. This strikes me as a sneaky form of essentialism. They might be valuable, but not because they are parts of a human mind, but because they are evaluated as valuable by human moral computation.

  • Jef Allbright

    For a rather clear explanation of the (necessity of the) indexical self, see Ismael & Pollock (2006), “So You Think You Exist? In Defense of Nolipsism”.

  • Ronald Edward Lepper

    continue sending updates
