What is computational axiology?

This blog is supposedly dedicated to solving computational axiology, which is a term I made up out of thin air. What is it? How is it different from Friendliness?

I use ‘value’ as a catchall term for things like ethics, aesthetics, drives, inclinations, et cetera. Axiology is the study of value. Normally this means the study of what humans value, or what God values, or things like that. I don’t want to be that specific. I don’t know whose values I value yet, or even who I am, and so I want to figure out the values of everything that can value. Knowing what sorts of things are valuable seems useful. Certain values are attractors in mindspace; others are less probable. For thousands of years we humans have wondered what our purpose in life is, or whether eating animals is morally bad, or which music is objectively good, and other difficult philosophical questions that we weren’t equipped to handle. Nonetheless, a small measure of progress has been made, and though humans in general are still very confused about their desires and the desires of those around them, some of us are lucky enough to feel comfortable accurately reflecting on what it is we truly want.

Though axiology is an important endeavor for every person, every couple, and every organization, it has never been more important than now. It appears very possible that humanity will engineer a recursively self-improving artificial intelligence sometime in the next few centuries, and probably sooner rather than later. With ever-increasing optimization power it becomes more and more important to know what we’re trying to optimize for. Value is fragile and diverse, and this is more true of human values than any other values we’ve seen in the universe. But this is not cause for pessimism. We should be careful not to mindlessly destroy value, but this is also an opportunity for humanity to spread the most glorious values to every corner of the universe, and get everything we could ever want.

The ‘computational’ part of computational axiology is twofold. First, we’re trying to get powerful computer programs to solve axiology — doing axiology in the abacus that is our collective mind is a fool’s move. But secondly, it is to emphasize that I’d rather reason in terms of formal structures and algorithms, and not ultra-high-level concepts like ‘human’ or ‘human-Friendly’. We can begin by looking at our intuitive concepts and figuring out their implications, but eventually we need to start getting formal. The point is to actually win the universe, after all.

The problem of Friendliness is narrower than the problem of axiology, for Friendliness is determining what humanity wants, and axiology is determining what is wanted by anything. Nonetheless there is a lot of overlap, for humans hold many of the values in this universe. I hope that we won’t have to settle what the boundaries of ‘humanity’ are, or make other difficult and seemingly arbitrary decisions like that. In fact, I think it’s a sign that something’s wrong if our decisions feel even a little arbitrary. This is potentially where I break with the idea of Friendliness. In the words of Steven Kaas, the good is the enemy of the tolerable. Nonetheless, with so much good on the line I’d like to solve the problem as perfectly as transhumanly possible.

I don’t want to get too involved in certain somewhat tangential super-technical details of what it would look like to implement an algorithm for solving computational axiology. This is mostly because I don’t have them, but it is also because the details I do have could be repurposed to fill the universe with values that humanity would object to. But I’m not averse to discussing technical ideas in private. Dangerous ideas shouldn’t be sent over email if possible. I err on the side of caution if I am to err.

Informally, we could describe it as ‘figuring out how to get a computer to figure out what is valuable’. More formally, in a single sentence:

Computational axiology is the study of the foundations of value and of techniques for implementing axiological algorithms in computer systems.


About Will Newsome

Aspiring protagonist.

One response to “What is computational axiology?”

  • Luke Grecki

    But secondly, it is to emphasize that I’d rather reason in terms of formal structures and algorithms, and not ultra-high-level concepts like ‘human’ or ‘human-Friendly’. We can begin by looking at our intuitive concepts and figuring out their implications, but eventually we need to start getting formal.

    If a lot of philosophical confusion is rooted in the use of language we might do better by doing philosophy in a formal language as soon as possible. At the moment our axiological concepts seem to be mostly expressed in terms of natural language, so we’d need to translate these into a formal language. That’s kind of what I’m doing at my blog right now: I’ve been trying to explore intuitive concepts and their relationships through mathematical problems. As the collection of problems and their (partial) solutions grows, I hope to see patterns that will allow me (or others) to condense and refine these concepts into formal definitions, and then prove theorems about their relationships.
