About

My most recent posts are closer to the top of the page. If you want to be sure I’m using a concept the way you think I’m using it, or if you wonder why I’m using such-and-such an ontology, it may serve you well to look at possibly relevant older posts first.

‘Symbolic Optics’ refers to the fact that we often play with concepts from optics in a metaphorical sense. What if we tried to shine light on the way we use these metaphors, and on what exactly they mean? Optics, in the Enlightenment sense, is tied in with the closely related problem domains of epistemology and axiology. By reflecting on the true optics behind these metaphors, I hope that we’ll become enlightened as to how to become Enlightened. But moving from the symbols to the substance…

This blog is where I transfer ideas from my brain to a place where they’re a little more stable and examinable. The topic is what I’ll label ‘computational axiology’: how an artificial intelligence could be designed to determine what is valuable in the universe. Not necessarily valuable to humans, mind you; just what is valuable to whatever processes there might be that are capable of valuing things. This is pretty similar to what Singularity Institute Research Fellow Eliezer Yudkowsky calls the problem of Friendliness, or Friendly artificial intelligence (FAI). Hopefully someday we can design a machine superintelligence to fill the void between the stars with whatever we end up wanting the most of. I want this blog to be a step in that direction.

My intended audience consists of strong rationalists who are already familiar with the concept of FAI, and aspiring rationalists who are willing to jump into playing with speculative ideas and filtering for conceptual gold. I’m not going to bother linking to references, but I’ll try to indicate the extent to which I know what the hell I’m talking about. Basically, I’m writing for an unrealistically knowledgeable, intelligent, and charitable audience. If that’s you, welcome!
