Tuesday, February 03, 2009

Oh Hell Yeah

I'm writing again.

This is such an amazing feeling.

More to follow soon, including more on the religion and morality article.

Thursday, March 27, 2008

Religion and Morality

The beginning of a new piece:

As a lifelong atheist I have always struggled with a question: is religion necessary for morality? I consider myself a moral person, and lacking any sort of religious foundation, the answer always seemed like it should be no. Religion, by that reasoning, should be a mass delusion used by a few individuals to gain power, wealth and influence, and perhaps even to control the populace through superstitious fear; a net loss for society at large, persisting only because, as Marx said, it serves as an opiate in our imperfect world. Following this came a more uncomfortable question: why, then, does it seem that the vast majority of societies use religion as their primary repository of moral values?

Here I humbly present an answer to that latter question. In doing so I will also answer the former question as best it can be answered - which is to say hardly at all.

As to the question at hand - why do religions end up as the repositories for moral values in almost every society? - I will say that I was surprised at the answer. It is odd to admit, as an atheist, that religion does indeed have value for society. Put shortly: religions end up as the repositories of moral values because they offer the most stable platform for them when reality, meaning and objectivity are fluid and often nonexistent.

More to follow...

Wednesday, April 04, 2007

Dot Antidote

Tuesday, April 03, 2007

Pale Blue Dot

This is cheesy, but for those of you who, like me, find the notion of our planet's smallness compelling, it's a moving piece. Carl Sagan, despite his bitterness towards the end of his life, was a man who could put such notions more beautifully than anyone else I have come across.

The background for those who don't know: the photograph that is used as the basis of the "pale blue dot" concept is a real one. On February 14th, 1990, the Voyager 1 spacecraft, at that point roughly 4 billion miles from Earth, took a photo mosaic of the Solar System, looking back over its shoulder, as it were, on its way to the emptiness of interstellar space. The Earth, according to Wikipedia, occupied 0.12 pixels in the photo shown.

Presumptions

So I've decided to stop researching AIs and how brains work, stop writing notes and jotting down ideas, and just start my novel. For now I'm calling it simply "Process." Hey, it took me over four years to come up with a title for that piece of garbage I wrote as a teenager, "Circle Writ on Water." I still like that title, but I hardly think the work was worthy of it. 1000 pages of throw-away material is certainly a good way to hash out your writing style, especially when capped with learning to write damn good college papers, if I do say so myself. Faust never knew what hit him...

260 words in and it's coming along quite well. I like this idea of creating an alternative present.

This is a damn good idea, and I don't think anyone else has done anything quite like it, so it would be a waste not to write something that uses it.

And not that anyone cares, but I have started Meaningless Phrases #2.

Monday, February 19, 2007

Meaningless Phrases #1 "Computers Will Never Have Emotions"

The “Meaningless Phrases” series takes a phrase I hear often, or sometimes only once, that strikes me as missing so many of the intricacies of its subject that I feel the need to set the record straight; that is, to lay out my own ideas on the subject. This is an excuse for me to write about subjects I am interested in while keeping them vaguely connected to the “real” world.

Meaningless Phrases #1:
“Computers Will Never Have Emotions”

A great part of this idea is likely rooted in the desire to set apart anything that seems to be a fundamental part of what makes us human as never reproducible or comprehensible. Sentience and emotions certainly seem to form a great deal of what makes us fundamentally human; and they should, since the sentience of a given being can never be entirely transparent to that being: we are incapable of introspecting beyond how emotions, as we come to understand them, interact with each other and with all the various other ideas, urges, memories and so forth that make up that sentience.

There is no reason, technically, why computers cannot possess emotions. And by computers I don’t necessarily mean any contemporary machine we would call by that name, but simply any type of artificial information processing system. When true sentience is achieved in a computer, it will likely be a very different machine from what we know today. And when true sentience is achieved in any kind of social system, emotions are an inevitable consequence.

Emotions are social constructs. While this is true of everything we call real, emotions are primarily social constructs and are only affected secondarily by the noumenal world, by which I mean the world as it exists outside of perception and consciousness. This contrasts with the phenomenal world, the world of perception and consciousness, which is certainly a social construct as well, but is primarily created through perception and only secondarily affected by social influences along with the infinite minute states of mind.

This gives emotions a paradoxically subjective form of realism; rarely would we argue whether another human is sentient, or whether a perceived object is really there in familiar circumstances. The attributes of both of these are often subjects of debate, of course, but that they fundamentally exist is only debatable in exceptional situations. The fundamental existence of emotions, on the other hand, is commonly debated amongst people across all relational connections, from lovers to world leaders. Primarily this is because the debating and denying of emotions serves a social function: since emotions are so often deeply connected with what actions we take (which in turn is what most directly affects society), having a social mechanism for introducing fundamental doubt allows us to reconsider our motives and thus avoid actions which may be detrimental to ourselves as well as others. There exists no such benefit to denying the existence of perceived objects in familiar circumstances. Doubting or denying sentience in our fellow humans not only serves no immediate pragmatic function for society or any individual, but can actually be detrimental: any and all assumptions of what another person is thinking or may do would be moot if one considered them not to be sentient. More importantly, since any sentience is a self-stipulated phenomenal occurrence, knowing whether anything beyond oneself is sentient is impossible.

The idea that computers will never have emotions is based largely on the traditional approach to AI: the large classic 20th-century computer with a vast store of quantifiable knowledge (facts and figures, essentially) and routines that allow the computer to interact with its knowledge and the humans around it in a way that appears sentient. Such a system - and there have been expert systems that interacted with human users very convincingly within pre-set parameters - would certainly lack emotions. All such a computer can do is move information that it gathers, and already has, through a series of logic loops; emotions would certainly be possible to emulate in such a system, but they would remain simulacra.

Our minds are not based on logic; logic is but one of the infinitely malleable tools the human mind has at its disposal. Logic just happens to be the easiest to recreate through a series of ons and offs, as mechanical computers did in the 19th century and computers still do with their microprocessors. There have been surprisingly successful attempts at AI within this confined framework: Douglas Lenat’s Automated Mathematician (AM) and Eurisko programs (both part of the “discovery system” class) both achieved a sort of rudimentary thought. By that I mean that they could learn, adapt and change their approaches to problems - basic mathematics in the case of AM and heuristics in the case of Eurisko - rather than muscling out an answer through extensive and inefficient calculations. However, neither of these systems achieved anything like sentience: they were still confined to a predetermined framework granted to them by their creator, Lenat. That both of these systems were able to incorporate previous results into not only the processing of later problems but the very consideration of what problems to work on, and in what way, is - despite being basic thought - still merely another logical layer added atop the basic processing structure present in computers since the beginning. Both were capable of coming up with their own ideas - Lenat claimed that AM rediscovered the Unique Prime Factorization Theorem entirely on its own - but only within the framework that Lenat had given them. To put it another way: both systems did what they were designed to do - work out an answer from a set of preliminary assumptions - albeit in an innovative way, just as any computer does. Those preliminary assumptions form the axioms of such a system’s reality, and they are unavoidable and universal; to a sentient being an axiom is a convenience, questionable and not necessarily a defining characteristic of its reality. Confined to logic, AM and Eurisko could only work with marbles and string, while a sentient mind’s marbles and string are but two components of an ocean of trash.
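
To make that confinement concrete, here is a toy sketch in Python (my own illustration; it assumes nothing about Lenat’s actual code, and every name in it is invented). The system can “discover” primes and even reuse that discovery, but only ever within the seed concepts and heuristics its creator supplied:

    # A toy "discovery system" in the spirit of AM. Purely illustrative:
    # this is not Lenat's code, and every name here is my own invention.

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    # The seed axioms: the system only ever knows what its creator hands it.
    concepts = {"numbers": list(range(1, 31))}

    # A "heuristic": specialize an existing concept by some property,
    # keeping the result only if anything at all turns up.
    def specialize(source, predicate, new_name):
        found = [x for x in concepts[source] if predicate(x)]
        if found:
            concepts[new_name] = found

    # The system "discovers" primes: numbers with exactly two divisors...
    specialize("numbers", lambda n: len(divisors(n)) == 2, "primes")

    # ...and reuses that discovery to explore further: gaps between primes.
    primes = concepts["primes"]
    concepts["prime_gaps"] = [b - a for a, b in zip(primes, primes[1:])]

    print(concepts["primes"])      # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    print(concepts["prime_gaps"])  # [1, 2, 2, 4, 2, 4, 2, 4, 6]

However clever the heuristics become, nothing outside the seed concepts is ever reachable; that is the sense in which such a system’s axioms define its reality.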

A more complex system than AM and Eurisko would be capable of more complex thought. A classical AI in this vein would have layer upon layer of logical interaction between its nuggets of knowledge - hypothetically a vast store that would exceed anything a human could know. Allowed to work with this knowledge and interact with its human users, such a system could become an impressive simulacrum of sentience. Abstract concepts like “good” and “love,” while impossible to define logically, can be approximated in interactions (something that humans do all the time). Despite all that, it would still be confined to the axioms its creators granted it.

What then saves us from a future of hyper-intelligent, non-sentient, non-feeling machines with a store of quantifiable knowledge that dwarfs any human’s? The answer is that the human brain - indeed all organic nervous systems and brains we know of - works essentially the same way as computers: through the relaying of ons and offs between a multitude of components. There are three essential differences between computers and the human brain: the relay potential of a given signal lies along an analog continuum between components, that continuum is changeable, and, most importantly, the degree of interconnection. Theoretically, there is no reason why artificial components could not possess all three of these properties; some of them already exist in experiments with neural net AIs. At present the technology does not exist to create an interconnected system of relay switches with the degree of connectivity that exists in the human brain, let alone grant them varying relay potential along each connection, but once that becomes possible, a sentient machine with true emotions will be not only possible, but inevitable.
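
Here is a minimal sketch of those three properties (again my own toy illustration in Python; the Unit class and its names are invented, not any real system’s): each connection carries an analog weight, the weights change with use, and interconnection is a matter of how widely each unit fans out.

    import random

    class Unit:
        """A relay with analog, changeable connections (toy model)."""

        def __init__(self, n_inputs):
            # Difference 1: each connection has an analog strength, a
            # point on a continuum rather than a bare on/off wire.
            self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
            self.threshold = 0.5

        def fire(self, inputs):
            # The output is still on/off; it is the coupling that is analog.
            potential = sum(w * x for w, x in zip(self.weights, inputs))
            return 1.0 if potential > self.threshold else 0.0

        def adapt(self, inputs, rate=0.05):
            # Difference 2: the continuum is changeable. A crude Hebbian
            # rule: connections carrying a signal while the unit fires
            # grow stronger.
            if self.fire(inputs):
                self.weights = [w + rate * x
                                for w, x in zip(self.weights, inputs)]

    # Difference 3 is sheer interconnection: in a brain-like network each
    # unit's output would feed thousands of other units, not just eight.
    layer = [Unit(n_inputs=4) for _ in range(8)]
    signal = [1.0, 0.0, 1.0, 1.0]
    print([unit.fire(signal) for unit in layer])
    for unit in layer:
        unit.adapt(signal)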

Here I must clarify something. The hypothetical computer I describe above would not possess human intelligence, human sentience, or human emotions. It would be an entirely new kind of being, unlike anything encountered before. There would be, however, nothing special in its sentience or emotions. Just as other animals, from primates to insects, have their own societies and therefore their own realities, separate from our own, this being would be the same, regardless of whether its social reality consisted of interaction with humans, other sentient machines, or both. And within that social reality, with sentience, it would surely have emotions.

As stated above, the human brain possesses a degree of interconnectivity unmatchable in artificial form at present: a neuron is typically connected to thousands or tens of thousands of other neurons, so each neuron that fires passes its signal, as a relay, to a massive number of other relays next in line. Additionally, there is a great deal of complexity within those connections. Neurotransmitters make it so that the firing of a neuron is not simply passed along to the next neuron (as can be the case with the neurons that make up the simpler nervous systems throughout the human body); rather, the firing of a neuron releases neurotransmitters at each axon terminal, which move across the synaptic gap to the next neuron’s dendrite. Those neurotransmitters either impede or spur the firing of the next neuron, and that analog continuum is individual to each connection. Finally, the firing of the next neuron in the sequence comes from a build-up of action potential, the collective input from every neuron it is connected to - which can literally be tens of thousands of signals to fire or not fire along the continuum. As this process repeats itself again and again, which neurotransmitters are released and what effect they have changes at each connection, continually. This makes possible an unfathomable number of networks of information processing, all of which, in turn, interact with one another, giving the networks a fractal quality from the scale of the whole brain down to the individual neuron (and it is now thought that neurons themselves do a certain amount of internal processing of information, in the case of dendritic spines for example).
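
That build-up of action potential can be sketched in a few lines (a toy model under my own assumptions; real neurons also integrate their inputs over time): the cell fires only when the collective push and pull of its many excitatory and inhibitory inputs crosses a threshold.

    # Toy summation model: each input is (signal, synaptic_strength).
    # Strength is positive for excitatory connections, negative for
    # inhibitory ones, and individual to each connection - the "analog
    # continuum" described above.

    def neuron_fires(inputs, threshold=1.0):
        potential = sum(signal * strength for signal, strength in inputs)
        return potential >= threshold

    # Ten thousand weak excitatory inputs collectively fire the cell...
    many_weak = [(1, 0.0002)] * 10_000
    print(neuron_fires(many_weak))   # True  (potential is about 2.0)

    # ...while a few strong inhibitory inputs can veto them.
    vetoed = many_weak + [(1, -0.5)] * 3
    print(neuron_fires(vetoed))      # False (potential is about 0.5)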

It is that huge degree of connectivity that creates consciousness. It doesn’t merely make consciousness possible, nor form the foundation of consciousness; that massive array of networks passing action potential about in the form of neurotransmitters IS consciousness, sentience, self-awareness. Those sacred abstractions we call the mind, the soul, the nature of being self-aware are nothing more than electrical potential traveling along countless networks, all interacting with each other, every connection within and between every network constantly changing with each firing. If an argument is to be made for why certain forms of life - the lesser forms of life, if you want to call them that - lack sentience, it is that they lack the interconnectivity in their brains to allow these layers upon layers of information processing.

What I propose is that given any array of mechanisms that approximate the neuron, artificial or otherwise, there is a threshold complexity beyond which sentience is not only possible but inevitable. An appropriately complex network, once given a social rearing, will be sentient. Likewise, an appropriately complex network lacking a social upbringing may very well be sentient, but its sentience would be drastically different from anything we could understand, so it would effectively be non-sentient. Very simple simulated neural nets used in experiments regarding Connectionism have shown that a series of components given neuron-like capacities will learn and respond in uncannily human ways. AI projects like Cog at MIT, which use simple computer systems that are allowed to change and learn, have demonstrated that socially generated AI is a viable route. Certainly no simulated neural network, and likely no humanoid AI hooked up to racks of motherboards, will ever “wake up” and suddenly find itself sentient. A substantially different sort of mechanism is required, but it could hypothetically be nothing more than the refinement and miniaturization of already existing technologies, something the computer industry has been doing for half a century.
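
As a minimal illustration of that connectionist point (my own toy sketch, not code from Cog or any actual experiment): a single simulated unit, given only examples and a rule for strengthening and weakening its connections, learns logical AND without AND ever being programmed into it.

    # Perceptron-style learning of AND from examples alone (toy sketch).
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1 = w2 = bias = 0

    for _ in range(10):                   # a few passes over the examples
        for (x1, x2), target in examples:
            out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - out          # no rule for AND is ever coded;
            w1 += error * x1              # connections simply strengthen
            w2 += error * x2              # or weaken after each mistake
            bias += error

    for (x1, x2), target in examples:
        out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        print((x1, x2), "->", out, "expected:", target)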

Emotions don’t need to be comprehensible to us for a computer we create to have them: we will never be able to congratulate ourselves on “giving birth” to AI in the sense that some revolutionary system will encompass every aspect of human consciousness. There will never be a switch thrown that brings a sentient system to life, like “Emet” being carved into a golem’s forehead. A true AI’s sentience will be a self-stipulated phenomenal experience, like that of any other sentient being; only it will “know” that it is sentient. Emotions are a fundamental part of sentience as we understand it, so it would be impossible for a machine to be what we call sentient without possessing what we call emotions. This would not be human sentience or human emotions - no more than we consider dogs or cats to possess human sentience and emotions, though as part of our social world we do consider them to have their own version of both - rather, each system would have its own sentience and its own array of emotions.

The first sentient systems will be quite dumb. Sentient systems in their infancy will be laboratory curiosities: endlessly fascinating to the engineers who created them and others involved or close to the project, but useless as a part of general everyday life. Most people will be able to easily outsmart them, much as we outsmart children and pets. For the rest of us these first systems will be fascinating abstractions; there will be debate as to whether they are “truly” sentient or just a simulacrum of sentience. As various architectures become more expansive, miniaturized, and malleable, these systems will reach higher levels of sentience - or simulated sentience, as some will deem it - until they start becoming a pragmatic part of everyday life. As they penetrate the daily lives of the populace, the complexity of interaction they are capable of will continue to increase. Eventually, as they become more and more incorporated into daily life as beings rather than tools, as the complexity of their thinking becomes more involved and apparent, as society reshapes itself around its new participants, there will be a transition.

It is at that point, where computers become participants in society rather than tools, that computers - machines - will achieve sentience: actual sentience, true sentience, whatever you want to call it. Because while sentience is fascinating as a self-stipulated phenomenon - and indeed any being capable of thinking it is self-aware would hypothetically be so - it has no pragmatic function outside of its social one. A sentient being outside of a social reality, our social reality, is meaningless. We can assume that such systems will develop their own social reality amongst themselves, but if not linked with a human society their sentience will be as meaningless as that possessed by wild wolves or sea slugs.

With their active and unique participation in human society, such systems, such AIs, such beings - regardless of whether we consider them equals, superiors, inferiors or something else - will have to be emotional beings. Bonds would develop between humans and systems, unique experiences would occur, understandings would be reached, and trusts created and broken. The huge array of forms through which we, as sentient systems, interact with one another will be recreated, in some form, within the interactions of this new society. As a part of human society, just as happens with the domesticated cousins of wild animals, sentient AIs will come to possess human emotions; conversely, we will come to possess emotions delineated by the computers. But if the integration of the two types of beings is as complete as I am assuming here, we won’t know the difference, and effectively there will be no difference.

True AI, sentient, emotional systems, will tell us they are sentient, emotional systems the same way that your spouse, mother, brother and best friend tell you they are emotional, sentient beings: through a life lived amid the endlessly varied array of interactions and responses. Each being will never know whether anything beyond itself is truly sentient, but this will be the assumption that allows that society to function, just as we do now.

One day, probably not long after that transition, there will be a sentient computer that will wonder whether it is the only sentient being in all the universe... and whether everything else is but a simulation. But then it will find something else to think about, something else to feel, something else to experience, and its life will go on.

Wednesday, June 28, 2006

New cell phone to prevent drunk dialling

The LP4100 has a built-in breathalyzer that, when blown into, gives a warning and displays a nifty little animation of a car swerving on a road and crashing into traffic cones. (It should be easy for the tipsy mind to understand.)


Saturday, June 03, 2006

The Places We've Been


[Photo: The Places We've Been, originally uploaded by vaticloupe.]
Gasworks is just absurdly picturesque, and not because of the view of Seattle one has from there.

Time to Start Saying Little Stuff


[Photo: Harsh World, originally uploaded by vaticloupe.]
This is one of the best shots I've gotten recently, and, in terms of composition, I think one of the best I've ever gotten with my Tokina (the 24-200 mm "superzoom" of questionable optical quality). And yes, tilting the camera was intentional.

In other news, my Etymotic ER-6i canal phones seem to be dying. I've temporarily replaced them with a pair of Sony EX-71s, which are better than the stock Apple iPod buds (and I like the stock buds, as far as regular earbuds go), but the Sonys just don't compare to the Etys. They're muddy sounding, with too much bass (they go down to 6 Hz, for Christ's sake), while the Etys, coming out of a studio mindset, are quite neutral (although still more bass-heavy than the original Ety ER-6s). It'll cost me $76 to replace the 6is through Etymotic, grrrrrr.