It's early in the morning, weekday, workday. Your alarm clock starts whining with a noise so annoying that your first thought of the morning is "burn, asshole!" Of
course, you don't actually burn it, but turn it off, and either turn around one more
time, or, if this was the fifth time it went off already, you'll decide to finally
get up. You hop (well, stumble) out of bed, open the curtains (just a little; that
light! arg), and see the cat (or dog, whichever animal suits you best) stretch out
and follow you into the living room, and kitchen. It looks up at you, you look down
at it, and you both think the same thing: I need something to drink. The cat (sorry,
I'm going with the cat theme here, it fits me better) of course wants milk, and
you... well, you'll be smart and choose coffee. You get into your routine of
breakfast, showering, getting dressed, brushing your teeth, etc. Then you grab your
things to go out and get to work (you're late already; damn that alarm clock, you need an even more annoying one!), and you suddenly stop before you get to the door.
You slip out of "routine-mode" and think: "What the hell am I doing anyway. I'm
going to a job I don't like, with colleagues I hate, a boss who hates me, and a paycheck that makes me very sad each month. I need to
get out of this rut!" So, you drop your things, turn around to grab the phone, call
your boss, and tell him "fuck you, I'm not coming back!" (actually, when you think
about it later, you wish you had said that, but truthfully you told him as nicely as possible that you were handing in your resignation), you change into more comfortable
things, and have a huge smile on your face as you grab your car keys and head out,
doing whatever you feel like doing. You catch one glimpse of the cat, and it makes you wonder.
Why is it that the cat looks at you like he always does, relaxed and content, with no idea that you've just freed yourself from your chains, that you've just made the ultimate decision of your life? Why in the world wouldn't the cat ever make a decision like that, just walk away and never come back? Damn, cat, what are you thinking? Say something already!
But, the cat can't talk. It probably isn't even thinking, although we don't know for sure. And it certainly isn't going to be able to make decisions like you just made. So, what is it then, that makes us so different from an animal that seems to be so clever? We've come up with a single word for it: intelligence. Alright, so, we are intelligent, the cat is not, and that's why we can do all those things a cat can't. Phew, we have an explanation, so that's that. Sadly (I'll leave it up to you to figure out whether it's actually sad or not), intelligence brings a little problem along: humans are never satisfied with an explanation. We always want more. So, the explanation above just doesn't cut it.
Fine then, we'll go into it a bit deeper. The first question, of course, is: what is intelligence? If you want to discuss a subject, if you want to explain things from a point of view that exists by the grace of a certain concept, then there should at least be a solid definition of that concept to base your point of view on, right? Well, then we start off with quite a problem: the concept of "intelligence" doesn't come with a convincing, conclusive definition just yet. We can, however, describe a set of abilities that a creature can or must have to be labeled "intelligent". A few of those abilities, generally seen as important, are:
- Associative memory
- Computational ability
- Linguistic comprehension
Although these are probably not all the abilities that fall within the scope of intelligence, they are probably the most noticeable ones. The first two are a necessity; without either one, it's very hard to recognize an entity as intelligent. The latter is more a sign that intelligence is present: it's hard to imagine intelligence without the ability to communicate, but who is to say that a non-communicative entity is not intelligent? That might lead to a discussion all on its own, though, which is not what I was aiming for.
As one might have guessed from the title, I wanted to discuss Artificial Intelligence, or AI for short. Artificial in this case means "created by an entity instead of by nature". We could of course elaborate on that as well, but let's stick to the subject at hand. AI is less a well-defined science than a question that has kept scientists busy for the past 50 years. The question is, of course: is it possible? The answer is not easy, and considering the controversy around this subject, it apparently hits some nerves. Why that is will be discussed later on.
So, is AI actually possible? To answer that question, first we should consider how we might attempt to achieve an artificially intelligent entity. We could manually craft a human, putting together every stone in the same manner as nature does, but that wouldn't do much good: it wouldn't show that we actually understand the matter. And understanding is just the subject of the discussion, isn't it? Luckily, science has given us a way out here: computers. Why would one think of computers right away to build a "fake" human? Well, it's generally accepted that intelligence is situated in the brain: a vast structure of communicating cells (neurons) that can interactively change state, and with that respond differently to impulses, creating a near-endless amount of possible responses to certain combinations of input. A computer doesn't do much else: accept input, process it, and give output. The ability to store massive amounts of data on hard disks and other memory-holding devices, the explosive computational abilities of these machines (especially compared to human computational abilities), and the, maybe somewhat deceiving, ability of computers to communicate with each other, make them the ultimate candidate for this higher goal: the recreation of Man itself.
Okay, so we have Computational ability. With a little logic added to that, we can easily create Reasoning; that shouldn't be a problem. Now include the massive data storage capacity, and we have Associative memory. All of this is so clinical, though, that no one will mistake an entity created this way for being intelligent. Hurray for that, or our mission would be done already. As I said before, an entity might not need communicative abilities to be labeled intelligent. But in order to find out whether an entity is intelligent, they come in damn handy. The problem, though, is that language is not easy to replicate. Here a problem returns that we've seen before: blindly recreating is not only very difficult, it also limits your endeavour: when you've finished building your wannabe-human, do you fully understand why it understands a certain means of communication (I'll keep it simple and refer to that as language from now on)? But then again, we need to understand the entity as well, so maybe we should just stick to a known language. This raises a very interesting issue, though: do we understand, reason and remember only by the grace of knowing a language, and being able to structure our thoughts on that foundation? And if so, how can we describe and define such a foundation, if the only means to describe it is that same language? It's almost a homunculus, comparable to BGP (Border Gateway Protocol, the protocol that announces IP routes, and does so over... exactly, IP). Sorry, I just saw a weird comparison there; I will not digress any further.
I guess we'll have to stick to a "normal" language for the future entity, to keep the results of the endeavour as comprehensible as possible, no matter how hard that may prove to be. Luckily, formalising English (and other languages as well, of course) into a "meta"-language based on the binary system (true or false, yes or no, the same system computers use) has been part of modern linguistics for a while now. If you're not familiar with binary computing systems, don't be fooled by their apparent simplicity; look at your screen and watch how complex structures built on a basis of "yes" and "no" can become. And, as I suggested before, that's actually what all our thoughts are based on, so it must be quite powerful, right? This doesn't mean we've solved the problem of implementing linguistic abilities into our droid just yet, but specialists are making progress on this subject, and I have faith that language will not be a problem in the end.
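To make that binary "meta"-language idea a bit more concrete, here's a toy sketch of my own (a hypothetical example, not real linguistic formalism): a single English statement boiled down to one yes/no expression.

```python
# Toy sketch: an English statement reduced to the binary yes/no system.
# The sentence and all names are my own invention, purely illustrative.
# "If it is raining and you are outside, then you get wet."

def gets_wet(raining: bool, outside: bool) -> bool:
    # The whole meaning of the sentence collapses into one boolean expression.
    return raining and outside

# Every possible situation is just a pattern of yes/no answers:
for raining in (True, False):
    for outside in (True, False):
        print(f"raining={raining}, outside={outside} -> wet={gets_wet(raining, outside)}")
```

Of course, real language is vastly messier than this, but it shows the principle: meaning expressed entirely in trues and falses.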
Language is a pretty heavy topic when it comes to intelligence. But for a lot of people, creativity is even more of a hot topic. It's one of the abilities that separates us from most other creatures, and, modest as we are, we're proud as hell of it. The question is getting old already, but here too we have to ask ourselves "what is creativity, exactly?" if we wish to be able to recreate it. My personal definition of creativity: the ability to create a completely new concept. This completely new concept may be created using one or more existing ones, combined and/or evolved, as long as the new concept is exactly that: new. This doesn't rely only on an insightful way of thinking; it also exists by the grace of initiative. Creating a responsive being, a creature that only acts when it's asked to react to a certain string of input, is not too difficult. To make it act on its own, without getting (obvious) input, is not too difficult either; keep in mind that variables like time and surroundings, and even the absence of input, can be seen as a particular string of input, generating a reaction. Also, the innate drive to survive is a stimulus to generate action, without the action itself obviously enhancing the chances of survival. All these stimuli can bring about some kinds of creative behaviour, but not all. The biggest problem is on the artistic level: creativity on the level of (at first glance, anyway) totally unnecessary behaviour. One does not need to compose a song to extend one's lifespan, nor does one have to paint a painting to do so.
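The point that even the absence of input can act as input is easy to sketch; the function, the stimuli and the 5-second threshold below are all hypothetical, made up for illustration.

```python
from typing import Optional

# Minimal sketch: "no input" is itself a kind of input that can trigger action.
# All names and the 5-second idle threshold are my own illustrative assumptions.

def react(stimulus: Optional[str], idle_seconds: float) -> str:
    if stimulus is not None:
        return f"respond to {stimulus}"   # ordinary reaction to an obvious stimulus
    if idle_seconds > 5.0:
        return "act on own initiative"    # the silence itself triggered this
    return "wait"

print(react("food", 0.0))   # driven by an obvious stimulus
print(react(None, 10.0))    # driven by the absence of one
```

Seen this way, "acting on its own" is just one more branch in the same stimulus-response table.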
On a deeper psychological level you might argue that engaging in such activities gives your mind rest and peace, and enables you to sleep better (for example because you wrote your frustrations about a topic out of your system in the song, or because the painting you painted is a harmless representation of a frightening image that has been troubling you, and by softening it, you negated the torment it was giving you), but this brings us to a rather funny dilemma: do we not bother to keep this in mind while "inventing" a new intelligent being, or do we blindly incorporate these constraints so our creation comes as close as possible to a human being? It's very much arguable whether the fake human being we're building will need the same artistic outlets to give it mental peace. I'm not saying that mental peace is the only reason for creativity, but what if it's an important drive? The droid might not need mental peace whatsoever to get a better night's rest (hell, who says it will need sleep anyway? Sleep is highly inefficient!), so do we need to fake the need just to make a "better" (read: more accurate) copy of its original? I am of the opinion that artistic creativity will have to be a side effect of the complex system we're building, and not a necessity for it to be recognized as intelligent. Not every intelligent being on this earth is artistically creative, correct?
Now we've finally reached the heaviest topic of all: emotions. I hear lots of people thinking: "Well, that's all fine and dandy, but what about emotions? You can't fake emotions, and not everything is processed by the mind; you also have a heart...". Stop. One little thing: the heart beats and pumps blood. It does nothing when it comes to "feeling", except if you physically touch it, or if it goes berserk on you. All emotions are processed in the mind; they're the result of a chemical state of your brain mass, which can be influenced both by direct stimuli through nerves and by hormones that indirectly affect your state of mind. And those hormones are managed by your brain as well, so let's just agree that emotions exist only in your brain, and nowhere else. On a physical level, they can be both defined and explained. Good, I've dealt with the "how" of emotions; let's not argue about that anymore. Now the "why". To shed more light on that part of the subject, we have to go back in time and rewind evolution a bit. Imagine yourself as a caveman, with no ability to speak a fully comprehensive language, or to communicate in any other way than with simple things like "food there" and "make babies". Being hungry is often viewed as an instinct rather than an emotion. But emotions and instincts are actually much the same thing: both are signals to your brain, telling you to take some sort of action (no action is an action as well); those signals can come from your body (e.g. hunger) or from your mind (e.g. anger). The first is an assessment of a situation within your body, signalling to your mind that you need food. The second is an assessment of a situation in your surroundings (someone took the deer you caught), signalling to your brain that you need to rip this guy's head off. Both stimuli are really just simple instructions from your brain to extend your lifespan; what would you do without food?
And if you let that guy steal your deer, what would happen? Even complex emotions such as love or jealousy can be explained. Love starts with an attraction, merely to create a baby. Then it evolves into a stronger commitment, merely to make sure that the man and the woman stay together, which has a very positive effect on the expected lifespan of the newborn. Jealousy, in turn, is there to ensure that the couple stays together: if you're both a bit jealous, you'll both make sure the other one won't have to look at someone else to be happy.
Now, how can we recreate this idea in an artificially intelligent being? It sounds quite simple, but is it? I mean, we all know what we feel, these feelings are real, and how can an object created out of computer chips, hard disks and wires feel like we do? The problem is that we experience great difficulty understanding emotions. It's not strange that we have this problem. I mentioned the homunculus idea before: we need to use our brains to understand our brains. We experience our emotions as some sort of metaphysical, mystical, "higher power" occurrence. But aren't we overrating our emotions a bit? Aren't we overrating our intelligence a bit? The fact that we are apparently incapable of understanding our emotions, does that necessarily mean emotions are so complex and mystical and metaphysical? I beg to differ. The fact that we experience a certain state of mind (which is literally a chemical state of one's brain) as complex and mystical means that we are only able to see the "bigger picture"; which makes sense! It's impossible to remember and sense each change of each brain cell, and all of the data that the brain contains, and with that the low-level processing of the data that goes through your brain. Compare it to a computer program processing data: you can type in a few rules, but if you were watching the ones and zeroes going in and out of the processor, do you think they would make sense to you? You need an interpreter to understand. And that's what your brain does for you: interpret the data that goes in and out. That it gives you a vague "feeling" or "emotion" does not mean it's not driven by a logical and definable set of processes; you're just incapable of sensing that. And be happy about that, because if you were able to sense all of it, you'd go crazy in a couple of minutes.
To answer the question posed at the beginning of the previous paragraph: a simple drive to survive would be enough (the knowledge that if the machine doesn't plug in to charge its batteries, it will die). Some "clever" rules to make survival easier can be implemented: anger (someone (purposely) hindering the machine from charging its batteries), love (hey, let's make it fun; why not create a need for at least 5 robots to produce a new one); the possibilities are endless.
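Those "clever" survival rules could be sketched like this; a hedged toy model, with every rule name and threshold invented by me purely for illustration.

```python
# Toy model of emotion-as-survival-rule for the hypothetical machine.
# The 0.2 battery threshold, the 5-robot rule and the labels are my own
# illustrative assumptions, not a real design.

def choose_action(battery: float, blocked: bool, robots_nearby: int) -> str:
    if battery < 0.2 and blocked:
        return "anger: remove whatever blocks the charger"
    if battery < 0.2:
        return "hunger: go charge the batteries"
    if robots_nearby >= 5:
        return "love: cooperate to build a new robot"
    return "content: carry on"

print(choose_action(0.1, True, 0))   # survival threatened by someone else
print(choose_action(0.9, False, 6))  # survival secured, reproduction rule fires
```

Crude as it is, each "emotion" here is exactly what the paragraph above claims: a simple instruction serving the machine's survival.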
To wrap this story up, let me ask you something: how do you know that I, the writer, am an intelligent being (except for those who decide that my scribblings are definitely a sign of a total lack of intelligence ;))? If you and I were facing each other, could you tell from my face, my words, my body movements, that I experience emotions the way you do? Is our brain not just a processor and a hard disk combined? If I show you extremely complex behaviour, will you be able to make the distinction between intelligence and... well, complex behaviour? So, let's say we build this machine. We do all we are able to do as described above, and maybe more. Maybe we implement a certain "fault rate". The fact that we humans make mistakes makes us human, right? An intelligent being that doesn't make mistakes, that would be scary. Anyway, after building it, we'll have quite a complex being, you'd have to agree with that. Its behaviour will be complex as well, and the challenge is to make its behaviour complex enough that it becomes very hard to distinguish it from a real human being.
What do you think?
PS: I didn't mention humour at all; another interesting subject when it comes to Artificial Intelligence. Send me your ideas. Is it possible to let your creature be funny? What are the problems, or why is it easy? Send me an e-mail!