Tuesday, May 8, 2012


The Artificial Man Debate
We could have just made history by creating an exact replica of me with your latest invention, the Clonotron 3000. It sounds great in theory, especially given the possibility of omnipresence. But in practice, how is it like me? It's programmed with a digital image of my brain state, presumably giving it the ability to mimic my thoughts and behaviour without flaw. But how can the machine learn like I do? How can it feel what I feel? How can a glorified three-dimensional photocopy of a person do anything other than what it's programmed to do, which in this instance is imitate me? What happens when you take away the script? Will it continue to pretend to be me, follow its own path, or simply cease to function? In other words, is it conscious, and if so, does it have free will?

Before we begin replication you inform me that the machine will think and behave just like me, to the point of potentially being able to replace me. At first it appears the machine does know everything I know, but that doesn't prove it perceives and interprets information exactly as I do. So how do we know it's conscious? Unlike in Dr Dennett's 'Brain in a Vat' thought experiment, we have two bodies as well as two 'minds' to use as guinea pigs in determining whether the replica is 'conscious', that is, aware of its surrounding environment, and whether it acts of its own volition. Let's assume now that the replica passes those tests. Say, on a driving track, it scores the same time, accelerating, braking and taking corners exactly as I do. On a pop quiz, it reacts in the same time and guesses all the same answers, even the wrong ones. Apparently it even likes all the same foods I do, despite having no sense of taste and being incapable of eating. Now, is it conscious? By definition, yes: it is aware of its surroundings, its strengths and its weaknesses, just as I am.
When you look at the information at hand, though, how can something mechanical, organic or both be programmed with someone's conscious mind and then be proclaimed free and independent? That's a bit like telling a slave in Jim Crow America that he owns the farm. It's ridiculous. After all, the machine thinks what someone else is or should be thinking, and is therefore bound to that person by virtue of their thoughts and memories. That's not free will, is it? Then again, what is free will? The Stanford Encyclopedia of Philosophy says this: '"Free Will" is a philosophical term of art for a particular sort of capacity of rational agents to choose a course of action from among various alternatives'. On that definition alone, the premise that the replica has free will is false, because although it might, for all intents and purposes, believe it has options, those options are still limited by its programming to think and act as I do, which constitutes a form of determinism.
So now what? Here's an interesting thought. We could instead argue that both man and machine lead programmed existences and are therefore conscious without free will. On that view, no matter what we think, we cannot choose what we are aware of or how we react to external stimuli. Applied to a person, it sounds ridiculous to suggest there is no such thing as free choice, because unless you're severely mentally handicapped, deranged or in a coma, everyone makes decisions constantly: what to eat, what to wear, how to wear it, what occupation to pursue (or none at all), and so on. These choices all imply at least some form of free will, since they involve selecting from a list of alternatives. On the other hand, there are things we cannot choose, such as whether to breathe or eat, as they are fundamental requirements of our continued existence, unless we're suicidal or deranged or both. The question then is: what determines the choices we make?
A determinist would say that all of our paths are pre-ordained from time indefinite to time indefinite, and that anything with a semblance of choice is merely a ruse to maintain order within the system. But how would that be possible, and who gets to determine what everyone and everything else does? It's arguably perfectly reasonable in the case of the replica, but it doesn't seem a very plausible concept in relation to people, because if nothing else, why were we given the ability to reason if there was no reason to use it? That's a bit like giving someone a remote control but nothing to control, such as a television. Looked at that way, the determinist theory seems quite pointless, but that's another debate. In the context of the Clonotron concept, the replica is using my brain state, and its thoughts and actions are determined by me from start to finish.
Also, what happens when you take away the script? In this particular case, what would happen if you removed my brain state from the machine's program and left it to its own devices? Would it continue to function and start to learn for itself, or would it simply cease altogether? Assuming it is reprogrammable, it's fair to say it would be possible to salvage the replica and 'teach' it to think and learn for itself. That would effectively give it sentience along the lines of Data from 'Star Trek: The Next Generation', the 'Enterprise's' Chief Operations Officer and an android. According to the show's creators, Data is completely sentient, being aware of his surrounding environment and at times even capable of human emotion, thanks to a special 'emotion chip'.
Data is also capable of reasoning and making rational decisions like his human counterparts, giving him free will. On close comparison, the only major differences between him and his human companions are his lack of bodily functions, his insusceptibility to poison or disease, and a near-indefinite life cycle. You could argue he can be corrupted, but so can people. He can be killed (switched off or short-circuited), but so can people. If you wanted a truly sentient android with both consciousness and free will in any sense, you would be better off building something like Data, saving yourself potential embarrassment and a lot of wasted resources.
In summary, it is a great idea, and could be a useful tool alongside cloning as a method of preserving or even immortalising a person. However, the problem with creating a machine that predicts and mimics a person's behaviour from a database of prior experiences and trains of thought is that you cannot justify labelling it truly sentient. It will have the ability to form opinions and reason, just not independently.
References: Dr Daniel Dennett's 'Brain in a Vat' thought experiment, a truly inspirational piece of work; Wikipedia, for background on 'Data' from 'Star Trek: The Next Generation'; Gene Roddenberry, for creating the character I borrowed so rudely for the purpose of writing this (R.I.P. good sir); and the Stanford Encyclopedia of Philosophy.
