Thursday, November 13, 2008

An MM-Theory Twist on BIVs


Okay, so like I said in my previous entry, I'm going to touch on certain questions that I feel were left lingering on the official website (even though I said the paper 'The Universe and God' pretty much covers all loose ends - heh heh). I'm also going to drop the formality a bit in my style of writing - make it a bit more free-flowing and 'human'.

Anyway - first topic: brains in vats.

I'm actually going to do something different from what I said I'd do in the previous post. I said I'd explain a way that brains in vats might have wholly different experiences than brains in craniums - even though their internal neural configurations would be the same and the same signals would be fed into them. Since posting that, I've had time to think about it, and now I'm convinced that the experience wouldn't be different at all. But let me explain my thoughts anyway, since I still think they shed some important light on the questions that originally led to the idea.

Philosophers abbreviate 'brain in a vat' as BIV, and so will I. A BIV is essentially what it sounds like. Take a human brain out of the cranium and place it in a vat of life-supporting liquid. Plug cables into all the neural entry points where the brain would have received external information had it been left in the cranium (i.e. the senses). Information is fed into the brain through these cables from a super-complex computer capable of mimicking the real world (kind of like in The Matrix). The computer is programmed to stimulate the brain exactly as it would be stimulated by the natural environment of a typical human being living a typical life. We assume that such a brain would be completely fooled by the information fed to it, thinking the computer-generated world it was being presented with was in fact the real world, and it would have no indication otherwise.

Philosophers typically take it for granted that this experience would feel no different than if the brain had been kept inside the cranium - but I wondered about that. I had reason to suppose - given MM-Theory - that the experience would be nothing at all like it - that it would be indescribable in terms of ordinary human experiences (yes - even though the physical internal structure of a BIV would bear no signs that it had indeed been "envatted", as they say).

Why did I suppose this?

It starts with a simple thought experiment (keeping MM-Theory in mind at all times): consider a thought neuron from the human brain. Assuming that one thought can be said to correspond to the firing of a single neuron, it is fair to ask what would happen to the experience of this neuron should we remove it from the brain, place it on a table, and stimulate it with an electrode. Would it still correspond to the same thought? To a thought at all?

I don't see how it could. A thought, according to MM-Theory, can only be thought if it is entailed by the appropriate antecedent experience. But this implies that the antecedent experience in question would have to correspond to the electrode. Strictly speaking, this is not altogether contradictory to what MM-Theory says - it has ways of justifying how the same experience can correspond to both MODs and electrodes - and furthermore, the thought being entailed need not proceed from exactly the same experience - one quality of experience can be entailed by more than one quality of antecedent experience.

But now consider this. Suppose you took a neuron from the visual cortex and gave it the same treatment. Would it still correspond to the same visual experience? In this case it seems especially odd, because there would be absolutely no discernible difference between it and the case in which the neuron was taken from the cognitive centers (assuming both neurons are relatively the same in structure). Why would one correspond to a thought while the other corresponds to a visual experience?

To this, I brought in the following insight: perhaps the firing of a single neuron doesn't correspond to a particular experience but only to the way the experience is processed. The experience fed into it would correspond to the electrode's activity, and the structure and predispositions of the neuron would only guide the manner in which the experience would subsequently flow. The neuron therefore represents a sort of algorithm - that is, a structured pattern by which the flow of experience is regulated. It would be like a mathematical formula - it leaves open the possibility for any quantitative value to be assigned to the variables, but rigidly determines the manner in which those values are calculated.
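
If it helps to see this "neuron as algorithm" idea in concrete terms, here's a minimal sketch in Python (the function and the numbers are toy inventions of mine, not anything from MM-Theory itself): the neuron is a fixed rule that rigidly determines how its input is processed, while the quality of the output depends entirely on what's fed in.

    # Toy sketch: the "neuron" fixes HOW input is transformed (the formula),
    # while the character of the output depends on what is fed into it.
    def neuron_as_algorithm(input_signal, weight=2.0, threshold=1.0):
        # A rigid rule, like a mathematical formula: the same calculation
        # no matter where the input comes from.
        return max(0.0, weight * input_signal - threshold)

    # The same algorithm applied to inputs from different sources:
    for source, value in [("neighboring neuron", 0.9), ("electrode", 0.6)]:
        print(source, "->", neuron_as_algorithm(value))

The point of the analogy is just that the neuron's structure corresponds to the formula, not to the values flowing through it.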

And if this is true at the level of single neurons, why not for larger structures - whole MODs, for example? And if this is true for whole MODs, why not for whole brains? That is, if we removed the brain from the larger system of which it was a part (i.e. the body) and plugged it into artificial stimuli, like electrodes and other such cables, what reason would we have to expect that the experiences corresponding to its stimulation feel anything like they do when it is kept inside the cranium? The experiences it would undergo would be determined, not by its internal structure, not by the configurations of its neural networks, but by the experiences fed into it by the machinery it is hooked up to. In other words, since the experiences corresponding to the computerized machinery it is hooked up to would have to differ notably from the experiences corresponding to the ordinary organic structures of our usual anatomy - such as the optic nerve, the inner ear structures, or the tactile nerves stemming from our skin - those differences would remain as the experiences morphed into those corresponding to the activation of our primary sensory regions. In effect, our sensory experiences in the BIV scenario would be nothing like those in the mundane scenario of brains in craniums.

This is how I arrived at the alternate version of MM-Theory, and now I'll explain why I abandoned it. In so doing, though, I'll have to find some other way of explaining what happens to a neuron when you take it out of the brain and stimulate it with an electrode. The problem is this: in pondering BIV thought experiments, philosophers tend to consider only cases wherein the BIV is there to stay - supposedly for the remainder of its life. It may have been taken from a living human body, but no one ever asks what would happen if it were placed back into one. So let's ask this question.

Well, it seems like what would happen is nothing. We might think at first that the person would report all his wild and crazy experiences and how different they were from the real world, but this would entail that his brain could distinguish between the BIV experiences and the post-BIV ones, which in turn would entail that his brain was processing information differently in the post-BIV state than in the BIV state. But this contradicts the parameters of the thought experiment. A BIV, at least in the thought experiment, does not process information any differently than a non-BIV. Its internal structure and neural configurations are not different in any significant way from those of non-BIVs. In effect, when plugged back into a human cranium, the brain would have no basis upon which to express its real-world experiences any differently than its envatted experiences; it should not physically react to real-world stimuli as if they were in any way different from those of the envatted state. Such a brain would not notice any difference.

My gut reaction to this thought was to say that the brain in question would not be epistemically aware of any difference, but that experientially there would be one. However, two problems exist for this reasoning:

1) One's epistemic awareness cannot be wrong. To be epistemically aware of seeing an apple, say, is only possible if you are indeed seeing an apple (epistemic awareness can only be entailed by the experience one is epistemically aware of). So if the post-BIV is indeed epistemically aware of the same experiences he was epistemically aware of in the envatted state, the difference between experiences cannot be at the level of epistemic awareness - it must lie below that level. But this leads to the second problem:

2) If we say that the difference exists below the level of epistemic awareness, then it seems to defeat the purpose of proposing any difference at all. Let me explain. Consider the thought experiment that led us to this proposal in the first place - namely, removing a single neuron from the brain and stimulating it with an electrode. Whatever experience we are epistemically aware of corresponding to this neuron, it would have to be the same experience whether the neuron is in the brain or removed from it and connected to the electrode. If there is a difference at all, it would have to be below this level. But that means that what's being entailed - whether the neuron is plugged into the brain or into an electrode - is the same experience at the level of epistemic awareness. So, for example, if one was epistemically aware of the thought "I should clean the house", that would have to be the same thought whether the neuron is in the brain or connected to an electrode. Furthermore - and this is where it becomes paradoxical - a neuron taken from the visual cortex - say, one corresponding to the sight of red - would, when connected to an electrode, have to correspond to that very same thought, since, as we've seen, its physical structure is indistinguishable from that of the thought neuron considered earlier. This is especially troubling because epistemic awareness of a thought is obviously different from that of redness.

An alternate solution to this problem is to say that the post-BIV simply has no recollection of his envatted state, that any memories that were physically encoded in his brain while envatted are themselves experienced differently once taken out of the vat. After all, memory is not brought to consciousness out of a void; it is triggered by current conscious thought or other experiences. If these experiences, now different from the envatted state, are the input fed into the neural circuitry corresponding to memory, and if this neural circuitry represents only an algorithm as opposed to the precise quality of the corresponding experiences, then the output, namely the memory itself, need not bear any resemblance to the output of the envatted state. The neural circuitry of memory only represents the way information is processed, not what experiences felt like. It is another way of saying that if one experiences reality as one usually does in the non-BIV state, then one will recall his memories as coming from that reality with exactly the qualities one would expect from that reality. If one experiences reality as one would in the envatted state, however, his memories will likewise be recalled as though they came from that reality with precisely the quality expected of that reality. It wouldn't matter whether those memories were accurate or not - it just matters that they define a particular reality in a particular way. In other words, if that's what one remembers happening in reality, then that's reality for that person at that time.
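
To put the same point in toy-code form (the names here are mine alone, purely for illustration): recall is a fixed procedure, and the recalled memory inherits its quality from whatever current experience triggers it, not from the stored trace itself.

    # Toy sketch: the memory circuitry is an algorithm; the quality of the
    # recalled memory comes from the quality of the current experience
    # that triggers the recall, not from the stored trace itself.
    def recall(memory_trace, current_quality):
        return "%s (recalled with the quality of a %s reality)" % (memory_trace, current_quality)

    trace = "a walk through the park"
    print(recall(trace, "real-world"))  # how the post-BIV remembers it
    print(recall(trace, "envatted"))    # how the same trace was recalled in the vat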

But this too was eventually abandoned after considering the following thought experiment: the half-BIV. A half-BIV is a brain that, while still inside the cranium, is half plugged into the machinery it would otherwise be fully plugged into if it were a full BIV. Suppose, for instance, that we plugged that same machinery, run by the same computer with the same program, into only half the subject's brain, but with one important difference: the machine doesn't simulate the real world but replicates it - that is, it takes real-world information (through cameras, touch pads, sound recorders, etc.) and digitizes it. The digitized information is used to reconstruct the real world within the hardware, and then it is sent to the subject's brain. Because the subject's brain is receiving this information from a completely digital world, there shouldn't be any significant difference between this case and that of the full BIV. So suppose his entire left hemisphere is being fed information from the computer while the right hemisphere is being fed information from the real world. Would he then be able to contrast and compare the difference in experiences? Would the world appear half bizarre, half normal? Again, the same problems arise: he couldn't possibly behave or talk as if there were any difference, because there would be no difference in the physical effects or configuration of neural networks between the two hemispheres; it would be exactly as if both hemispheres were being fed unmediated information. And we couldn't say that he was misremembering the machine-fed experiences, because memory has nothing to do with it - they're happening now.

This was the final straw. I decided right then and there that the half-BIV thought experiment spelled disaster for the alternate-experiences theory of BIVs. It just wasn't working.

So then we come back to the original question: if BIVs experience the digital world no differently than brains in craniums (BICs) experience the real world, then why would a single neuron taken out of the brain experience stimulation by an electrode any differently than a neuron in the brain experiences stimulation by neighboring neurons? To get a solution to this problem, we have to go back to the Basic Theory - we have to go back to the theory of what "shapes" an experience - namely, the neural configuration of MODs.

This is what made different experiences different. It isn't nearly as problematic to suppose that a full MOD taken from the brain would correspond to the same experience when stimulated by electrodes because the electrode in question would have to be of a very particular design and programmed to stimulate the MOD according to a very particular pattern. If we recall the analogies drawn between MODs and computer circuits, we can describe MODs as having a set of "input lines" - these would be the neural sites where stimuli like other neurons, neurotransmitters, or electrodes have their initial effects on the MOD. Therefore, to stimulate the MOD as the brain would, an electrode would have to be "plugged into" the MOD at each of its input lines. There might be a specific pattern of input, such as one input line being stimulated before another, or every odd one in synchrony with each other and every even one being stimulated at random, in order for the MOD to function as it normally would when plugged into the brain. The electrode would have to mimic this pattern as well. The electrode, in other words, would have to be tailored to work with this one MOD only, or at least a very select few MODs. It could not replicate the idiosyncratic pattern of activity of any old MOD as manifested in a normally functioning brain. It may invoke atypical patterns, but then we wouldn't be talking about the same corresponding experience.
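
To make the "input lines" talk a little more vivid, here's a rough Python sketch (entirely my own toy construction, not anything formal from MM-Theory) of the kind of idiosyncratic stimulation schedule an electrode would have to reproduce for one particular MOD - using the example pattern above, where the odd-numbered input lines fire in synchrony and the even-numbered ones fire at random.

    import random

    # Toy sketch of a MOD-specific stimulation schedule: odd input lines
    # are stimulated together on every timestep; even lines fire at random.
    def stimulation_schedule(num_lines, timesteps, p_random=0.3):
        schedule = []
        for _ in range(timesteps):
            active = [line for line in range(num_lines) if line % 2 == 1]
            active += [line for line in range(num_lines)
                       if line % 2 == 0 and random.random() < p_random]
            schedule.append(sorted(active))
        return schedule

    for t, lines in enumerate(stimulation_schedule(num_lines=6, timesteps=4)):
        print("t=%d: stimulate input lines %s" % (t, lines))

An electrode that couldn't reproduce this exact schedule - or whatever the real pattern happened to be - wouldn't be stimulating the MOD as the brain would, and so wouldn't correspond to the same experience.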

We could still bring into question the versatility of this manner of invoking particular neural activity, and thus particular experiences - that versatility being the multitude of devices we could use to stimulate a particular MOD in the same pattern as would be found in a functional brain. For example, instead of using electrodes, I could plug the MOD into my computer and program it to behave as if it were plugged into a functional brain. The question would be whether it still makes sense to say that the vast array of devices one could use to invoke this pattern of activity could all correspond to the same antecedent experience. However, this problem is not really a challenging one. First, as we noted earlier, there needn't be only one such antecedent experience - a whole slew of them might exist, each fully capable of entailing the experience corresponding to the MOD in question. Second, the antecedent experience would not be the only factor in the equation. We must also take into consideration the experiences of the UOS (Universal Operating System) corresponding to the atomic organization of the MOD. Those too count as experiences, and it is only in conjunction with them that the experiences corresponding to our stimulation device can entail the one corresponding to the activation of the MOD. Set up that atomic arrangement differently, and you'll have not only a different MOD but a different resultant experience. So the antecedent experience (the one corresponding to our stimulation device) cannot entail the one corresponding to the activity of the MOD alone - it needs the assistance of certain experiences belonging to the UOS.
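
In toy-code terms (my own illustrative names again, nothing official), the point is that the resultant experience is a function of two arguments, not one:

    # Toy sketch: the antecedent experience alone doesn't entail the MOD's
    # experience; it does so only jointly with the UOS experiences
    # corresponding to the MOD's atomic arrangement.
    def resultant_experience(antecedent, uos_arrangement):
        return "experience entailed by (%s + %s)" % (antecedent, uos_arrangement)

    print(resultant_experience("electrode activity", "atomic arrangement A"))
    print(resultant_experience("electrode activity", "atomic arrangement B"))  # same antecedent, different result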

Perhaps we can make this into a general principle, one that is long overdue: the capacity of one particular experience to entail another is equal, and corresponds, to the capacity of the physical system corresponding to the first experience to have effects, necessitated by natural law, either on itself or on another physical system - effects which, in either case, correspond to the second experience. What this says, in simpler words, is that if we are not surprised by the fact that an electrode can stimulate a particular MOD (because the laws of physics necessitate it), why should we be surprised by the implication that the one corresponding experience entails the other? If the physical event is necessitated by natural law, then the experiential event is equally necessitated by the laws of entailment. That's what the physical systems and the laws of nature represent, after all. We could have argued this all along, of course, but it seems far less absurd when considered in the context of whole MODs rather than single neurons - primarily because whole MODs are far less generalizable and require very particular preconditions in order to be stimulated.

***
PRINCIPLE: The capacity of one particular experience to entail another is equal, and corresponds, to the capacity of the physical system corresponding to the first experience to have effects, necessitated by natural law, either on itself or on another physical system - effects which, in either case, correspond to the second experience.
***

But what should we say of the case of the single neuron? We shouldn't leave such danglers lingering. First, I'd be loath to suppose that a thought, or any experience we as humans are epistemically aware of, would correspond to something as simple as a single neuron. I can see the allure of such a notion when I think how thoughts and the firings of single neurons seem to have in common the likeness of a "unit of information". But I have my doubts. I doubt a thought corresponds to a single neuron. It more likely corresponds to a particular pattern of neural activity across various centers in the brain. To suppose it were the product of a single neuron firing would imply that it could be wiped out - made impossible to think - by the destruction of that single neuron alone. A pattern of neural activity, on the other hand, accounts not only for the unit-like impression of this sort of mental information, but for its continuous flow as well. You see, even though our thoughts feel like units, there is great difficulty in spotting exactly where one thought ends and another begins - at least in the flow of time (you can spell out its beginning and end in how it's expressed - a few English sentences usually do the trick). The unit-likeness of a thought can be seen in the unit-likeness of a particular pattern of neural activity. With no other pattern quite like it, it stands out as a unit among other unit-like patterns. A single neuron added to or removed from that pattern changes the pattern. On the other hand, the flow from one pattern to another would not likely be a matter of discreteness or zero overlap - one pattern would likely merge into another, thereby accounting for the elusiveness of the beginnings and ends of a particular thought within the flow of consciousness. In any case, I see no reason to suppose that any of our experiences - the ones we are epistemically aware of, at least - correspond to the firing of a single neuron.
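
One last toy sketch (my own, purely illustrative) of this "thought as pattern" idea: represent a pattern as the set of neurons active in it, and both the unit-likeness and the blurred boundaries fall out naturally.

    # Toy sketch: a thought as a unit-like pattern of neural activity.
    thought_a = frozenset({3, 7, 12, 42, 99})              # one pattern of active neurons
    thought_b = (thought_a - {99}) | {100}                 # one neuron swapped: a different pattern

    print(thought_a == thought_b)                          # False - changing one neuron changes the unit
    print(len(thought_a & thought_b), "neurons overlap")   # overlap: no sharp boundary between patterns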

However, this doesn't help us much when the question is turned to the firings of single neurons themselves. Is the experience corresponding to the firing of a single neuron in our brain the same for each and every instance? Is it the same whether the neuron is in the brain or taken out and stimulated by an electrode? If so, how do we get the kaleidoscope of qualities that seem to come with MODs higher up in scale? The reader might notice that this question seems eerily similar to the one that plagues us down at the subatomic level of things - that is, the question of how experiences corresponding to things like the behavior of fundamental particles, with their electromagnetic pulls and pushes, can give rise to the multitude of qualities we enjoy as a part of human life, and presumably life in the mind of the cosmos. This question pivots on our assumption that the experiences in question - those corresponding to the behavior of fundamental particles - are something akin to pains and pleasures. How do we get things like redness or musical melody out of pains and pleasures?

Well, as it turns out, my insights of late - the ones relating to the BIV problem just discussed - have shed new light on this problem. I may have come up with my best answer to this question yet. It will account for the plethora of qualities we experience at our level of scale in terms of both fundamental particles and the firings of single neurons - but I think I'll divulge this in a future post.
