ImperialViolet

Still waiting in that room?... (18 Mar 2007)

So Aaron writes to defend the Chinese Room Argument (go read that link if you don't already know the argument - you should).

I am absolutely a child who grew up reading Egan (who writes awesome short stories - the full-length novels are not so great), so the idea of uploaded minds and the like is as normal to me as trans-Pacific shipping (I happen to be watching the ships in San Francisco Bay at the moment).

Take a neuron from my brain. It's certainly complex: it has many inputs, both dendritic and chemical, and its behaviour is poorly understood at the moment, but it's nothing magical. It's matter and fields and probably some quantum physics in there. If you think there's something magical about neurons you can stop reading now - but you have a lot of proof to provide.

But if it's not magic, then we could replace a neuron in my brain with a functional clone and I'd still be the same person. I'm not suggesting that we could do it tomorrow, only that it could be done at all. Repeat many times, and you have a conscious person with a non-natural brain.

Unless you think that the result isn't conscious. Did consciousness fade out as more neurons were replaced? If so, then since the artificial neurons are assumed to be perfect functional clones, it seems you do believe there's something magical about the originals after all.

On the other hand, if I have a conscious person with this crazy, rebuilt brain, why can't it run in simulation? The artificial neurons could be implemented using beer bottles and bits of string. It really doesn't matter.
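
As a toy illustration of that substrate-independence point (a sketch only - the names WetwareNeuron, BeerBottleNeuron and network_step are invented for this example, not anyone's real model): two implementations of a neuron that compute the same input/output function are indistinguishable to everything downstream of them.

    # Toy illustration: two neuron "substrates" with identical I/O behaviour.
    # Nothing downstream of fire() can tell which one is wired in.
    from typing import Protocol, Sequence


    class Neuron(Protocol):
        def fire(self, inputs: Sequence[float]) -> float: ...


    class WetwareNeuron:
        """Stand-in for the original biological neuron: weighted sum plus threshold."""

        def __init__(self, weights: Sequence[float], threshold: float) -> None:
            self.weights = list(weights)
            self.threshold = threshold

        def fire(self, inputs: Sequence[float]) -> float:
            total = sum(w * x for w, x in zip(self.weights, inputs))
            return 1.0 if total > self.threshold else 0.0


    class BeerBottleNeuron:
        """A 'functional clone' on a different substrate; only fire() has to agree."""

        def __init__(self, weights: Sequence[float], threshold: float) -> None:
            self.weights = list(weights)
            self.threshold = threshold

        def fire(self, inputs: Sequence[float]) -> float:
            # Deliberately written differently, but extensionally identical.
            total = 0.0
            for w, x in zip(self.weights, inputs):
                total += w * x
            return 1.0 if total > self.threshold else 0.0


    def network_step(neurons: Sequence[Neuron], inputs: Sequence[float]) -> list:
        """The rest of the 'brain' sees only outputs, never the substrate."""
        return [n.fire(inputs) for n in neurons]


    original = WetwareNeuron([0.5, -0.2, 0.8], threshold=0.3)
    clone = BeerBottleNeuron([0.5, -0.2, 0.8], threshold=0.3)
    stimulus = [1.0, 0.5, 0.25]
    assert original.fire(stimulus) == clone.fire(stimulus)

Swap any WetwareNeuron for its BeerBottleNeuron clone and every value network_step produces stays the same, which is all the argument above needs.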

You say that informational processes have to be interpreted to mean anything. But consciousness is a process reflecting upon itself. That happens whatever the hardware is. Yes, it leads to some crazy-sounding conclusions, and it's certainly not good for one's sense of self-importance, but I'm led here by the above reasoning, which seems sound to me, and so I accept it.

Maybe I'll write up one of those crazy conclusions later.