Hi everyone,
This last week found me in a challenging entanglement with a good friend, a psychologist in training, discussing what is termed eliminativism in the philosophy of mind. And, as promised, since this is my latest endeavour, I thought I’d share it with all of you (thank you for your vote of confidence; I’m grateful for each one of you joining my undefined experiment!).
In a nutshell, here is what eliminativism or eliminative materialism has to say, according to the Stanford Encyclopedia of Philosophy:
(…) the radical claim that our ordinary, common-sense understanding of the mind is deeply wrong and that some or all of the mental states posited by common-sense do not actually exist and have no role to play in a mature science of the mind.
In other words, eliminativists such as Paul and Patricia Churchland claim that there are no mental states like beliefs or even emotions, and that good old folk psychology needs to be replaced by a better theory, one generated by the neurosciences.
Crazy, right? But let’s get back to the larger context.
The body-mind problem
One of the main problems posed in the philosophy of mind is the constant struggle to define what the mind is, what the brain is, and whether they’re both real or both imaginary. A bit of a mind bend, if you ask me.
In this whole debate, the radical view of the eliminativists is that there is no mind (kind of like The Matrix?) and that the neurosciences will advance to prove that we don’t need concepts such as I believe or I feel.
Or something of the sort. I’m no philosopher.
Let’s listen to Chico the Philosurfer, for he’s so much better at this than I am:
But what about consciousness?
Remember what Chico was saying about Java, the Neanderthal? His belief in fire demons explained the reality of wood-burning until science came along with a better theory, proving that no fire demons exist and it’s all a matter of combustion.
Can the mind and even consciousness per se be mere folk theories that will be explained by different scientific methods operated at the level of our neurons and synapses?
Without any scientific reasoning, operating on pure intuition, I would say this whole argument is too narrow: it doesn’t account for why we experience consciousness and are aware, sometimes even in dream states. Of course, Churchland would propose that this is because we don’t yet have the scientific means. Or maybe it’s just a matter of language, and we should be better than folk theories at creating concepts and scientific explanations.
Yet there is a nugget of wisdom, or at least a worthwhile thought experiment, in equating the brain with the mind or saying there is no mind, even for an ignorant thinker like myself. Brian Tomasik proposes it in his article, The Eliminativist Approach to Consciousness:
Your brain is like a cult leader, and you are its follower. If your brain tells you it's conscious, you believe it. If your brain says there's a special "what-it's-like-ness" to experience beyond mechanical processes, you believe it. You take your cult leader's claims at face value because you can't get outside the cult and see things from any other perspective. Any judgments you make are always subject to revision by the cult leader before being broadcast. (Similar analogies help explain the feeling of time's flow, the feeling of free will, etc.)
And, from the same article:
The physical stance is more impartial and accurate than the phenomenal stance in accounting for all the mind-like processes that exist in the world. However, the physical stance is also more dispassionate. While the brain of a person being tortured does look physically very distinctive -- with lots of activity and long-lasting neural "scars" being created -- appreciating its true awfulness requires imagining ourselves in its position. Without subjective imagination, a physical-stance approach is liable to give way to aesthetic judgments -- valuing more brains that appear more interesting, sophisticated, nuanced, or dynamic. Looking for beauty and novelty is a natural temptation when we view physical objects, but it has little to do with ethics. There's a danger that eliminativism gives too much sway to non-empathic judgment criteria.
I think we should try out the eliminativist view as an exercise, to bend our prior prejudices and intuitions. When we unshackle ourselves from the conventional concept of consciousness, how many other ways might there be to reimagine the world!
Whether we have a mind or our brain is just playing tricks on us for the sake of human evolution, we mustn’t forget about empathy and morality, as Tomasik underlines above. If we focus only on synapses, we might forget about the experience altogether. But can we still be truly empathic or altruistic once we realize that our neural processes are universal?
I really enjoyed this lens and I think that any radical position, on an ideological level, can shed light on the important questions we should ask ourselves.
As for me, I still believe that, beyond the philosophers’ debate on the concept of the mind, it’s more important to be aware of the consequences our experiences have for ourselves and others.
Whether in the future we’ll call it the brain-consciousness, or the special-synaptic-road, or whatever scientific name we coin, is a false problem, albeit a fascinating one. Will we be able to truly understand it? And, more importantly, will we be able to recreate it in the “true” form of AI (yes, the SciFi one)?
What do you think?
I think that the mind is an emergent property of the brain (https://www.psychologytoday.com/us/blog/finding-purpose/201901/how-could-mind-emerge-mindless-matter), but we can still be truly empathic or altruistic, not because we feel that this is the right thing to do, but because we truly understand that this is what keeps our society together.
I also think it is important to truly understand that our brain is creating our experience, and that sometimes it fails to create the right one (the one we sometimes see later, once we get some distance and more information).
Neuroscientist Lisa Feldman Barrett explains that the brain creates our emotions and that we may have more control over them than we think: https://www.ted.com/talks/lisa_feldman_barrett_you_aren_t_at_the_mercy_of_your_emotions_your_brain_creates_them
In principle, there is no reason why we couldn’t create an intelligent machine that behaves mostly like a human, once we understand our own functioning well enough. But engineering mostly finds solutions that are practical: plane wings look pretty different from bird wings, even though very similar designs were attempted at the beginning. For the purposes we need AI for, it could be more practical to design it without consciousness.
And, by the way, philosopher of mind Dan Dennett has a very nice explanation of what consciousness is (https://www.amazon.co.uk/Bacteria-Bach-Back-Evolution-Minds/dp/0393242072), so maybe it will become a bit clearer that, if we want to design it into a machine, we could do it, given enough time.