Your Brain Does Not Store or Process Information
Or so claims Robert Epstein in this piece: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
There are quite a few problems with this article and its thesis, but I still think it is worth a read for those interested in the brain, cognition, and consciousness. The article is ultimately an appeal to the field of Embodied Cognition.
Some of the worthwhile bits include passages like the following, as well as the simple recognition of the fundamental importance of metaphor in our thinking, our understanding, and how we see the world:
The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.
But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge. The IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.
Prevailing metaphors for understanding the world are heavily influenced by, and at times dictated by, the technological paradigm of the day. Hindsight lets us see the errors and simplicity in old, outdated metaphors. If nothing else the article forces us to ask: is the prevailing metaphor of our times, that of computation, the final one? If not, those with improved future understanding and metaphors will look on us as we look on our forebears. This puts the onus on the author to explain why computation isn't the final metaphor, and I don't think they achieve this.
As an example, the explanation offered for the dollar-bill memory test doesn't appear satisfactory:
But she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present.
And also:
But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions . . . We simply sing or recite – no retrieval necessary.
The author argues that even if she draws the dollar bill perfectly from memory, she doesn't actually have the bill in memory in her brain. This seems like a tautology of sorts, or else I have something very wrong here. My understanding, put simply, is that memorising a detailed object involves the brain forming an ever-more-accurate pattern representing that object, and that this pattern can be re-experienced, i.e. re-membered, in order to recognise or reproduce the object in future. Surely this counts as storing a memory of that thing, and surely this pattern has been computed by the neural networks of the brain?
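To make the point concrete, here is a toy sketch, emphatically not a model of the brain, of how "storing" and "changing in an orderly way" can be the same thing: a tiny Hopfield-style network records a pattern purely as changes to its connection weights, then reconstructs it from a degraded cue. All the numbers are arbitrary illustrations.

```python
# Toy illustration (not a brain model): a Hopfield-style network
# 'stores' a pattern purely as orderly changes to its connection
# weights, then reconstructs it from a partial, corrupted cue.

pattern = [1, -1, 1, 1, -1, -1, 1, -1]        # the 'memorised' detail

# Hebbian learning: each pairwise connection strengthens or weakens.
n = len(pattern)
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

cue = list(pattern)
for i in range(3):                            # corrupt part of the cue
    cue[i] = -cue[i]

state = cue
for _ in range(5):                            # settle toward the stored pattern
    state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
             for i in range(n)]

print(state == pattern)                       # True: recovered from the weights
```

Nothing here "contains" the pattern as a literal copy; the changed weights and the act of retrieval are two descriptions of the same stored structure.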
Throughout the piece I kept wanting a clear and coherent alternative to be presented. The author claims to offer one but, as far as I can tell, either fails, demands too much prior jargon from the reader, or otherwise dispenses with any clarity. The closest they come to clarity concerns a description of catching a fly ball:
The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
With this and other descriptions (see below) I can't help but think the author is blinded, or hindered, by an incredibly constrained understanding or definition of computation. To me this explanation manages to explain very little, and certainly doesn't show how the act is free of computation. Fortunately the author references and recommends other prominent proponents of Embodied Cognition, and after searching through a number of their blog posts I found what appears to be their best explanation here: http://psychsciencenotes.blogspot.com.au/2015/07/brains-dont-have-to-be-computers-purple.html
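For what it's worth, the heuristic is straightforwardly computational when written down. A one-dimensional relative of McBeath's account, often called optical acceleration cancellation, rests on the fact that for a fielder standing at the landing point, the tangent of the ball's elevation angle rises at a constant rate; "keep that rate constant" is then a feedback rule on a computed optical quantity. A sketch, with illustrative numbers of my own:

```python
def optical_tangent(t, vx=15.0, vy=20.0, g=9.8, x_f=None):
    """tan(elevation angle) of a ball launched from the origin with
    horizontal/vertical speeds (vx, vy), seen by a fielder at x_f."""
    if x_f is None:
        x_f = vx * (2 * vy / g)        # stand exactly at the landing point
    x_ball = vx * t
    height = vy * t - 0.5 * g * t * t
    return height / (x_f - x_ball)

T = 2 * 20.0 / 9.8                     # total flight time
samples = [optical_tangent(k * T / 100) for k in range(1, 100)]
rates = [b - a for a, b in zip(samples, samples[1:])]

# From the landing point the optical tangent grows linearly, so the
# per-step rate is constant; monitoring and cancelling deviations in
# that rate is an input-output computation, however simple.
print(max(rates) - min(rates) < 1e-9)  # True
```

The player may not be solving differential equations, but tracking an optical variable and mapping its deviations onto motor corrections is still a computation under any reasonable definition.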
Their explanation may be even more obtuse, but one of the key examples or analogies they rely on is the polar planimeter. Knock out the planimeter and you knock out one of the foundations of their argument. They claim the planimeter doesn't actually compute the area of the shape it traces, despite the fact that it takes an input (moving the needle around the edge of the shape) and produces an output (the area thus enclosed). It seems to me that the planimeter does indeed compute the area of the shape, or am I missing something here?
The computation, or algorithmic function, for determining the traced shape's area is encoded in the design of the device and its gears, or so it appears to me. They seem to claim something along the lines of "simply by interacting with its environment the planimeter naturally produces a suitable response to that environment", but this seems terribly hand-wavy and imprecise, and again suffers from a restricted definition of computation. Others have referred to planimeters as analogue calculating devices; surely they can equally be called analogue computing devices?
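For the record, the planimeter mechanises a line integral: by Green's theorem, tracing a closed boundary accumulates A = ½∮(x dy − y dx), which equals the enclosed area. A numerical sketch of that same accumulation (the shape and step count here are arbitrary choices of mine):

```python
import math

def traced_area(points):
    """Accumulate signed area as a planimeter does while its needle
    moves from point to point around a closed boundary: each step
    contributes 1/2 * (x dy - y dx) to the running total."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += 0.5 * (x0 * y1 - x1 * y0)   # incremental 'wheel roll'
    return abs(area)

# Trace a unit circle in 1000 small steps, as if guiding the tracer arm.
circle = [(math.cos(2 * math.pi * k / 1000), math.sin(2 * math.pi * k / 1000))
          for k in range(1000)]
print(round(traced_area(circle), 3))        # ≈ 3.142, i.e. pi
```

The gears and wheel perform exactly this accumulation in analogue form; that the algorithm is embodied in brass rather than silicon doesn't make it any less a computation.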
A problem with this passage and those that precede it:
Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading.
This seems to claim that the human mind, cognition, and consciousness are not physical, that they exist apart from matter and physical law. Few people have time to entertain such simplistic dualism. All evidence points to these things having a physical basis, and as such claiming substrate independence for the phenomena is reasonable. I suspect the author will only entertain the most basic of computational substrates as an alternative, when other substrates can easily be posited that address his arguments against those.
Finally, for this passage:
Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it.
I would ask: if fMRI studies are now allowing us to partially determine what someone is thinking, then surely running a complete connectome simulation and measuring its activity in the same way would allow us to determine what that brain was thinking, even without a body?
Planimeters
[1] https://www.youtube.com/watch?v=_W35iDhRfZg
[2] https://www.youtube.com/watch?v=kdxPEZnv-U0
[3] https://www.youtube.com/watch?v=l_k_0hRpOA4
[4] https://en.wikipedia.org/wiki/Planimeter