One of the more interesting things about Mead’s concept of mind is that it suddenly makes true artificial intelligence a lot more feasible… or at least, possible.
In case you haven’t skimmed your copy of Mind, Self, and Society yet (available free at the Mead Project), Mead basically points out that our mind – our consciousness – is a process, not a thing. And as a process, it exists in relationship to other things. That includes our internal dialogue, but is not limited to it. Therefore, what we consider mind consists of interactions and relationships with other things, which means that mind cannot be limited to just our bodies.
Read that last sentence again.
Our mind is, according to Mead, not limited to our physical bodies. Yes, the biological brain impacts it – the meat is part of the process – but it is also in the symbols that you and I share (like these words). In a very real sense, my writing this blog is literally part of my mind; your reading this is literally part of yours. [The pertinent quotation is below]
What is important to Mead, in the concept of mind, is the self-organizing symbol usage. Language is the key way that humans manipulate symbols like this; it’s the ability to pack so much meaning and knowledge into the process that distinguishes the human mind.
Mead wrote in the early 20th century (or, rather, lectured and his students wrote it down). As a result, he has the rather predictable human-centric viewpoint of that time period. Since then, we’ve learned that other species have, even under the most skeptical interpretation, been able to learn signs and symbols from us and to self-organize concepts. I think it’s fair to say that these other species are capable of mind (even if you’re willing to argue whether or not they develop it in the wild).
So let’s take this a step further. We’re creating near-autonomous computers already (the Phoenix lander essentially had to land itself due to lightspeed delays; this resulted in several “judgement calls” on its part). What Mead’s theory of mind suggests is that while biology (or the “hardware” running the mind) matters, mind is not limited to our brains per se. If we can teach computers to learn – efforts at which are still continuing – and interact with others in a social environment, then it is a very real possibility that they could develop “mind” in exactly the same way you have one.
This is in direct opposition to Searle, who argued in Minds, Brains and Science that the specific biology of our brains is necessary for our minds. While Mead acknowledges the influences of biology, he also recognizes that social forces can exist, evolve, and change outside of a biological matrix.
The key paragraph I’m interpreting comes from here:
It is absurd to look at the mind simply from the standpoint of the individual human organism; for, although it has its focus there, it is essentially a social phenomenon; even its biological functions are primarily social. The subjective experience of the individual must be brought into relation with the natural, sociobiological activities of the brain in order to render an acceptable account of mind possible at all; and this can be done only if the social nature of mind is recognized. The meagerness of individual experience in isolation from the processes of social experience – in isolation from its social environment – should, moreover, be apparent. We must regard mind, then, as arising and developing within the social process, within the empirical matrix of social interactions. We must, that is, get an inner individual experience from the standpoint of social acts which include the experiences of separate individuals in a social context wherein those individuals interact. The processes of experience which the human brain makes possible are made possible only for a group of interacting individuals: only for individual organisms which are members of a society; not for the individual organism in isolation from other individual organisms.