Cook et al. present a thorough and convincing argument that the functionality of mirror neurons originates in sensorimotor associative learning rather than serving a specific evolutionary purpose of "action understanding". The most convincing aspect was the meta-analysis of experiments searching for gesture imitation in infants, in which most facial expressions did not elicit imitation across several experiments. It would be interesting to explore why tongue protrusion of all things, even by a mechanical or disembodied tongue, may be innately ingrained in infants.

Otherwise, it seems the purpose of this article is to redirect research in the area toward associative learning, where the replicated neuronal activity is more closely tied to the environment humans have been exposed to. Combining this theory with Ericsson's theory of deliberate practice in forming expertise could create a strong argument for ensuring the highest possible levels of environmental stimulus at a very young age to promote superior development.

Additionally, while the authors use the occipito-temporal cortex and its role in reading as evidence of a brain area serving a purpose it was not evolutionarily adapted for, they present the area as the "system adapted for generic object recognition" (p. 182). This could be more aptly named an "arbitrary symbol recognition" system, and it could be an area of investigation for a type of code-processing mechanism.
Mirror neurons: the lighting up of parallel brain function when watching another agent (human or animal) perform a task. Does this mean you understand? Humans claim their difference from every other animal species lies in their ability to think and understand. Mirror neurons have been demonstrated in monkeys. Do monkeys understand? Yes, the same neurons fire when an animal performs an action as when it watches another perform that action, but does this automatically mean understanding? Is simple recognition understanding? I do wonder whether mirror neurons could offer some insight into the hard problem. Is there any basis for them to explain empathy? Is the reason we can empathize with another human that parallel neurons fire in our brains when we are sad as when we see someone else sad?
There was a question raised in class today as to what a causal explanation of how and why we cognize would actually look like. In the example of an apple falling from a tree, our explanation is gravity. I'll spare any equations here, but suffice it to say that you could open your intro physics textbook, turn to the section on gravity, and have your causal explanation of why the apple fell to the ground. From the discussions we've had in class, I understand that we're happy with that explanation and we want to find an analogous one for cognition. That said, consider the following line from the conclusion of the article: "The associative account … is open to the possibility that MNs play one or more important roles in the control of social interaction" (p. 191).

Again, based on our discussion, even if we could nail down exactly which processes in the brain occur to allow us to socially interact (maybe some combination of MNs among other neural pathways), we've said this wouldn't really be the answer we're looking for. Can anyone clarify why this is the case? At this point, going back to the example of gravity, I imagine it like this: explaining neural activity to explain cognition is a little superficial, in the same way as noting that the air particles are disturbed as the apple travels from the tree to the ground. It's true that this happens each time an apple falls, but it doesn't explain why or how the apple falls. If anyone can put a finer point on the above or point out anything I'm missing, that would be great.

Lastly, if my reasoning above is why we aren't satisfied with a neural explanation of cognition, this raises again the question of what an acceptable explanation might look like. Would it simply be explaining the mechanisms of a reverse-engineered machine that could pass some level of the Turing Test?