The Science of the Mind, and Its Baggage

Beckett W. Sterner

Neuroscience is both frightening and laughable. On one side, cognitive scientists have already begun to chip away at the comfortable illusion that we can consciously control all of our actions. A deeper scientific understanding of the laws governing the brain promises to take the magic out of the mind the way physics drained nature of much of its spiritual mystery. On the other side, many nightmares we may conjure of an all-powerful science of the mind are simply impossible. The ability of science to predict human behavior will be sharply limited by the chaos of the environment swirling around us.

I’m not going to waste your time arguing that neuroscientists should declare some questions off-limits — they will just pursue those questions anyway. In that sense, thinking about the consequences of certain areas of scientific research can seem like a waste of time, ink, and hot air. There is a very good reason to do so, however: how we think about the research matters as much as its results. Not only do scientists’ beliefs shape their research agendas, but in a society where few people have technical knowledge, scientists are also the foremost interpreters of the meaning of new ideas and technology in our lives. When it comes to the mind, nothing could be more dangerous than a science bent on purging the subjective and “irrational” from what it means to be human.

One way to better understand how scientists think about their research is to consider what counts as being creative. In science, creativity often follows from redefining what is “real,” forcing the field into a new paradigm where some questions are deemed unfit for scientific research. For example, in his behaviorist approach, American psychologist B.F. Skinner asserted that psychology could say nothing about the internal states of minds, and that only physical inputs and outputs (i.e., behaviors) were the subject of legitimate science. More recently, some artificial intelligence researchers have asserted that the mind is a computer and that the biochemistry of our neurons must follow logical rules. Each of these paradigms is successful insofar as it captures an objectively true — if incomplete — aspect of reality, but its worldview reaches far beyond its scientific success.

Specifically, how scientists think about their work guides what everyone else learns and concludes from the results. When Einstein presented his theory of relativity, he revolutionized not only mechanics but also our understanding of nature. Before Einstein, though, Newton had developed his physics in search of the harmonious order of the Christian God, who he believed was the creator of the laws he discovered. The idea of a deterministic, clockwork universe run by God had a profound influence on society not because the idea was particularly novel, but because it had physics to back it up. Similarly, Einstein’s relativity theory undermined our belief in an absolute order for morality because no such order seemed to exist in nature.

That a scientist has produced a better explanation of nature, however, is not sufficient reason to privilege their opinions above all others. For neuroscience, the possibilities are dangerous. Humans are not computers, but if the science of the mind seeks to describe only how the two are alike, then the average layman will have little recourse in disputing the expert wisdom of scientists. Alternatively, if scientists claim that our internal, conscious thoughts and emotions are wispy things unfit for study, then it will be even harder to convince hard-nosed “realists” who are only interested in the bottom line that those thoughts and emotions matter. The impacts of these scientific ideas stretch far beyond the discipline of neuroscience itself.

When addressing the moral questions created by science, it’s hard to avoid the atomic bomb as a topic. The ability to destroy a whole city with a single blast put to rest the idea that science could continue oblivious to anything outside its technical disagreements. The bomb is a technology, and the meaning of a technology depends on the ends to which we put it.

Neuroscience is far more dangerous than the atomic bomb because it promises more than just powerful new technologies: it also promises to fundamentally change what we value as meaningful in life. If we can predict how one state of mind will evolve into another — for example, how anger can become a desire for revenge — can we still hold people responsible for their actions? Or will it become legitimate to argue, “You can’t convict me because I can’t control my brain chemistry”?

Now that MIT has launched its expanded brain and cognitive sciences department, it’s time for us to face up to the consequences of what we may learn and, more importantly, of the non-scientific baggage that will come with it. With a topic as important as the human mind, it’s not enough for scientists to blindly muddle along with unexamined values and a practical mindset opposed to reflection. Nor is it enough to hope that ethicists and other humanists will somehow magically intervene to correct scientists’ misconceptions before the damage is done. MIT needs to actively and formally engage its neuroscientists in questioning what consequences their points of view will have outside the scientific community — because not only will many people care about what we discover, they will also have to follow our lead when deciding what it all means.