What would you do? Neuroethics sheds light on our darkest dilemmas

When machines and brains mix, who's in charge? This is the type of problem pondered by neuroethicists such as UAB's Josh May, Ph.D., who examine questions at the crossroads of neuroscience and ethics.
Written by: Matt Windsor


Think about this: A 59-year-old Dutch man with advanced Parkinson’s disease is experiencing debilitating tremors. His doctors implant electrodes deep in his brain, which counteract the faulty signals but cause new troubles. The man starts behaving erratically, making grandiose claims, racking up sizable debts and generally making poor decisions. His doctors adjust the stimulation settings, and even prescribe mood stabilizing drugs, but they don’t help. Eventually, he has to make a choice: Stop the stimulation and be admitted to a nursing home, or keep it and be confined to a psychiatric ward.

This real-life dilemma, pulled from the pages of a Dutch medical journal, illustrates the ethical quandaries that arise from new mind-altering technologies such as deep-brain stimulation, says Josh May, Ph.D., an assistant professor in the UAB College of Arts and Sciences Department of Philosophy.

“The patient chose mental disorder over physical impairment,” said May. “But does a manic state limit one’s decisional capacity? Must we make sure his decision is made when he is suffering from the symptoms of Parkinson’s, free from an overactive mind prone to reckless behavior and delusions of grandiosity?”

These are the kinds of questions you’ll find at the crossroads of neuroscience and ethics, in a new field known as neuroethics. “Neuroscience attempts to understand and manipulate the brain, which is still largely a mystery,” May said. “That makes the ethics of its research especially tricky and fascinating. And it makes the results directly relevant to ethics itself, especially perennial questions about what drives moral and immoral action, how we think about morality, and whether we’re really in control of our actions.”

Brain scans are being used to advance longstanding arguments about ethical theories, for example. And researchers — May included — are taking advantage of the reach of the Internet to investigate ethical dilemmas in entirely new ways.

What follows is an edited version of an email conversation with Dr. May.

How are technologies like functional MRI being used to shape ethical debates?

One famous example involves a philosopher-turned-neuroscientist at Harvard University, Josh Greene. He argues that neuroimaging can help prove utilitarianism — the ethical theory that we should always and only maximize happiness for the greatest number of people. This means that sometimes the ends justify the means, even if the means to the greater good are the most horrific acts you can imagine.

Greene argues that our brains generate intuitions that conflict with utilitarianism, but these are the parts of our brains that involve automatic, emotionally driven processes that aren’t suited for today’s moral dilemmas, like euthanasia, global poverty, climate change, animal rights and health care reform.

Greene wants us to trust the utilitarian intuitions we have, which he argues arise from areas of the brain that developed later in evolution and are more characteristic of our ability to think carefully and override emotional responses. For example, people tend to think it’s immoral to push a large man off of a bridge so that his body stops a train from hitting five other innocent people (assuming only his body could stop the train). Greene says, don’t trust that response! Act for the greater good and push that man! Trust the part of your brain that can override that automatic response and do the cold calculation.

What do you think about this argument?

This research is fascinating and is certainly adding to our knowledge of how our moral brains work. But I do have several worries about the ethical conclusions Greene draws. For example, the brain’s automatic, emotional responses are not clearly untrustworthy, as evidence suggests they’re quite flexible and subconsciously shaped by rational thought. While Greene argues such responses aren’t equipped to resolve complex contemporary moral problems, they may provide a shared moral framework that is precisely suited to resolving moral disputes. After all, if these intuitions are so ingrained in the brain, then they may provide a kind of common moral currency.

In general, I think research on moral judgment is revealing that principles are more important to moral thinking than emotions, even for automatic responses. We certainly have biases, and emotions have their role, but morality involves complex social information and norms that we seem to tacitly navigate. Our automatic moral intuitions shouldn’t so easily be tossed aside, even if they conflict with utilitarianism, as they are guided by sophisticated information processing that is suited to rational social interaction.

So how do you do your research, given that it crosses disciplinary boundaries?

Philosophical research involves a lot of thinking for sure, as you have to consider arguments, objections, etc. Most of my research time is probably devoted to reading — hours upon hours of painstaking reading. Especially since my work straddles multiple disciplines, there’s always plenty to keep up on. Each article or chapter can take hours to read carefully, and I bet I read about 50 to 100 per year. While the reading is usually interesting, much of the writing is quite poor, so getting through it is far from leisurely. The same goes for writing and revising my own papers. For me at least, a lot of my research ultimately involves thinking; but it’s often done while reading, writing or talking with other academics, although some of it does happen when I’m driving, cooking, watching a show, etc. I personally find it difficult to sit in silence for very long just thinking with my chin on my fist!

I do sometimes conduct my own experiments, which requires design, ethics approval, etc. But a large part requires assessing the results, reading about other research, formulating arguments, assessing objections and writing it all up — a whole lot of thinking beyond just data gathering!

And you’ve been doing some interesting experiments using Amazon’s Mechanical Turk service?

I’ve been using MTurk for several years now. It allows me to quickly gather responses from a diverse group of people online, instead of doing paper-and-pencil surveys around campus. Often the studies I do involve variations on the famous trolley problem, which pits promoting the greater good against violating people’s bodily rights.

For example, I’ll present participants with a version of a hypothetical scenario and ask them to provide their moral opinion about it — Did the person act wrongly? Then I compare the responses across scenarios that vary slightly in different respects, e.g., in whether the harm was brought about actively or passively, as a means to a goal or as an unintended side effect. Statistical analysis can tell us whether the differences are significant — providing evidence about whether the variable had a causal impact on responses. This technique — standard in so-called “experimental philosophy” — can help reveal the underlying distinctions we draw and the principles we rely on in moral judgment. I’ve followed a growing trend suggesting that our automatic intuitions are often in conflict with the prescriptions of utilitarianism, but I suggest that these intuitions aren’t necessarily due to morally irrelevant factors and so shouldn’t necessarily be rejected.
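For a concrete sense of what that comparison looks like, here is a minimal sketch in Python. It is not Dr. May's actual analysis code, and the counts are invented for illustration: it tallies "acted wrongly" judgments for two hypothetical scenario variants (harm as a means versus harm as a side effect) and runs a chi-square test of independence to ask whether the difference between conditions is statistically significant.

```python
# Minimal sketch of a between-subjects comparison of moral judgments.
# The counts below are made up for illustration; they are not data
# from Dr. May's studies.
from scipy.stats import chi2_contingency

# Hypothetical tallies of "acted wrongly" vs. "did not act wrongly" judgments
# for two variants of a scenario: harm as a means vs. harm as a side effect.
responses = {
    "harm_as_means":       {"wrong": 72, "not_wrong": 28},
    "harm_as_side_effect": {"wrong": 41, "not_wrong": 59},
}

# Build a 2x2 contingency table and test whether the distribution of
# judgments differs between the two scenario variants.
table = [
    [responses["harm_as_means"]["wrong"], responses["harm_as_means"]["not_wrong"]],
    [responses["harm_as_side_effect"]["wrong"], responses["harm_as_side_effect"]["not_wrong"]],
]
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value would suggest that the means/side-effect manipulation
# influenced participants' moral judgments.
```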

How might these issues affect everyday moral problems?

Here’s a medical example. Many people think a doctor shouldn’t help end a terminal patient’s life as a way to halt their immense suffering. That’s illegal in most states. But we don’t have such a problem with a doctor’s administering heavy doses of morphine to treat severe pain, even if she and the patient know it will hasten death. The patient’s death is then merely a foreseen but unintended side effect. Is this a quirk about euthanasia, or do we systematically treat harming as a mere side effect as more acceptable than harming as a means?

I’m currently working on a series of studies that involve presenting participants with hypothetical cases in different contexts to see if their judgments change just based on the difference in how the harm was brought about. I hope this will inform whether the distinction is a viable one.

I understand you’ve been introducing UAB students to neuroethics with a course first offered this past spring?

It was a seminar, offered at the 200 and 400 levels, serving as a capstone for the Philosophy major. It was a blast! I had some excellent students, some of whom had backgrounds in neuroscience. Mike Sloane, the director of the University Honors Program, sat in as well, and he added some great insights from his discipline (psychology).

We covered a wide range of questions, including: Can the results of a brain scan constitute self-incrimination (thus violating the Fifth Amendment)? Does subconscious neural activity determine our behavior prior to conscious awareness? Is someone responsible for a criminal act if it was the result of a brain tumor? Do psychopaths have such an impaired understanding of morality that they can’t be liable for criminal acts? Which areas of the brain are responsible for moral thought and action? Is there something wrong with making oneself a better person by altering one’s brain directly (e.g., via pills or deep brain stimulation)? Can altering one’s brain yield a fundamentally different person? How does this affect consenting to brain interventions?

I’m hoping to cover similar topics in a future seminar, but perhaps down the line this could become a more regular offering.