Systems and the Future of Bioethics

Mark D. Robinson, PhD, Assistant Professor of Interdisciplinary Studies at Creighton University, examines whether there is an obligation to create systems that help to support or sustain, even in small ways, ethical aims.

How Can Software Expand Bioethics?

Written by Mark D. Robinson, PhD

To suggest that there is a role for software in ethics is not at all to say that it is a fix. Software cannot “fix” people. Rather, what I propose is far, far more nuanced. Given that be-a-good-person modes of moral intervention (training people on ethics, sanctioning bad actors) have proven limited in their effectiveness, is there an obligation also to create systems that help to support or sustain, even in small ways, ethical aims? Here are some points to consider around break-the-glass (BTG) protocols, moral reflection, systems, and bioethics.

What if Such Software Went Further?

Some protocols around patient health information allow patients to digitally customize which aspects of their record are accessible, and to whom. Might patients wish to hide reproductive health history from billing departments, which could be granted only limited access? What about dynamic consent, whereby patients can change their preferences electronically as their health status, information, and needs change? Only through digital means could such preferences be updated over time to reflect “reasonable” patient requests.
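
To make the idea concrete, here is a minimal sketch in Python of what dynamic, patient-controlled consent preferences could look like. The category names, roles, and data structure are hypothetical illustrations for this essay, not a description of any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentPreferences:
    # Maps a record category (e.g., "reproductive_health") to the set of
    # staff roles the patient has authorized to view it.
    allowed_roles: dict[str, set[str]] = field(default_factory=dict)
    updated_at: datetime = field(default_factory=datetime.utcnow)

    def update(self, category: str, roles: set[str]) -> None:
        """Dynamic consent: the patient revises access as needs change."""
        self.allowed_roles[category] = roles
        self.updated_at = datetime.utcnow()

    def can_view(self, category: str, role: str) -> bool:
        """Check whether a given role may view a given record category."""
        return role in self.allowed_roles.get(category, set())

# Example: a patient hides reproductive health history from billing staff,
# then later revises that preference electronically.
prefs = ConsentPreferences()
prefs.update("reproductive_health", {"treating_physician", "nurse"})
assert not prefs.can_view("reproductive_health", "billing")
prefs.update("reproductive_health", {"treating_physician", "nurse", "billing"})
```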

Moral Reflection

The ability of software to pose morally meaningful questions could be exploited much more fully. Future versions could give patients, at the moment they are selecting their privacy preferences, critical information about what those options mean. By providing information at the moment of decision-making, systems could help the broader aim of informing patients well enough to give meaningful consent. This also matters because patient desires for privacy must be balanced against the need to ensure good care. For patients who have chosen to keep aspects of their record private (sexually transmitted infections, for example), software could give authorized providers digital prompts around “best practices” for supporting specific patient populations, providing crucial information and strategies.
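
As one illustration of decision-point prompts, the sketch below pairs patient-facing explanations (shown at the moment a privacy option is chosen) with provider-facing best-practice reminders for restricted records. All categories, option names, and wording are assumptions made for the example.

```python
# Illustrative prompt content only; not the wording of any existing product.
PRIVACY_OPTION_EXPLANATIONS = {
    "hide_from_billing": (
        "Hiding this category from billing staff may delay insurance "
        "processing; your treating clinicians will still see it."
    ),
    "share_with_researchers": (
        "Sharing de-identified data supports research; you can withdraw "
        "this permission at any time."
    ),
}

PROVIDER_PROMPTS = {
    "sexually_transmitted_infection": (
        "This patient has restricted access to STI records. Review best "
        "practices for confidential counseling before the visit."
    ),
}

def explain_option(option: str) -> str:
    """Show patients what a privacy choice means at the moment they make it."""
    return PRIVACY_OPTION_EXPLANATIONS.get(option, "No additional information.")

def prompt_for_provider(restricted_category: str) -> str | None:
    """Give authorized providers best-practice guidance for restricted records."""
    return PROVIDER_PROMPTS.get(restricted_category)
```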

Cognitive Friction

For many critics, modern technology has destroyed critical thinking. These critics miss the point. Software can, by design, force thinking. Software designed to compel “cognitive friction” makes users mentally “slow down”: asking users moral questions, for example, may compel something approaching a momentary mental focus, an ethical “slow down.” BTG processes work by forcing this same cognitive friction, intentionally interrupting users’ default modes of action and thought.
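
Here is a rough sketch of what engineered cognitive friction might look like in a break-the-glass override: the user must state a reason, sit through a brief enforced pause, and explicitly confirm before a restricted record opens. The function name, pause length, and audit format are hypothetical, chosen only to illustrate the idea.

```python
import time

def break_the_glass(user_id: str, record_id: str, ask) -> bool:
    """Sketch of deliberate cognitive friction before overriding a privacy
    restriction. `ask` is any prompt function (e.g., input in a console,
    a dialog box in a real interface)."""
    question = (
        "You are about to access a record the patient has restricted. "
        "Why is this access necessary for this patient's care right now? "
    )
    reason = ask(question).strip()
    if not reason:
        return False  # no stated reason, no override
    time.sleep(2)  # a brief enforced pause: the "slow down" itself
    confirmed = ask("Type YES to confirm the override: ").strip() == "YES"
    if confirmed:
        # Stand-in for an audit log entry recording who opened what, and why.
        print(f"AUDIT {time.time():.0f}: {user_id} opened {record_id}: {reason}")
    return confirmed

# Example console usage: break_the_glass("dr_lee", "rec-123", input)
```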

People are Not Perfect

Approaches that rely solely on a world composed of good people (a vision in which all hospital workers could simply be educated never to break moral rules, for example) always fail. There is a kind of “semi-automatic ethics” that systems can help to support (never perfectly), one that does not rely on pristine moral actors with perfect memories and copious mental clarity, a presumption that informs much of clinical ethics and bioethics generally. Can staff who are overworked, mentally exhausted, and multitasking achieve the kind of Herculean morality found in bioethics textbooks?

Interestingly, there is something similar in the story of seatbelts. While the introduction of highway speed limits (an approach that relies on driver adherence and police enforcement) did reduce driving-related fatalities, it was the addition of a technology, the seatbelt, to an ecosystem of supports that dramatically (though not entirely) reduced accident deaths. Seatbelts still rely on behavior, but they work even when drivers are speeding, making poor decisions, or are the victims of other drivers. Moral ecosystems are most powerful when they assume that not all actors will, can, or care to “do the right thing.” The brilliance of the seatbelt is its assumption of a world where cars crash and drivers fail.

Software Is No Panacea

Some will misunderstand my argument as a claim that technology will solve our moral problems. Some will point out how technologies can fail, be corrupted, or go unenforced. Indeed, software is no panacea; technologies are not a perfect solution. Yet I believe there is a role for assistive devices: discrete structural supports that help advance particular aims. What would it mean for bioethics to also consider systems that can help, assess, collect data around, and otherwise support larger bioethical aims as part of a moral ecosystem?

By adding systems to our bioethical repertoire, we move beyond classic ethical approaches that rely largely or solely on educating individuals to be morally good people, for example by compelling staff to “respect privacy” or to have greater empathy. We should encourage (and educate) people to be better, but we should also imagine how systems could support these flawed selves and intervene precisely in the areas where we need moral support most.

Mark D. Robinson, PhD, is an Assistant Professor in the Department of Interdisciplinary Studies and teaches in the graduate program in Bioethics.
