Can Software Trigger Moral Thinking?
Can software be designed to compel instances of moral reflection?
Written by Mark D. Robinson, PhD
Can software be designed to compel instances of moral reflection? In the following article, Mark D. Robinson, PhD, Assistant Professor in Creighton's Department of Interdisciplinary Studies, examines the impact break-the-glass (BTG) software could have on moral thinking. This kind of questioning is just one example of the many ethical issues in healthcare examined in Creighton University's Bioethics (HCE) master's program. Learn more about becoming an ethical leader in healthcare by visiting the Bioethics program page.
Given its use for health data, it is natural to ask whether a tool such as BTG software is effective in protecting patients' privacy, sustaining their trust, and supporting overall patient wellbeing. Because the software is so new, research on its impact on patient protection is only now emerging. While privacy and HIPAA compliance are important ethical concerns, there is a second moral aspect of this software for bioethics: how systems can be designed to compel, if only momentarily, instances of moral reflection. In other domains, systems do this all the time. Lane departure warning systems in cars, for example, alert drivers in ways that compel them to pay attention, slow down, take stock of their surroundings, and so on.
In some versions of BTG software, users requesting access to records encounter a set of unusual questions. In the case below, for example, the user is asked, "Are you sure you want to view another employee of the hospitals[sic] medical records?"
The posing of this question is compelling for several reasons.
For one, it induces users to pause and reflect. Am I accessing this file for the right reasons? Is there another way to get what I need? What alternatives exist? In most BTG systems, users can still access patient data once the glass is "broken," but they are required to enter a reason for access, along with information about their role in the health organization (see example 1).
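To make that workflow concrete, here is a minimal sketch, in Python, of the kind of break-the-glass flow described above. Everything in it, from the AccessRequest fields to the break_the_glass function, is a hypothetical illustration rather than any vendor's actual API; a production system would sit behind an electronic health record's access-control and audit infrastructure.

```python
# A minimal, hypothetical sketch of a break-the-glass (BTG) access flow.
# All names and fields here are illustrative, not a real vendor API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    user_id: str
    user_role: str   # the requester's role in the health organization
    patient_id: str
    reason: str      # the stated justification, required before access

audit_log: list[dict] = []  # stands in for a tamper-evident audit trail

def break_the_glass(request: AccessRequest, confirmed: bool) -> bool:
    """Grant access only after the user confirms the reflective prompt
    and supplies a reason and role; every attempt is logged for audit."""
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": request.user_id,
        "role": request.user_role,
        "patient": request.patient_id,
        "reason": request.reason,
        "confirmed": confirmed,
    })
    # The prompt itself is the ethical intervention: the user must pause,
    # affirm, and state a reason before the record opens.
    if not confirmed or not request.reason.strip():
        return False
    return True

# Example: the user sees the prompt ("Are you sure you want to view
# another employee's medical records?"), confirms, and states a reason.
request = AccessRequest("u042", "charge nurse", "p137",
                        "verifying allergy list before medication order")
granted = break_the_glass(request, confirmed=True)
```

Note that the sketch grants access whenever the user confirms and supplies a reason; it gates access with reflection and accountability rather than outright denial, mirroring how most BTG systems behave.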
Yet this prompt, precisely because it forces a user to pause, shows how a system can be designed to draw the user, potentially, into a deeper level of thought.
Practically, taking a moment to consider our actions may make the difference between one action and another; between a bad outcome and a better one. In truth, it often does.
But the ability to craft questions designed to produce certain modes of moral thinking is powerful. Imagine if digital job application systems, contexts in which applicants with ethnic or female-sounding names are at a distinct disadvantage, displayed strategic messages reminding HR recruiters of the inadvertent bias such names can provoke. The implications are far-reaching.
For scholars, the reasons to value moral reflection are plentiful. In philosophy, moral reflection fuels the engine of our virtues. In moral psychology, reflective capacity is crucial to one's agency, one's very capacity to act in the world. Given this, moral thinking may be a powerful stage for ethical intervention.
Of course, I am not suggesting that software acts in some magical way.
BTG software must still be part of a set of guidelines, subject to review, enforcement, and audit. Clinicians and staff must still be educated on the ethical importance of patient privacy and trust. Technology often fails, and imperfect, biased people must program it. And yes, much of clinical ethics becomes a mundane "check-box" of to-dos.
Still, there is something to the ability to redirect the power of digital systems, now omnipresent throughout global healthcare, towards focused moral aims. What if BTG software did more? Could software be used to intervene strategically during other critical moral moments?