Autonomy, responsibility, and privacy in neuroscience
[Image: Model of 'BrainGate', from http://en.wikipedia.org/wiki/File:BrainGate.jpg]
I take umbrage at Tamburrini’s notion that a person using a brain computer interface (BCI) is in the same legal position of responsibility as a dog owner or parent. The latter two categories work only as analogies in that both dogs and children have a form of autonomy that is at times unpredictable and potentially uncontrollable by the owner or parent. Neither case, however, requires a sacrifice of personal control. I have a feeling that irate mothers and owners of very large dogs will argue this point; nevertheless, dogs and children control only their own actions and merely attempt to influence the owner or parent. The autonomy of the person holding the leash, so to speak, is preserved. A better analogy, in my opinion, would be that of a caregiver who interprets the will of the invalid in much the same way a BCI attempts to read his or her brain signals before making a decision based on probabilistic weightings. Tamburrini argues that the perceptual systems of the mobility device might make mistakes; in like manner, suppose it is dark outside and the caregiver doesn’t see that a manhole cover has been removed... Who is responsible in this instance? The caregiver by definition owes a duty of care to the client, and failure to provide this care could be construed as criminal negligence leading to bodily harm; the caregiver, however, would likely appear in court, where mitigating circumstances (e.g. low-light conditions, fatigue, stress) would be taken into account. A person using a BCI has no such recourse: he or she is immediately at fault for any wrong caused by the mobility device’s misinterpretation and is, in essence, being punished for being disabled. That is a flagrant violation of human rights.
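To make the shared-control point concrete, here is a minimal sketch of how a BCI mobility device might weigh a probabilistic reading of the user's intent against its own perceptual check before acting. The class, function names, and thresholds are all hypothetical, not any real device's API; the point is that neither the user's decoded will nor the machine's senses alone determine the outcome, which is precisely why responsibility is so hard to assign.

```python
# Hypothetical sketch of a BCI shared-control decision step.
# All names, signals, and thresholds are illustrative, not a real device API.

from dataclasses import dataclass

@dataclass
class IntentEstimate:
    command: str        # e.g. "forward", "left", "stop"
    probability: float  # classifier's posterior confidence in that command

def decide(intent: IntentEstimate, path_clear_prob: float,
           intent_threshold: float = 0.85,
           safety_threshold: float = 0.95) -> str:
    """Combine the decoded intent with the device's own obstacle check.

    The user's will is only one probabilistic input: if either the EEG
    decoding or the perceptual system is unsure, the device refuses to act.
    """
    if intent.probability < intent_threshold:
        return "stop"  # intent too ambiguous to act on
    if path_clear_prob < safety_threshold:
        return "stop"  # perceptual system vetoes the user's command
    return intent.command

# Example: the user clearly wills "forward", but the sensors are only
# 90% sure the path is clear (say, an open manhole in low light).
print(decide(IntentEstimate("forward", 0.92), path_clear_prob=0.90))  # -> "stop"
```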
I propose two possible alternatives to the solution suggested by Tamburrini. The first, and most disturbing: if a person surrenders free will to a device or to another person, one must ask, does this in any way diminish their personhood and therefore their responsibility? This question can be compared to cases of intoxication leading to criminal negligence. The intoxicated defendant may not have been able to control his or her faculties to the extent necessary to avoid a collision, or even to make an informed decision about the wisdom (or legality) of driving in the first place. That said, provided the defendant chose to become intoxicated and was not unknowingly drugged, he or she was responsible for the circumstances that led to the negligent act. Bringing this back to BCI, the argument could be made that a person who gave informed consent to use a BCI, knowing full well the limitations of the device (i.e. perceptual systems may misinterpret external cues, and an EEG system may misinterpret internal signals from the user), is therefore responsible for anything it does. The second alternative would be far more ethically responsible to my mind. If BCI use became widespread, I would hope that a recording device would be activated in the event of a collision or accident, and that the EEG signals used by the device would be stored along with a record of the machine’s interpretation and action. This of course leads to a question of brain privacy, where we must weigh the potential gains against the harms. If the device acts against the will of the user, such recordings have the potential to exonerate him or her; likewise, this information could criminally implicate the user if other evidence was ambiguous.
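As a rough illustration of this second alternative, the sketch below keeps a short rolling window of decoded EEG features together with the machine's interpretation and the action actually taken, and persists it only when a collision is detected, limiting the brain-privacy exposure to actual incidents. The field names and log format are assumptions for illustration only.

```python
# Hypothetical "black box" recorder for a BCI mobility device.
# Field names and format are assumptions for illustration only.

import json
import time
from collections import deque

class EventRecorder:
    """Ring buffer holding the most recent decoded EEG activity,
    the machine's interpretation, and the action actually taken."""

    def __init__(self, capacity: int = 512):
        self.buffer = deque(maxlen=capacity)  # old entries drop off automatically

    def log(self, eeg_features, interpretation, action):
        self.buffer.append({
            "timestamp": time.time(),
            "eeg_features": eeg_features,      # decoded signal values, not raw thought
            "interpretation": interpretation,  # what the device believed the user wanted
            "action": action,                  # what the device actually did
        })

    def dump_on_collision(self, path):
        """Persist the buffer only when a collision is detected,
        so routine brain data is never written to disk."""
        with open(path, "w") as f:
            json.dump(list(self.buffer), f, indent=2)

# Example: interpretation and action disagree, the kind of record that
# could exonerate a user whose decoded intent the device overrode.
recorder = EventRecorder()
recorder.log([0.12, 0.87, 0.03], interpretation="stop", action="forward")
recorder.dump_on_collision("incident_log.json")
```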
These types of questions regarding brain privacy touch on issues raised by Canli et al. (2007), who discuss the efficacy of neuroscience techniques from a national security stance. I found the article rather disturbing, since it explores military and government use of imaging techniques to determine, for example, a person’s familiarity with a terrorist organization (apparently the P300 EEG signal is a reliable indicator of familiarity), and even the possibility of using implants and other devices, like transcranial magnetic stimulation (TMS), to manipulate cooperativeness. Particularly troubling, from the viewpoint that brain privacy should be paramount, is their idea that a person who deliberately refuses to comply with an imaging protocol (e.g. by rolling their eyes, counting backwards, or curling their toes in an attempt to preserve the integrity of their thoughts) must therefore have guilty knowledge: if you have nothing to hide, why not comply?
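For readers unfamiliar with the technique, a P300-based familiarity test works roughly as follows; the numbers and threshold in this toy sketch are invented, and real concealed-information protocols are far more involved.

```python
# Toy sketch of a P300-based concealed-information comparison.
# Amplitudes and threshold are invented, purely for illustration.

from statistics import mean

def familiarity_detected(probe_p300, irrelevant_p300, ratio_threshold=1.5):
    """Flag 'familiarity' if the average P300 amplitude evoked by the
    probe stimulus clearly exceeds that evoked by irrelevant stimuli."""
    return mean(probe_p300) > ratio_threshold * mean(irrelevant_p300)

# Probe = an item the subject may recognize; irrelevants = foil items.
probe = [8.1, 7.6, 9.0]        # hypothetical trial-averaged amplitudes (microvolts)
irrelevant = [3.2, 2.9, 3.5]
print(familiarity_detected(probe, irrelevant))  # -> True
```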
What really got to me was not so much the issues that Canli et al. explored as the sense that the neuroethics section of the article felt incidental to it.
Many articles in this field note the potential for neuroethical challenges; few, however, have gone beyond this recognition and taken a stance for or against the use or abuse of neuroscience. If this does not change, lawmakers and the military will likely define situational implementation for us and for society. Neuroethics is off to a good start as a field, but I suggest it requires neuroscientists to take a more active stance so that they are not sidestepped on the way to progress.
Bibliography
Canli, T., Brandon, S., Casebeer, W., Crowley, P., DuRousseau, D., Greely, H. T., et al. (2007). Neuroethics and National Security. The American Journal of Bioethics, 7(5), 3-13.
Tamburrini, G. (2009). Brain to Computer Communication: Ethical Perspectives on Interaction Models. Neuroethics, 2(3), 137-149.