
Who’s holding the leash?


Autonomy, responsibility, and privacy in neuroscience
Model of 'BrainGate' (image from http://en.wikipedia.org/wiki/File:BrainGate.jpg)

I take umbrage at Tamburrini’s notion that a person using a brain-computer interface (BCI) is in the same legal position of responsibility as a dog owner or parent. The latter two categories work as analogies only insofar as both dogs and children have a form of autonomy that is at times unpredictable and potentially uncontrollable by the owner. Neither, however, requires a sacrifice of personal control. I have a feeling that irate mothers and owners of very large dogs will argue this point; nevertheless, dogs and children control only their own actions and merely attempt to influence the owner or parent; the autonomy of the person holding the leash, so to speak, is preserved.

A better analogy, in my opinion, would be that of a caregiver who interprets the will of the invalid in much the same way a BCI attempts to read his or her brain signals before making a decision based on probabilistic weightings. Tamburrini argues that the perceptual systems of the mobility device might make mistakes; in like manner, suppose it’s dark outside and the caregiver doesn’t see that a manhole cover has been removed. Who is responsible in this instance? The caregiver by definition owes a duty of care to the client, and failure to provide this care could be construed as criminal negligence causing bodily harm; the caregiver, however, would likely appear in court, where mitigating circumstances (e.g. low-light conditions, fatigue, stress) would be taken into account. There is no such recourse for a person using a BCI: he or she is immediately at fault for any harm caused by the mobility device’s misinterpretation and is, in essence, punished for being disabled, a flagrant violation of human rights.

I propose two possible alternatives to the solution suggested by Tamburrini. The first, and more disturbing: if a person surrenders free will to a device or another person, one must ask whether this in any way diminishes their personhood, and therefore their responsibility. This question can be compared to cases of intoxication leading to criminal negligence. The intoxicated defendant may not have been able to control his or her faculties to the extent necessary to avoid a collision, or even to make an informed decision about the wisdom (or legality) of driving in the first place. That said, provided the defendant chose to become intoxicated and was not unknowingly drugged, he or she was responsible for the circumstances that led to the negligent act. Bringing this back to BCI, the argument could be made that a person who gave informed consent to use a BCI, knowing full well the limitations of the device (i.e. perceptual systems may misinterpret external cues, and an EEG system may misinterpret internal signals from the user), is therefore responsible for anything it does.

The second alternative would be far more ethically responsible, to my mind. If BCI use became widespread, I would hope that a recording device would be activated in the event of a collision or accident, and that the EEG signals used by the device would be stored along with a record of the machine’s interpretation and action. This of course leads to a question of brain privacy, where we must weigh the potential gains against the harms. If the device acts against the will of the user, such recordings have the potential to exonerate him or her; likewise, this information could criminally implicate the user if other evidence were ambiguous.
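The second alternative amounts to a "black box" flight recorder for the BCI. A minimal sketch of the idea follows; the field names, buffer size, and trigger are hypothetical design choices of mine, not part of any existing BCI system. The privacy-relevant detail is the bounded ring buffer: nothing is persisted in normal operation, and only the moments surrounding an accident ever leave the device.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Frame:
    """One decision cycle: the signal features the decoder saw,
    what it thought the user intended, and what the device actually did."""
    eeg_features: list
    interpretation: str   # the machine's reading of the user's intent
    confidence: float     # the probabilistic weighting behind that reading
    action: str           # the action the mobility device carried out


class BlackBox:
    """Ring buffer holding only the most recent frames.

    In normal operation old frames are silently discarded, limiting
    routine exposure of brain data; the buffer is dumped to durable
    storage only when a collision is detected.
    """

    def __init__(self, capacity: int = 500):
        self.buffer = deque(maxlen=capacity)

    def record(self, frame: Frame) -> None:
        self.buffer.append(frame)  # oldest frame is evicted automatically

    def dump_on_collision(self) -> list:
        return list(self.buffer)
```

If the dump shows a string of high-confidence "stop" interpretations followed by a "forward" action, the recording exonerates the user; if it shows the opposite, it implicates them, which is exactly the double-edged trade-off described above.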

These types of questions regarding brain privacy touch on issues raised by Canli et al. (2007), who discuss the efficacy of neuroscience techniques from a national-security standpoint. I found the article rather disturbing, since it explores military and government use of imaging techniques to determine, for example, a person’s familiarity with a terrorist organization (apparently the P300 EEG signal is a reliable indicator of familiarity), and even the possibility of using implants and other devices like TMS to manipulate cooperativeness. Particularly troubling, from the viewpoint that brain privacy should be paramount, is their suggestion that a person who deliberately refuses to comply with an imaging protocol (e.g. by rolling their eyes, counting backwards, or curling their toes in an attempt to preserve the integrity of their thoughts) must therefore have guilty knowledge, since if you have nothing to hide, why not comply?

What really got to me was not so much the issues that Canli et al. explored as the sense that the neuroethics section of the article felt incidental to it.

Many articles in this field note the potential for neuroethical challenges; few, however, go beyond this recognition and take a stance for or against particular uses (or abuses) of neuroscience. If this does not change, lawmakers and the military will likely define how these technologies are deployed, for us and for society. Neuroethics is off to a good start as a field, but I suggest it requires neuroscientists to take a more active stance so that they are not sidestepped on the way to progress.

Bibliography
Canli, T., Brandon, S., Casebeer, W., Crowley, P., DuRousseau, D., Greely, H. T., et al. (2007). Neuroethics and National Security. The American Journal of Bioethics, 7(5), 3-13.
Tamburrini, G. (2009). Brain to Computer Communication: Ethical Perspectives on Interaction Models. Neuroethics, 2, 137-149.
