Emerging issues of ethics and fMRI technology
(Image: Oxford Centre for Functional Magnetic Resonance Imaging of the Brain)
Modern neuroimaging techniques, including ERP, fMRI, SPECT, and MEG, have been used (perhaps prematurely) to bridge philosophical and psychological traditions. Illes and Bird (2006) quote Stephen Morse as saying that images produced by these techniques, particularly fMRI, are extremely seductive to the general public and appear to be “more accurate” than other data. Having reconstructed, preprocessed, and analyzed fMRI data myself, I find these statements alarming. It is essential that the public understand what these images actually represent (a statistical map of correlations, anticorrelations, covariances, and so on, laid over an anatomical average) in order to avoid false claims of causality and misplaced beliefs about the research’s prospects. Illes and Bird underscore the need for public discourse training, both to ensure accurate reproduction of original research and to address social concerns related to the research that neuroscientists might not anticipate; they do not, however, suggest how this should be accomplished.
Interpretation of functional neuroimaging data is difficult at the best of times. Many studies fail to report effect sizes, and because thresholding can inflate the apparent extent of the “blobs” superimposed on anatomical images, there is often no real way to determine the weight an effect carries beyond visual inspection and statistical significance. Superimposing functional data on anatomical images carries its own caveat: it literally makes a particular region of the brain “light up”. While this is a brilliant visual aid, it is extremely tempting to simply point to the picture and say we have statistically significant activations in A, B, and C: there they are, see for yourself. How to convey what these images mean to a general public not well versed in statistical methods is another challenge that needs to be addressed in the public relations between neuroscientists and the media.
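For readers who would like to see the thresholding point made concrete, here is a minimal Python sketch using purely synthetic data; it is not any study’s actual pipeline, and the array names, group size, and effect magnitude are illustrative assumptions. It shows how the apparent size of an “activation blob” depends heavily on the chosen significance threshold, while the threshold itself says nothing direct about effect size.

```python
# A minimal, illustrative sketch (synthetic data, not a real analysis) of how
# thresholding turns a continuous statistical map into discrete "blobs".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "t-map": mostly noise, with one region of modest true effect.
t_map = rng.normal(0, 1, size=(64, 64, 30))
t_map[20:28, 20:28, 10:15] += 3.5          # a modest underlying effect

n_subjects = 20                             # hypothetical group size
df = n_subjects - 1

for p in (0.05, 0.001, 0.0001):
    t_thresh = stats.t.ppf(1 - p, df)       # uncorrected one-tailed threshold
    blob = t_map > t_thresh
    print(f"p < {p}: t > {t_thresh:.2f}, {blob.sum()} voxels survive")

# The surviving voxels look like a crisp, bounded "activation" when overlaid
# on an anatomical image, but the blob's apparent size is largely a function
# of the chosen threshold; the t-values alone do not convey effect size.
```

Running the loop makes the point visually obvious even without a brain image: the same underlying data yield a large blob, a small blob, or almost nothing, depending on an essentially arbitrary cutoff.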
Some neuroscientists might argue that they are already doing everything in their power to ensure that their research is accurately reproduced, and that once the interview is over their ability to control what the media prints is severely limited. To those researchers I would argue that didactic responsibility with regard to their research should not be a one-time event; researchers should ensure that they are quoted accurately and follow up instances of misinformation. Some journals are going further: NeuroImage has just launched its own YouTube channel, which it hopes will make scientists’ research accessible to a wider audience. Granted, this is still an esoteric forum, but the potential reach of each video is far greater than that of the average journal article.
Given the widespread miscommunication that often occurs between scientists and the public, it is somewhat disturbing to contemplate some of the arenas where this technology is being introduced. Companies such as No Lie MRI® take advantage of public belief in the mind-reading capabilities of this technology (machine-learning decoding is nowhere near the mind reading most people imagine, though in time it could improve) and have even managed to insinuate themselves into the courts. These events have led to a heated debate about brain privacy. At present, if a researcher encounters an incidental finding, should it be reported? If it is reported, what happens if the participant’s insurance company gets hold of this information: can it legally and ethically use it to adjust their policy? Glannon (2006) goes so far as to suggest (and I agree) that participants should have the right to refuse to know the results of their brain scans, since the harm and emotional distress caused could outweigh any benefit they would gain by participating.
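To ground the parenthetical remark about machine learning, the following toy sketch shows what fMRI “decoding” usually amounts to in practice; the data are synthetic, and every name, trial count, and parameter is an assumption made purely for illustration, not a description of No Lie MRI’s system or any published pipeline.

```python
# A toy sketch (synthetic data) of what fMRI "decoding" typically is: a
# classifier distinguishing between a small set of predefined experimental
# conditions from voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 120, 500              # hypothetical trial and voxel counts
labels = rng.integers(0, 2, n_trials)      # two conditions, e.g. faces vs houses

# Voxel patterns: noise plus a weak, condition-dependent signal.
patterns = rng.normal(0, 1, (n_trials, n_voxels))
patterns[:, :50] += labels[:, None] * 0.4

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")

# Even a "successful" decoder only answers "which of these predefined
# conditions is more likely?"; it does not read arbitrary thoughts,
# intentions, or memories.
```

The gulf between this kind of forced-choice classification and what the public imagines as “mind reading” is exactly where misunderstanding, and exploitation of that misunderstanding, takes root.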
I can see why some people might see neuroscience as a threat; it offers the ultimate answer to dualism, that mind is machine, and neuroscience allows for a partial examination of the circuits (though not quite to the extent that we have mapped a “person network”, as Farah and Heberlein (2007) claim). Levy (2008) addresses the general disquiet that has followed revelations that humans are generally not rational, that they may therefore not be autonomous, and that morality follows suit.
Having come from the relative certainty of neuroscience research, where one can discuss brain regions and networks with academic detachment, I find that re-contextualizing these issues in social, ethical, moral, and legal perspectives has reawakened me to the enormity of the questions they raise for the general public.
Bibliography
Farah, M. J. (2005). Neuroethics: the practical and the philosophical. Trends in Cognitive Sciences, 34-40.
Farah, M. J., & Heberlein, A. S. (2007). Personhood and Neuroscience: Naturalizing or Nihilating? The American Journal of Bioethics, 37-48.
Glannon, W. (2006). Neuroethics. Bioethics, 37-52.
Illes, J., & Bird, S. J. (2006). Neuroethics: a modern context for ethics in neuroscience. Trends in Neurosciences, 511-517.
Kesterton, M. (2010, January 7). Globe Life, Facts & Arguments. Retrieved January 11, 2010, from The Globe and Mail: http://www.theglobeandmail.com/life/facts-and-arguments/farming-with-zaps-like-getting-a-new-car-obvious-passwords/article1420044/
Levy, N. (2008). Introducing Neuroethics. Neuroethics, 1-8.
Roskies, A. (2002). Neuroethics for the New Millenium. Neuron, 21-23.