The billions for the brain battle
More than 100,000 papers on neuroscience are published each year, but the transformation of this quickly growing pile of data and knowledge into working therapies and usable insights for human daily life is slow and cumbersome. Two “big science” projects recently funded in the EU and the US at the billion-euro level aim to change this – but they face opposition from within neuroscience.
Markus Christen
Probably never before in human history have so many scientists studied the brain. This observation refers not only to basic biological research and neurology: related fields are also experimenting with theories and insights from neuroscience. For example, psychiatrists increasingly frame mental diseases as “brain disorders”, and engineers in information technology and robotics try to turn neuronal architectures and functions into technologies.
These multiple research activities generate huge amounts of data and scientific papers on all levels of neuronal organization: on gene expression in neurons, neuronal connectivity, brain activity patterns captured with neuroimaging, and human and animal behavior, just to name a few.
So far, this huge endeavor has barely translated into economic success or practical use. Ironically, the pharmaceutical industry – which, until recently, funded nearly half the budget for research and drug development for brain disorders – has cut brain research dramatically in recent years because its investments haven’t paid off. And the public has grown weary of neuroscientists’ excessive claims of “revolutionary insights”.
More and more books question “brain myths” and the tendency to reduce human existence to the functions of brains. At the same time, however, the burden of diseases caused by brain malfunction – dementia, stroke and depression, for example – is rising sharply as populations grow and age.
While medicine has been remarkably successful in tackling cardiac, respiratory, hepatic and other organ diseases, therapies for brain-based diseases have lagged far behind because of the brain’s complexity and our lack of fundamental knowledge about it.
A substantial investment of public money into “big neuroscience” seeks to change this – the most prominent examples being the European Human Brain Project (HBP), with its home base in Lausanne, and the United States initiative on Brain Research through Advancing Innovative Neurotechnologies (BRAIN Initiative), both of which started last year.
Empty promise?
Within ten years, these projects aim to invest several billion in taxpayer funds from Switzerland, other European countries and the US in a coordinated research effort to “gain fundamental insights into what it means to be human, develop new treatments for brain diseases and build revolutionary new information and communications technologies”, as the Human Brain Project, for example, claims.
But will this turn out to be an empty promise? Right from the beginning, both projects were confronted with criticism from neuroscientists. And this week more than 200 scientists sent an open message to the European Commission to “express their concern with the course of the Human Brain Project” due to an “overly narrow approach”.
As an external observer of the Human Brain Project and a member of its independent Ethical, Legal and Social Aspects Committee, writing in a personal capacity, I have three observations.
First, the brain projects start from the right diagnosis of the problem, namely the fragmentation of knowledge in neuroscience. In the public eye, the HBP was conceived (and partly also advertised) as a project that aims to “rebuild a brain in a computer” – but this impression misses the point. Rather, the projects are a strategy to consolidate neuroscientific knowledge by creating an “atlas” of the human brain (in the BRAIN Initiative) and a toolbox of computer simulations (in the HBP) as tools for future brain research, much of which cannot be done ethically on living human subjects.
The EU Human Brain Project and the US BRAIN initiative aim to unify knowledge on several levels and guide empirical research – for example, regarding the expression of neuronal genes, the identification of types of neurons, their locations and interconnections, and the behavioral effects that result from the coordinated activity of many neurons. Because it is not possible to “decode” the individual connectivity pattern of neurons for human brains in the same way we can now sequence the genome of a person, neuroscientists will need tools that tell them what to look for in real brains.
Simulations and atlases could become both the “integrators” of knowledge and “lenses” through which scientists look to tackle the complexity of the brain. This is indeed a fascinating idea and one of the few practical ways to narrow the gap between brain data and real-world applications of neuroscience.
Second, a central but neglected challenge from an ethical point of view relates to the consequences of this strategy on brain research itself. Certainly, brain research poses many ethical issues, such as those related to the use of animals and the protection of human research participants – but those are rather “classical” problems for ethicists, and we have tools to address them. When, however, knowledge production itself is transformed through “big neuroscience”, novel issues emerge.
For example: How should you select which data enter the simulation code and which do not when data conflict? How should peer review work for the code used to “mine” the enormous pile of scientific publications, or for the simulation code itself (i.e. computer programs that may run to many thousands of lines)? How do you ensure ethical working collaborations among quite different scientific cultures – biologists and software engineers, for example – as well as across countries with different research ethics standards regarding, for instance, informed consent? How should you structure simulation results so that the visualization (which is artificial) does not mislead users, given that simulations are meant to guide experimental work?
These are complex questions, and speculations about “conscious computers” somehow emerging from a brain simulation are unduly facile when it comes to expressing the ethical concerns that underlie more granular choices in both brain research projects.
Third, “big science” is not just about “big money”; doing big science also has significant effects on the way scientific collaboration itself is generated, structured, managed and promoted. This may be an unavoidable dilemma of large-scale publicly funded projects. Whenever a big chunk of money is conspicuously (and correctly) visible to the public eye, its purpose should be publicly explained and its responsible use publicly accounted for. But this may lead to unrealistic justifications on the part of researchers, and to unrealistic public expectations based on simplified, often sensationalized explanations in the mass media.
In addition, big science, big money and public attention often combine in a way that generates pressure to create a governance and oversight structure that can conflict with the bottom-up discovery ethos of many scientific fields. As more is “at stake”, the pressure to deliver practical applications often increases, which may make it inevitable to “narrow” the scientific approach and leave out relevant subfields and collaborators. Ethics should also focus some of its attention on such important side-effects of “big neuroscience”.
Not to be underestimated
These issues in neuroscience research should not be underestimated, given the comparable experience in climate research, where simulations play a central role both in allocating research investment and in informing political decision-making. Observers from sociology and science studies who examined how simulations are actually used in climate research found that collaboration between modelers and empirical scientists is tricky, that visualizations tend to blur important differences between simulation data and real-world data, and that various psychological mechanisms are at work that may undermine critical appraisal of simulation results.
For example, Myanna Lahsen investigated the collaboration between modelers and meteorologists and found that the latter did not feel appropriately involved in the development of the general circulation models that were the forerunners of today’s climate models. The empirically oriented meteorologists had acquired humility about the accuracy of forecasts of atmospheric conditions, which they traced to regularly seeing synoptic and numerical weather forecasts proven wrong. They complained that model developers often froze others out and tended to resist critical input, living in a “fortress mentality”. Climate change sceptics, in turn, point to such problems when challenging the results of climate research.
Failure to address such problems thus undermines the whole scientific endeavor – creating simulations that enable further research and generate useful knowledge and applications that are publicly and scientifically trustworthy. In that sense, the recent concerns among neuroscientists expressed in the “open message” about the Human Brain Project may simply replicate what other fields have experienced as simulations became increasingly important tools.
Jean-Pierre Changeux, a French neuroscientist and member of the HBP, has called for an “epistemic ethics” for advancing “big neuroscience” that uses Big Data techniques and simulations to address the major challenges of brain research. This means that the aim of “integrating” knowledge and using computing power to guide research will require a careful architecture of collaboration across disciplines, support from the scientific community as well as an informed public, and deeper exploration of the ethical aspects and effects of big neuroscience.