Humanizing Bioinformatics

I was invited last week to give a talk at this year's meeting of the Graduate School Structure and Function of Biological Macromolecules, Bioinformatics and Modeling (SFMBBM). It ended up being a day of great talks by some bright PhD students and postdocs. There were two keynotes (one by Prof Bert Poolman from Groningen (NL) and one by myself), and a panel discussion on what the future holds for people nearing the end of their PhDs.

My talk was titled "Humanizing Bioinformatics" and was received quite well (at least some people still laughed at my jokes, if you can call them that, even at the end). I put the slides up on slideshare, but I thought I'd explain things here as well, because those slides probably won't convey the complete story on their own.

Let's ruin the plot by mentioning it here: we need data visualization to counteract the alienation that's happening between bioinformaticians and bright data miners on the one hand, and the user/clinician/biologist on the other. We need to make bioinformatics human again.

Jim Gray from Microsoft laid out the vision behind a very interesting book, "The Fourth Paradigm: Data-Intensive Scientific Discovery". Get it. Read it. He describes how the practice of research has changed over the centuries. In the First Paradigm, science was very much about describing things; the Second Paradigm (the last couple of centuries) saw a more theoretical approach, with people like Kepler and Newton defining "laws" that described the universe around them. The last few decades saw the advent of computation in research, which allowed us to take a closer look at reality by simulating it (the Third Paradigm). But just recently - so Jim Gray says - we have moved into yet another fundamental way of doing science: an age where so much data is generated that we don't know what to do with it. This Fourth Paradigm is that of data exploration. As I see it (but that's just one way of looking at it, and it says nothing about which is "better"), this might be a way to define the difference between computational biology and bioinformatics: computational biology fits within the Third Paradigm, while bioinformatics fits in the Fourth.

Being able to automatically generate these huge amounts of data (e.g. in genome sequencing) means that biologists have to work with ever bigger datasets, using ever more advanced algorithms that rely on ever more complicated data structures. This is not just about some summary statistics anymore; it's support vector machine recursive feature elimination, manifold learning, adaptive cascade sharing trees and the like. Result: the biologist is at a loss. Remember Dr McCoy in Star Trek saying "Dammit Jim, I'm a doctor, not an electrician/cook/nuclear physicist" whenever the captain had him do something that was - well - not doctorly? (Great analogy found by Christophe Lambert.) It's exactly the same for a clinician nowadays. In order to do a (his job: e.g. decide on a treatment plan for a cancer patient), he first has to do b (set up hardware that can handle hundreds of gigabytes of data) and c (devise some nifty data mining trickery to get his results), neither of which he has the time or training for. "Dammit Jim, I'm a doctor, not a bioinformatician." Result: we're alienating the user. Data mining has become so complicated and advanced that the clinician is at a complete loss. Heck, I work at a bioinformatics department and don't understand half of what they're talking about. So what can the clinician do? His only option is to trust some bioinformatician to come up with results. But this is blind trust: he has no way of assessing the results he gets back. It's even more blind than the trust you put in the guy who repairs your car.

As I see it, there are (at least) four issues.

What's the question?
Data generation used to be really geared towards proving or disproving a specific hypothesis. The researcher would have a question, formulate a hypothesis around it, and then generate data. Although that same data could already be used to answer other, unanticipated questions as well, this only really became an issue with easy, automated data generation; DNA sequencing being a prime example. You might ask yourself "does this or that gene have a mutation that leads to this disease?", but the data you generate to answer this question (in this case exome sequences) can be used to answer hundreds of other questions as well. You just don't know which questions yet...
Statistical analysis and data mining are indispensable for (dis)proving hypotheses, but what if we don't know the hypothesis? Like many others in the field, I believe that data visualization can give us clues about what to investigate further.



Let's look, for example, at this hive plot by Martin Krzywinski (for what B means: see the explanation on the hive plot website). Suppose you're given a list of genes in E. coli (or a list of functions in the Linux operating system) and the network between those genes (or functions). Using clever visualization, we can define some interesting questions that we can then look into using statistics or data mining. For example: why do we see so many workhorse genes in E. coli? Does this reflect reality, and what would that mean? Or does it mean that our input network is biased? What is so special about that very small number of workhorse functions in Linux that have such high connectivity? These are the kinds of questions that need to be presented to us.
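
To make that a bit more concrete: the preprocessing behind a hive plot boils down to assigning each node to an axis and to a position along that axis. Here's a minimal sketch in Python (using networkx) on a made-up toy network; the axis rules (source/sink/"workhorse") and the degree-based positions are a simplified version of the general hive plot recipe, not Martin's actual code.

```python
# Minimal sketch of the data preparation behind a hive plot, assuming a
# directed gene-regulation network. The network itself is a made-up toy
# example, not real E. coli data.
import networkx as nx

# Toy directed network: edges point from regulator to target.
g = nx.DiGraph([
    ("geneA", "geneB"), ("geneA", "geneC"),
    ("geneB", "geneC"), ("geneD", "geneA"),
    ("geneC", "geneE"),
])

max_degree = max(dict(g.degree()).values())

def axis_for(node):
    """Assign a node to one of the three hive-plot axes."""
    has_out = g.out_degree(node) > 0
    has_in = g.in_degree(node) > 0
    if has_in and has_out:
        return "workhorse"   # both regulates and is regulated
    return "source" if has_out else "sink"

layout = {
    node: {
        "axis": axis_for(node),
        # position along the axis: total degree scaled to [0, 1]
        "position": g.degree(node) / max_degree,
    }
    for node in g.nodes
}

for node, info in layout.items():
    print(node, info)
```

Once every node has an axis and a position, drawing the actual plot is "just" a matter of placing the three axes radially and drawing the edges as curves between them; the interesting part (and the part that surfaces questions like the ones above) is the classification step.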

What parameters should I use?
Second issue: the outcome of most data mining/filtering algorithms depends tremendously on the right parameters. But it can be very difficult to actually find out what those parameters should be. Does a "right" set of parameters for this or that algorithm even exist? Also, tweaking some parameters just a little bit can have vast effects on the results, while you can change others as much as you want without affecting the outcome whatsoever.

(Figure: Turnbull et al., Nature Genetics, 2010)
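
To illustrate the point, here's a toy parameter sweep in Python. The "variant filter" and the simulated data are completely hypothetical; the only message is that one parameter can dominate the outcome while another barely matters.

```python
# Hypothetical illustration of parameter sensitivity: sweep two parameters
# of a simple variant filter over simulated data and count survivors.
import random

random.seed(42)
# Simulated variant calls: (quality score, read depth)
calls = [(random.gauss(30, 10), random.randint(1, 100)) for _ in range(10_000)]

def n_passing(min_quality, min_depth):
    """Count calls that pass both thresholds."""
    return sum(1 for q, d in calls if q >= min_quality and d >= min_depth)

# Sweeping the quality threshold changes the outcome drastically...
for min_q in (10, 20, 30, 40):
    print(f"min_quality={min_q:>2}, min_depth=5  ->  {n_passing(min_q, 5):>5} calls")

# ...while sweeping the depth threshold (on this toy data) hardly matters.
for min_d in (1, 2, 5, 10):
    print(f"min_quality=30, min_depth={min_d:>2}  ->  {n_passing(30, min_d):>5} calls")
```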
Can I trust this output?
Issue number 3: if I'm a clinician/biologist and a bioinformatician hands me some results, how do I know whether I can trust those results? Heck, being a bioinformatician myself and writing a program to filter putative SNPs, how do I know that my own results are correct? Suppose there are three filters that I can apply consecutively, with different combinations of settings.



Looking at exome data, the main information we can use to assess the results of SNP filtering is that we should end up with roughly 20,000-25,000 SNPs and a transition/transversion ratio of about 2.1 (if I remember correctly). But many different combinations of filters can produce those summary statistics. The state of the art (believe it or not) is to just run many different algorithms and filters independently, and then take the intersection of the results...
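
For what it's worth, here's a small Python sketch of those two sanity checks and of the "run several pipelines and intersect" practice. The SNP tuples and the pipeline outputs are made up; only the expected SNP count and the Ti/Tv ratio of roughly 2.1 come from the paragraph above.

```python
# Sketch of the two sanity checks and the consensus-by-intersection
# practice, on made-up data. A SNP is represented as a tuple
# (chromosome, position, ref, alt); the pipelines are hypothetical.

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def ti_tv_ratio(snps):
    """Transition/transversion ratio of a set of SNP calls."""
    ti = sum(1 for _, _, ref, alt in snps if (ref, alt) in TRANSITIONS)
    tv = len(snps) - ti
    return ti / tv if tv else float("inf")

def looks_plausible(snps):
    """Crude whole-exome sanity check: expected SNP count and Ti/Tv near 2.1."""
    return 20_000 <= len(snps) <= 25_000 and 1.9 <= ti_tv_ratio(snps) <= 2.3

# Hypothetical outputs of three independent filter pipelines.
pipeline_a = {("chr1", 12345, "A", "G"), ("chr2", 67890, "C", "A")}
pipeline_b = {("chr1", 12345, "A", "G"), ("chr3", 11111, "T", "C")}
pipeline_c = {("chr1", 12345, "A", "G"), ("chr2", 67890, "C", "A")}

# "State of the art": keep only the calls that every pipeline agrees on.
consensus = pipeline_a & pipeline_b & pipeline_c
print("consensus calls:", sorted(consensus))
print("Ti/Tv of pipeline A:", ti_tv_ratio(pipeline_a))
print("passes sanity check?", looks_plausible(consensus))
```

Note that none of this tells you whether any individual call is correct; it only tells you whether the overall result doesn't look obviously wrong, which is exactly why the trust problem doesn't go away.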

I can't wrap my head around this...
And finally, there's the issue of too much information. Not just the sheer amount, but the number of different data sources. It's actually not so much too much information per se as too much to keep in one head. Someone trying to decide on a treatment plan for a cancer patient, for example, will have to combine heterogeneous datasets, at multiple abstraction levels and from multiple sources. He'll have to look into patient and clinical data, family/population data, MR/CT/X-ray scans, tissue samples, gene expression data and pathways. That's just too much. His cognitive capacities are fully engaged in trying to integrate all that information, rather than in answering the initial question.

Visualization... part of the solution
I'm not saying anything new when I suggest that data visualization might be part of the solution to these problems. Where current technologies and analysis methods have alienated the end-user from his own results, visualization can reach over and bridge that gap. The rest of the presentation is basically about some basic principles of data visualization, which I won't go into further here.

All in all, I think the presentation went quite well.