A Review of Deep Learning in Biology and Medicine

June 26, 2017

Deep neural networks are everywhere. They are revolutionizing our day-to-day lives, and this phenomenon no longer needs an introduction.

Deep learning is especially well suited to finding structure in overwhelming amounts of data. Biological data has recently become exactly that – overwhelming – so applying the deep learning toolset to it looks very natural.

You probably want to search the entire Internet for all the scattered cases of deep learning being applied to biomedical data, right? Instead, check out this collective effort by 27 researchers from 23 institutions, who summarized the state of the field in a preprint review recently posted to bioRxiv.1

The review is sizeable (102 pages with 432 references), and the approach to writing it was, like its subject, quite revolutionary: “…we collaboratively wrote this review in the open, enabling anyone with expertise to contribute. We wrote the manuscript in markdown and tracked changes using git. Contributions were handled through GitHub, with individuals submitting “pull requests” to suggest additions to the manuscript” (page 62).

The review's coverage ranges from “AI diagnostics” (patient categorization) to digging deeper into biology to better understand such fundamental processes as gene expression, splicing, and protein folding. Given the ongoing excitement about GWAS, it is not surprising that the variant calling challenge is also being addressed with deep learning tools.

Drug development and drug repurposing appear to be another promising area for deep learning applications.

While deep learning algorithms can achieve “human-level performance across a number of biomedical domains”, they still fail in certain cases where humans do not, because “…these algorithms do not understand the semantics of the objects presented” (page 61). As a possible remedy, the authors point to successful collaboration between humans and AI, an idea some companies have already adopted as a public manifesto. The limited interpretability of neural network-based models (the “black box” phenomenon), a major target of criticism for decades, is still an issue as well (page 47).
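If you are wondering what “looking inside the black box” can mean in practice, here is a minimal sketch (not taken from the review) of gradient-based input attribution, using a toy single-layer network with made-up weights. The idea is simply that the gradient of the prediction with respect to each input feature hints at which features drive the output.

```python
# A minimal, self-contained sketch (not the review's method) of gradient-based
# input attribution -- one common way to peek inside a "black box" model.
# The tiny network and data here are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" single-layer network: y = sigmoid(x @ W + b)
W = rng.normal(size=(4, 1))   # pretend these weights were learned
b = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ W + b)

def input_gradient(x):
    """Gradient of the prediction w.r.t. each input feature (a saliency score)."""
    y = predict(x)
    return (y * (1.0 - y)) * W.ravel()   # chain rule for sigmoid over a linear layer

x = rng.normal(size=(4,))                # one hypothetical input sample
print("prediction:", predict(x))
print("per-feature saliency:", input_gradient(x))
```

For a real multi-layer network the gradients would be computed by a framework's autodiff rather than by hand, but the principle – attributing a prediction back to its inputs – is the same.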

What do these limitations of deep learning mean, if we use our imagination?

Hypothesis 1: humans are irreplaceable.

Hypothesis 2: a more refined artificial intelligence has not been designed yet…

1. Ching T, Himmelstein DS, Beaulieu-Jones BK, et al. Opportunities and Obstacles for Deep Learning in Biology and Medicine. bioRxiv. May 2017. doi:10.1101/142760