
Let Google Tell You: How to Drive New Drug Development with Big Data

2015-03-04 | 转化医学网 (360zhyx.com)
Summary
From answering health-related questions in its search results to building a health data platform, Google has gradually become ingrained in our daily health and wellbeing habits. Behind the scenes, however, the Internet giant is also working steadily to push drug development forward and thereby speed up the treatment of human disease.


Working with Stanford University's Pande Lab, Google Research has introduced a paper titled "Massively Multitask Networks for Drug Discovery," in which the researchers show how data drawn from a myriad of different sources can be used to better screen and identify chemical compounds, accelerating the search for drugs that effectively treat disease.

While the paper itself does not reveal any major medical breakthrough, it does lay out how deep learning can be used to crunch huge datasets and thereby speed up drug discovery. Deep learning is a machine learning approach in which systems known as artificial neural networks are trained on large amounts of information derived from key data inputs, after which new information is introduced into this complex mix; readers may also want to look at the five emerging deep-learning startups to watch in 2015.
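
To make the "deep learning" idea above concrete, here is a minimal sketch of training a small artificial neural network to predict compound activity from binary fingerprint-style features. The data, layer sizes, and training loop are purely illustrative assumptions and are not taken from the Google/Stanford paper.

```python
# Minimal sketch: a tiny feed-forward neural network trained on synthetic
# "fingerprint" bits to predict whether a compound is active in an assay.
# All data here is random and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_compounds, n_bits, n_hidden = 1000, 128, 64
X = rng.integers(0, 2, size=(n_compounds, n_bits)).astype(float)  # fingerprint bits
true_w = rng.normal(size=n_bits)
y = (X @ true_w > 0).astype(float)                                # synthetic activity labels

# One hidden layer with ReLU, sigmoid output, trained by plain gradient descent.
W1 = rng.normal(scale=0.1, size=(n_bits, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 0.0
lr = 0.1

for step in range(200):
    h = np.maximum(X @ W1 + b1, 0.0)            # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # predicted probability of activity
    grad_logit = (p - y) / n_compounds          # gradient of the cross-entropy loss
    gW2 = h.T @ grad_logit
    gb2 = grad_logit.sum()
    grad_h = np.outer(grad_logit, W2) * (h > 0)
    gW1 = X.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate on the training set (illustration only, no held-out split).
h = np.maximum(X @ W1 + b1, 0.0)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
print("training accuracy:", ((p > 0.5) == y).mean())
```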

Google Research says this work shows that the models are able to use data from many different experiments to increase prediction accuracy across many diseases. To the team's knowledge, this is the first time the effect of adding extra data has been quantified in this domain, and the results suggest that even more data could improve performance further. The study was carried out at a scale 18 times larger than previous work, drawing on a total of 37.8 million data points across more than 200 individual biological processes.
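
The "massively multitask" setup described above can be sketched as a single network with shared hidden layers and one small output head per biological assay, so that data from many experiments shapes a common representation. The layer sizes, the task count, and the use of PyTorch below are assumptions for illustration, not the configuration reported in the paper.

```python
# Sketch of a multitask network: a shared trunk plus one output head per assay,
# so every task's data contributes gradients to the shared layers.
# Assumes PyTorch is installed; sizes are illustrative only.
import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    def __init__(self, n_features=1024, n_hidden=512, n_tasks=200):
        super().__init__()
        # Shared trunk: trained jointly on data from all assays.
        self.shared = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        # One small head per biological assay (task).
        self.heads = nn.ModuleList([nn.Linear(n_hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.shared(x)
        # One activity logit per task for each compound in the batch.
        return torch.cat([head(h) for head in self.heads], dim=1)

model = MultitaskNet()
fingerprints = torch.rand(32, 1024)   # a batch of hypothetical compound features
logits = model(fingerprints)          # shape: (32, 200), one column per assay
print(logits.shape)
```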

Because of the study's large scale, the researchers were able to probe very carefully how sensitive these models are to a variety of changes in model structure and input data. In the paper they examine not just the model's performance but also why it performs well and what can be expected of similar models in the future. This feeds into a broader trend we have been seeing: many large technology companies are investing heavily in deep learning. Last year Twitter, Google, and Yahoo all acquired deep-learning startups, while Facebook and Baidu made significant hires in the field.

At VentureBeat's HealthBeat conference last October, we also saw how the future of health care could lean heavily on robotics, analytics, and artificial intelligence. Feeding into this diagnostic element is the discovery of new treatments, which is likewise turning increasingly to AI, big data, and deep learning, exactly the direction of this latest research from Google and Stanford.

By automating and improving predictive techniques, this work should not only speed up drug discovery but also cut its cost. As the Google report puts it, discovering new treatments for human disease is an immensely complicated challenge: a prospective drug must attack the source of an illness while also satisfying the body's restrictive metabolic and toxicity constraints. Traditionally, drug discovery has been a long process that takes years from start to finish, with high rates of failure along the way.

In short, testing millions of compounds takes a long time, so any technique that increases the chances of finding an effective compound is a welcome choice, and machine learning at scale may offer enormous help here. (转化医学网360zhyx.com)
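
As a toy illustration of how predictive models reduce the screening burden, the snippet below ranks a hypothetical compound library by predicted activity and keeps only the top-ranked fraction for physical testing. The scores are random stand-ins rather than the output of any real model.

```python
# Toy virtual-screening triage: score a (hypothetical) compound library and
# send only the top-ranked fraction to the lab. Scores are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_library = 1_000_000

predicted_activity = rng.random(n_library)   # stand-in for a model's scores in [0, 1]
top_k = 10_000                               # budget of compounds we can afford to assay

ranked = np.argsort(predicted_activity)[::-1]
shortlist = ranked[:top_k]                   # compound indices to test experimentally

print(f"screening {top_k} of {n_library} compounds ({top_k / n_library:.1%})")
```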

The above is an original translation compiled by 转化医学网. For permission to reprint, please contact info@360zhyx.com.

Recommended reading from 转化医学网 (the original English article):

From answering health-related questions in its search results to a fitness data platform for developers, Google is becoming increasingly ingrained in the fabric of our daily health-and-wellbeing habits. But behind the scenes, the Internet giant is also working to expedite the discovery of drugs that could prove vital to finding cures for many human ills.
Working with Stanford University’s Pande Lab, Google Research has introduced a paper called “Massively Multitask Networks for Drug Discovery” [PDF], which looks at how using data from a myriad of different sources can better determine which chemical compounds will serve as “effective drug treatments for a variety of diseases.”
While the paper itself doesn’t reveal any major medical breakthroughs, it does point to how deep learning can be used to crunch huge data-sets and accelerate drug discovery. Deep learning is a system that involves training systems called artificial neural networks on lots of information derived from key data inputs, and then introducing new information to the mix. You might want to check out our guide to five emerging deep learning startups to watch in 2015.
“One encouraging conclusion from this work is that our models are able to utilize data from many different experiments to increase prediction accuracy across many diseases,” explained the multi-authored Google Research blog post. “To our knowledge, this is the first time the effect of adding additional data has been quantified in this domain, and our results suggest that even more data could improve performance even further.”
Google said it worked at a scale “18x larger than previous work,” and tapped a total of 37.8 million data points across 200+ individual biological processes.
“Because of our large scale, we were able to carefully probe the sensitivity of these models to a variety of changes in model structure and input data,” Google said. “In the paper, we examine not just the performance of the model but why it performs well and what we can expect for similar models in the future.”
This feeds into a bigger trend we’ve seen of late, with many of the big tech companies investing resources in deep learning. Last year, Twitter, Google, and Yahoo all acquired deep learning startups, while Facebook and Baidu made significant hires in this realm. Netflix and Spotify carried out work in this area too.
At VentureBeat’s HealthBeat conference last October, we looked at how the future of health care could lean heavily on robotics, analytics, and artificial intelligence (AI). Feeding into this diagnostic element is treatment discovery, which is increasingly turning to AI, big data, and deep learning too, as we’re seeing with this latest research from Google and Stanford.
By automating and improving predictive techniques, this should not only speed up the drug discovery process but cut the costs. From the Google report:
Discovering new treatments for human diseases is an immensely complicated challenge. Prospective drugs must attack the source of an illness, but must do so while satisfying restrictive metabolic and toxicity constraints. Traditionally, drug discovery is an extended process that takes years to move from start to finish, with high rates of failure along the way.
In short, testing millions of compounds can take a long time, so anything that can increase the chances of striking a successful combination can only be a good thing. This is where machine learning at scale may help.
