DeepMind’s Neural Networks Predict Protein Structures


DeepMind, Alphabet’s UK-based artificial intelligence firm, has said it can predict protein structures, a development that could significantly speed up the discovery of new drugs.

Scientists have spent decades trying to work out how proteins, which start as strings of chemical compounds, fold into the three-dimensional forms that determine their behavior.

Usually, it takes years to determine the shape of even a single protein, but DeepMind’s AlphaFold method delivers results within days, accurate to within the width of an atom.

Google purchased DeepMind in 2014 for £400 million.

Protein Study Helping Curb Pandemic

An understanding of proteins and the ways in which they function may benefit researchers working on almost all diseases, including Covid-19.

Neurosymbolic models combine neural networks with symbolic reasoning methods. They tend to be better suited than pure neural networks to prediction, explanation, and reasoning about counterfactual possibilities.

But researchers at DeepMind say that, under the right training conditions, neural networks can outperform neurosymbolic models.

The co-authors define a spatiotemporal reasoning architecture for videos in which all components are learned and all intermediate representations are distributed across the network’s layers, rather than symbolic.

The team says that on a common benchmark it beats the best neurosymbolic models across all question types, with the greatest advantage on counterfactual questions.

DeepMind’s study may have significance for the creation of machines that can describe their experiences.

According to the researchers, and contrary to the findings of some previous studies, models based solely on distributed representations can still perform well on visual tasks that test high-level cognitive functions, at least to the degree that they surpass current neurosymbolic models.

Evidence of Surpassing Current Models

The researchers tested their neural network against CoLlision Events for Video REpresentation and Reasoning (CLEVRER), a dataset that draws on psychological insights.

CLEVRER comprises more than 20,000 five-second videos of colliding objects (three shapes, two materials, and eight colors).

The accompanying questions target four logical reasoning elements: descriptive (e.g. “what colour”), explanatory (“what’s responsible for”), predictive (“what will happen next”), and counterfactual (“what if”).
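The combinatorics above can be made concrete. The following is an illustrative sketch, not DeepMind’s code; the specific shape, material, and color names are assumptions chosen for illustration, while the counts (three shapes, two materials, eight colors) and the four question categories come from the dataset description.

```python
from itertools import product

# Attribute space of a CLEVRER-style object.
# Counts match the article; the specific names are assumed for illustration.
SHAPES = ("cube", "sphere", "cylinder")
MATERIALS = ("metal", "rubber")
COLORS = ("gray", "red", "blue", "green",
          "brown", "cyan", "purple", "yellow")

# Every distinct object appearance is one combination of the three attributes.
object_types = list(product(SHAPES, MATERIALS, COLORS))
print(len(object_types))  # 3 * 2 * 8 = 48 possible appearances

# The four reasoning categories, each with a hypothetical example question.
QUESTION_TYPES = {
    "descriptive": "What color is the object that collides first?",
    "explanatory": "What is responsible for the collision?",
    "predictive": "What will happen next?",
    "counterfactual": "What if the sphere were removed?",
}
```

Small as it is, the enumeration shows why the benchmark is tractable to describe yet hard to reason about: the object vocabulary is tiny, so the difficulty lies in the temporal and causal structure of the videos, not in recognition.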

According to the DeepMind co-authors, their neural network equaled the output of the best neurosymbolic models without pre-training or labeled data, and with 40 percent less training data, challenging the idea that neural networks are more data-hungry than neurosymbolic models.

In addition, on the most difficult counterfactual questions it scored 59.8 percent, higher than both chance and all other models, and it generalized to other tasks, including those in CATER.
