The latest news about biotechnologies, biomechanics,
synthetic biology, genomics, biomedical engineering...
Posted: Sep 26, 2013
Scientists develop a more effective molecular modeling process
(Nanowerk News) It’s difficult and time-consuming to produce accurate computer models of molecules, primarily because traditional modeling methods are limited in their ability to handle alternative molecular shapes and, consequently, are subject to multiple errors.
Moreover, the traditional approach uses mathematical formulas or algorithms that are run sequentially, refining the structural details of the model with each separate algorithm—a method that has been revolutionized by personal computing, but still requires labor-intensive human intervention for error correction.
A new method developed by scientists on the Florida campus of The Scripps Research Institute (TSRI) takes another tack entirely, combining existing formulas in a kind of algorithmic stew to gain a better picture of molecular structural diversity that is then used to eliminate errors and improve the final model.
The new process, called Extensive Combinatorial Refinement (ExCoR), could help improve the development of drug candidates that depend to a great degree on detailed structural analysis to determine how they work against specific disease targets.
“Our combinatorial method creates computerized molecular models in a more automated way,” said Kendall Nettles, a TSRI associate professor who led the study. “This is an important component of drug discovery—to do them in a more automated fashion will significantly help the process.”
Improvement and Some Surprises
In the study, the scientists subjected more than 50 molecular structures to 256 distinct combinations of algorithms and refinement factors that eventually totaled more than 12,000 independent refinement runs.
Nettles and his colleagues measured the improvement in the models by what is known as the R-factor, which measures the agreement between the refined model and the experimentally observed data; in other words, how closely the structural model predicts what was actually measured.
“Lowering that R-factor is the goal—that’s the selection process for finding the best algorithms,” Nettles said.
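The selection loop described above can be sketched in a few lines. The following is a toy illustration, not the actual ExCoR pipeline: the refinement "algorithms" are stand-in numeric transforms, and the scoring function mirrors the general form of the crystallographic R-factor (a normalized sum of absolute disagreements, where lower is better). All function names here are hypothetical.

```python
from itertools import combinations

def r_factor(model, data):
    """Toy R-factor: normalized absolute disagreement between
    model values and observed data (lower is better)."""
    return sum(abs(m - d) for m, d in zip(model, data)) / sum(abs(d) for d in data)

def apply_algorithms(model, algorithms):
    """Run a combination of refinement steps in sequence."""
    for algo in algorithms:
        model = algo(model)
    return model

def best_combination(model, data, algorithms, max_size=3):
    """Exhaustively try every combination of refinement algorithms,
    keeping whichever yields the lowest R-factor."""
    best_score, best_combo = r_factor(model, data), ()
    for k in range(1, max_size + 1):
        for combo in combinations(algorithms, k):
            refined = apply_algorithms(list(model), combo)
            score = r_factor(refined, data)
            if score < best_score:
                best_score, best_combo = score, combo
    return best_score, best_combo
```

Because, as the study found, combinations can behave non-additively (two algorithms may each help alone yet hurt together), an exhaustive search like this, rather than greedily stacking individually best steps, is what motivates the combinatorial approach.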
While the study found that no single algorithm consistently produced the best model, the scientists did find some surprises.
“Some algorithms, if you combine them, tend to work better at producing a refined model,” said Research Associate Jerome C. Nwachukwu, the first author of the study. “What we didn’t expect was two algorithms that worked separately but didn’t work in combination.”
It is this strange interplay that makes it impossible to predict which combinations of algorithms will work best for an individual structure.
“The refinement effects of the various algorithms depend on the structure itself,” Nwachukwu said.
In addition to Nettles and Nwachukwu, authors of the study, “Improved Crystallographic Structures using Extensive Combinatorial Refinement,” which will appear in the November 5, 2013 print issue of Structure, included Mark R. Southern of TSRI; James R. Kiefer of Genentech, Inc.; Pavel V. Afonine of Lawrence Berkeley National Laboratory; Paul D. Adams of the University of California, Berkeley; and Thomas C. Terwilliger of Los Alamos National Laboratory.
The study was supported by the National Institutes of Health (Grant numbers CA132022, DK077085, 5U01GM102148 and GM063210) and by the US Department of Energy (Contract No. DE-AC02-05CH11231).
Source: The Scripps Research Institute