Re: [modeller_usage] Regarding Best Selection of model decoy
- To: Ashish Runthala <ashishr AT bits-pilani.ac.in>
- Subject: Re: [modeller_usage] Regarding Best Selection of model decoy
- From: Modeller Caretaker <modeller-care@ucsf.edu>
- Date: Tue, 16 Feb 2010 11:20:36 -0800
- Cc: modeller_usage@listsrv.ucsf.edu
On 02/13/2010 03:12 AM, Ashish Runthala wrote:
How do I apply the following analyses during the Modeller run
itself, like assess.DOPE, assess.GA341 etc., in model-mult.py?
1. Z score, as Modeller doesn't reach below 1 in Z scores.
Normalized DOPE Z scores are typically the most reliable method of model
assessment.
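As background, a Z score expresses a raw score relative to the mean and standard deviation of a reference distribution. The sketch below z-scores a set of raw decoy scores against each other, purely to illustrate the arithmetic; note that Modeller's normalized DOPE Z score is derived from its own reference statistics, not from the decoy set itself:

```python
from statistics import mean, stdev

def z_scores(raw_scores):
    """Convert raw (e.g. DOPE-like) scores to Z scores relative to
    the mean and standard deviation of the whole set of decoys."""
    m = mean(raw_scores)
    s = stdev(raw_scores)
    return [(x - m) / s for x in raw_scores]

# Lower (more negative) DOPE-like scores are better, so the best
# decoy is the one with the smallest Z score.
scores = [-41000.0, -43500.0, -42250.0, -40800.0]
zs = z_scores(scores)
best = min(range(len(zs)), key=lambda i: zs[i])
```

Here `best` picks out the decoy with the most favorable (lowest) Z score among the set.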
2. RMSD to the template structure with just one template considered.
This doesn't make a whole lot of sense, since Modeller always builds
models that are structurally similar to the template. This will simply
give you a score that strongly correlates with the sequence identity.
3. GDT-TS scores etc.
Sure, if you have the known (native) structure, one benchmark is to
compare the model against the native using the GDT-TS score.
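For reference, GDT-TS averages the fraction of model residues whose C-alpha atoms lie within 1, 2, 4 and 8 Å of the corresponding native residues after superposition. A minimal sketch, assuming the per-residue distances have already been obtained from an optimal superposition (which dedicated tools such as LGA handle for you):

```python
def gdt_ts(distances):
    """GDT-TS from per-residue CA-CA distances (in Angstroms)
    between a superposed model and the native structure: the mean,
    over the cutoffs 1, 2, 4 and 8 A, of the fraction of residues
    within each cutoff, expressed as a percentage."""
    n = len(distances)
    fractions = [sum(1 for d in distances if d <= cut) / n
                 for cut in (1.0, 2.0, 4.0, 8.0)]
    return 100.0 * sum(fractions) / len(fractions)

# Example: four residues at 0.5, 1.5, 3.0 and 9.0 A from the native.
score = gdt_ts([0.5, 1.5, 3.0, 9.0])
```

Higher GDT-TS means closer agreement with the native structure; 100 would be a perfect match at all four cutoffs.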
You can add additional assessment methods to automodel by simply writing
a Python function and adding that Python function to the assess_methods
list in the automodel constructor. For example, the GA341 assessment
method simply looks like:
def GA341(atmsel):
    """Returns the GA341 score of the given model."""
    mdl = atmsel.get_model()
    return ('GA341 score', mdl.assess_ga341())
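To make the plug-in pattern concrete, here is a self-contained sketch of how automodel-style code can call a list of user-supplied assessment functions. The `FakeModel` and `FakeSelection` classes are hypothetical stand-ins for Modeller objects, used only so the example runs without a Modeller installation:

```python
class FakeModel:
    """Stand-in for a Modeller model object (hypothetical)."""
    def assess_ga341(self):
        # A fixed dummy score; real Modeller computes this from the model.
        return (0.95,)

class FakeSelection:
    """Stand-in for the atom selection passed to an assessment function."""
    def __init__(self, mdl):
        self._mdl = mdl
    def get_model(self):
        return self._mdl

def GA341(atmsel):
    """Same shape as the function quoted above: take a selection,
    return a (name, value) tuple."""
    mdl = atmsel.get_model()
    return ('GA341 score', mdl.assess_ga341())

def run_assessments(atmsel, assess_methods):
    """Mimic how automodel applies each registered assessment method."""
    return dict(fn(atmsel) for fn in assess_methods)

results = run_assessments(FakeSelection(FakeModel()), [GA341])
```

In a real script you would instead pass your function to the automodel constructor, e.g. `automodel(env, ..., assess_methods=(assess.GA341, my_score))`, and Modeller would call it on each generated model.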
And to the Modeller caretaker: I have something to share with you. How do I
go about that?
You can find contact details on our website:
http://salilab.org/modeller/contact.html
Ben Webb, Modeller Caretaker
--
modeller-care@ucsf.edu http://www.salilab.org/modeller/
Modeller mail list: http://salilab.org/mailman/listinfo/modeller_usage