Hi All,

1) phenix.refine has a standard set of geometry restraints, like other refinement programs: bond, angle, planarity, chirality, dihedral, non-bonded. The clash score is not something that is included in refinement as a target and then used directly; clash scores may be bad at the beginning of structure refinement and should be good at the end.

2) Using H atoms as a riding model should normally help to fix clashes, however it's not 100% guaranteed: some clashes may be "locked" so that refinement cannot overcome the barriers even with H atoms. In this case what Luca Jovine suggests (see one of the emails in this thread) is an excellent thing to do (to add to his recipe: you may also consider using more than the default number of macro-cycles). Running simulated annealing is also helpful (use the "simulated_annealing=true" flag for this).

3) Since the handling of H atoms in phenix.refine is significantly improved in newer versions, I would warmly recommend using the latest phenix.refine available (currently from CCI Apps: http://www.phenix-online.org/download/cci_apps/).

4) Also, clash-related problems may be the result of a suboptimal weight between the X-ray target and the restraints. Although one can adjust the weight manually by trying different values of the "wxc_scale" parameter ("wxu_scale" is the analogue for B-factor refinement), the new version of phenix.refine has an automatic procedure for finding the best weight, where "the best" means the weight that leads to the lowest Rfree (Carsten's suggestion). To use this option:

% phenix.refine model.pdb data.mtz your_parameters.par optimize_wxc=true

Note that this may take a while to run (depending on resolution and structure size), so consider running it overnight.

If none of the above helps, I would be *very* interested to look at the data and model myself to 1) help you find the best strategy to overcome this problem, and 2) find and fix a problem in phenix.refine (if there is one).
As always, I promise to keep all data confidential. Pavel.
Hi Pavel,
4) Also, clash-related problems may be the result of a suboptimal weight between the X-ray target and the restraints. Although one can adjust the weight manually by trying different values of the "wxc_scale" parameter ("wxu_scale" is the analogue for B-factor refinement), the new version of phenix.refine has an automatic procedure for finding the best weight, where "the best" means the weight that leads to the lowest Rfree (Carsten's suggestion). To use this option:
% phenix.refine model.pdb data.mtz your_parameters.par optimize_wxc=true
Wouldn't it be better to try to get the highest free Log Likelihood Gain (LLG) rather than the lowest Rfree? I would worry that trying to optimise Rfree rather than LLG, or even a free Rxpct (à la BUSTER), would be less stable/reliable.

Cheers,
Stephen

--
Dr Stephen Graham
Nuffield Medical Fellow
Division of Structural Biology
Wellcome Trust Centre for Human Genetics
Roosevelt Drive
Oxford OX3 7BN
United Kingdom
Phone: +44 1865 287 549
Hi Stephen,

the answer is: I don't know. To optimize the weight you can use different criteria, like Rfree, deviations from ideal stereochemistry, LLG, or a combination of Rfree and deviations from ideal stereochemistry, etc. Rfree seemed to me the most obvious and easy, but again, I have no strong feelings or experience about what is better (and what "better" actually means in this context).

What do you mean by "less stable/reliable"? The procedure that phenix.refine uses is very simple. The overall target is (for xyz refinement; similar for B-factors):

Etotal = wxc * wxc_scale * Exray + wc * Egeom

wxc is determined as in CNS (ratio of gradient norms), wc = 1.0, and wxc_scale is an adjustable parameter set by default to about 0.5. In most cases at "normal" resolutions the automatic weight is good. If not, then you need to either play with wxc_scale manually or have phenix.refine do it for you automatically by using optimize_wxc. What "optimize_wxc=true" does is just a grid search: it tries different wxc_scale values and chooses the one that produces the lowest Rfree. I don't see why it would be unstable or unreliable. Obviously, one can use any other criterion instead of Rfree.

Cheers,
Pavel.

On 2/13/2008 3:25 AM, Stephen Graham wrote:
Hi Pavel,
4) Also, clash-related problems may be the result of a suboptimal weight between the X-ray target and the restraints. Although one can adjust the weight manually by trying different values of the "wxc_scale" parameter ("wxu_scale" is the analogue for B-factor refinement), the new version of phenix.refine has an automatic procedure for finding the best weight, where "the best" means the weight that leads to the lowest Rfree (Carsten's suggestion). To use this option:
% phenix.refine model.pdb data.mtz your_parameters.par optimize_wxc=true
Wouldn't it be better to try to get the highest free Log Likelihood Gain (LLG) rather than the lowest Rfree? I would worry that trying to optimise Rfree rather than LLG, or even a free Rxpct (à la BUSTER), would be less stable/reliable.
Cheers,
Stephen
the answer is: I don't know. To optimize the weight you can use different criteria, like Rfree, deviations from ideal stereochemistry, LLG, or a combination of Rfree and deviations from ideal stereochemistry, etc. Rfree seemed to me the most obvious and easy, but again, I have no strong feelings or experience about what is better (and what "better" actually means in this context).
Based on Ian Tickle's paper http://journals.iucr.org/d/issues/2007/12/00/gx5119/gx5119.pdf, -LLG is slightly more stable and a better refinement indicator than Rfree.
What do you mean by "less stable/reliable" ?
The procedure that phenix.refine uses is very simple... The overall target is (for xyz refinement; similar for B-factors):
Etotal = wxc * wxc_scale * Exray + wc * Egeom
wxc is determined as in CNS (ratio of gradient norms), wc = 1.0, and wxc_scale is an adjustable parameter set by default to about 0.5. In most cases at "normal" resolutions the automatic weight is good. If not, then you need to either play with wxc_scale manually or have phenix.refine do it for you automatically by using optimize_wxc. What "optimize_wxc=true" does is just a grid search: it tries different wxc_scale values and chooses the one that produces the lowest Rfree. I don't see why it would be unstable or unreliable. Obviously, one can use any other criterion instead of Rfree.
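The grid search described above can be illustrated with a short sketch. This is hypothetical Python, not phenix code: `run_refinement` is a stand-in for a full refinement run at a given wxc_scale, and the list of trial scales is an assumption.

```python
# Sketch of the optimize_wxc idea: try several wxc_scale values and
# keep the one whose refined model has the lowest Rfree.
# `run_refinement` is a mock standing in for a real phenix.refine run.

def total_target(exray, egeom, wxc, wxc_scale, wc=1.0):
    """Etotal = wxc * wxc_scale * Exray + wc * Egeom (the formula above)."""
    return wxc * wxc_scale * exray + wc * egeom

def optimize_wxc(run_refinement, scales=(0.1, 0.25, 0.5, 1.0, 2.0, 4.0)):
    """Grid search: return the wxc_scale giving the lowest Rfree."""
    results = {s: run_refinement(s) for s in scales}
    best = min(results, key=results.get)
    return best, results[best]

def mock_refinement(scale):
    # Pretend Rfree has a shallow minimum near wxc_scale = 0.5.
    return 0.25 + 0.05 * (scale - 0.5) ** 2

best_scale, best_rfree = optimize_wxc(mock_refinement)
```

Note that, as Jianghai's low-resolution case later in the thread shows, the Rfree-optimal scale is not guaranteed to give the best geometry.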
I do have a problem with the default wxc_scale and optimize_wxc=true, but my structure is at low resolution (3.5 Å). The default wxc_scale really couldn't hold up the geometry, and optimize_wxc=true just made it worse. I had to set wxc_scale=0.1 manually. Jianghai
Based on Ian Tickle's paper http://journals.iucr.org/d/issues/2007/12/00/gx5119/gx5119.pdf, -LLG is slightly more stable and a better refinement indicator than Rfree.
What does "stable" mean?
I do have a problem with the default wxc_scale and optimize_wxc=true, but my structure is at low resolution (3.5 Å). The default wxc_scale really couldn't hold up the geometry, and optimize_wxc=true just made it worse. I had to set wxc_scale=0.1 manually.
This is because optimize_wxc minimizes your Rfree and does not look at the geometry (as stated in the manual): it chooses whatever weight gives you the lowest Rfree. I agree that using a combined "Rfree + geometry" criterion could be a better thing to do (especially at lower resolutions). I'll put it on my list of things to try, and implement it if successful. Pavel.
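One way the combined "Rfree + geometry" criterion might look is a score that penalizes poor bond geometry alongside Rfree. This is a hypothetical sketch, not phenix behavior; the score function, the geometry weight, and the trial numbers are all assumptions for illustration.

```python
# Hypothetical combined selection criterion for the wxc_scale grid
# search: lower is better, and bad bond-length geometry is penalized,
# so the search no longer picks the lowest Rfree alone.

def combined_score(rfree, bond_rmsd, geometry_weight=10.0):
    """Rfree plus a penalty proportional to bond-length RMSD (in Angstrom)."""
    return rfree + geometry_weight * bond_rmsd

# Two imagined wxc_scale outcomes: trial_b has a lower Rfree but much
# worse geometry, so the combined score prefers trial_a.
trial_a = combined_score(rfree=0.280, bond_rmsd=0.010)  # 0.28 + 0.10 = 0.38
trial_b = combined_score(rfree=0.270, bond_rmsd=0.025)  # 0.27 + 0.25 = 0.52
```

The geometry_weight here is arbitrary; in practice it would itself need tuning, which is presumably why weighting Rfree against geometry is non-trivial at low resolution.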
Based on Ian Tickle's paper http://journals.iucr.org/d/issues/2007/12/00/gx5119/gx5119.pdf, -LLG is slightly more stable and a better refinement indicator than Rfree.
What does "stable" mean?
I think it means that -LLG has fewer ups and downs around the minimum. I could be wrong. Jianghai
participants (4)
- Jianghai Zhu
- Pavel Afonine
- Pavel Afonine
- Stephen Graham