While philosophically I see no difference between a spherical resolution cutoff and an elliptical one, a drop in the free R can't be the justification for the switch. A model cannot be made more "publishable" simply by discarding data.

We have a whole bunch of empirical guides for judging the quality of this and that in our field. We determine the resolution limit of a data set (and imposing a "limit" is itself another empirical choice) based on Rmerge, Rmeas, or Rpim getting too big, or I/sigI getting too small, and there is no agreement on how big or small is too big or small. We then have other empirical guides for judging the quality of the models we produce (e.g. Rwork, Rfree, rmsds of various sorts). Most people seem to recognize that these criteria need to be applied differently at different resolutions: a lower-resolution model is allowed a higher Rfree, for example. Isn't it also true that a model refined against data cut at I/sigI of 1 would be expected to have a higher free R than a model refined against data cut at I/sigI of 2? Surely we cannot say that the decrease in free R that results from changing the cutoff criterion from 1 to 2 reflects an improved model. It is the same model, after all.

Sometimes this shifting application of empirical criteria eases the adoption of new technology. Certainly the TLS parametrization of atomic motion has been widely accepted because it results in lower working and free Rs. I've seen it knock 3 to 5 percent off, and while that certainly means the model fits the data better, I'm not sure that the quality of the hydrogen-bond distances, van der Waals distances, or maps is any better, and those details are what I really look for in a model. On the other hand, there has been good evidence through the years that there is useful information in the data beyond an I/sigI of 2 or an Rmeas of 100%, but getting people to use these data has been a hard slog.
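That cutoff sensitivity is easy to demonstrate with a toy calculation. The sketch below uses simulated numbers, not real diffraction data, and a deliberately crude I/sigI proxy; the only point is that one fixed "model", scored against the same "observations" under two different I/sigI cutoffs, produces two different R values even though nothing about the model has changed.

```python
import random

# Toy illustration only: amplitudes, noise levels, and the I/sigI proxy
# are all invented. One fixed model, two data-selection rules.
random.seed(0)

reflections = []
for _ in range(10000):
    f_true = random.expovariate(1.0 / 100.0) + 10.0   # arbitrary amplitude scale
    sigma = random.uniform(5.0, 80.0)                 # arbitrary noise level
    f_obs = abs(f_true + random.gauss(0.0, sigma))    # noisy "observation"
    f_calc = f_true * random.uniform(0.9, 1.1)        # imperfect but fixed "model"
    snr = f_obs / sigma                               # crude signal-to-noise proxy
    reflections.append((f_obs, f_calc, snr))

def r_factor(refs, cutoff):
    """Conventional R = sum|Fobs - Fcalc| / sum Fobs over the kept reflections."""
    kept = [(fo, fc) for fo, fc, snr in refs if snr >= cutoff]
    num = sum(abs(fo - fc) for fo, fc in kept)
    den = sum(fo for fo, _ in kept)
    return num / den, len(kept)

for cutoff in (1.0, 2.0):
    r, n = r_factor(reflections, cutoff)
    print(f"cutoff >= {cutoff}: {n} reflections kept, R = {r:.3f}")
```

The stricter cutoff discards the noisiest reflections and so reports a lower R for the identical model, which is the point being argued above.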
The reason for this reluctance is that the R values of the resulting models are higher. Of course they are higher! That does not mean the models are of poorer quality, only that data with lower signal-to-noise have been used, data that were discarded in the models you used to develop your "gut feeling" for the meaning of R. When you change your criteria for selecting data, you have to discard your old notions about the acceptable values of empirical quality measures. You either have to normalize your measure, as Phil Jeffrey recommends, by calculating your R's with the same set of reflections, or you have to turn to objective measures of map quality.

Dale Tronrud

P.S. It is entirely possible that refining a model against a very optimistic resolution cutoff and calculating the map to a lower resolution might be better than throwing out the data altogether.

On 5/1/2012 10:34 AM, Kendall Nettles wrote:
I have seen dramatic improvements in maps and in behavior during refinement following use of the UCLA anisotropy server in two different cases. For one of them the Rfree went from 33% to 28%. I don't think the structure would have been publishable otherwise.

Kendall
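For readers unfamiliar with the server Kendall mentions: ellipsoidal truncation amounts to a direction-dependent resolution cutoff. The sketch below is purely illustrative; the orthorhombic cell, the per-axis limits, and the selection rule are an invented example of the geometric idea, not the actual algorithm of the UCLA server or of Strong et al.

```python
import math

# Invented example: an orthorhombic cell with different effective
# resolution limits along the three reciprocal axes.
a, b, c = 50.0, 60.0, 70.0      # hypothetical cell edges (Angstrom)
limits = (2.0, 2.4, 3.1)        # hypothetical limits along a*, b*, c* (Angstrom)

def d_spacing(h, k, l):
    """Overall d-spacing of reflection (h,k,l) for an orthorhombic cell."""
    return 1.0 / math.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

def inside_ellipsoid(h, k, l):
    """Keep a reflection if it lies inside the ellipsoid whose semi-axis
    along each reciprocal direction is 1/d_limit for that direction."""
    s = (h / a, k / b, l / c)   # components of the scattering vector
    return sum((si * di) ** 2 for si, di in zip(s, limits)) <= 1.0

for hkl in ((24, 0, 0), (0, 0, 24)):
    print(hkl, round(d_spacing(*hkl), 2), inside_ellipsoid(*hkl))
```

With these made-up numbers, the 2.92 Å reflection along the weak c* direction is rejected while the 2.08 Å reflection along a* is kept, which is exactly the behavior no single spherical cutoff can reproduce.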
On May 1, 2012, at 11:10 AM, Bryan Lepore wrote:
On Mon, Apr 30, 2012 at 4:22 AM, Phil Evans wrote:
Are anisotropic cutoffs desirable?
Is there a peer-reviewed publication, perhaps from Acta Crystallographica, which describes precisely why scaling or refinement programs are inadequate to ameliorate the problem of anisotropy, and which argues why the method applied in Strong et al. (2006) satisfies this need?
-Bryan

_______________________________________________
phenixbb mailing list
[email protected]
http://phenix-online.org/mailman/listinfo/phenixbb