On Sat, Apr 27, 2013 at 4:21 PM, Alex Theodossis
Given that PHENIX is now giving me the best results and all scenarios give reasonable results, I am less inclined to be concerned. However, I still find these inconsistencies rather puzzling and am curious to learn what is causing them.
The routines in Phenix strip out and replace hydrogens when calculating the clash score, so I would not expect Phenix versus MolProbity with existing H atoms to give the same results in any case.

There are many other possible explanations for the remaining discrepancies, but the fundamental reason is that the only actual code in common between the Phenix validation and MolProbity is Reduce and Probe (and KiNG, but that's just for visualization). I haven't looked at the actual MolProbity code in years, but it's a combination of other scripts built up over the years, and I don't think they were even written by the same people. With different implementations it is extremely difficult to guarantee numerically identical output. Even worse, we can't even guarantee identical output for the same code built with different compilers. (I often get very different results running Phenix on Windows.) Of course, the underlying statistics and the overall methodology/philosophy are the same, which is why we tend to also refer to the various Phenix tools as "MolProbity".

In general, unless there are specific features in the MolProbity server which you need and which we haven't implemented in Phenix yet, there is no point in running both tools. Jeff has begun modernizing the server code to use CCTBX as the backend, which should make the implementations more consistent in the future; however, even then I would not expect the results to always be in perfect agreement.

-Nat
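To make the hydrogen point concrete: if existing H atoms are discarded and rebuilt before the clash calculation, the input file's hydrogen positions never influence the score. The following is only a hypothetical sketch of the stripping step, written against the plain-text PDB format (element symbol in columns 77-78); it is not the actual Phenix routine.

```python
# Hypothetical sketch, NOT the actual Phenix code: drop existing
# hydrogens from a PDB file before re-adding them with Reduce-style
# placement, so the clash score never sees the input H positions.
def strip_hydrogens(pdb_lines):
    kept = []
    for line in pdb_lines:
        if line.startswith(("ATOM", "HETATM")):
            element = line[76:78].strip()  # element symbol, PDB columns 77-78
            if element == "H":
                continue  # discard this hydrogen record
        kept.append(line)  # keep non-hydrogen atoms and all other records
    return kept
```

Two files that differ only in their hydrogen records would give identical input to the clash calculation after this step, which is why scores computed on rebuilt hydrogens cannot be compared directly with scores computed on the hydrogens already present in the file.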