Dear CCP4 users and experts,

I refined (with PHENIX) a 3.0 Å dataset, obtaining an Rfree of about 0.24 with good geometry (Ramachandran, beta outliers, etc.). Everything seemed fine, especially given the resolution. Because the mtz file I used was quite old and I cannot find my xscale.hkl file, I processed the dataset again (this time with optimizing and polishing) and obtained a "better" file by almost every measure (resolution limit, I/sigma, CC(1/2), Rmeas). I therefore substituted this new mtz file into my last refinement step (the refinement that gave the Rfree = 0.24 mentioned above). Surprisingly, the refinement starts at Rwork = 0.18 and Rfree = 0.19 but ends at 0.20 and 0.22, respectively. Is this usual? I expected my data to become slightly better, but what troubles me is the starting R-values and that they get worse during refinement. Did I do something wrong? Is it reasonable to replace the mtz file with a new one in the last refinement step, or should I start refinement from scratch? For the sake of completeness: I deleted the header of the pdb file because of the R-flag error that occurs when Phenix recognizes that the pdb file was already used with other R-flags. Sorry, I am still a beginner in this field, so I would be very grateful if somebody could explain this situation and my mistake, and whether I need to start refinement from the beginning. Thank you in advance!

Best regards,
Aleksandar

--
Aleksandar Bijelic, MSc.
Institut für Biophysikalische Chemie
Universität Wien
Althanstrasse 14, A-1090 Wien
Tel: +43 1 4277 52536
e-Mail: [email protected]
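A note for readers following the numbers in this thread: Rwork and Rfree are the same residual computed over disjoint reflection sets, which is why contaminating the free set collapses the gap between them. A minimal sketch with made-up amplitudes:

```python
def r_factor(f_obs, f_calc):
    """Crystallographic residual: sum | |Fobs| - |Fcalc| | / sum |Fobs|."""
    num = sum(abs(fo - fc) for fo, fc in zip(f_obs, f_calc))
    return num / sum(f_obs)

# Hypothetical amplitudes: the free set is excluded from refinement,
# so for a healthy model its residual (Rfree) runs above Rwork.
work_obs, work_calc = [100.0, 80.0, 60.0], [90.0, 84.0, 58.0]
free_obs, free_calc = [50.0, 40.0], [42.0, 45.0]
r_work = r_factor(work_obs, work_calc)  # 16/240, about 0.067
r_free = r_factor(free_obs, free_calc)  # 13/90, about 0.144
```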
Dear Aleksandar,

the small gap between R and Rfree is indicative that you contaminated your Rfree set, and very likely overrefined your structure. Did you copy the Rfree set from the old mtz file to the new one? If not, this would explain the small gap between R and Rfree. You can add some noise to your PDB file to make it independent from the Rfree set; maybe this will also help reduce the overrefinement a bit. Do you have many water molecules in the structure? At 3 Å resolution you should not see too many of them, and the more you add, the lower your R-value gets. Also make sure that all atoms have full occupancy; less experienced users sometimes refine occupancies in a chemically unrealistic way.

Regards,
Tim
--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4, D-37077 Goettingen
GPG Key ID = A46BEE1A
Hi Aleksandar,

the answer is in your statement:

"""For the sake of completeness: I deleted the header of the pdb file because of the R-flag error that occurs when Phenix recognizes that the pdb file was already used with other R-flags."""

meaning that the R-free flags in the new and old files are not consistent. In turn, this means comparing R-factors between the two is nonsensical. Once you switch to the new file, simply forget about the previous one along with its R-factors. Of course, in the new file the free-R reflections are not fully free, so you need to remove their memory by running some refinement.

Maybe a cleaner way is to transfer the free-R flags from the old file to the new one, and then generate new flags for the portion of new reflections that does not match the old ones. Again, R-factors will not be comparable between refinements against the old and new files.

Pavel
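Pavel's flag-transfer idea is what the Phenix reflection file editor does for you, but the bookkeeping is easy to show in plain Python. This is a sketch only — real code would operate on cctbx Miller arrays; the 5% free fraction and the dict layout are illustrative assumptions:

```python
import random

def transfer_free_flags(old_flags, new_hkls, free_fraction=0.05, seed=0):
    """Carry R-free flags from an old data set over to a reprocessed one.

    old_flags: dict mapping Miller index (h, k, l) -> bool (True = free).
    new_hkls:  Miller indices of the new data set.
    Indices present in the old set keep their flag; indices that only
    exist in the new data get a fresh flag at the same free fraction.
    """
    rng = random.Random(seed)
    return {hkl: (old_flags[hkl] if hkl in old_flags
                  else rng.random() < free_fraction)
            for hkl in new_hkls}

old = {(1, 0, 0): True, (2, 0, 0): False}
new = [(1, 0, 0), (2, 0, 0), (3, 0, 0)]
flags = transfer_free_flags(old, new)
# (1,0,0) stays free, (2,0,0) stays working; (3,0,0) is assigned anew
```

The key property is exactly what Pavel describes: old assignments survive verbatim, so only the genuinely new reflections carry no refinement "memory".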
Certainly the replacement of the free flags with novel values explains the observation that the "free R" became about equal to the working R, but it does not explain the sharp drop in the working R when you switched to the new version of your observations. This change is hard to understand without some details of your "optimizing and polishing". Did you end up with about the same number of unique reflections? This result would be possible if you discarded a bunch of your weak, poorly estimated reflections and the new data set had a lower completeness. Without details this is pure speculation.

Dale Tronrud
_______________________________________________ phenixbb mailing list [email protected] http://phenix-online.org/mailman/listinfo/phenixbb
Thank you for your responses.

@Pavel: I would prefer to transfer the old flags to my new reflection file, because then I can check whether my new data is indeed better. But as a less experienced user I do not know how to transfer R-free flags. Can I do this with the reflection file editor? And how can I get new R-flags for the new reflections? A long time ago I read that X-PLOR can be used for this, but then I would have to convert my reflection file to X-PLOR format and back to mtz — or am I totally wrong? I would be very grateful if you could point me to a method or program for this. Maybe it is very easy, but as mentioned, I am a beginner in this field. Thank you in advance.

@Dale: I ended up with 56884 unique reflections (completeness 98.55%) for my new file. In comparison, my old file had just 36075 unique reflections (completeness 99.75%), so there is a big difference. Optimization and polishing means that I tried recommended procedures such as re-integrating with the correct space group and refined geometry, using refined beam-divergence values, and comparing STRICT_ABSORPTION_CORRECTION=TRUE with =FALSE. I did discard some reflections (images) for my new file because the data became worse after a certain image number (due to radiation damage). This was not done for my old mtz file, since I processed it myself without any knowledge.

Best regards,
Aleksandar
Go to "Reflection Tools" and open the "Reflection file editor".
Add the two files: the one you want your R-free flags from and the one with your new reflections.
Scroll down and click "Copy all arrays", then delete the ones you don't want in the output arrays.
Go to the "Output options" tab; "Extend existing..." should be ticked by default.
Now, since you have almost doubled your number of reflections, you are still heavily biased on the Rfree set. There will probably be a heap of people who disagree, but I think the best approach is simple simulated annealing. It takes a while, but you get a bunch of rounds of random noise plus refinement. And you will probably end up with a slightly higher Rfree. I think 24% is very low for 3 Å data, but then you have a large unit cell and probably high NCS, and you possibly started from some very good search models. If you have a complex and the structure of one component was determined at better resolution, you can use that as a restraint (constraint? I can never remember the difference).

And yes, data processing has improved immensely in recent years. AIMLESS is great.
Cheers,
Morten
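Morten's "rounds of random noise + refinement" can be sketched as a toy noise-then-minimize loop on a one-dimensional energy. This shows the principle only, not phenix.refine's Cartesian-dynamics annealing; the cooling schedule and the test function are invented for illustration:

```python
import random

def local_minimize(x, f, step=0.01, iters=400):
    """Crude coordinate descent standing in for one refinement cycle."""
    for _ in range(iters):
        for dx in (step, -step):
            if f(x + dx) < f(x):
                x += dx
                break
    return x

def anneal(x, f, temps=(2.0, 1.0, 0.5, 0.1), seed=1):
    """Noise-then-refine loop: kick scaled by 'temperature', refine,
    keep the best model seen over the whole schedule."""
    rng = random.Random(seed)
    x = local_minimize(x, f)
    best = x
    for t in temps:
        trial = local_minimize(x + rng.gauss(0.0, t), f)
        if f(trial) < f(best):
            best = trial
        if f(trial) < f(x):   # greedy acceptance keeps the sketch short
            x = trial
    return best

# toy energy: shallow local minimum near x = 1.9, deeper one near x = 0
f = lambda x: x * x * (x - 2.0) ** 2 + 0.5 * x
plain = local_minimize(2.0, f)   # plain descent stays trapped near 1.9
annealed = anneal(2.0, f)        # random kicks can escape the shallow basin
```

The random kicks are what let the model climb out of whatever local minimum the contaminated free set has memorized, which is why annealing (or Pavel's coordinate shaking) helps "reset" Rfree.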
--
Morten K Grøftehauge, PhD
Pohl Group
Durham University
On Tue, Jul 22, 2014 at 7:35 AM, Morten Grøftehauge < [email protected]> wrote:
you can use that as restraint (constraint? I can never remember the difference).
See here: https://www.phenix-online.org/version_docs/1.9-1692. Short version: restraints keep parameters similar (or moving together), effectively adding "observations"; constraints actually reduce the number of parameters. For example, the individual_sites and individual_adp strategies in phenix.refine use restraints (on geometry and ADP similarity), while the rigid_body and group_adp strategies use constraints (treating groups of atoms collectively). So yes, "restraint" is the correct term for the reference model feature. But it's amazing how often I see the terms confused in methods sections! -Nat
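Nat's distinction can be made concrete with a toy least-squares fit of two "bond lengths" from noisy observations (numbers invented): a similarity restraint adds a penalty term w·(a−b)², effectively an extra observation that pulls the parameters together, while a constraint a = b removes a parameter outright.

```python
def fit_restrained(obs_a, obs_b, w):
    """Minimize sum(a-oi)^2 + sum(b-pj)^2 + w*(a-b)^2 analytically.

    Normal equations:  (na+w)*a -     w*b = sum(obs_a)
                          -w*a + (nb+w)*b = sum(obs_b)
    """
    na, nb = len(obs_a), len(obs_b)
    sa, sb = sum(obs_a), sum(obs_b)
    det = (na + w) * (nb + w) - w * w
    a = ((nb + w) * sa + w * sb) / det
    b = (w * sa + (na + w) * sb) / det
    return a, b

def fit_constrained(obs_a, obs_b):
    """Constraint a == b leaves a single parameter: the pooled mean."""
    pooled = list(obs_a) + list(obs_b)
    return sum(pooled) / len(pooled)

a0, b0 = fit_restrained([1.50, 1.54], [1.60], w=0.0)  # no restraint: plain means
a1, b1 = fit_restrained([1.50, 1.54], [1.60], w=1.0)  # pulled toward each other
c = fit_constrained([1.50, 1.54], [1.60])             # one shared value
```

With w = 0 the fit still has two parameters (1.52 and 1.60); with w > 0 it still has two, just coupled (1.536 and 1.568); the constrained fit has only one (1.547). That parameter-count difference is exactly Nat's point.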
Dear All, Is there a way to convert the LINK data in .geo or elbows.edits to LINK records that can be added to the PDB file? Does anybody have a script to do this? There are too many metal ions to do manually. The RCSB will do this eventually but it’d be nice to see the links in Coot. Thanks.
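I'm not aware of a packaged converter, but the writing half of Jason's request is straightforward fixed-column formatting; the parsing half (.geo / edits files) is left out here because that layout varies. A hypothetical sketch — column positions follow the PDB format v3.3 LINK record; altLoc/iCode are left blank and identity 1555 symmetry operators are assumed; atom names must already be PDB-padded to 4 characters:

```python
def link_record(name1, resn1, chain1, resi1,
                name2, resn2, chain2, resi2, dist):
    """Format one PDB v3.3 LINK record.

    Atom names are taken as pre-padded 4-char PDB names (e.g. "ZN  ",
    " OD1"); altLoc and iCode are left blank, symops fixed at 1555.
    """
    return (f"LINK        {name1:<4s} {resn1:>3s} {chain1:1s}{resi1:>4d}"
            f"                {name2:<4s} {resn2:>3s} {chain2:1s}{resi2:>4d}"
            f"     1555   1555 {dist:5.2f}")

# hypothetical zinc coordinated by an aspartate
line = link_record("ZN  ", " ZN", "A", 101, " OD1", "ASP", "A", 45, 2.05)
```

Looping this over a list of (metal, ligand, distance) tuples extracted from the .geo file and prepending the lines to the PDB header would make the links visible in Coot.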
Thank you very much. Indeed, I started from a good search model, which may explain the low Rfree value. Unfortunately I am now confused: I copied the R-free flags from my old mtz file to my new one (as you described), but Phenix still gives me the error saying that these flags were already used in previous runs. Is this usual? The funny thing is that when I uncheck "update water", Phenix runs without complaining. Why? Sorry for my "stupid" questions, but there is no one in my lab I could ask, since I am the only one trying to solve a structure.

Aleks
On Tue, Jul 22, 2014 at 10:01 AM, Aleksandar Bijelic < [email protected]> wrote:
Thank you very much. Indeed, I started from a good search model (maybe) explaining this low Rfree value. Unfortunately I am now confused, I copied the Rfree flags from my old mtz.file to my new one (as described by you), but Phenix giving me still the error saying that this flags were already used in previous runs ..... is this usual?
Yes, if you are starting refinement from a previous phenix.refine output, it will have the md5 hash corresponding to the old R-free array (at lower resolution), which will not match the new extended R-free flags. It's fine to ignore the error message in this case.

The funny thing is that when I uncheck "update water", Phenix runs without complaining. Why?
That is not normal - are you certain this is the only change? -Nat
Hi Aleksandar,

okay, from the number of emails it is starting to seem to me that the problem is being exaggerated, while I don't see any problem at all. Here is the simplest solution, which can't fail:

- given the new data, forget about the old data and remove the PDB file header;
- modify the model: shake coordinates by 0.5 Å, shake B-factors, remove all waters;
- run phenix.refine first with all defaults and ask it to generate a new free-R set;
- run phenix.refine again using the outcome of the previous run. This time:
  - run, say, 10 macro-cycles;
  - enable automated water update;
  - enable weight optimization;
  - enable whatever else is appropriate given your data resolution.

That is it.

Pavel
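The "shake coordinates by 0.5 Å" step in Pavel's recipe is just an isotropic random displacement of every atom (Phenix tools such as phenix.pdbtools can do this for you). A toy sketch of the idea, with the per-axis sigma chosen so the root-mean-square displacement comes out near the target:

```python
import math
import random

def shake_sites(xyz, target_rms=0.5, seed=0):
    """Add an isotropic Gaussian kick to every atom so that the RMS
    displacement length is about target_rms (in Angstrom)."""
    rng = random.Random(seed)
    sigma = target_rms / math.sqrt(3.0)  # three axes contribute to the length
    return [tuple(c + rng.gauss(0.0, sigma) for c in atom) for atom in xyz]

# fake 2000-atom model; any coordinates would do for the demonstration
coords = [(float(i), 0.0, 0.0) for i in range(2000)]
shaken = shake_sites(coords)
rms = math.sqrt(sum((a - b) ** 2
                    for p, q in zip(coords, shaken)
                    for a, b in zip(p, q)) / len(coords))
# rms comes out close to 0.5 by construction
```

The point of the shake is to make the model forget the free-set reflections it was previously refined against, so the new R-free statistics are honest.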
participants (7)
- Aleksandar Bijelic
- Dale Tronrud
- Morten Grøftehauge
- Nathaniel Echols
- Pavel Afonine
- Phan, Jason
- Tim Gruene