Hi,

Has anyone successfully submitted Phenix jobs to a SLURM batch queue? In theory I did successfully submit a job, but it took a while to start, and after completion the log files did not link back to the interface, so I lost the graphical feedback.

I can't tell how the job was submitted. Our submission command is sbatch, which is what I put in Phenix Preferences > Processes > queue command. But locally we put a header on our sbatch submissions that spells out our wall time, account, and MPI expectations/requests. Is there an environment file that Phenix reads to provide this information, or does it submit generically? I wondered whether this was responsible for the long delay before our job started (though it could simply have been a busy queue).

Does anyone know how to get the GUI to read the output files when it loses its connection to the run?

Heidi
There are two things I'd like to do, for which phenix.fmodel looks appropriate, but which I haven't been able to do:

1. Run phenix.fmodel on a phenix-refined pdb file, and get the same FMODEL parameters as phenix.refine itself put into its output mtz file. Can I extract from the log files the scaling factors that phenix.refine used, and if so, how?

2. Run phenix.fmodel on a pdb file, and get (unscaled) FMODEL values that include bulk solvent contributions as well as the usual atomic-based structure factors.

I think my problem arises from trying to understand the documentation at phenix-online.org/documentation/reference/fmodel.html. Specifically, the equation at the very top of the documentation states:

Fmodel = scale * exp(AnisoScale) * (Fcalc + k_sol * exp(-b_sol*s^2/4) * Fmask)

But I had thought(?) that k_sol and b_sol were no longer being used, and I don't see any reference to them in my phenix.refine log files. Example 2 in the documentation page suggests using k_sol=0.35 and b_sol=50, but I don't know how to find the "best" values for these, i.e. the ones that would be closest to what phenix.refine is using. If I try the values in example 2, I don't get the same FMODEL values as phenix.refine reports.

Thanks in advance for pointers...

dave case
Hi David,
1. Run phenix.fmodel on a phenix-refined pdb file, and get the same FMODEL parameters as phenix.refine itself put into its output mtz file. Can I extract from the log files the scaling factors that phenix.refine used, and if so, how?
Instead, why not use the Fmodel that is reported in the MTZ file after any refinement run?
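A minimal cctbx/Python sketch of pulling that array out of the refinement output (the file name "refine_001.mtz" and the "F-model" column label are assumptions for a typical phenix.refine run; check your own labels, e.g. with phenix.mtz.dump):

# List the arrays in a phenix.refine output MTZ and pick out F-model.
# The file name and column label are assumptions; adjust to your own run.
from iotbx import reflection_file_reader

arrays = reflection_file_reader.any_reflection_file(
    file_name="refine_001.mtz").as_miller_arrays()
for ma in arrays:
    labels = ma.info().label_string()
    print(labels)
    if "F-model" in labels:
        fmodel = ma  # the scaled model structure factors written by phenix.refine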
2. Run phenix.fmodel on a pdb file, and get (unscaled) FMODEL values that include bulk solvent contributions as well as the usual atomic-based structure factors.
I think my problem arises from trying to understand the documentation at phenix-online.org/documentation/reference/fmodel.html. Specifically: the equation at the very top of the documentation states:
Fmodel = scale * exp(AnisoScale) * (Fcalc + k_sol * exp(-b_sol*s^2/4) * Fmask)
But I had thought(?) that k_sol and b_sol were no longer being used. I don't see any reference to these in my phenix.refine log files.
Indeed, k_sol and b_sol are no longer used, so you will not see them in the phenix.refine log files. There are still tools in cctbx to obtain k_sol and b_sol if you need them.

Pavel
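For reference, a rough sketch of that cctbx route: build an mmtbx f_model manager from the refined model and the observed data, let it determine the bulk-solvent and anisotropic scaling itself, and take Fmodel from it. The file names below are placeholders and the exact method names may differ between cctbx versions, so treat them as assumptions to verify:

# Rough sketch; file names are placeholders and the API details should be
# verified against your cctbx version.
import iotbx.pdb
from iotbx import reflection_file_reader
import mmtbx.f_model

# Refined model and the observed amplitudes used in refinement
xrs = iotbx.pdb.input(file_name="refined.pdb").xray_structure_simple()
f_obs = None
for ma in reflection_file_reader.any_reflection_file(
        file_name="data.mtz").as_miller_arrays():
    if ma.is_xray_amplitude_array():
        f_obs = ma
        break
# Fresh flags for the sketch; reuse the refinement's own flags if you have them
r_free_flags = f_obs.generate_r_free_flags()

# Let mmtbx determine bulk-solvent and scaling, much as phenix.refine does
fmodel = mmtbx.f_model.manager(
    f_obs=f_obs, r_free_flags=r_free_flags, xray_structure=xrs)
fmodel.update_all_scales()
print("R-work after scaling:", fmodel.r_work())
f_model_array = fmodel.f_model()  # scaled Fmodel, bulk-solvent term included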
Hi Heidi,
Sorry, we don't have a SLURM environment for any rigorous testing, so we
just use "sbatch" for submitting jobs by default. We use some default flags
for job names ("-J") and files for the output/error logs ("-o", "-e"), but
other than that, the job submission step is not really customizable.
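One possible workaround for the site-specific header (a sketch, not something Phenix provides): wrap sbatch in a small executable script that injects your wall-time/account/MPI options, and point Preferences > Processes > queue command at that wrapper instead of plain sbatch. The option values below are placeholders for your cluster's policy:

#!/usr/bin/env python
# Hypothetical wrapper (save as, e.g., ~/bin/phenix_sbatch and make it
# executable); the sbatch option values below are placeholders.
import subprocess
import sys

# Site-specific defaults normally carried in our #SBATCH header
extra = ["--time=24:00:00", "--account=my_account", "--ntasks=1"]

# Pass through whatever Phenix supplies (-J, -o, -e and the job script)
sys.exit(subprocess.call(["sbatch"] + extra + sys.argv[1:]))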
When you say that the job took a while to start, do you mean that the
Phenix job showed up in the queue (check with "squeue -u <user name>")
but sat in the pending state ("PD") instead of the running state ("R")?
Or did the job take a while to even show up in the queue?
Lastly, for the output to show up in the GUI, did the job finish
successfully? If you restart the GUI and try to restore the job after it
finished, do you see any output? For example, in phenix.refine, you should
see graphs for the progress and some final statistics. Another thing to
check is the hidden directory in the project directory that stores
information for the GUI. Again, for phenix.refine, if that is your first
job, you would see 3 files (refine_1.eff, refine_1.log, and refine_1.pkl)
in your "<project directory>/.phenix/project_data" directory. The number
just refers to the job number. If those files are not there, the GUI cannot
update.
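A quick way to check that those bookkeeping files are actually there (the project path below is a placeholder):

# Check for the GUI bookkeeping files of the first refine job.
import glob
import os

project_dir = "/path/to/project"  # placeholder
pattern = os.path.join(project_dir, ".phenix", "project_data", "refine_1.*")
for f in sorted(glob.glob(pattern)):
    print(f, os.path.getsize(f), "bytes")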
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Tel: (510) 486-5709
Fax: (510) 486-5909
Web: https://phenix-online.org
participants (4)
- Billy Poon
- David A Case
- Heidi Schubert
- Pavel Afonine