pyyaml - iotbx.pdb incompatibility
Hello All,

There is a strange incompatibility between pyyaml and iotbx.pdb that depends on the order of import. If yaml is imported first, there is no problem. If iotbx.pdb is imported first, then importing yaml crashes. This is the yaml that comes with enthought python, but I don't think it's using libyaml, which is the C implementation. So the yaml, from what I can tell, is pure python.

I'm wondering if anyone else can reproduce this error.

James

chernev 185% cctbx.python
Enthought Python Distribution -- www.enthought.com
Version: 7.3-2 (64-bit)

Python 2.7.3 |EPD 7.3-2 (64-bit)| (default, Apr 12 2012, 11:14:05)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "credits", "demo" or "enthought" for more information.
py> import yaml
py> from iotbx import pdb
py> raise SystemExit()

chernev 186% cctbx.python
Enthought Python Distribution -- www.enthought.com
Version: 7.3-2 (64-bit)

Python 2.7.3 |EPD 7.3-2 (64-bit)| (default, Apr 12 2012, 11:14:05)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "credits", "demo" or "enthought" for more information.
py> from iotbx import pdb
py> import yaml
show_stack(1): /Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/yaml/constructor.py(256) SafeConstructor
show_stack(2): /Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/yaml/constructor.py(161) <module>
show_stack(3): /Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/yaml/loader.py(8) <module>
show_stack(4): /Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/yaml/__init__.py(8) <module>
show_stack(5): <stdin>(1) <module>
libc backtrace (49 frames, most recent call last):
  50 python2.7                0x0000000100000f54 start + 52
  49 Python                   0x0000000100108576 Py_Main + 3318
  48 Python                   0x00000001000f2c6e PyRun_AnyFileExFlags + 126
  47 Python                   0x00000001000f2b2e PyRun_InteractiveLoopFlags + 78
  46 Python                   0x00000001000f28eb PyRun_InteractiveOneFlags + 379
  45 Python                   0x00000001000cd2c6 PyEval_EvalCode + 54
  44 Python                   0x00000001000ccfc5 PyEval_EvalCodeEx + 1733
  43 Python                   0x00000001000c77fe PyEval_EvalFrameEx + 9502
  42 Python                   0x00000001000c4267 PyEval_CallObjectWithKeywords + 87
  41 Python                   0x000000010000d052 PyObject_Call + 98
  40 Python                   0x00000001000be823 builtin___import__ + 131
  39 Python                   0x00000001000e5cf8 PyImport_ImportModuleLevel + 344
  38 Python                   0x00000001000e536a load_next + 234
  37 Python                   0x00000001000e511a import_submodule + 314
  36 Python                   0x00000001000e4f59 load_package + 409
  35 Python                   0x00000001000e4632 load_source_module + 466
  34 Python                   0x00000001000e3981 PyImport_ExecCodeModuleEx + 209
  33 Python                   0x00000001000cd2c6 PyEval_EvalCode + 54
  32 Python                   0x00000001000ccfc5 PyEval_EvalCodeEx + 1733
  31 Python                   0x00000001000c77fe PyEval_EvalFrameEx + 9502
  30 Python                   0x00000001000c4267 PyEval_CallObjectWithKeywords + 87
  29 Python                   0x000000010000d052 PyObject_Call + 98
  28 Python                   0x00000001000be823 builtin___import__ + 131
  27 Python                   0x00000001000e5cf8 PyImport_ImportModuleLevel + 344
  26 Python                   0x00000001000e536a load_next + 234
  25 Python                   0x00000001000e511a import_submodule + 314
  24 Python                   0x00000001000e4632 load_source_module + 466
  23 Python                   0x00000001000e3981 PyImport_ExecCodeModuleEx + 209
  22 Python                   0x00000001000cd2c6 PyEval_EvalCode + 54
  21 Python                   0x00000001000ccfc5 PyEval_EvalCodeEx + 1733
  20 Python                   0x00000001000c77fe PyEval_EvalFrameEx + 9502
  19 Python                   0x00000001000c4267 PyEval_CallObjectWithKeywords + 87
  18 Python                   0x000000010000d052 PyObject_Call + 98
  17 Python                   0x00000001000be823 builtin___import__ + 131
  16 Python                   0x00000001000e5cf8 PyImport_ImportModuleLevel + 344
  15 Python                   0x00000001000e536a load_next + 234
  14 Python                   0x00000001000e511a import_submodule + 314
  13 Python                   0x00000001000e4632 load_source_module + 466
  12 Python                   0x00000001000e3981 PyImport_ExecCodeModuleEx + 209
  11 Python                   0x00000001000cd2c6 PyEval_EvalCode + 54
  10 Python                   0x00000001000ccfc5 PyEval_EvalCodeEx + 1733
   9 Python                   0x00000001000cb0e6 PyEval_EvalFrameEx + 24070
   8 Python                   0x00000001000ccfc5 PyEval_EvalCodeEx + 1733
   7 Python                   0x00000001000c7ea0 PyEval_EvalFrameEx + 11200
   6 Python                   0x0000000100010af8 PyNumber_Multiply + 40
   5 Python                   0x000000010000c569 binary_op1 + 137
   4 ???                      0x0000000000000aa9 0x0 + 2729
   3 libsystem_c.dylib        0x00007fff84b0ccfa _sigtramp + 26
   2 boost_python_meta_ext.so 0x00000001007013c0 initboost_python_meta_ext + 0
Floating-point error (Python and libc call stacks above)
This crash may be due to a problem in any imported Python module, including
modules which are not part of the cctbx project. To disable the traps leading
to this message, define these environment variables (e.g. assign the value 1):
  BOOST_ADAPTBX_FPE_DEFAULT
  BOOST_ADAPTBX_SIGNALS_DEFAULT
This will NOT solve the problem, just mask it, but may allow you to proceed in
case it is not critical.

--
James Stroud
Affiliate
Howard Hughes Medical Institute
UCLA-DOE Institute for Genomics and Proteomics
Hello All,

Below shows how to reproduce the error directly.

James

chernev 314% cctbx.python
Enthought Python Distribution -- www.enthought.com
Version: 7.3-2 (64-bit)

Python 2.7.3 |EPD 7.3-2 (64-bit)| (default, Apr 12 2012, 11:14:05)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "credits", "demo" or "enthought" for more information.
py> inf = 1e300
py> inf * inf
inf
py> from iotbx import pdb
py> inf * inf
show_stack(1): <stdin>(1) <module>
libc backtrace (13 frames, most recent call last):
  14 python2.7                0x0000000100000f54 start + 52
  13 Python                   0x0000000100108576 Py_Main + 3318
  12 Python                   0x00000001000f2c6e PyRun_AnyFileExFlags + 126
  11 Python                   0x00000001000f2b2e PyRun_InteractiveLoopFlags + 78
  10 Python                   0x00000001000f28eb PyRun_InteractiveOneFlags + 379
   9 Python                   0x00000001000cd2c6 PyEval_EvalCode + 54
   8 Python                   0x00000001000ccfc5 PyEval_EvalCodeEx + 1733
   7 Python                   0x00000001000c7ea0 PyEval_EvalFrameEx + 11200
   6 Python                   0x0000000100010af8 PyNumber_Multiply + 40
   5 Python                   0x000000010000c569 binary_op1 + 137
   4 ???                      0x0000000000000002 0x0 + 2
   3 libsystem_c.dylib        0x00007fff84b0ccfa _sigtramp + 26
   2 boost_python_meta_ext.so 0x00000001007013c0 initboost_python_meta_ext + 0
Floating-point error (Python and libc call stacks above)
This crash may be due to a problem in any imported Python module, including
modules which are not part of the cctbx project. To disable the traps leading
to this message, define these environment variables (e.g. assign the value 1):
  BOOST_ADAPTBX_FPE_DEFAULT
  BOOST_ADAPTBX_SIGNALS_DEFAULT
This will NOT solve the problem, just mask it, but may allow you to proceed in
case it is not critical.

On Apr 15, 2013, at 9:24 PM, James Stroud wrote:
Hello All,
There is a strange incompatibility between pyyaml and iotbx.pdb that depends on the order of import. If yaml is imported first, there is no problem. If iotbx.pdb is imported first, then importing yaml crashes. This is the yaml that comes with enthought python, but I don't think it's using libyaml, which is the C implementation. So the yaml, from what I can tell, is pure python.
I'm wondering if anyone else can reproduce this error.
James
On Mon, Apr 15, 2013 at 8:24 PM, James Stroud wrote:
There is a strange incompatibility between pyyaml and iotbx.pdb that depends on the order of import. If yaml is imported first, there is no problem. If iotbx.pdb is imported first, then importing yaml crashes. This is the yaml that comes with enthought python, but I don't think it's using libyaml, which is the C implementation. So the yaml, from what I can tell, is pure python.
...
Floating-point error (Python and libc call stacks above)
This crash may be due to a problem in any imported Python module, including modules which are not part of the cctbx project. To disable the traps leading to this message, define these environment variables (e.g. assign the value 1):
  BOOST_ADAPTBX_FPE_DEFAULT
  BOOST_ADAPTBX_SIGNALS_DEFAULT
This will NOT solve the problem, just mask it, but may allow you to proceed in case it is not critical.
We're using a feature in Boost that traps certain C++ issues whose effect might otherwise be undefined. Some of these wouldn't normally crash the interpreter, but they are genuinely bugs, and we want to catch as many as possible in our code. (There should be relatively few of these in CCTBX aside from one module in particular, but we do see this error occasionally in related code such as Phaser or Resolve, etc.) The problem, of course, is that it catches bugs in other people's code too - I had assumed these were all in C/C++, but your example of "inf * inf" indicates otherwise. (I realize this probably isn't really a bug, but that would be the exception.) We disable the traps entirely for GUI programs because some of the modules we use for those have the same effect.

As the error message above implies, you can avoid the problem by setting those environment variables. This can be done just for the CCTBX dispatchers: create a file named "dispatcher_include_epd.sh" in your build directory containing these lines:

  export BOOST_ADAPTBX_FPE_DEFAULT=1
  export BOOST_ADAPTBX_SIGNALS_DEFAULT=1

and run libtbx.refresh, and from then on cctbx.python (or any other cctbx command) will have the traps disabled. (But note the warning above.)

For the record, you can make any other temporary modifications you wish to the environment using the include mechanism; this is how we set the paths for various GUI modules in Phenix. (And starting next month, also in a subset of CCTBX bundles.)

Is your example more or less what YAML does that triggers the crash?

-Nat
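For a single script rather than every dispatcher, a hypothetical variant of the same idea is to set the variables from Python itself, before the first cctbx import. This is only a sketch; it assumes the variables are still consulted when set this way inside the running process, which may not hold for every setup:

  import os

  # Hypothetical per-script alternative (unverified): set the variables before
  # any cctbx module (and hence boost_python_meta_ext) is imported, so the
  # floating-point traps are never armed for this process.
  os.environ["BOOST_ADAPTBX_FPE_DEFAULT"] = "1"
  os.environ["BOOST_ADAPTBX_SIGNALS_DEFAULT"] = "1"

  from iotbx import pdb
  import yaml   # import order no longer matters once the traps are disabled

As with the dispatcher include file, this only masks the underlying problem rather than fixing it.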
On Apr 15, 2013, at 11:21 PM, Nathaniel Echols wrote:
Is your example more or less what YAML does that triggers the crash?
Yes, the example I gave is essentially the YAML code distilled. For some reason yaml wants to make infs and nans. Here is around line 256 of yaml/constructor.py:

  inf_value = 1e300
  while inf_value != inf_value*inf_value:   # <= line 256
      inf_value *= inf_value
  nan_value = -inf_value/inf_value   # Trying to make a quiet NaN (like C99).

But what they are doing really doesn't make any sense, because infs and nans can be made in python without all of the gymnastics:

  >>> inf_value = float('inf')
  >>> nan_value = float('nan')

I would agree that boost's trapping of the conditional on line 256 is not a bug. If 1e300 * 1e300 were defined, they wouldn't "need" to use a loop to find the nan.

James
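To make the distinction concrete, here is a small sketch (run under cctbx.python, assuming, as in the sessions above, that importing iotbx.pdb is what arms the traps): multiplying two finite 1e300 values overflows a double, which the trap turns into the SIGFPE crash, whereas float('inf') * float('inf') is already infinite and signals no overflow at all.

  from iotbx import pdb    # arms the Boost floating-point traps

  inf = float('inf')
  print(inf * inf)         # fine: inf * inf is inf, no overflow is signalled

  big = 1e300
  print(big * big)         # overflow is trapped here, killing the interpreter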
On Apr 15, 2013, at 11:21 PM, Nathaniel Echols wrote:
We're using a feature in Boost that traps certain C++ issues whose effect might otherwise be undefined. Some of these wouldn't normally crash the interpreter,
Is there any hope that this feature could instead raise an exception or warning, based on the value of some environment variable? As it is, you either have to be blind to the potential for such problems or you must accept that your interpreter is subject to sudden crashing.

James
Hi James,
Is there any hope that this feature could instead raise an exception or warning, based on the value of some environment variable? As it is, you either have to be blind to the potential for such problems or you must accept that your interpreter is subject to sudden crashing.
there is a more fine-grained solution:

  from boost.python import ext

  try:
      division_by_zero = ext.is_division_by_zero_trapped()
      ext.trap_exceptions(division_by_zero=False)
      <here you make the call to the library that has issues with NaN and Inf>
  finally:
      ext.trap_exceptions(division_by_zero=division_by_zero)

Never ever use that without the try ... finally, as it is utterly important to restore the default trapping in all circumstances. This would be best encapsulated as a context manager actually, so that we can write it simply as

  with boost.python.trapping(division_by_zero=False):
      <here you make the call to the library that has issues with NaN and Inf>

I leave that as an exercise to whomever is interested! FYI we have C++ tools that make it easy to safeguard a call to a C/C++ function from an external library that has such issues.
Sorry James, I hit "Send" by mistake before I was finished correcting the cut-n-paste I did from a case where I only needed to protect against divisions by zero. The code you need is actually

  try:
      division_by_zero = ext.is_division_by_zero_trapped()
      invalid = ext.is_invalid_trapped()
      overflow = ext.is_overflow_trapped()
      ext.trap_exceptions(division_by_zero=False, invalid=False, overflow=False)
      <here you make the call to the library that has issues with NaN and Inf>
  finally:
      ext.trap_exceptions(division_by_zero=division_by_zero, invalid=invalid, overflow=overflow)

Best wishes,

Luc
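One possible shape for the context manager suggested above, built from the ext functions already shown together with contextlib; this is only a sketch, and the version later committed as boost.python.trapping may differ in detail:

  import contextlib
  from boost.python import ext

  @contextlib.contextmanager
  def trapping(division_by_zero=True, invalid=True, overflow=True):
      # Save the current trapping state, switch to the requested state, and
      # restore the saved state on exit, even if the body raises.
      saved = (ext.is_division_by_zero_trapped(),
               ext.is_invalid_trapped(),
               ext.is_overflow_trapped())
      ext.trap_exceptions(division_by_zero=division_by_zero,
                          invalid=invalid,
                          overflow=overflow)
      try:
          yield
      finally:
          ext.trap_exceptions(division_by_zero=saved[0],
                              invalid=saved[1],
                              overflow=saved[2])

  # usage, mirroring the try ... finally above:
  #   with trapping(division_by_zero=False, invalid=False, overflow=False):
  #       import yaml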
This would be best encapsulated as a context manager actually, so that we can write it simply as
with boost.python.trapping(division_by_zero=False): <here you make the call to the library that has issues with NaN and Inf>
I have just committed the code making it possible to use that terser syntax. In your case, again, you want instead to pass overflow=False and invalid=False, to which you can still add division_by_zero=False, just in case future versions of pyyaml cause problems with that too!

Best wishes,

Luc
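Put together, a workaround for the original report would then look roughly like this; a sketch only, assuming the newly committed boost.python.trapping accepts the same keyword arguments as ext.trap_exceptions above:

  import boost.python
  from iotbx import pdb

  # Disable only the relevant traps while yaml builds its inf/nan constants;
  # the previous trapping state is restored when the block exits.
  with boost.python.trapping(overflow=False, invalid=False,
                             division_by_zero=False):
      import yaml

  # From here on, yaml and iotbx.pdb can be used together in either order.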
participants (3)
- James Stroud
- Luc Bourhis
- Nathaniel Echols