Dear CCTBX Maintainers,

Maybe some of you remember me asking on this mailing list last year[1]. I was asked by the Debian Science Team[2] to contribute my earlier work so that cctbx finally gets packaged in Debian. For the last month I have been working with Baptiste Carvello and Picca Frédéric-Emmanuel to get this done. Our packaging efforts and patches can be found in our git repository[3], and we organize the work in a wiki[4].

We don't want to bypass you, though, and would like to give some of our work back to the main project. We have a few patches to contribute upstream that would really help us and hopefully help grow the cctbx user community.

To explain a little of what we've done: first of all, we deliberately tried not to change too much about your build system. We integrated a few options in libtbx/env_config which shouldn't break anything. The primary changes are:

* Option to check for and use system libraries (if a library is already installed, there is no need to compile the bundled shlib)
* Install targets for headers, shlibs and binaries: calling "scons install"
* Shlib versioning, achieved by using libtool (the GNU standard)
* --install-prefix, --install-destdir and --libdir options
* Support for LDFLAGS in the --use_environment_flags option
* Fallback to avoid relying on the "LIBTBX_BUILD" environment variable after install
* Full integration of your scons build system into distutils (setup.py and sconsutils.py), so you can just call 'python setup.py install'
* Adaptation of your test system so it can be called by distutils
* Autogenerated pkg-config files for shlibs

What we now need to know is whether our changes are welcome or whether we need to spend more work on this. Keep in mind that our work is still in progress! One big concern is always ABI/API stability, but if we can count on the shlib versioning described here[5], it would help us and other developers a lot. Libtool is available on most systems, works with the common compilers, and provides the de facto standard for versioning shlibs.
We have other issues too, but I won't overload this mail any further for now. I hope we can discuss and work on this together. Please give us your honest opinions and criticism.

kind regards
Radostan Riedel

P.S.: This work is intended to be usable on other distributions too; we hope that Fedora, openSUSE, Gentoo etc. can benefit from it as well.

[1] http://phenix-online.org/pipermail/cctbxbb/2011-June/000192.html
[2] http://wiki.debian.org/DebianScience
[3] http://anonscm.debian.org/gitweb/?p=debian-science/packages/cctbx.git;a=summ...
[4] http://wiki.debian.org/DebianScience/cctbx
[5] http://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.h...
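[Editor's illustration] To make the last bullet point concrete, an autogenerated pkg-config file for one of the shlibs might look roughly like this. The library name, version and paths below are hypothetical, not taken from the actual patches:

```
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: cctbx
Description: Computational Crystallography Toolbox (core shared library)
Version: 2012.08
Libs: -L${libdir} -lcctbx
Cflags: -I${includedir}
```

Installed as e.g. /usr/lib/pkgconfig/cctbx.pc, this would let dependent packages build with `pkg-config --cflags --libs cctbx` instead of hardcoding paths.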
On Tue, Aug 7, 2012 at 11:19 AM, Radostan Riedel wrote:
We don't want to bypass you, though, and would like to give some of our work back to the main project. We have a few patches to contribute upstream that would really help us and hopefully help grow the cctbx user community.
Thanks for investing your time in this. We agree that it would be very helpful in making CCTBX more accessible - our main concern is that any changes not impact the existing projects which depend on it (Phenix, Olex2, CCP4, LABELIT, etc.), including the ability to support these on all current platforms*. Within this constraint, in principle we're happy to accept any changes that you feel would be worth incorporating. This should be done slowly and incrementally; conservative modifications that don't modify the default behavior should go first, but I suspect you will eventually need access to our testing systems in Berkeley in order to run tests. Obviously you should be working from the trunk when you do this. I noticed that you've also modified the ccp4io_adaptbx tree, which is kept on our local SVN (for reasons that are obscure to me); we can deal with that later**. A random aside: we will be switching to building against the Boost release branch very soon. Not sure if this affects you or not.
But if we can count on the shlib versioning described here[5], it would help us and other developers a lot. Libtool is available on most systems, works with the common compilers, and provides the de facto standard for versioning shlibs.
This is one point that came up in discussion here: how will this impact Mac? Apple's libtool appears to be an entirely different beast. Or will your changes only affect Linux right now? -Nat (* At the moment, this means Fedora 3 or newer using gcc [and Intel's compiler, I guess, but we're less firm on this]; Mac OS 10.4 or later using Xcode 2.4 or later, including both PPC and Intel; and Windows XP or newer using VC++ 8.0 or newer. And Python 2.4-2.7 in any case.) (** It looks like you're building against gpp4 instead of the ccp4 libs - or is it in addition to ccp4? I suspect this may be problematic in the long term.)
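[Editor's illustration] For readers less familiar with libtool's scheme: on Linux with GNU libtool, a -version-info triple current:revision:age maps to file names as sketched below. The helper is hypothetical, not cctbx code; the mapping is different on other platforms (Darwin uses compatibility/current versions instead), which is part of the Mac concern raised above.

```python
def linux_libtool_names(base, version_info):
    """Map a libtool -version-info string 'current:revision:age' to the
    soname and real file name GNU libtool produces on Linux (sketch)."""
    current, revision, age = (int(x) for x in version_info.split(":"))
    major = current - age  # soname major number on Linux
    soname = "%s.so.%d" % (base, major)
    real = "%s.so.%d.%d.%d" % (base, major, age, revision)
    return soname, real

# e.g. linux_libtool_names("libcctbx", "2:3:1")
#      -> ("libcctbx.so.1", "libcctbx.so.1.1.3")
```

The point of the triple is that bumping it correctly on each release lets the dynamic linker distinguish compatible from incompatible library updates.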
Hi Radostan,
I think the integration with distutils would be A Good Thing to increase
accessibility of the cctbx to users.
One addition to Nat's comments:
I noticed this commit with the log "fix cif parser to work with antlr3c
3.2":
http://anonscm.debian.org/gitweb/?p=debian-science/packages/cctbx.git;a=blob...
This is essentially a revert of the cctbx svn revision 14621 ("ucif:
upgrade to antlr3.4"):
http://cctbx.svn.sourceforge.net/viewvc/cctbx/trunk/ucif/parser.h?r1=14583&r2=14621
The ANTLR C runtime changed its interface slightly between 3.2 and 3.4,
hence the changes in revision 14621. From what I remember it isn't trivial
to determine the ANTLR runtime version at compile time which was why we
made no attempt to support both versions of the runtime. If you wish to be
able to compile against 3.2 in addition to 3.4 we will need to figure out
how to determine the runtime version. We also use a slightly modified
version of the ANTLR C runtime in order to obtain savings on memory
(particularly when reading large ribosome data files).
Cheers,
Richard
_______________________________________________
cctbxbb mailing list
[email protected]
http://phenix-online.org/mailman/listinfo/cctbxbb
Here are the patches to the ANTLR 3.4 C runtime to which I referred in my
previous email:
http://cctbx.svn.sourceforge.net/viewvc/cctbx?view=revision&revision=14622
With this patch we saw approximately 30% reduction in memory usage during
parsing of large CIF files.
Cheers,
Richard
On Tue, 07. Aug 14:43, Richard Gildea wrote:
I noticed this commit with the log "fix cif parser to work with antlr3c 3.2": http://anonscm.debian.org/gitweb/?p=debian-science/packages/cctbx.git;a=blob...
This is essentially a revert of the cctbx svn revision 14621 ("ucif: upgrade to antlr3.4"): http://cctbx.svn.sourceforge.net/viewvc/cctbx/trunk/ucif/parser.h?r1=14583&r2=14621
The ANTLR C runtime changed its interface slightly between 3.2 and 3.4, hence the changes in revision 14621. From what I remember it isn't trivial to determine the ANTLR runtime version at compile time, which was why we made no attempt to support both versions of the runtime. If you wish to be able to compile against 3.2 in addition to 3.4 we will need to figure out how to determine the runtime version. We also use a slightly modified version of the ANTLR C runtime in order to obtain savings in memory (particularly when reading large ribosome data files).

Once again, this is more of a Debian-related patch that could break your logic, and it is only intended to be used in the package. I tried to figure out how to determine the antlr version with the preprocessor, but the version information is useless (at least to my limited knowledge).
To explain our intentions further: we don't want to link any piece of code against bundled shlibs when we already have those libraries in Debian. Otherwise we'd have to link everything statically, which is not so good for finding bugs, since fixing a shlib package is easier than fixing every package that bundles this shlib.
On Tue, 07. Aug 14:27, Nathaniel Echols wrote:
A random aside: we will be switching to building against the Boost release branch very soon. Not sure if this affects you or not.

Right now I have tried Boost 1.48 and Boost 1.49 and both worked well.
This is one point that came up in discussion here: how will this impact Mac? Apple's libtool appears to be an entirely different beast. Or will your changes only affect Linux right now?

I don't have a Mac, but from reading the libtool documentation it should work; otherwise it's a bug in libtool. You are right that libtool is quite a beast, but that's why I made it an option.
Once again, to make clear what didn't come across clearly in my first mail: not every change we made should go back to you. I understand that a few patches are just Debian-specific and should only be applied there. But the changes we want to push back upstream are cherry-picked to work by appending an option to libtbx/configure.py.
On 7 Aug 2012, at 23:02, Radostan Riedel wrote:
Just a side note: Apple's developer tools contain a libtool that is quite different from GNU's ditto. GNU's libtool is installed as glibtool. // Johan Postdoctoral Fellow @ Physical Biosciences Division __________________________________________________________________ Lawrence Berkeley National Laboratory * 1 Cyclotron Rd. Mail Stop 64R0121 * Berkeley, CA 94720-8118 * +1 (510) 495-8055
Dear Radostan,

I appreciate your effort. Making the cctbx installable with aptitude is great indeed. Then, as you mentioned, it would be great if it became installable with yum on Fedora-like systems, etc. Making it installable with distutils may open the possibility of using easy_install or pip to install the cctbx. More of a gimmick imho, but the average Python user seems to expect that these days.

First a general comment: you have been using git in a manner that I find suboptimal. It would have been much easier for us (and much more in the spirit of git) if you had asked us to make a public git repository (I exclusively work with git, for the record, using git svn to interact with sourceforge, so I could have provided one in no time), and had then forked it. We would then have been able to simply check out your repo into a branch of our public repo, immediately test your changes, and eventually apply those that pass the trial of fire. Actually, as pointed out in my comments below, we can't even apply your patches because some seem to be missing.

Here is a review of the patches you propose. When I write "accepted" or "rejected", those are of course propositions, as the final decision has to be collegial in this new era for the cctbx.

0001-remove-hardcoded-libtbx_build-env: accepted
No problem on our side. You want to do that so as to be able to run cctbx-based scripts with /usr/bin/python instead of our wrappers like libtbx.python. Fair enough.

0002-fix-opengl-header-missing-gltbx: rejected
Do you really want to force all cctbx users to install OpenGL? Even if they don't need it because e.g. they run cctbx-based scripts as the back end of a web server?

0003-correct-paths-in-dispatcher-creation, 0008-Fix-to-skip-pycbf-build, 0016-adapt-test_utils-in-libtbx-for-setup_py-test, 0018-Fix-to-use-systems-include-path, 0019-Fix-to-skip-build-of-clipper-examples
I have put these together because they share the same philosophy.
They make sense only in the Debian environment you are designing, where the cctbx will depend on other packages, which will therefore be installed in standard locations if the cctbx is installed. But in an agnostic environment, where the cctbx dynamic libraries and Python modules are not in standard places, those patches break the build system and part of the runtime system. For example, 0018 assumes there is gpp4/ccp4 somewhere on the header paths: that would require changing the packaging of Phenix to match. This is so obvious that you can't have missed it. So am I missing something here?

0004-upstream-fix-for-declaration-errors-in-gcc4.7: already done
This is Marat's fix in our trunk at rev 15462.

0005-fix-for-gcc4.7-compilation-error: already done
This is the fix of mine in our trunk at rev 15576.

0006-options-for-system-libs-installtarget-and-prefix: needs thorough testing
I approve of the spirit of it, but this patch introduces a truckload of changes and it needs to stand the trial of our nightly tests. Note that you use a couple of new methods, e.g. env_etc.check_syslib, that none of the patches define as far as I can tell.

0007-adding-shlib-versioning: accepted
The new build option libtoolize seems properly introduced. Beyond that, I must admit I am rather clueless about libtool. Anyway, if configure is not run with --libtoolize, this won't impact us!

0009-build-libann-statically: pending explanations
Could you explain why you need to build this one statically only?

0010-adding-setup_py: pending discussions
I don't quite understand your code, but it is orthogonal to our existing code. What do you need the class build_py for, e.g.?

0011-fix-missing-python-lib-during-linking: needs tidying up
Why don't you append to env_etc.libs_python instead of creating the string env_etc.py_lib? We try to use lists as much as possible in the SConscripts.

0012-fix-to-remove-cctbx.python-interpreter: rationale? And trunk has moved on anyway
Why do you need to remove cctbx.python? uc1_2_reeke.py has been removed and there is now uc1_2_a.py, which features cctbx.python too.

0013-fix-to-support-LDFLAGS-in-use_enviroment_flags: not sure
This seems done in an orthodox manner. However, it has the potential of wreaking havoc with Phenix on some machines where LDFLAGS is set in fancy ways.

0014-Fix-to-append-CPPFLAGS-to-CXXFLAGS: rejected
CPPFLAGS is added to CCFLAGS, which is eventually used by SCons for both C and C++. This patch is therefore incorrect.

0015-fix-cif-parser-to-work-with-antlr3c-3.2: for Richard's eyes
Richard (Gildea) is the expert when it comes to ANTLR.

0017-autogenerate-pkgconfig-files: accepted
Your business!

Best wishes,

Luc
On Tue, Aug 7, 2012 at 4:31 PM, Luc Bourhis wrote:
Here is a review of the patches you propose. When I write "accepted" or "rejected", those are of course propositions as the final decision has to be collegial in this new era for the cctbx.
Thanks for inspecting these thoroughly, Luc. I'll go through the patches marked as "accepted" tonight or tomorrow to double-check. If anyone else has input, we're all ears. As far as the rest of the comments go, we trust Luc's judgement; we can pick through these individually though. -Nat
On Wed, 08. Aug 01:31, Luc Bourhis wrote:

First a general comment: you have been using git in a manner that I find suboptimal. It would have been much easier for us (and much more in the spirit of git) if you had asked us to make a public git repository (I exclusively work with git, for the record, using git svn to interact with sourceforge, so I could have provided one in no time), and had then forked it. We would then have been able to simply check out your repo into a branch of our public repo, immediately test your changes, and eventually apply those that pass the trial of fire. Actually, as pointed out in my comments below, we can't even apply your patches because some seem to be missing.

We didn't have much of a choice here. In Debian packaging it is good practice to import the orig tarball of a release and not to change the code, but to apply patches instead.

0003-correct-paths-in-dispatcher-creation, 0008-Fix-to-skip-pycbf-build, 0016-adapt-test_utils-in-libtbx-for-setup_py-test, 0018-Fix-to-use-systems-include-path, 0019-Fix-to-skip-build-of-clipper-examples: I have put these together because they share the same philosophy. They make sense only in the Debian environment you are designing, where the cctbx will depend on other packages, which will therefore be installed in standard locations if the cctbx is installed. But in an agnostic environment, where the cctbx dynamic libraries and Python modules are not in standard places, those patches break the build system and part of the runtime system. For example, 0018 assumes there is gpp4/ccp4 somewhere on the header paths: that would require changing the packaging of Phenix to match. This is so obvious that you can't have missed it. So am I missing something here?

OK, to make that clear a little bit: a few patches are really only for packaging; they can't and shouldn't go back upstream.
0009-build-libann-statically: pending explanations. Could you explain why you need to build this one statically only?

This is also just Debian-specific: we already have a libann package, and your version would break the existing shlib for its users, since you are putting additional symbols (ann_selfinclude) into it.
0010-adding-setup_py: pending discussions. I don't quite understand your code, but it is orthogonal to our existing code. What do you need the class build_py for, e.g.?
0011-fix-missing-python-lib-during-linking: needs tidying up. Why don't you append to env_etc.libs_python instead of creating the string env_etc.py_lib? We try to use lists as much as possible in the SConscripts.

That way every extension would additionally be linked against the Python libs, if I understand boost_adaptbx/SConscript correctly, which is normally not the right way. But with gcc 4.7, libsctbx_boost_python has to be linked against the Python lib, otherwise I'm getting undefined references.
0012-fix-to-remove-cctbx.python-interpreter: rationale? And trunk has moved on anyway. Why do you need to remove cctbx.python? uc1_2_reeke.py has been removed and there is now uc1_2_a.py, which features cctbx.python too.

Otherwise we'd need our own interpreter in Debian; we want to install the modules into sys.path.
0013-fix-to-support-LDFLAGS-in-use_enviroment_flags: not sure. This seems done in an orthodox manner. However, it has the potential of wreaking havoc with Phenix on some machines where LDFLAGS is set in fancy ways.

OK. The problem here is that when building automatically in Debian, we like to set some special linker flags such as "-Wl,-z,relro", and these can change from release to release.
0014-Fix-to-append-CPPFLAGS-to-CXXFLAGS: rejected. CPPFLAGS is added to CCFLAGS, which is eventually used by SCons for both C and C++. This patch is therefore incorrect.

OK, I did not know that.
But first of all, thanks for the review.

regards
Hi gentlemen,

It turns out that I am based in the south of Paris, and Frédéric-Emmanuel and I have therefore agreed to meet sometime later this month. Nevertheless, if you allow me, I will answer some important remarks here, responding to all your emails together, in a bit of a random fashion I am afraid. First, facilitating the use of cctbx at the ESRF and Soleil is definitely a further motivation on our side to make this Debian packaging work.

Radostan Riedel wrote:
OK, to make that clear a little bit: a few patches are really only for packaging; they can't and shouldn't go back upstream.
So I had missed something indeed! This is my first experience of a Linux package in the making, you see. Unfortunately, both Baptiste and Frédéric-Emmanuel answered my criticisms thinking they were aimed at your design of the cctbx Debian package. On the contrary, they were to be understood in the context of an installation of the cctbx by hand, or using the package that CCI provides. I am afraid we talked past each other here! Nevertheless, my thanks to Frédéric-Emmanuel and Baptiste for elaborating on those patches that were only meant for packaging and that I criticised most: it is still very useful for us to understand your rationales. Some specifics:

PICCA Frédéric-Emmanuel wrote:
So yes, for now apt-get install python-cctbx also pulls in the OpenGL libraries (<30 MB on my computer). Room on a server is no longer a problem nowadays; you can find a 1 TB hard disk for less than 250 euros. Indeed we could also split python-cctbx into finer-grained packages, but is it worth the effort? This can indeed be discussed.
Disk space is not the problem I had in mind. Some people in charge of a web server are pretty paranoid about reliability and security, and thus fiercely try to minimise the number of packages installed. You may wish to consider that aspect. I have a few more remarks about your design choices for the packaging, from my point of view as a cctbx developer, but I would like to concentrate on upstream issues in the rest of this email.
Note that you use a couple of new methods, e.g. env_etc.check_syslib, that none of the patches define as far as I can tell.
let's see with Radostan
Could you provide us with the code for those, Radostan?
This is part of the questions we would like to ask you. We want to use the default Debian Python interpreter, so we need to change every #!/usr/bin/env xxxx.python to #!/usr/bin/env python in all your files. If I remember correctly, this job is also done by distutils [11] when a file is declared as a script.
First, let me shed some light on those xxx.python dispatchers. They are all identical to each other and to the "python" dispatcher (save for the irrelevant variable LIBTBX_DISPATCHER_NAME). Actually, you should know that the "python" dispatcher is only created in what we call a development environment, that is, when the sources are version-controlled (the configuration script searches for evidence that CVS, Subversion or git is used). The idea is that we don't want to override python, except for developers, who are supposed to know what they are doing. This is why all the demos of the cctbx you can find out there always use libtbx.python or cctbx.python. Then, it seems to me that every single Python script in the cctbx featuring '#!/usr/bin/env python' or '#!/usr/bin/env xxx.python' is for internal use only. Conversely, every single script of any use to the users of your Debian packages is generated by the configure step and lives in <build directory>/bin. As a result, it seems to me that you made the right choice after all: we should change all those #!... strings to #!/usr/bin/env python, as that will work for you and for us.
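[Editor's illustration] The rewrite being discussed could be sketched like this. The function name is hypothetical; the actual patch presumably does this via distutils' own script handling:

```python
import re

def fix_shebang(text):
    """Rewrite a '#!/usr/bin/env xxx.python' shebang on the first line
    to plain '#!/usr/bin/env python' (sketch)."""
    return re.sub(r"\A#!/usr/bin/env \S+\.python",
                  "#!/usr/bin/env python", text)

# e.g. fix_shebang("#!/usr/bin/env libtbx.python\nprint('hi')\n")
#      -> "#!/usr/bin/env python\nprint('hi')\n"
```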
We do not yet know how to deal with the multiple scripts that you are providing in your bin directory. We call them dispatcher scripts (see the wiki[10]). My concern is with third-party software which relies on them being in the PATH. Debian policy says that for each binary in /usr/bin we should provide a man page, so in our case this could be an issue...
I guess this time you were writing about those scripts that do not wrap around Python, e.g. iotbx.show_distances. For the issue at hand, there are three categories: (i) those that are only useful to cctbx developers; (ii) those that are useful to cctbx users but are not documented; (iii) those that are useful to cctbx users and support the --help command line option. For category iii, it should be easy to generate a man page from the output of xxx --help. Then we should document category ii. Finally, we should give you a comprehensive list of category i, so that you can ignore them.
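[Editor's illustration] For category iii, the standard tool for this job is help2man; the idea can also be sketched in a few lines of Python. The helper below is hypothetical and only a crude stand-in for what help2man does properly:

```python
def help_to_man(name, help_text, section=1):
    """Wrap a tool's --help output in a minimal man page (roff) (sketch)."""
    return "\n".join([
        '.TH "%s" "%d"' % (name.upper(), section),
        ".SH NAME",
        name,
        ".SH DESCRIPTION",
        ".nf",  # no-fill mode: preserve the help text's own layout
        help_text.strip(),
        ".fi",
    ])

# One would capture `xxx --help` output and feed it in as help_text.
```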
0013-fix-to-support-LDFLAGS-in-use_enviroment_flags: not sure This seems done in orthodox a manner. However, this has the potential of wrecking havoc to Phenix on some machines where LDFLAGS is set in fancy ways.
This is also true of the other flags; why do you treat LDFLAGS differently from the others?
In my experience, it is very unlikely that a non-techie user would have permanently set CXXFLAGS or CCFLAGS, but he may well have set LDFLAGS in his .bashrc so as to help a program he installed find the right shlibs. If that non-techie user then installs Phenix from source, esoteric bug reports may follow. But I may just be paranoid indeed.
In that case, as explained by Radostan, Debian needs to tune the build process by providing its own build flags. The trade-off would be to add a config flag, say --use-also-LDFLAGS, that allows or disallows the use of LDFLAGS.
Indeed.
What is your opinion?
Nat? What do you think of this issue?
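[Editor's illustration] The opt-in behaviour under discussion amounts to something like the sketch below. The function and flag names are hypothetical, not the actual libtbx code:

```python
import os

def build_link_flags(use_environment_ldflags=False):
    """Only honour the environment's LDFLAGS when explicitly asked to,
    so a user's stray LDFLAGS cannot affect a default build (sketch)."""
    flags = []
    if use_environment_ldflags:
        flags.extend(os.environ.get("LDFLAGS", "").split())
    return flags
```

With the default of False, a fancy LDFLAGS in someone's .bashrc is simply ignored, which addresses Luc's concern while still letting Debian pass "-Wl,-z,relro" and friends.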
0015-fix-cif-parser-to-work-with-antlr3c-3.2: for Richard's eyes Richard (Gildea) is the expert when it comes to ANTLR
This could be problematic, as Debian only provides 3.2 for now. We should ask for the packaging of 3.4, but as you told us, you also patched it. Did you forward your changes to the antlr3c upstream?
It has definitely not been forwarded upstream, because it is just a quick and dirty hack that completely goes against the original design. I honestly think that we will leave that hack as it is, as any alternative would be too time-consuming, especially in the light of the coming ANTLR 4, which will allegedly ship a proper C++ runtime (as opposed to the C runtime Richard's work is currently based on) that should not be affected by the memory-allocation bloat solved by Richard's hack.
If not, our fallback solution is to compile it statically for now.
I reckon you should just do that indeed, as that hack is very important for making the CIF parser usable on large structures.

Baptiste Carvello wrote:
The patch 0016-adapt-test_utils-in-libtbx-for-setup_py-test [...]
It may be, though, that this can be achieved more cleanly by appropriately reconfiguring the pickled Environment object.
I would indeed think so but really it is up to you whether you want to maintain such a patch or not.
Radostan told me that this patch no longer applies to the newer nightly builds of cctbx, but I haven't looked into this problem yet.
Most likely because of the dramatic change to the test framework since you forked (to make it use multi-processor machines more efficiently, for the record). This illustrates the cost of maintaining such a patch, although to be fair, this infrastructure code does not change very often.

Now about the support of distutils. Even though, as we both agree, this code is totally orthogonal to our existing code base, it is no small patch; it will go upstream and we will therefore need to maintain it. Hence my scrutiny. So thanks for your detailed explanations, which will save me some time reading your code.
The "build_py" class is an experiment at adding "from __future__ import division" lines to python files, in case it would be needed to use them without the "-Qnew" option of python. Indeed, in Debian, cctbx will have to be compatible with the system python, without needing a global option.
I see.
Whether this "build_py" class is needed actually depends on you: if the cctbx community is committed to keeping cctbx working also without the "-Qnew" option, we don't need it, which is much better for us.
This is actually the other way around: we wanted the cctbx to always be run with -Qnew, and we actually had to fix the code in quite a few places to make all tests pass with -Qnew. Given those Python dispatchers in the first place, the least intrusive change was definitely to add -Qnew. The alternative, adding "from __future__ import division" to every single Python module, did not appeal to us. Thus that build_py class is definitely necessary. As I wrote above, the patch is not small and I definitely need to spend more time looking into it. I will therefore come back to you with more questions. Radostan Riedel wrote:
0011-fix-missing-python-lib-during-linking: needs tidying up. Why don't you append to env_etc.libs_python instead of creating the string env_etc.py_lib? We try to use lists as much as possible in the SConscripts. This way every extension would additionally be linked against the Python libs, if I understand boost_adaptbx/SConscript correctly. This is normally not the right way. You are right indeed.
But with gcc 4.7, libscitbx_boost_python needs to be linked against the Python library, otherwise I'm getting undefined references.
Could you elaborate? As that sounds like a bug we should fix.
To sum up the patches that we would like to give back:
* remove-hardcoded-libtbx_build-env
* options-for-system-libs-installtarget-and-prefix
* adding-shlib-versioning
* adding-setup_py
Ok, thanks for the clarification. We will concentrate our scrutiny on those then. Best wishes, Luc Bourhis
So I had missed something indeed! This is my first experience of a Linux package in the making, you see. Unfortunately, both Baptiste and Frederic-Emmanuel answered my criticisms thinking they were aimed at your design of the cctbx Debian package. On the contrary, they had to be understood in the context of an installation of the cctbx by hand, or using the package that CCI provides. I am afraid we talked past each other here!
Nevertheless, my thanks to Frederic-Emmanuel and Baptiste for elaborating on those patches that were only meant for packaging and that I criticised most: it is still very useful for us to understand your rationales. Some specifics: Critics are very welcome and hopefully can help all of us. I guess we'll have some bugs in our patching, and working with all of you will help us. Normally, when we upload a finished package into Debian (and this gets auto-imported by Ubuntu, by the way), if there are any problems with it a bug report will be filed by the user and the package maintainer will be informed. In our case we'd try to work with you to get things fixed. Disk space is not the problem I had in mind. Some people in charge of a web server are pretty paranoid about reliability and security, and thus they fiercely try to minimise the number of packages installed. You may wish to consider that aspect. Splitting the package into several small binary packages is not so complicated; I already did it once during the packaging. To be more specific and technical: during the build of a package we install everything into a tmp directory and then we cherry-pick everything with *.install files, where we can use shell globbing etc. So we are focused now on building everything in the first place.
On Thu, 09. Aug 03:35, Luc Bourhis wrote:
Could you provide us with the code for those, Radostan?
The method is in libtbx/SConscript:

  env_etc.use_system_libs = False

  def check_syslib(lib, extra_libs=None):
    """ Check if a system library is available """
    if not env_etc.use_system_libs:
      return False
    env_syslib = env_base.Clone(LIBS=extra_libs)
    conf = env_syslib.Configure()
    if not conf.CheckLib(library=lib):
      print 'Could not find %s library!' % (lib)
      conf.Finish()
      return False
    else:
      conf.Finish()
      return True

  env_etc.check_syslib = check_syslib

  ....
  if (libtbx.env.build_options.use_system_libs):
    env_etc.use_system_libs = True
The options are integrated in libtbx/env_config. I followed your design and did not change anything; I just added a new option.
I guess this time you were writing about those scripts that do not wrap around python, e.g. iotbx.show_distances. For the issue at hand, there are 3 categories: i. those that are only useful for cctbx developers; ii. those that are useful to cctbx users and that are not documented; iii. those that are useful to cctbx users and that support the --help command line option.
For category iii, it should be easy to generate a man page from the output of "xxx --help". Then we should document category ii. Finally, we should give you a comprehensive list of category i, so that you can ignore them. This would be great.
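To make the category-iii idea concrete: in practice Debian packagers would most likely reach for help2man(1), but the mechanism can be sketched in plain Python. The helper below is hypothetical, not part of cctbx or of the patches under discussion; it just wraps a command's --help output in a minimal roff skeleton.

```python
import subprocess

def help_to_man(command, name, section=1):
    """Generate a minimal man page (roff source) from a command's --help output.

    `command` is the argv list of a program supporting --help (a category-iii
    script); `name` is the man page name. This is an illustrative sketch, not
    a replacement for help2man.
    """
    help_text = subprocess.run(command + ['--help'],
                               capture_output=True, text=True).stdout
    lines = ['.TH %s %d' % (name.upper(), section),
             '.SH NAME', name,
             '.SH DESCRIPTION', '.nf']
    # Escape backslashes so roff does not interpret them, and keep the
    # original --help layout verbatim between .nf/.fi (no-fill mode).
    lines += [line.replace('\\', '\\\\') for line in help_text.splitlines()]
    lines.append('.fi')
    return '\n'.join(lines)
```

For a cctbx command-line tool the call would look like help_to_man(['iotbx.show_distances'], 'iotbx.show_distances'), assuming that dispatcher supports --help.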
In that case, as explained by Radostan, Debian needs to tune the build process by providing its own build flags. The compromise would be to add a configure flag, --use-also-LDFLAGS, controlling whether LDFLAGS is honoured or not.
Indeed.
What is your opinion?
Nat? What do you think of this issue? I can prepare a patch for this.
If not, our last-resort solution is to compile it statically for now.
I reckon you should just do that indeed, as that hack is very important to make the CIF parser usable on large structures. OK, I'll delete the packaging patch. If I remember correctly, antlr3 in cctbx is not built as a shlib, so we do not need to change anything here.
Now about the support of distools. Even though as we both agree this code is totally orthogonal to our existing base code, this is no small patch and it will go upstream and we will therefore need to maintain it. Hence my scrutiny. So thanks for your detailed explanations that will save me some time reading your code. We can always collaborate in maintaining the code.
Could you elaborate? As that sounds like a bug we should fix. I don't remember exactly, but after building libscitbx_boost_python, Debian has a helper script, dh_shlibdeps, which checks for unnecessary linking etc. I saw some undefined symbols which pointed to a missing Python lib during linking. I'm going to fire up a build without that patch later to give you the specifics.
Ok, thanks for the clarification. We will concentrate our scrutiny on those then. It's a pleasure to work with all of you together on this.
kind regards Radi
On Thu, 09. Aug 03:35, Luc Bourhis wrote:
But with gcc 4.7, libscitbx_boost_python needs to be linked against the Python library, otherwise I'm getting undefined references.
Could you elaborate? As that sounds like a bug we should fix.
OK, here is the output:

  dpkg-shlibdeps -Tdebian/libscitbx-boost-python0.substvars debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py26.so.0.0.0 debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py27.so.0.0.0
  dpkg-shlibdeps: warning: symbol PyImport_ImportModule used by debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py27.so.0.0.0 found in none of the libraries
  dpkg-shlibdeps: warning: symbol PyObject_IsTrue used by debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py27.so.0.0.0 found in none of the libraries
  ... (and the same warning for PyExc_IndexError, PyErr_SetString, PyObject_CallFunction, PyRange_Type, _Py_NoneStruct and PyExc_RuntimeError)
I don't see this in gcc4.6.
From my understanding: env_etc.libs_python is set in libtbx/SConscript, but only for 'nt', not posix. And it's used to link the extensions, so I can't just use this attribute. That's the reason for the additional env_etc.py_lib attribute. As for your workflow, this value can also be a list instead of a string.
Just ask if you have additional questions.
On 9 Aug 2012, at 12:53, Radostan Riedel wrote:
On Thu, 09. Aug 03:35, Luc Bourhis wrote:
But with gcc 4.7, libscitbx_boost_python needs to be linked against the Python library, otherwise I'm getting undefined references.
Could you elaborate? As that sounds like a bug we should fix.
OK, here is the output:

  dpkg-shlibdeps -Tdebian/libscitbx-boost-python0.substvars debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py26.so.0.0.0 debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py27.so.0.0.0
  dpkg-shlibdeps: warning: symbol PyImport_ImportModule used by debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py27.so.0.0.0 found in none of the libraries
  [...]

I don't see this with gcc 4.6.
When something is linked to libscitbx_boost_python, it is always linked to libboost_python as well, which in turn is linked to libpythonx.y.so. Thus I would argue that the dpkg-shlibdeps warnings are irrelevant. I mean, scitbx scripts run fine after compiling with gcc 4.7, don't they? Last time I tried, I had no issue. If, on the contrary, you get a crash running such a script, then we definitely need to investigate. Best wishes, Luc
On 09/08/12 14:28, Luc Bourhis wrote:
On 9 Aug 2012, at 12:53, Radostan Riedel wrote:
On Thu, 09. Aug 03:35, Luc Bourhis wrote:
But with gcc 4.7, libscitbx_boost_python needs to be linked against the Python library, otherwise I'm getting undefined references.
Could you elaborate? As that sounds like a bug we should fix.
OK, here is the output:

  dpkg-shlibdeps -Tdebian/libscitbx-boost-python0.substvars debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py26.so.0.0.0 debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py27.so.0.0.0
  dpkg-shlibdeps: warning: symbol PyImport_ImportModule used by debian/libscitbx-boost-python0/usr/lib/x86_64-linux-gnu/libscitbx_boost_python-py27.so.0.0.0 found in none of the libraries
  [...]

I don't see this with gcc 4.6.
When something is linked to libscitbx_boost_python, it is always linked to libboost_python as well, which in turn is linked to libpythonx.y.so. Thus I would argue that the dpkg-shlibdeps warnings are irrelevant. I mean, scitbx scripts run fine after compiling with gcc 4.7, don't they? Last time I tried, I had no issue. If, on the contrary, you get a crash running such a script, then we definitely need to investigate.
When using the gold linker (which has strict underlinking detection), and most probably also when using -Wl,--as-needed, the linker will stop with an error here. With this, the warning is not irrelevant and the issue needs a fix. Almost everything, except some runtime-loaded plugins, should be linked against all the libraries it depends on. For further reading I can point you to the blog of one of our QA devs: http://blog.flameeyes.eu/tag/linker http://blog.flameeyes.eu/tag/gold http://blog.flameeyes.eu/tag/asneeded Thanks, justin
On Thu, 09. Aug 14:28, Luc Bourhis wrote:
When something is linked to libscitbx_boost_python, it is always linked to libboost_python as well, which in turn is linked to libpythonx.y.so. Thus I would argue that the dpkg-shlibdeps warnings are irrelevant. I mean, scitbx scripts run fine after compiling with gcc 4.7, don't they? Last time I tried, I had no issue. If, on the contrary, you get a crash running such a script, then we definitely need to investigate. To add to the concerns Justin already raised: gcc is getting stricter with every release (cf. the -fpermissive issue), so in my opinion this problem is merely postponed.
_______________________________________________ cctbxbb mailing list [email protected] http://phenix-online.org/mailman/listinfo/cctbxbb
I just found this one, which really explains why we should link them all. http://blog.flameeyes.eu/2010/11/it-s-not-all-gold-that-shines-why-underlink...
About the gold linker and libscitbx_boost_python: thanks for keeping me up to date, as I was not aware of those underlinking issues at all. However… I have actually just realised I left out half of the sentence I wanted to write!
When something is linked to libscitbx_boost_python, it is always linked to libboost_python as well, which in turn is linked to libpythonx.y.so.
On the one hand, libboost_python.so on Linux (and libboost_python.dylib on MacOS X for the record) is not directly linked to libpythonx.y.so. On the other hand, libboost_python is only linked to Boost Python extensions that are loaded by Python at runtime, and then of course the Python exec is linked to libpythonx.y.so. So we first need to understand why the gold linker complains for libscitbx_boost_python but not for libboost_python or for any of the cctbx Boost Python extensions that directly use the Python API. I must admit I don't have a clue here. And then considering what you both told me, we need to fix the problem upstream. Best wishes, Luc
On Thu, 09. Aug 18:36, Luc Bourhis wrote:
On the one hand, libboost_python.so on Linux (and libboost_python.dylib on MacOS X for the record) is not directly linked to libpythonx.y.so. On the other hand, libboost_python is only linked to Boost Python extensions that are loaded by Python at runtime, and then of course the Python exec is linked to libpythonx.y.so. So we first need to understand why the gold linker complains for libscitbx_boost_python but not for libboost_python or for any of the cctbx Boost Python extensions that directly use the Python API. I must admit I don't have a clue here. And then, considering what you both told me, we need to fix the problem upstream.
And I have to honestly admit that I don't really know what libscitbx_boost_python actually does. Could you explain it to me? I'm not a C++ developer and I haven't read the entire code yet.
It provides a medley of functions to be used in C++ code that needs to interact with Python. But it is not a Boost Python extension, in the sense that it does not feature the BOOST_PYTHON_MODULE(name) { ... } section. Luc
On Thu, 09. Aug 19:05, Luc Bourhis wrote:
It provides a medley of functions to be used in C++ code that needs to interact with Python. But it is not a Boost Python extension, in the sense that it does not feature the BOOST_PYTHON_MODULE(name) { ... } section.
So it's extending libboost_python.so? From my limited C++ knowledge: if I'd like to develop some Boost Python modules which need some additional scientific (scitbx) functions, could I just link my code against libscitbx_boost_python instead of libboost_python? Are the features or functions documented somewhere? Thanks, Radi
On 9 Aug 2012, at 19:14, Radostan Riedel wrote:
On Thu, 09. Aug 19:05, Luc Bourhis wrote:
It provides a medley of functions to be used in C++ code that need to interact with Python. But it is not a Boost Python extension, in the sense that it does not feature the
BOOST_PYTHON_MODULE(name) { ... } So it's extending libboost_python.so?
Correct.
From my limited C++ knowledge: if I'd like to develop some Boost Python modules which need some additional scientific (scitbx) functions, could I just link my code against libscitbx_boost_python instead of libboost_python?
It conceptually makes sense. I may be missing a technical detail, but I'd say you are correct again. I have to think thoroughly, though, about how to modify the SConscript to achieve the change you propose.
Are the features or functions documented somewhere?
Nope. Luc
On 09/08/12 18:36, Luc Bourhis wrote:
About the gold linker and libscitbx_boost_python. Thanks for keeping me up-to-date as I was not aware of those underlinking issues at all.
However… I have actually just realised I left out half of the phrase I wanted to write!
When something is linked to libscitbx_boost_python, it is always linked to libboost_python as well, which in turn is linked to libpythonx.y.so.
On the one hand, libboost_python.so on Linux (and libboost_python.dylib on MacOS X for the record) is not directly linked to libpythonx.y.so. On the other hand, libboost_python is only linked to Boost Python
On Gentoo it is:

  # scanelf -n /usr/lib64/libboost_python-2.7.so
  TYPE   NEEDED                                                                      FILE
  ET_DYN libpython2.7.so.1.0,libpthread.so.0,libstdc++.so.6,libgcc_s.so.1,libc.so.6 /usr/lib64/libboost_python-2.7.so

But we take great care over correct linking, as we support parallel installation of all Python versions from 2.4 to 3.2, each having its own boost_python-*.so.
extensions that are loaded by Python at runtime, and then of course the Python exec is linked to libpythonx.y.so. So we first need to understand why the gold linker complains for libscitbx_boost_python but not for libboost_python or for any of the cctbx Boost Python extensions that directly use the Python API. I must admit I don't have a clue here. And then considering what you both told me, we need to fix the problem upstream.
For gold it was only a guess that it would choke here; I haven't tested that yet. But I think we should try to ensure that libscitbx_boost_python and friends are correctly linked to libboost_python.so. This won't harm in the best case and is required in the worst case. And on Gentoo, as well as on all other distros which support multiple installed Python ABIs, we need to ensure that the correct soname is recorded, so that the correct Python ABI version of libboost_python is loaded. thanks justin
On Thu, 09. Aug 18:50, justin wrote:
# scanelf -n /usr/lib64/libboost_python-2.7.so [...] And on gentoo as well as on all other distros which support multiple python ABIs to be installed, we need to ensure that the correct soname is recorded so that the correct python ABI version of libboost_python is loaded.
I took libboost_python as a role model for libscitbx_boost_python. In Debian we support multiple Python ABIs too. libboost_python looks like this:
  -rw-r--r-- 1 root root 666280 Jan  6  2012 /usr/lib/libboost_python-py27.a
  lrwxrwxrwx 1 root root     30 Jan  6  2012 /usr/lib/libboost_python-py27.so -> libboost_python-py27.so.1.48.0
  -rw-r--r-- 1 root root 315264 Jan  6  2012 /usr/lib/libboost_python-py27.so.1.48.0
  -rw-r--r-- 1 root root 659542 Jan  6  2012 /usr/lib/libboost_python-py32.a
  lrwxrwxrwx 1 root root     30 Jan  6  2012 /usr/lib/libboost_python-py32.so -> libboost_python-py32.so.1.48.0
  -rw-r--r-- 1 root root 311136 Jan  6  2012 /usr/lib/libboost_python-py32.so.1.48.0
  lrwxrwxrwx 1 root root     23 Jul 19 16:26 /usr/lib/libboost_python.so -> libboost_python-py27.so

And when we build libscitbx_boost_python, it builds all supported libscitbx_boost_python-py**.so*. I try to detect this in my patch to use system libs and apply the Python suffix string. But seeing that Gentoo uses a different suffix makes me realise that I need to rework that patch a little bit. Right now it's done like this:

  env_etc.py_str = '-py%s%s' % (sys.version[0], sys.version[2])
  ...
  if env_etc.check_syslib('boost_python%s' % env_etc.py_str,
                          extra_libs=env_etc.py_lib):
    env_etc.boost_python = 'boost_python%s' % env_etc.py_str
    env_etc.scitbx_boost_python = 'scitbx_boost_python%s' % env_etc.py_str
  else:
    env_etc.boost_python = 'boost_python'
    env_etc.scitbx_boost_python = 'scitbx_boost_python'
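Since the suffix convention differs between distributions (Debian-style libboost_python-py27 versus Gentoo-style libboost_python-2.7), one way the detection could be reworked, sketched here purely as an illustration (the function names are hypothetical, and the check callback stands in for the SCons CheckLib-based probe), is to try a list of candidate suffixes in order:

```python
import sys

def candidate_boost_python_suffixes():
    """Return plausible distro-specific suffixes for libboost_python.

    The conventions below are the two seen in this thread, plus the
    unsuffixed fallback: Debian-style '-pyXY', Gentoo-style '-X.Y'.
    """
    major, minor = sys.version_info[0], sys.version_info[1]
    return ['-py%d%d' % (major, minor),   # Debian: libboost_python-py27
            '-%d.%d' % (major, minor),    # Gentoo: libboost_python-2.7
            '']                           # plain libboost_python

def pick_boost_python(check_syslib):
    """Probe each candidate with a caller-supplied availability check.

    `check_syslib` plays the role of the SCons-based env_etc.check_syslib:
    it takes a library name and returns True if it is linkable.
    """
    for suffix in candidate_boost_python_suffixes():
        name = 'boost_python' + suffix
        if check_syslib(name):
            return name
    return None
```

In the actual SConscript, env_etc.check_syslib (shown earlier in the thread) would be passed in as the probe, and env_etc.scitbx_boost_python would reuse whatever suffix was found.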
On the one hand, libboost_python.so on Linux [...] is not directly linked to libpythonx.y.so. [...]
on gentoo it is.
Shall I understand that you patch our code to achieve that?
[...]
But we take great care over correct linking, as we support parallel installation of all Python versions from 2.4 to 3.2, each having its own boost_python-*.so
I understand it matters a lot then indeed. We can afford to be lazy because we ship one single version of boost_python.so
extensions that are loaded by Python at runtime, and then of course the Python exec is linked to libpythonx.y.so. So we first need to understand why the gold linker complains for libscitbx_boost_python but not for libboost_python or for any of the cctbx Boost Python extensions that directly use the Python API. I must admit I don't have a clue here. And then considering what you both told me, we need to fix the problem upstream.
[...] But I think we should try to ensure that libscitbx_boost_python and friends are correctly linked to libboost_python.so. This won't harm in best case and is required in worst case.
Ok. I can't think of any such friends right now in fact!
And on gentoo as well as on all other distros which support multiple python ABIs to be installed, we need to ensure that the correct soname is recorded so that the correct python ABI version of libboost_python is loaded.
One of the patch proposed by the Debian team aims at that goal if I understood correctly (0007-adding-shlib-versioning). Luc
On 09/08/12 19:06, Luc Bourhis wrote:
On the one hand, libboost_python.so on Linux [...] is not directly linked to libpythonx.y.so. [...]
on gentoo it is.
Shall I understand that you patch our code to achieve that?
We are unbundling everything you bundle, e.g. boost and scons. We support parallel installation of all boost versions of the >1.4* series, so we can depend on whatever cctbx needs. That's the reason why we have a correctly linked boost here. And using the bundled one will break once the active boost version is switched on Gentoo, and at least also on Debian. So having an option on your side to not use the bundled boost would be great.
On 8/9/12 12:26 PM, justin wrote:
So having an option from your side to not use the bundled boost would be great. For what it's worth: I just remove the boost directory and replace it with a link to wherever I have installed the boost downloaded from boost.org.
I haven't had any problems compiling cctbx or linking to the shared object libraries, but I only use iotbx.pdb so your mileage could vary. --Jeff
On 09/08/12 19:36, Jeffrey Van Voorst wrote:
On 8/9/12 12:26 PM, justin wrote:
So having an option from your side to not use the bundled boost would be great. For what it's worth: I just remove the boost directory and replace it with a link to wherever I have installed the boost downloaded from boost.org.
We are installing it into /usr/(lib*/include) ... so there isn't a simple directory to link into the cctbx source. justin
On Wed, Aug 8, 2012 at 6:35 PM, Luc Bourhis
0013-fix-to-support-LDFLAGS-in-use_enviroment_flags: not sure. This seems done in an orthodox manner. However, it has the potential of wreaking havoc on Phenix on some machines where LDFLAGS is set in fancy ways.
This is also true with the other flag also, why do you treat LDFLAGS differently than others ?
In my experience, it is very unlikely a non-techie user would have permanently set CXXFLAGS or CCFLAGS but he may have set LDFLAGS in his .bashrc so as to help a program he installed to find the right shlibs. If that non-techie user then installs Phenix from source, esoteric bug reports may follow. But I may just be paranoid indeed.
I really hope non-techie users aren't installing Phenix from source. In any case, won't this change be silent unless the --use_environment_flags is passed to configure.py? Installing Phenix from source is done through a massive shell script that completely obscures the messy details of configure flags - and definitely does not include --use_environment_flags. So although I also tend towards paranoia regarding environment variables, I don't see this impacting Phenix in any way. -Nat
On 9 Aug 2012, at 19:16, Nathaniel Echols wrote:
On Wed, Aug 8, 2012 at 6:35 PM, Luc Bourhis
wrote: 0013-fix-to-support-LDFLAGS-in-use_enviroment_flags: not sure. This seems done in an orthodox manner. However, it has the potential of wreaking havoc on Phenix on some machines where LDFLAGS is set in fancy ways.
This is also true with the other flag also, why do you treat LDFLAGS differently than others ?
In my experience, it is very unlikely a non-techie user would have permanently set CXXFLAGS or CCFLAGS but he may have set LDFLAGS in his .bashrc so as to help a program he installed to find the right shlibs. If that non-techie user then installs Phenix from source, esoteric bug reports may follow. But I may just be paranoid indeed.
I really hope non-techie users aren't installing Phenix from source.
Ok ;-) But then I could have written olex2.refine!
In any case, won't this change be silent unless the --use_environment_flags is passed to configure.py?
Indeed. So no problem here then. Sorry for the noise, Luc
Hi Luc, I'll discuss mostly my part, plus a few general remarks. On 09/08/2012 03:35, Luc Bourhis wrote:
Radostan Riedel wrote:
OK, to make that a little clearer: a few patches really are only for packaging; they can't and shouldn't go back upstream.
Well, they shouldn't go into the packages. But if they end up being shared with other distros in the future, it could make sense to host them in a branch of your source repository. Time will tell.
So I had missed something indeed! This is my first experience of a Linux package in the making, you see. Unfortunately, both Baptiste and Frederic-Emmanuel answered my criticisms thinking they were aimed at your design of the cctbx Debian package. On the contrary, they had to be understood in the context of an installation of the cctbx by hand, or using the package that CCI provides. I am afraid we talked past each other here!
I wouldn't say that we talked past each other. Discussing our design choices helps reduce the "impedance mismatch" between your code and ours. Also, you know the cctbx code better than us, so your advice is always helpful. Regarding installation by hand: the setup.py file could a priori also be used by end-users to install cctbx to the system python library. I didn't discuss this possibility because such a system install doesn't seem to be a supported way to install cctbx.
Nevertheless, my thanks to Frederic-Emmanuel and Baptiste to elaborate on those patches that were only meant for packaging and that I criticised most: it is still very useful for us to understand your rationales. Some specifics:
[...]
Baptiste Carvello wrote:
The patch 0016-adapt-test_utils-in-libtbx-for-setup_py-test [...]
It may be, though, that this can be achieved more cleanly by appropriately reconfiguring the pickled Environment object.
I would indeed think so but really it is up to you whether you want to maintain such a patch or not.
By combining this idea, and the strategy that is used in function get_module_tests of the new "libtbx/test_utils/parallel.py" module, I can avoid patching upstream code altogether. That way, all my changes will be confined in the setup.py and sconsutils.py files, which is much better. I'll update my patches accordingly in the second half of next week.
Now about the support of distutils. Even though, as we both agree, this code is totally orthogonal to our existing code base, this is no small patch; it will go upstream and we will therefore need to maintain it. Hence my scrutiny. So thanks for your detailed explanations, which will save me some time reading your code.
It's a big chunk of code indeed. I tried to keep the different classes independent, so that they can be reviewed in isolation. The main difficulty is the need to understand the underlying distutils architecture: being mostly glue code, the classes don't do much by themselves.
Whether this "build_py" class is needed actually depends on you: if the cctbx community is committed to keeping cctbx working also without the "-Qnew" option, we don't need it, which is much better for us.
This is actually the other way around: we wanted the cctbx to always be run with -Qnew, and we actually had to fix the code in quite a few places to make all tests pass with -Qnew. Given those Python dispatchers in the first place, the least intrusive change was definitely to add -Qnew. The alternative, adding "from __future__ import division" to every single Python module, did not appeal to us. Thus that build_py class is definitely necessary.
The thing is, right now, most modules in cctbx don't need an additional __future__ line at all. They run equally well with or without "-Qnew", for one of two reasons:
* either because they already have the __future__ line, probably because it was already there and has not been actively stripped,
* or because int division is not used at all. The most common style, from the little I could see, seems to be using explicit constructs in all cases, for example "x//2" (explicit integer division) or "x/2." (explicit floating-point division). This style nicely sidesteps the problem.
If we think that such a style will stay the norm in the future, we don't need any workaround.
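The two explicit styles, and the ambiguous form whose meaning "-Qnew" changes, can be illustrated in plain Python (independent of cctbx):

```python
from __future__ import division  # no-op on Python 3; forces true division on Python 2

# Explicit integer division: same result under classic and true division.
half_count = 7 // 2              # 3 under both semantics

# Explicit floating-point division: the float literal forces float
# division even under Python 2's classic semantics.
half_value = 7 / 2.              # 3.5 under both semantics

# The ambiguous form: 3 under classic Python 2 division, 3.5 under
# true division (-Qnew, the __future__ import above, or Python 3).
ambiguous = 7 / 2
```

Only the last form depends on the interpreter's division mode, which is why code written in the first two styles runs identically with or without -Qnew.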
As I wrote above, the patch is not small and I definitively need to spend more time looking into it. And I will therefore come back to you with more questions.
Sure. I'll take a few days off starting friday, but I'll be back in the middle of next week. Also remember that the "test" part is still a work in progress (so don't waste too much time on the current implementation), and, as a consequence, the rest is also quite undertested ;-) The big design bug I know of is the fact that some of the tests, as well as the test data, are completely ignored. I won't have time to fix it before tomorrow. Cheers, Baptiste
On Thu, Aug 9, 2012 at 10:27 AM, Baptiste Carvello
Well, they shouldn't go into the packages. But if they end up being shared with other distros in the future, it could make sense to host them in a branch of your source repository. Time will tell.
Personally, I am very reluctant to introduce branching if there's some way to avoid it.
Regarding installation by hand: the setup.py file could a priori also be used by end-users to install cctbx to the system python library. I didn't discuss this possibility because such a system install doesn't seem to be a supported way to install cctbx.
It isn't, but it would be great if this became possible at some point in the future. We (Berkeley folks) haven't invested any effort towards this goal in the past because it wouldn't make it any easier for us to distribute Phenix, which is a much larger and messier infrastructure, and since we still depend on the current, non-standard CCTBX build system for those other modules, we will never be able to get rid of it entirely. However, I suspect the current system discourages adoption of CCTBX by other potential users, so an alternative would be very welcome.
The thing is, right now, most modules in cctbx don't need an additional __future__ line at all. They run equally well with or without "-Qnew", for one of two reasons:
* either because they already have the __future__ line, probably because it was already there, and has not been actively stripped,
* or because int division is not used at all. The most common style, from the little I could see, seems to be using explicit constructs in all cases, for example "x//2" (explicit integer division) or "x/2." (explicit floating-point division). This style nicely sidesteps the problem.
If we think that such a style will stay the norm in the future, we don't need any workaround.
I actually think introducing -Qnew was a mistake, because there are third-party modules which we rely on (mostly in Phenix, although some stuff has trickled into CCTBX) which simply break when true division is forced. The Python Imaging Library is the worst offender, and unfortunately it looks like it's no longer maintained. I definitely agree that it would be better if everyone continues to use the explicit division styles instead of relying on the Python interpreter's default behavior, and if there is code in CCTBX where this isn't happening, it is probably worth changing. Unfortunately, once again we're constrained by the rest of Phenix, which is less easily fixed, so -Qnew needs to stay in the default dispatchers for now. -Nat
On 9 Aug 2012, at 19:39, Nathaniel Echols wrote:
I actually think introducing -Qnew was a mistake, because there are third-party modules [...] which simply break when true division is forced. [...] I definitely agree that it would be better if everyone continues to use the explicit division styles instead of relying on the Python interpreter's default behavior
I agree with you that using -Qnew has annoying side-effects. However I beg to differ as to the best solution. Imho we should enforce from __future__ import division at the beginning of every single Python module by adding another diagnostic to libtbx.find_clutter. Best wishes, Luc
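The diagnostic Luc suggests could be sketched roughly as follows (the function name is hypothetical, and the wiring into libtbx.find_clutter is not shown):

```python
import re

# Matches e.g. "from __future__ import division" or
# "from __future__ import division, print_function" at the start of a line.
_FUTURE_DIVISION = re.compile(
    r"^from\s+__future__\s+import\s+[^\n]*\bdivision\b", re.MULTILINE)

def missing_future_division(source_text):
    """Return True if a Python module's source lacks the division import."""
    return _FUTURE_DIVISION.search(source_text) is None
```

A real diagnostic would also have to skip generated files and decide how to report offenders, which find_clutter already has conventions for.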
On Thu, Aug 9, 2012 at 1:24 PM, Luc Bourhis
I agree with you that using -Qnew has annoying side-effects. However I beg to differ as to the best solution. Imho we should enforce from __future__ import division at the beginning of every single Python module by adding another diagnostic to libtbx.find_clutter.
Are you volunteering? :) -Nat
On 9 Aug 2012, at 22:32, Nathaniel Echols wrote:
On Thu, Aug 9, 2012 at 1:24 PM, Luc Bourhis
wrote: I agree with you that using -Qnew has annoying side-effects. However I beg to differ as to the best solution. Imho we should enforce from __future__ import division at the beginning of every single Python module by adding another diagnostic to libtbx.find_clutter.
Are you volunteering? :)
Yes! It has been on my todo list with a horizon around early 2013. But since the Debian guys need it, let's move it up! Luc
On Thu, Aug 9, 2012 at 1:38 PM, Luc Bourhis
Yes! It has been on my todo list with a horizon around early 2013. But since the Debian guys need it, let's move it up!
Okay, fine with me - but just to clarify, we're keeping -Qnew in the dispatchers for now, correct? I'm not very enthusiastic about converting the rest of Phenix, especially since most of the non-CCTBX developers don't like to run libtbx.find_clutter. -Nat
On 9 Aug 2012, at 22:49, Nathaniel Echols wrote:
On Thu, Aug 9, 2012 at 1:38 PM, Luc Bourhis
wrote: Yes! It has been on my todo list with a horizon around early 2013. But since the Debian guys need it, let's move it up!
Okay, fine with me - but just to clarify, we're keeping -Qnew in the dispatchers for now, correct? I'm not very enthusiastic about converting the rest of Phenix, especially since most of the non-CCTBX developers don't like to run libtbx.find_clutter.
Removing -Qnew was not in my plans, don't worry! Luc
Greetings, This is not a Gentoo issue, but with Debian there is definitely the issue of using recent versions of libraries. On the Debian systems where I work, the stable Boost version is 1.42.0, and by the time a particular cctbx release might be considered stable it will probably be several years old and then suffer from inertia. Maybe my view of software development is incorrect, but real work is needed to add functions to libraries, and having to then wait some time before actually using such a function on all platforms is quite frustrating. My question to the "debian group" is: how can we/I help with the Debian process to keep a recent version of cctbx marked as stable for each Debian release? Regards, Jeff Van Voorst
On Fri, 10. Aug 08:23, Jeffrey Van Voorst wrote:
Greetings, Hello,
This is not a Gentoo issue, but with Debian there is definitely the issue of using recent versions of libraries. On the Debian systems where I work, the stable Boost version is 1.42.0, and by the time a particular cctbx release might be considered stable it will probably be several years old and then suffer from inertia. Maybe my view of software development is incorrect, but real work is needed to add functions to libraries, and having to then wait some time before actually using such a function on all platforms is quite frustrating. My question to the "debian group" is: how can we/I help with the Debian process to keep a recent version of cctbx marked as stable for each Debian release? OK, I'll try to answer, although I guess Fred, as a Debian Developer, can add more here. The process in Debian focuses on stability, and this only comes with time. But to make some things clear: we won't be keeping cctbx 2012-05-08 until the next Debian release, probably in 2015. We will continue to upgrade to new upstream versions and build them against Debian's unstable distribution, Sid. The next stable will be Wheezy, which will probably be released at the beginning of 2013. The stable Boost.Python version will be 1.49.
If you really need faster releases and more recent software, you can use Debian-based OSes like Ubuntu[1] or Aptosid[2]; they auto-import packages from Sid. Another way to benefit from the packages is to build your own repository that users can use. This can be done by anyone. I keep my own Personal Package Archives on Launchpad[3]. This is only for Ubuntu. Another great way is to use the openSUSE Build Service[4]. This is a very cool and free solution to build and distribute your packages for multiple distros (Fedora, openSUSE, RedHat, Debian, Ubuntu), although sadly no Gentoo or Arch Linux yet. I'll be setting up my own public build service when we are done, since I have a few Django projects which depend on cctbx. kind regards Radi [1] http://www.ubuntu.com/ [2] http://aptosid.com/ [3] https://help.launchpad.net/Packaging/PPA/ [4] https://build.opensuse.org/
I've just had a look at http://wiki.debian.org/DebianScience/cctbx. I think ssm (or libssm) is missing from the list of dependencies. The sources are at ftp://ftp.ccp4.ac.uk/opensource/ The packages there have dates appended to versions, because they are actually repository snapshots. I've set up an OBS repository (to evaluate the usefulness of OBS) with a few ccp4 libraries, only RPMs for now: https://build.opensuse.org/project/show?project=home:wojdyr:ccp4 It's a really nice service; the user interface looks like this (superpose is a small program, part of the ssm library): http://software.opensuse.org/download?project=home:wojdyr:ccp4&package=superpose I'd also like to add Debian packages there, but I have less experience with debs, so I'll wait until you guys make packages for Debian and then I'll copy them. Regarding forked versions of mmdb and libccp4: Eugene actively maintains mmdb and fixes bugs from time to time, so it's better to take it directly from us. gpp4 is a fork of a subset of libccp4 (maybe it was forked before libccp4 was separated from the ccp4 codebase), but the difference is not big. libccp4 requires another dependency, ccif, but it can be avoided using --disable-ccif (ccif is not needed for cctbx and coot). AFAIK coot works fine with the latest versions of the libraries, and the versions in cctbx are updated nightly from our repositories. And a question about so-versions: is it a formal requirement for Debian packages? What will be the initial so-version of cctbx? 0.0.0? Marcin
Hi Marcin,
I've just had a look at http://wiki.debian.org/DebianScience/cctbx. I think ssm (or libssm) is missing in the list of dependencies.
I can't find any mention of ssm in SConscript but there is ccp4io/lib/ssm. Actually ccp4io is not mentioned on that web page. Could somebody explain that to me? Best wishes, Luc
Hi Luc
On 30 August 2012 14:56, Luc Bourhis
I can't find any mention of ssm in SConscript
see ssm_sources in cctbx_sources/ccp4io_adaptbx/SConscript
Actually ccp4io is not mentioned on that web page. Could somebody explain that to me?
ccp4io is a name used in cctbx. Its content is updated from three ccp4 repositories. Typically Linux distros don't like duplication of code and don't allow packaging something that includes code already packaged. Since mmdb, ssm and libccp4 are also used by coot and other programs, before packaging cctbx it's necessary to hack it so it can be built with external libraries. At least that's my understanding. Marcin
On Thu, 30. Aug 16:40, Marcin Wojdyr wrote:
ccp4io is a name used in cctbx. Its content is updated from three ccp4 repositories. Typically Linux distros don't like duplication of code and don't allow packaging something that includes code already packaged. Since mmdb, ssm and libccp4 are also used by coot and other programs, before packaging cctbx it's necessary to hack it so it can be built with external libraries. At least that's my understanding. That's correct. It is uncomfortable to download software which already distributes other software which its authors don't develop. IMHO this is unnecessary. If everyone did that, we'd have dozens of duplicate libraries.
On Thu, Aug 30, 2012 at 9:04 AM, Radostan Riedel
Typically Linux distros don't like duplication of code and don't allow packaging something that includes code already packaged. Since mmdb, ssm and libccp4 are also used by coot and other programs, before packaging cctbx it's necessary to hack it so it can be built with external libraries. At least that's my understanding. That's correct. It is uncomfortable to download software which already distributes other software which its authors don't develop. IMHO this is unnecessary. If everyone did that, we'd have dozens of duplicate libraries.
It is indeed an unfortunate situation, but historically* this was necessary because the license of the CCP4 libraries was ambiguous and/or unacceptable, and we were stuck using an obsolete version for a time. This has been resolved (the libraries are now LGPL, I believe), but Ralf maintained a separate source tree because (among other reasons) he couldn't convince the CCP4 maintainers to accept all of his patches. So the libraries we distribute are not necessarily the same as what you'd be installing system-wide. (I do not actually know what the changes are, but I will ask Ralf.) Note that we also use the ccp4 libraries in SOLVE/RESOLVE, which is part of Phenix, and I suspect that some of the customizations are for that purpose, not CCTBX directly. -Nat (* this predates my involvement, so I apologize if I'm butchering the details)
On Thu, 30. Aug 09:26, Nathaniel Echols wrote:
It is indeed an unfortunate situation, but historically* this was necessary because the license of the CCP4 libraries was ambiguous and/or unacceptable, and we were stuck using an obsolete version for a time. This has been resolved (the libraries are now LGPL, I believe), but Ralf maintained a separate source tree because (among other reasons) he couldn't convince the CCP4 maintainers to accept all of his patches. So the libraries we distribute are not necessarily the same as what you'd be installing system-wide. (I do not actually know what the changes are, but I will ask Ralf.)
Note that we also use the ccp4 libraries in SOLVE/RESOLVE, which is part of Phenix, and I suspect that some of the customizations are for that purpose, not CCTBX directly. OK, but I was talking in general, not only about ccp4. cctbx distributes a lot of bundled libraries which are not maintained by this project: boost, scons, antlr3c, ann, cbflib, mmdb, clipper, gl2ps.
On Thu, Aug 30, 2012 at 9:54 AM, Radostan Riedel
OK, but I was talking in general, not only about ccp4. cctbx distributes a lot of bundled libraries which are not maintained by this project: boost, scons, antlr3c, ann, cbflib, mmdb, clipper, gl2ps.
This is because we want people to be able to install and use CCTBX with minimal fuss, instead of wasting days trying to put together all of the third-party libraries and dealing with the inevitable compile-and-link problems. (Even for the libraries like Boost or SCons that are likely to be available on Linux distributions, there is no guarantee that the version installed by "apt-get" et al. will actually be compatible with our code.) Most users don't like dealing with this - and it is a huge barrier to adoption. At any rate, there's no requirement that these packages be distributed with CCTBX (and they're not part of the SVN repository on SF); however, the binary bundles would be absolutely useless without them. -Nat
On Thu, 30. Aug 10:05, Nathaniel Echols wrote:
This is because we want people to be able to install and use CCTBX with minimal fuss, instead of wasting days trying to put together all of the third-party libraries and dealing with the inevitable compile-and-link problems. (Even for the libraries like Boost or SCons that are likely to be available on Linux distributions, there is no guarantee that the version installed by "apt-get" et al. will actually be compatible with our code.) Most users don't like dealing with this - and it is a huge barrier to adoption.
At any rate, there's no requirement that these packages be distributed with CCTBX (and they're not part of the SVN repository on SF); however, the binary bundles would be absolutely useless without them. Yeah, I understand the reasons if you distribute binary bundles, but source bundles? Normally, if I want to compile something myself, I'm aware of the work I'll have to do. Also, cctbx is not mainstream as I understand it: it's for software developers. And I naively believe software developers know how to compile and install software :D. End users use Phenix or Olex2 etc. Olex2, for example, bundles cctbx too, so mainstream users are normally not confronted with that. But I don't want to change your distribution system; I just think it'd be nice to have an additional minimal unbundled package for freaks like myself ;).
About the compatibility issues: it would be nice to have a check for a specific library and version. How can I be sure it will work with a specific shlib version? Are you just using your test system to evaluate? I ran the tests with my system libraries and they worked fine; is this a guarantee that it's working? kind regards Radi
On Thu, Aug 30, 2012 at 10:27 AM, Radostan Riedel
Yeah, I understand the reasons if you distribute binary bundles, but source bundles? Normally, if I want to compile something myself, I'm aware of the work I'll have to do. Also, cctbx is not mainstream as I understand it: it's for software developers. And I naively believe software developers know how to compile and install software :D.
They may know how, but some of us are quite lazy and/or impatient, and our experience has been that it's difficult to convince other developers to adopt a package that requires so much effort to set up.
End users use Phenix or Olex2 etc. Olex2, for example, bundles cctbx too, so mainstream users are normally not confronted with that. But I don't want to change your distribution system; I just think it'd be nice to have an additional minimal unbundled package for freaks like myself ;).
Can't you just use the SVN checkout for this?
Are you just using your test system to evaluate? I ran the tests with my system libraries and they worked fine; is this a guarantee that it's working?
Yes, I think this should be relatively safe - if you find cases where this turns out not to be the case, it would mean that our test coverage is incomplete. -Nat
Hi Nat,
This is because we want people to be able to install and use CCTBX with minimal fuss, instead of wasting days trying to put together all of the third-party libraries and dealing with the inevitable compile-and-link problems.
I fully agree with you here. But when I work on Linux, sudo yum has proven time and again that it takes away those chores quickly and efficiently...
So the libraries we distribute are not necessarily the same as what you'd be installing system-wide.
... However, that is the serious problem here indeed. The most serious issue here is timing and funding. I apologise if what I am going to write sounds blunt to Radostan, Baptiste, Marcin and the others, but we are not paid to nicely package the cctbx. I would even go as far as to say that we are not paid to develop the cctbx, actually. The guys in Berkeley are funded to develop Phenix; I get a salary from Bruker to write closed-source software for them; the Olex2 team can only justify putting in work that will shine through Olex2. At times, to meet our tight deadlines, anything goes: we happily make incompatible hacks to the environment we use, and we don't have the time to do it nicely or to put in the effort to discuss those changes with upstream. As time passes, as more and more code of ours depends on those changes while upstream has more and more code that is incompatible with them, it becomes so difficult to change that situation that it is likely to stay frozen forever. This can be seen everywhere in the cctbx. Best wishes, Luc
On Fri, 31. Aug 02:29, Luc Bourhis wrote:
The most serious issue here is timing and funding. I apologise if what I am going to write sounds blunt to Radostan, Baptiste, Marcin and the others, but we are not paid to nicely package the cctbx. I would even go as far as to say that we are not paid to develop the cctbx, actually. The guys in Berkeley are funded to develop Phenix; I get a salary from Bruker to write closed-source software for them; the Olex2 team can only justify putting in work that will shine through Olex2. At times, to meet our tight deadlines, anything goes: we happily make incompatible hacks to the environment we use, and we don't have the time to do it nicely or to put in the effort to discuss those changes with upstream. As time passes, as more and more code of ours depends on those changes while upstream has more and more code that is incompatible with them, it becomes so difficult to change that situation that it is likely to stay frozen forever. This can be seen everywhere in the cctbx. I do understand this. We don't get paid either. For me this is not an issue, although I have already spent a few months of work. I'd be frustrated at first if our efforts were for nothing, but I can accept this and I'd probably try to work around it somehow. I have my own projects based on cctbx and I'm really a big fan of cctbx. But I can't distribute my projects properly without bundling cctbx with them. We are building a cascading or snowball system where everyone is bundling one another. This leads to chaos. I already had some weird problems because of the environment variables. It's messing up the system in some rare cases!
I have contributed to a few free open source projects, and from my experience I have to say it's always good to build and maintain a community to benefit from people's work. But if this is not what you want and you want to keep your development in a small circle, then it's your choice, and we can't do anything about it besides forking (which is unrealistic and unlikely). Again, nobody is trying to force you into anything. We are just offering suggestions and help; if this is not needed, please be honest with us so we don't have to spend more time on this. kind regards Radi
Hi Radovan,
For me this is not an issue, although I have already spent a few months of work. I'd be frustrated at first if our efforts were for nothing
It won't come to that extreme. We will try our best to see how and when we can stop relying on our own patched versions of libraries. However, the example of ANTLR that made the limelight earlier in this thread demonstrates we won't always be able to achieve that goal. We will definitely discuss those issues during the next Phenix workshop in Cambridge in September. Marcin will be there, so we can discuss what can be done about that ccp4io of ours.
but I can accept this and I'd probably try to work around it somehow. I have my own projects based on cctbx and I'm really a big fan of cctbx. But I can't distribute my projects properly without bundling cctbx with them. We are building a cascading or snowball system where everyone is bundling one another. This leads to chaos.
As it should be clear, I fully agree with you on the principles. But we need to be realistic as well.
I have contributed to a few free open source projects, and from my experience I have to say it's always good to build and maintain a community to benefit from people's work. But if this is not what you want and you want to keep your development in a small circle, then it's your choice, and we can't do anything about it besides forking (which is unrealistic and unlikely).
The cctbx has always been strongly advocated as an open project that welcomes contributions, and a significant amount of time has been spent on teaching people about that, e.g. at the IUCr computing schools or the computing fayres organised during various crystallography conferences. With mixed results, to be honest. It has nevertheless always been a strong motivation for Ralf, and for myself as well. So it makes us very happy that you are so interested in the cctbx that you have found the motivation to do all that work. But it happens that the cctbx core contributors tried to build such a community on their own terms, by using ad hoc tools and methods, like distributing cctbx binaries with all deps for cctbx users, and simply asking new developers to replace the source directory by a checkout. They have always been aware that this was at odds with the way most open source software is developed on Linux, but this sounded like a good compromise for everybody. So now it appears that it does not work, not only for you, but for Soleil as an organisation. So definitely, that's strong enough a motivation to see how we can accommodate your needs. That your proposed patches carefully try to keep your way and ours orthogonal, by relying on extra options passed to libtbx/configure.py, makes them even more likeable. We will apply your patches in due course. But I would like to make it clear that not all of the cctbx dependencies we currently bundle will go away, and that it will take quite some time before we start reducing the count. Best wishes, Luc
The cctbx has always been strongly advocated as an open project that welcomes contributions, and a significant amount of time has been spent on teaching people about that, e.g. at the IUCr computing schools or the computing fayres organised during various crystallography conferences. With mixed results, to be honest. It has nevertheless always been a strong motivation for Ralf, and for myself as well. So it makes us very happy that you are so interested in the cctbx that you have found the motivation to do all that work.
But it happens that the cctbx core contributors tried to build such a community on their own terms, by using ad hoc tools and methods, like distributing cctbx binaries with all deps for cctbx users, and simply asking new developers to replace the source directory by a checkout. They have always been aware that this was at odds with the way most open source software is developed on Linux, but this sounded like a good compromise for everybody.
On Sat, 01. Sep 04:18, Luc Bourhis wrote:
So now it appears that it does not work, not only for you, but for Soleil as an organisation. So definitely, that's strong enough a motivation to see how we can accommodate your needs. That your proposed patches carefully try to keep your way and ours orthogonal, by relying on extra options passed to libtbx/configure.py, makes them even more likeable. We will apply your patches in due course. But I would like to make it clear that not all of the cctbx dependencies we currently bundle will go away, and that it will take quite some time before we start reducing the count.
Thanx for your kind words. I don't need you to unbundle cctbx for your public releases (I can already do it automatically with a script during packaging ;)). I just asked for the reasons (which sound reasonable to me now) and for an optional package, but svn is fine too. I'm ok to compromise. Let's all work together on the things we can agree on and postpone the other issues to the future.
On Thu, 30. Aug 13:18, Marcin Wojdyr wrote:
I've set up an OBS repository (to evaluate the usefulness of OBS) with a few ccp4 libraries, only RPMs for now: https://build.opensuse.org/project/show?project=home:wojdyr:ccp4 It's a really nice service; the user interface looks like this (superpose is a small program, part of the ssm library): http://software.opensuse.org/download?project=home:wojdyr:ccp4&package=superpose I'd also like to add Debian packages there, but I have less experience with debs, so I'll wait until you guys make packages for Debian and then I'll copy them. It really is a nice service, and you're welcome to do so once I manage to package them.
Regarding forked versions of mmdb and libccp4: Eugene actively maintains mmdb and fixes bugs from time to time, so it's better to take it directly from us. gpp4 is a fork of a subset of libccp4 (maybe it was forked before libccp4 was separated from the ccp4 codebase), but the difference is not big. libccp4 requires another dependency, ccif, but it can be avoided using --disable-ccif (ccif is not needed for cctbx and coot). AFAIK coot works fine with the latest versions of the libraries, and the versions in cctbx are updated nightly from our repositories. It's always better to use the original software, I guess. To package software we need a good build system like GNU Autotools. How are the packages distributed? Can I download libccp4, libmmdb and libssm as standalone packages? Is there proper versioning of the packages, and so-versioning? If this is guaranteed, I guess it shouldn't be too hard to package them from scratch.
And a question about so-versions: is it a formal requirement for Debian packages? What will be the initial so-version of cctbx? 0.0.0?
It's frowned upon to package a shared library without so-versioning. I can imagine some developers building software against a shlib without versioning; a few years later upstream might change the API, and this leads to weird errors at runtime that nobody can explain. As for the version number, this is something upstream needs to decide. From the libtool manual[1], for version information current:revision:age:
"""
Start with version information of `0:0:0' for each libtool library. Update the version information only immediately before a public release of your software. More frequent updates are unnecessary, and only guarantee that the current interface number gets larger faster.
If the library source code has changed at all since the last update, then increment revision (`c:r:a' becomes `c:r+1:a').
If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
If any interfaces have been added since the last public release, then increment age.
If any interfaces have been removed since the last public release, then set age to 0.
"""
BTW: Would it be possible to unbundle cctbx and distribute it separately for people who already have the shlibs? In GNU Autotools you can check for a library and abort if it's missing. I'm thinking of something like this: cctbx_2012.08.30.src.tar.gz [1] http://www.nondot.org/sabre/Mirrored/libtool-2.1a/libtool_6.html
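Read mechanically, the quoted rules amount to something like the sketch below (not an official libtool tool; the function name is made up, and a changed interface is treated as removed-and-added when updating age, which is how the manual's wording is usually read):

```python
def next_version_info(current, revision, age,
                      source_changed=False, added=False,
                      removed=False, changed=False):
    """Apply the libtool current:revision:age update rules for one release."""
    if source_changed or added or removed or changed:
        revision += 1              # any source change bumps revision
    if added or removed or changed:
        current += 1               # interface changes bump current...
        revision = 0               # ...and reset revision
    if added:
        age += 1                   # new interfaces: old clients still work
    if removed or changed:
        age = 0                    # removed/changed interfaces break them
    return current, revision, age
```

Starting from 1:2:3, for example: an implementation-only change gives 1:3:3, adding interfaces gives 2:0:4, and removing or changing interfaces gives 2:0:0.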
On 30 August 2012 15:10, Radostan Riedel
To package software we need a good build system like GNU Autotools. How are the packages distributed? Can I download libccp4, libmmdb and libssm as standalone packages?
ftp://ftp.ccp4.ac.uk/opensource/ They use autotools, and I hope there will be no problems with packaging (let me know if you have any).
Is there a proper versioning of the packages and so-versioning?
I've added dates to the version numbers, so it's more or less proper versioning. Actually, the authors of the libraries don't have a strong need to care about versions, because the primary distribution channel is the complete ccp4 suite, which is versioned separately. But there are version numbers, and they are always increased. SO versions are not really maintained; it's just 0.0.0 in all cases. Marcin
On Thu, 30. Aug 16:56, Marcin Wojdyr wrote:
ftp://ftp.ccp4.ac.uk/opensource/ They use autotools, and I hope there will be no problems with packaging (let me know if you have any). This looks nice. I'll give it a try and see if we can get them into Debian.
I've added dates to the version numbers, so it's more or less proper versioning. Actually, the authors of the libraries don't have a strong need to care about versions, because the primary distribution channel is the complete ccp4 suite, which is versioned separately. But there are version numbers, and they are always increased. SO versions are not really maintained; it's just 0.0.0 in all cases. OK. But this could be an issue. It would be nice if upstream were aware of the versioning and would start to maintain it. In any case, versioning is an important indicator of API changes.
Hi Radostan, I took the liberty to fork the thread as it has become far too long.
It's frowned upon to package a shared library without so-versioning. I can imagine some developers building software against a shlib without versioning; a few years later upstream might change the API, and this leads to weird errors at runtime that nobody can explain.
As for the version number, this is something upstream needs to decide. From the libtool manual[1]:
current:revision:age
"""
Start with version information of `0:0:0' for each libtool library. Update the version information only immediately before a public release of your software. More frequent updates are unnecessary, and only guarantee that the current interface number gets larger faster.
If the library source code has changed at all since the last update, then increment revision (`c:r:a' becomes `c:r+1:a').
If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
If any interfaces have been added since the last public release, then increment age.
If any interfaces have been removed since the last public release, then set age to 0.
"""
[...]
OK. But this could be an issue. Would be nice if upstream were aware of the versioning and would start to maintain it. In all cases versioning is an important indicator for API changes.
Right, I am willing to learn! But before, a comment is in order: what about our Python code? That's a large part of the cctbx, and it seems to me the very same reasoning justifying so-versioning applies to Python modules and packages: APIs change in incompatible ways on a regular basis. I know many projects do maintain a __version__ string. Is there also some sort of official policy here in the Linux software community? Imho it would be meaningless for the cctbx to invest time in proper so-versioning if we do not version Python, because in many instances a change in the Boost Python interfaces will go hand-in-hand with changes in the Python modules. Now onto so-versioning per se. I am afraid I find the libtool manual excerpt you quoted totally obscure. If you understand it, could you tell me what the next version number would be if we started with e.g. 1.2.3 and then i. changed some implementation but not the API; ii. added new APIs without touching existing ones; iii. removed some APIs without changing existing ones or adding new ones; iv. changed existing APIs without deleting or adding any? Best wishes, Luc
On Fri, 31. Aug 02:05, Luc Bourhis wrote:
Right, I am willing to learn! Great.
But before, a comment is in order: what about our Python code? That's a large part of the cctbx and it seems to me the very same reasonings justifying so-versioning applies to Python modules and packages: API's change in incompatible ways on a regular basis. I know many projects do maintain a __version__ string. Is there also some sort of official policy here in the Linux software community? Imho it would be meaningless for the cctbx to invest time in proper so-versioning if we do not version Python because in many instances, a change in the Boost Python interfaces will go hand-in-hand with changes in the Python modules. I don't think there is anything special about versioning Python extensions; they should be versioned by distutils just like the modules. But maybe Baptiste knows more about that.
Now onto so-versioning per se. I am afraid I find the libtool manual excerpt you quoted totally obscure. If you understand it, could you tell me what the next version number would be if we started with e.g. 1.2.3 and then i. change some implementation but not API; ii. added new API without touching existing ones; iii. removed some API without changing existing ones or adding new ones; iv. changed existing API's without deleting any or adding any?
OK, let's assume you start versioning: you begin with 0.0.0 (current.revision.age).

current: The number of the current interface exported by the library. A current value of 0 means that you are calling interface 0 of this library.

revision: The implementation number of the most recent interface exported by this library. A revision value of 0 means that this is the first implementation of the interface. If the next release of this library exports the same interface but has a different implementation (perhaps some bugs have been fixed), the revision number will be higher, but the current number will be the same. In that case, when given a choice, the library with the highest revision will always be used by the runtime loader.

age: The number of previous additional interfaces supported by this library. If age were 2, then this library could be linked into executables which were built with a release of this library that exported the current interface, or any of the previous two interfaces. By definition age must be less than or equal to current. At the outset, only the first ever interface is implemented, so age can only be 0.

Now, after the public release (we have 0.0.0): if you change any sources for this library, the revision number must be incremented. This is a new revision of the current interface. 0.0.0 ---> 0.1.0

If the interface has changed, then current must be incremented and revision reset to 0. This is the first revision of a new interface. 0.1.0 ---> 1.0.0

If the new interface is a superset of the previous interface (that is, if the previous interface has not been broken by the changes in this new release), then age must be incremented too. This release is backwards compatible with the previous release. 1.0.0 ---> 2.0.1

If the new interface has removed elements with respect to the previous interface, then you have broken backward compatibility and age must be reset to 0. This release has a new, but backwards incompatible interface. 2.0.1 ---> 3.0.0

And we always have to keep in mind not to change any numbers during development, but only immediately before a public release. This way the numbers do not grow rapidly. Hope that's clearer now. Information found here: http://sources.redhat.com/autobook/autobook/autobook_91.html
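Radostan's update rules can be condensed into a tiny helper. The sketch below is illustrative only (`next_version_info` is a made-up name, not part of libtool or cctbx tooling), but it reproduces the 0.0.0 -> 0.1.0 -> 1.0.0 -> 2.0.1 -> 3.0.0 walk-through:

```python
def next_version_info(current, revision, age, code_changed=False,
                      interfaces_added=False, interfaces_broken=False):
    """Apply libtool's current:revision:age rules for one public release.

    interfaces_broken means interfaces were removed or changed incompatibly.
    """
    if interfaces_added or interfaces_broken:
        current += 1      # any interface change bumps "current"...
        revision = 0      # ...and restarts "revision"
        age = 0 if interfaces_broken else age + 1  # pure superset stays compatible
    elif code_changed:
        revision += 1     # same interface, new implementation
    return (current, revision, age)

v = (0, 0, 0)                                      # first public release
v = next_version_info(*v, code_changed=True)       # bug fixes only
v = next_version_info(*v, interfaces_broken=True)  # incompatible change
v = next_version_info(*v, interfaces_added=True)   # compatible additions
print(v)  # (2, 0, 1): usable by callers of interface 1 or 2
```

If I read the libtool documentation correctly, on GNU/Linux the installed file then gets the suffix (current-age).(age).(revision), so (2, 0, 1) would be shipped as libfoo.so.1.1.0; that derived first number is what distribution packagers ultimately care about.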
On 31 Aug 2012, at 01:09, Radostan Riedel wrote:
And we always have to keep in mind not to change any numbers during the development but before a public release. This way the numbers are not growing rapidly.
Since cctbx is released in the form of snapshots, I guess this means that version numbers would indeed have to be updated on any commit that modifies library code. // Johan
On Fri, 31. Aug 03:35, Johan Hattne wrote:
Since cctbx is released in the form of snapshots, I guess this means that version numbers would indeed have to be updated on any commit that modifies library code. OK, but snapshot (nightly) builds are, in my opinion, not considered public releases. I'd just increment the numbers before a public, stable release. This should be communicated and documented somehow so it's not forgotten, and it'd have the positive side effect that users can easily look into the nightly changelog (or commit log) and see if the interface is going to change in the next public release.
My proposed patch works like this: in the specific SConscript where the shared library is built, you just set SHLINKFLAGS for that environment. So somebody working on a new or changed interface could just append "-version-info 1:0:1" to SHLINKFLAGS and maybe add a small comment: # Changed interface from 0:0:0 to 1:0:1 (with backward compatibility) # WORK in progress ....code here.... Then during development anyone else can ditch the backward compatibility and change the comment and code to: # Finishing work on interface from 0:0:0 to 1:0:0 (no more backward compatibility # for public release)! ....code here.... Or you could just set goals for the next public release, stating as a developer guideline that there's going to be a new interface but without backward compatibility. I don't know how you organize yourselves, so this is up to you.
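Spelled out, a SConscript following this convention might contain something like the fragment below. This is only a sketch: the environment and target names and the exact flag spelling are placeholders, and depend on how the libtool patch ends up wiring SHLINKFLAGS.

```python
# Hypothetical SConscript fragment, not actual cctbx code.
# Changed interface from 0:0:0 to 1:0:1 (with backward compatibility)
# WORK in progress
env_example = env.Clone()
env_example.Append(SHLINKFLAGS=["-version-info", "1:0:1"])
env_example.SharedLibrary(target="#lib/example_tbx", source=["example.cpp"])
```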
CCP4 libraries don't increase soversion as recommended in the cited sources, because it would be additional thing to do and there are many things with higher priority. IIUC the primary benefit is that a few versions of the same library can be packaged and installed at the same time. When it's needed we'll bump soversion. Note that not all projects use so-version as autotools docs recommend. For example in Qt so-version == version. There are many libraries with 0.0.0 in my /usr/lib, which is just a default value in libtool, and many projects don't increase so-version on every ABI change, because if part of the ABI is experimental and changes often the version would go up very quickly. But if you come across any practical problems (with ccp4 libs) we'll try to help. Marcin
On Fri, 31. Aug 13:31, Marcin Wojdyr wrote:
CCP4 libraries don't increase soversion as recommended in the cited sources, because it would be additional thing to do and there are many things with higher priority. IIUC the primary benefit is that a few versions of the same library can be packaged and installed at the same time. When it's needed we'll bump soversion. Note that not all projects use so-version as autotools docs recommend. For example in Qt so-version == version. There are many libraries with 0.0.0 in my /usr/lib, which is just a default value in libtool, and many projects don't increase so-version on every ABI change, because if part of the ABI is experimental and changes often the version would go up very quickly. But if you come across any practical problems (with ccp4 libs) we'll try to help. There is also this option in libtool[1]; this is pretty much what you describe with Qt. I don't know about the Debian Policy here, maybe Fred could enlighten me. This option can also be used with my proposed cctbx patch.
I don't care if upstream is using 0.0.0 all the time. But it might happen that somebody will complain if API/ABI compatibility breaks. The most important part of this number is "current". If ccp4 never changes the interface, then 0 is fine and we shouldn't have any issues here. But in the rare case that the interface is changed (with no backward compatibility) I'd be thankful if upstream incremented it. [1] http://www.gnu.org/software/libtool/manual/html_node/Release-numbers.html#Re... regards Radi
On Fri, 31 Aug 2012 16:05:36 +0200
Radostan Riedel
On Fri, 31. Aug 13:31, Marcin Wojdyr wrote:
CCP4 libraries don't increase soversion as recommended in the cited sources, because it would be additional thing to do and there are many things with higher priority. IIUC the primary benefit is that a few versions of the same library can be packaged and installed at the same time. When it's needed we'll bump soversion. Note that not all projects use so-version as autotools docs recommend. For example in Qt so-version == version. There are many libraries with 0.0.0 in my /usr/lib, which is just a default value in libtool, and many projects don't increase so-version on every ABI change, because if part of the ABI is experimental and changes often the version would go up very quickly.
For Qt the version is not that much different from the so number: the API stayed stable during the whole Qt4 history. So the only important point is to have a stable API supported by upstream during the life of the project.
But if you come across any practical problems (with ccp4 libs) we'll try to help. There is also this option in libtool[1]. This is pretty much what you describe with QT. I don't know about the Debian Policy here. Maybe Fred could lighten me up. This option can also be used with my proposed cctbx patch.
The problem is that even if upstream does not care about so-versioning (and I can understand why), the maintainers of a shared library in any distribution cannot. On the Debian side we MUST bump the so number if the binary interface changed from one version to the next. We must then change the name of the binary package of this library (libblablaN -> libblablaN+1), so that the old library is co-installable with the new one (to allow a smooth transition from one version of the library to the next). And to be co-installable, the name of the library MUST be different in both packages; if not, both packages Conflict and it is only possible to install one of them. A lot of coordination is then required from the release team to do the transition (all the depending packages need to be rebuilt and made to transition into the testing distribution at the same time). So this so number is mandatory for the maintainability of the shared library from the distribution's point of view. It is also always welcomed by third parties when it comes to evaluating the risk of basing one of their projects on another library.
I don't care if upstream is using 0.0.0 all the time. But it might happen that somebody will complain if the API/ABI compatibility breaks. Most important part of this number is "current". If ccp4 never changes the interface so 0 is fine then we shouldn't have any issues here. But in the rare case that the interface will be changed (with no backward compatibility) I'd be thankful if upstream increments this.
The important part is to easily identify if the API has changed and to
bump the so name in consequence. So this is all about coordination,
before a release we must have access to a rc version and double check
for the so bump.
Cheers
Frederic
--
GPG public key 4096R/4696E015 2011-02-14
fingerprint = E92E 7E6E 9E9D A6B1 AA31 39DC 5632 906F 4696 E015
uid Picca Frédéric-Emmanuel
Dear Radostan Riedel,
Hope that's more clear now.
Crystal clear! Thanks for the explanations. Now going back to patch 0007-adding-shlib-versioning, is it correct that we would just need to do: env.Append(SHLINKFLAGS=["--version-info=c.r.a"]) for the env that builds a particular .so? Well, that would be for gcc and clang only of course. To be cross-platform I would suggest adding a pseudo-builder SetVersionInfo(vers) that does the right thing on Unix and nothing on Windows (for the time being, as perhaps there is a similar mechanism with msvc). But your well-written patch and that little sugar are the easy part, as they are of the write-once, reuse-forever nature. We need a workflow here, and the two extremes would be as follows. 1. Each and every cctbx developer becomes aware of so-versioning. Let's say a module is at 2.3.4, so we have env.SetVersionInfo(vers="2.3.4") and then a change is made to the C++ code. The developer responsible for that change should then figure out the new c.r.a and add a comment env.SetVersionInfo(vers="2.3.4", #next release# vers="c.r.a") It would then be easy to automate an edit en masse before release that would change that line and all its sisters to env.SetVersionInfo(vers="c.r.a") That's basically a formalised version of your proposition in your answer to Johan earlier. 2. Cctbx developers would not care about so-versioning, and before a Debian release the cctbx Debian maintainers would go through each call to SetVersionInfo to set the right c.r.a, based on a careful examination of the commits, complemented by asking questions to the core cctbx developers (or simply open questions on this forum). The reality is that it may end up in between those extremes. The C++ code tends to be much more stable than the Python code, thus the effort of working out the so-versioning changes is less daunting than it seems. I am much more worried about Python if it comes to that:
I don't think there is anything special about versioning python extensions. It should be versioned by distutils like the module too. But maybe Baptiste knows more on that.
There are relatively few shlibs compared to Python modules. Keeping track of the versions of all of the latter by hand would be an enormous amount of work. I don't think we can get it done cheaply with some automatic keyword expansion if we want a proper major.minor.patch, or worse, something as involved as so-versioning. Best wishes, Luc
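For what it's worth, the cross-platform pseudo-builder Luc suggests could be prototyped along these lines. Everything here is a sketch under assumptions: the helper name SetVersionInfo, the translation of "c.r.a" into libtool's -version-info flag, and the FakeEnvironment stand-in are made up for illustration; in a real SConscript the function would be attached to a genuine SCons Environment (e.g. with env.AddMethod).

```python
import sys

def SetVersionInfo(env, vers):
    """Pseudo-builder sketch: libtool-style so-versioning on Unix, a no-op on
    Windows (where msvc has no -version-info equivalent, for the time being)."""
    if sys.platform.startswith("win"):
        return
    c, r, a = vers.split(".")  # accept "c.r.a" as written in this thread
    env.Append(SHLINKFLAGS=["-version-info", "%s:%s:%s" % (c, r, a)])

class FakeEnvironment(object):
    """Minimal stand-in for an SCons Environment, only for this demo."""
    def __init__(self):
        self.flags = {}
    def Append(self, **kw):
        for key, value in kw.items():
            self.flags.setdefault(key, []).extend(value)

env = FakeEnvironment()
SetVersionInfo(env, "2.3.4")
print(env.flags["SHLINKFLAGS"])  # on Unix: ['-version-info', '2:3:4']
```

The point of the no-op branch is that SConscripts stay portable while the versioning information is kept in one place per library.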
On Sat, 01. Sep 03:33, Luc Bourhis wrote:
Now going back to patch 0007-adding-shlib-versioning, is it correct that we would just need to do: env.Append(SHLINKFLAGS=["--version-info=c.r.a"]) for the env that builds a particular .so? Well, that would be for gcc and clang only of course. To be cross-platform I would suggest adding a pseudo-builder SetVersionInfo(vers) that does the right thing on Unix and nothing on Windows (for the time being, as perhaps there is a similar mechanism with msvc). That's a good idea. I'll try to rework my patch to ease everything up.
But your well-written patch and that little sugar are the easy part as they are of the write once, reuse forever nature. We need a workflow here and the two extremes would be as follow.
1. Each and every cctbx developer becomes aware of so-versioning. Let's say a module is at 2.3.4, so we have env.SetVersionInfo(vers="2.3.4") and then a change is made to the C++ code. The developer responsible for that change should then figure out the new c.r.a and then add a comment env.SetVersionInfo(vers="2.3.4", #next release# vers="c.r.a") It would then be easy to automate an edit en-masse before release that would change that line and all its sisters to env.SetVersionInfo(vers="c.r.a") That's basically a formalised version of your proposition in your answer to Johan earlier.
2. Cctbx developers would not care about so-versioning and before a Debian release, the cctbx Debian maintainers would go through each call to SetVersionInfo to set the right c.r.a, based on a careful examination of the commits, complemented by asking questions to the core cctbx developers (or simply open questions on this forum). I'd of course like the first extreme better. For starters it would be great if everyone felt responsible for the "current" number. I'm not a C++ developer, so I don't know: is that easy?
There are relatively very few shlibs compared to Python modules. Keeping track of the version of all of the latter by hand would be an enormous amount of work. I don't think we can get it done cheaply with some automatic keyword expansion if we want proper major.minor.patch or worse something as involved as so-versioning. Maybe it would be good to look at how other projects are doing this. As a Python developer I'm always expecting different APIs when it comes to new upstream versions. I checked the Debian policy and there seems to be nothing special about versioning extensions and modules. Maybe Justin can tell us something about Gentoo. I'd say we don't need to worry about it for now.
On Sat, 01. Sep 03:33, Luc Bourhis wrote:
Now going back to patch 0007-adding-shlib-versioning, is it correct that we would just need to do: env.Append(SHLINKFLAGS= [ "--version-info=c.r.a" ] for the env that build a particular .so? Well, that would be for gcc and clang only of course. To be cross-platform I would suggest adding a pseudo-builder SetVersionInfo(vers) that does the right thing on Unix and nothing on Windows (for the time being, as perhaps there is a similar mechanism with msvc). That's a good idea. I'll try to rework by patch to ease everything up. But your well-written patch and that little sugar are the easy part as they are of the write once, reuse forever nature. We need a workflow here and the two extremes would be as follow.
1. Each and every cctbx developer becomes aware of so-versioning. Let's say a module is at 2.3.4, so we have env.SetVersionInfo(vers="2.3.4") and then a change is made to the C++ code. The developer responsible for that change should then figure out the new c.r.a and then add a comment env.SetVersionInfo(vers="2.3.4", #next release# vers="c.r.a") It would then be easy to automate an edit en-masse before release that would change that line and all its sisters to env.SetVersionInfo(vers="c.r.a") That's basically a formalised version of your proposition in your answer to Johan earlier.
2. Cctbx developers would not care about so-versioning and before a Debian release, the cctbx Debian maintainers would go through each call to SetVersionInfo to set the right c.r.a, based on a careful examination of the commits, complemented by asking questions to the core cctbx developers (or simply open questions on this forum). I'd of course like the first extreme better. For starters it would be great if everyone feels responsible for the "current" number. I'm not a c++ developer and I don't know if thats easy?

On 9/1/12 4:35 AM, Radostan Riedel wrote the above. I could be wrong, but here is my understanding + 2 cents. The version info is really for the API and not for code changes that do not affect the API. Therefore, updating an implementation based on a more effective algorithm will not change the version info unless it requires a new API. In other words, improving a refinement method would not require a change to the .so number, provided the "public" interface to the method was not altered (of course a version bump to the cctbx package itself would probably be desirable). If someone is really excited about this or finds it very important, I think the model presented by the sqlite library is simple enough. It distills to: here is the external API (it will be static unless there is a good reason to change it); all other C function prototypes may change at any time. I know that designing an API takes a significant effort, but if done sufficiently well, it allows for significant hacking of the code without having to break existing programs that depend on the library. Also, cctbx is much more complicated and has quite a few warts when compared with sqlite, and versioning works rather well for procedure-based programming, but possibly less so for languages that support constructs such as iterators. The existence of such an API would not preclude hacks to get cctbx to interface with Phenix or other code. However, if such a documented API existed and some set of rules were followed, it would allow students and other smaller groups to design C or C++ code that would not depend on a specific version of cctbx. With respect to Python, it is my opinion that it only becomes a problem if something is removed from the API. However, I am less knowledgeable about how to handle Python, as the general consensus is to have a virtual environment (virtualenv) for each project: Python on Linux and Mac OS X is rather a mess (several different versions of Python 2 and 3 + Cython, Boost.Python, SIP, SWIG, etc.) --Jeff Van Voorst
Dear fellow developers, we may have missed the crux of the matter here. Let's say we have a program prg written in C++ that links to a shared library lib.so, also written in C++, because it uses a function int f(int) provided by lib.so. Then the code of lib.so changes: specifically, the function interface becomes long f(long). If prg is not rebuilt against the new version of lib.so, it will crash, as the dynamic linker will fail to find "int f(int)". Hence the hard stance of Debian on so-versioning. However, the situation in the cctbx is very different: we have a program prg written in Python that loads a shared library lib.so in which Boost Python magic has manufactured a function PyObject *wrapped(PyObject *) that eventually calls int f(int) (1) after checking that the argument is convertible. Then, again, lib.so is modified as the signature of f becomes long f(long) (2) and Boost Python emits slightly different code to check that the argument is convertible. But then the Python program prg won't see the difference at all: any call f(n) that succeeded with (1) will succeed with (2). No crash. Not even an incorrect result. Since every single shared library built by the cctbx is of that Boost Python kind, with the exception of libboost_python.so, which would not be versioned by us anyway, I am getting seriously doubtful that we need so-versioning at all. I mean, if we do not version our pure Python modules then I do not see a reason to version our Boost Python shared libraries, which are just a different breed of Python module. The most lethal changes, such as changing the number of arguments of a function, happen in pure Python modules as well as in Boost Python modules. In both cases they may lead to crashes at run time (note the "may", as it will only crash if the flow of the program passes through a call to the function that has changed, contrary to the C++ case above, which will crash no matter what), but again, why would we single out Boost Python modules? Have I missed something?
Best wishes, Luc
On Sat, 1 Sep 2012 11:35:04 +0200
Radostan Riedel
1. Each and every cctbx developer becomes aware of so-versioning. Let's say a module is at 2.3.4, so we have env.SetVersionInfo(vers="2.3.4") and then a change is made to the C++ code. The developer responsible for that change should then figure out the new c.r.a and then add a comment env.SetVersionInfo(vers="2.3.4", #next release# vers="c.r.a") It would then be easy to automate an edit en-masse before release that would change that line and all its sisters to env.SetVersionInfo(vers="c.r.a") That's basically a formalised version of your proposition in your answer to Johan earlier.
2. Cctbx developers would not care about so-versioning and before a Debian release, the cctbx Debian maintainers would go through each call to SetVersionInfo to set the right c.r.a, based on a careful examination of the commits, complemented by asking questions to the core cctbx developers (or simply open questions on this forum). I'd of course like the first extreme better. For starters it would be great if everyone feels responsible for the "current" number. I'm not a c++ developer and I don't know if thats easy?
In my opinion the first solution would be the best one, also for other distributions. Indeed this is a "social" problem: developers of the C++ library should learn about ABI/API compatibility, as expected by most distributions when it comes to doing "long term" support in an integrated environment. The person who made the change is the only one who knows the implications of the changes. Now, during the build of a Debian package we have tools that can detect API modifications, and the build can fail if a library removes symbols without a so number bump. So it is a round-trip collaboration: we can check whether a so bump is required by building the next future stable release, identify the problem, and explain to the person in charge of the C++ library how to modify the corresponding so number. After a few round trips (releases) I think that people would understand how to deal with those numbers.
There are relatively very few shlibs compared to Python modules. Keeping track of the version of all of the latter by hand would be an enormous amount of work. I don't think we can get it done cheaply with some automatic keyword expansion if we want proper major.minor.patch or worse something as involved as so-versioning. Maybe it can be good to look on other projects how they are doing this. As a python developer I'm always expecting different API's when it comes to new upstream versions. I checked the policy for Debian and there seems to be nothing special in versioning extensions and modules. Maybe Justin can tell us something about Gentoo. I'd say we don't need to worry about it for now.
Yes, except if you distribute a Python extension used by other third parties to build other extensions. A good example is the python-numpy package: it was a nightmare for distributions before they introduced an API and ABI number.
The public C API/ABI of this extension now follows a sort of so number, and when someone builds a package relying on this python-numpy extension, the dh_numpy helper generates the right binary dependencies, based on the ABI number maintained by the numpy upstream: there is a dependency on python-numpy-abix. Example for the scipy package:
Package: python-scipy
Version: 0.10.1+dfsg1-4
Installed-Size: 34405
Maintainer: Debian Python Modules Team
Hi,
But indeed it means you need to care about the versioning of your public C API/ABI.
First, it would be a C++ ABI in the cctbx case, but, see my earlier email, we do not provide any shared library that is linkable by a generic C++ program. We only provide shared libraries that are to be loaded by the Python runtime. So those shared libraries are just Python modules, and their being written in C++ is an implementation detail. What matters are therefore only the Python API, in which I include the API exposed by our *.so, on the one hand, and the C++ API on the other hand. But then each cctbx release contains a TAG file providing a version number from a sortable set, e.g. 2012_05_08_2305. Thus it seems to me that you only need to get that version number through to the Debian packaging system, and we are done with that. Again, have I missed something? Best wishes, Luc
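Luc's TAG-based scheme works because the snapshot stamps are zero-padded and ordered from year down to time, so plain string comparison already orders them chronologically. A quick illustration (only the 2012_05_08_2305 stamp is from this thread; the other values are invented for the example):

```python
# Zero-padded year_month_day_time stamps sort chronologically as strings,
# which is all a packaging system needs to decide which snapshot is newer.
tags = ["2012_05_08_2305", "2011_12_01_0600", "2012_05_08_0915"]
assert sorted(tags) == ["2011_12_01_0600", "2012_05_08_0915", "2012_05_08_2305"]
print(max(tags))  # the most recent snapshot: 2012_05_08_2305
```

Note that Debian's own version comparison (dpkg) has richer rules, but for stamps of this fixed shape the plain lexicographic order coincides with the chronological one.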
Some of us do link C++ programs to cctbx C++ routines. Phil

On 3 Sep 2012, at 21:14, Luc Bourhis wrote:
Hi,
But indeed it means you need to care about the versioning of your public C API/ABI.
First it would be a C++ ABI in the cctbx case but, see my earlier email, we do not provide any shared library that is linkable by a generic C++ program. We only provide shared libraries that are to be loaded by the Python runtime. So those shared libraries are just Python module and their being written in C++ is an implementation detail. What matters is therefore only the Python API, in which I include the Python API exposed by our *.so, on the one hand and the C++ API on the other hand. But then each cctbx release contains a TAG file providing a version number the set of which is sortable, e.g. 2012_05_08_2305
Thus it seems to me that you only need to get that version number through to the Debian packaging system. And we are done with that.
Again, have I missed something?
Best wishes,
Luc
Hi Phil,
Some of us do link C++ programs to cctbx C++ routines
What you do not do is link your program.o with a shared library libcctbx_sgtbx_asu.so, because there is no such thing as the latter. What you do is ask the compiler to pull together your program.o and the object files inside the static library libcctbx_sgtbx_asu.a. But static libraries don't pose the same versioning problems as shared ones. Let's imagine that your program was made from version 1 of libcctbx_sgtbx_asu.a on your development machine A, and then that it gets installed on a machine B with a more recent cctbx, featuring libcctbx_sgtbx_asu.a version 2. Your program will work as well on B as on A. Now replace static with shared in that example, and your program may crash on B. Best wishes, Luc
On 3 Sep 2012, at 23:33, Luc Bourhis wrote:
What you do not do is linking your program.o with a shared library libcctbx_sgtbx_asu.so because there is no such thing as the latter.
Arghh, no, I'm wrong indeed: on Linux, we produce libcctbx_sgtbx_asu.so. I was biased by MacOS X where those are static libs. Ok, so indeed, Radi's arguments apply to those. Still in the minority compared to all the Boost Python ones though. Here is a complete list I think:

lib/libccp4io.so
lib/libcctbx.so
lib/libcctbx_sgtbx_asu.so
lib/libiotbx_mtz.so
lib/libiotbx_pdb.so
lib/libmmtbx_masks.so
lib/libomptbx.so
lib/libscitbx_boost_python.so
lib/libscitbx_minpack.so
lib/libscitbx_slatec.so
lib/libsimdtbx_memory_allocation_central.so
lib/libsmtbx_refinement_constraints.so

Best wishes, Luc
On 9/3/12 4:08 PM, Phil Evans wrote:
Some of us do link C++ programs to cctbx C++ routines Phil
Same here... In particular, it is easier to link to iotbx.pdb than handle all of the pdb corner cases with my own code and it keeps the handling of the pdb hierarchy the same in C++ and Python. --Jeff
On 4 Sep 2012, at 02:43, Jeffrey Van Voorst wrote:
On 9/3/12 4:08 PM, Phil Evans wrote:
Some of us do link C++ programs to cctbx C++ routines
Same here...
Ok, sure, my mistake. Holidays are not good ;-) Seriously, it begs the question as to why we don't make dynamic libraries on MacOS X. With the modern versions of SCons we use, it is rather trivial, and we should remove the hack-ish way we build the Boost Python dynamic library on MacOS X in the process. But I am digressing here. So, Radostan, I reckon that the list of shared libraries we would version is as follows:

libcctbx.so
libcctbx_sgtbx_asu.so
libiotbx_mtz.so
libiotbx_pdb.so
libmmtbx_masks.so
libomptbx.so
libscitbx_boost_python.so
libsimdtbx_memory_allocation_central.so (this one is a work of mine that I will commit soon-ish: just for the record)
libsmtbx_refinement_constraints.so

I have excluded libccp4io.so because your plan is to rely on Debian libraries instead. I have also excluded libscitbx_minpack.so and libscitbx_slatec.so for the same reason, although I don't know your objectives for those two. I would like to emphasise again, and I am sure all the other cctbx developers will agree with me here, that you have to make sure that, after replacing the libraries we ship with the Debian ones, a significant subset of the Phenix test cases still passes with flying colours. What is now called t12 should be enough. Regarding your libtool patch, it appears that, for reasons that have nothing to do with my arguments in this thread of discussion, you have set up env_no_includes_boost_python_ext not to use libtool to version the shlibs. That's how we want it, and you should therefore just change the comment explaining the rationale. However, we are jumping the gun here, as this patch will only make sense if cctbx developers are willing to update so-versions as they change the C++ code. Some libraries in the list above are either tiny, or mature enough, that the work involved will be trivial (libscitbx_boost_python.so, libsimdtbx_memory_allocation_central.so and libomptbx.so come to mind).
On the contrary, libcctbx.so has 70 source files contributing to it and a huge ABI/API. So, dear cctbx developers, what do you think of versioning those shlibs? Do you think you can commit to updating one line in the relevant SConscript after a bunch of changes to the C++ headers or sources (according to the rules laid out by Radostan earlier in this thread)? Again, I know that as far as our immediate work (Phenix, Olex 2, etc.) is concerned, this is basically useless to us. But having a Debian distribution and easing the adoption of the cctbx by the beam lines at Soleil is worth an effort, I reckon. But this is not for me to say alone. Best wishes, Luc
On Tue, 04. Sep 21:51, Luc Bourhis wrote:
So, Radostan, I reckon that the list of shared libraries we would version is as follows:
libcctbx.so libcctbx_sgtbx_asu.so libiotbx_mtz.so libiotbx_pdb.so libmmtbx_masks.so libomptbx.so libscitbx_boost_python.so libsimdtbx_memory_allocation_central.so (this one is a work of mine that I will commit soon-ish: just for the record) libsmtbx_refinement_constraints.so
I have excluded libccp4io.so because your plan is to rely on Debian libraries instead. I have also excluded libscitbx_minpack.so and libscitbx_slatec.so for the same reason, although I don't know your objectives for those two. I would like to emphasise again, and I am sure all the other cctbx developers will agree with me here, that you have to make sure that, after replacing the libraries we ship with the Debian ones, a significant subset of the Phenix test cases still passes with flying colours. What is now called t12 should be enough. I'd like to include libscitbx_slatec and libscitbx_minpack too, as to my understanding these are C++ ports of those Fortran routines and maybe some developers can use them too.
Regarding your libtool patch, it appears that for reasons that have nothing to do with my arguments in this thread of discussion, you have set up env_no_includes_boost_python_ext not to use libtool to version the shlibs. That's how we want it, and you should therefore just change the comment explaining the rationale.
The reason is that libtool does not support Python extensions: it checks whether the file is called lib*.so. I don't know why it is so picky, but as so-versioning is not required for Python extensions I just unset it for env_no_includes_boost_python_ext. To be honest, I wanted to do this with a Python method first, but I see two problems with that. Libtool supports most common compilers and platforms; libtool itself is just a hackish bash script, but it works fine and is the quasi standard for that purpose.
However, we are jumping the gun here, as this patch will only make sense if cctbx developers are willing to update so-versions as they change the C++ code. Some libraries in the list above are either tiny, or mature enough that the work involved will be trivial (libscitbx_boost_python.so, libsimdtbx_memory_allocation_central.so and libomptbx.so come to mind). On the contrary, libcctbx.so has 70 source files contributing to it and a huge ABI/API.
So, dear cctbx developers, what do you think of versioning those shlibs? Do you think you can commit yourself to update one line in the relevant SConscript after a bunch of changes to the C++ headers or sources (according to the rules laid out by Radostan earlier in this thread)? Again I know that as far as our immediate work (Phenix, Olex 2, etc.) is concerned, this is basically useless to us. But having a Debian distribution and easing the adoption of the cctbx by the beam lines at Soleil is worth an effort I reckon. But this is not for me to say alone. I found Fred's idea very nice: we (the Debian maintainers) would check the API/ABI after each new upstream version we package (it seems there are Debian helper scripts for that) and contact this mailing list if there are any problems, trying to collaborate with the cctbx developers. Hopefully this leads to more acceptance of the need for so-versioning.
Regarding the versioning of Python extensions/modules, I'll check the numpy source code as Fred suggested. Maybe there is an easy way to do this in cctbx too without too much work for the cctbx developers. I was also thinking, as there are Debian debhelper scripts for checking API/ABI problems in shlibs, that maybe I can have a look into those scripts and contribute an automatic version checking and setting script. I can imagine taking the last compiled public release as a reference for an automatic script that takes care of the versioning for the cctbx developers; this way you could save work. An automatic script for detecting and setting Python ext/module API/ABI versions would be nice too. You could run it during your test builds and set the version strings. What do you think? kind regards Radi
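For readers unfamiliar with the libtool so-versioning mentioned here: on Linux, the -version-info current:revision:age triple determines the generated file name. A small illustrative helper (not part of cctbx or libtool itself) makes the mapping explicit:

```python
def libtool_soname(libname, current, revision, age):
    """Map a libtool -version-info triple (current:revision:age) to the
    shared-object file name libtool produces on Linux:
    lib<name>.so.(current - age).(age).(revision)"""
    major = current - age
    return "%s.so.%d.%d.%d" % (libname, major, age, revision)

# e.g. linking with -version-info 3:0:1 yields a major soname version of 2
print(libtool_soname("libcctbx", 3, 0, 1))
```

So bumping "current" and "age" together preserves the major version (backwards-compatible interface additions), while bumping "current" alone signals an incompatible ABI break.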
Possibly off topic, but I had issues with libtool and Mac OS X. Shared libraries need to be specified via the -L/path/to/library/dir -l<name> linker flags when using libtool on Mac OS X. For what it's worth, I wrote my own pkgconfig files with the necessary -L and -l linker flags (and CFLAGS). This appears to work fine on Mac OS X, Debian, SUSE Enterprise Linux, and Gentoo. The reasons are: *) Mac OS X uses .dylib for shared libs; libtool ignores *.dylib; and specifying .dylibs is not good in that it makes the Mac OS X setup differ from that of Linux distributions *) Mac OS X does not use *.so files (as far as I know) On another note, are there regression tests (+ documentation) for the shared libraries? If so, it should be fairly easy to set up an automatic method to generate pkgconfig files + simple tests to make sure the linking works. On the other hand, I am assuming SCons is supposed to replace much or all of the standard autotools buildchain, and I don't know if SCons has an equivalent to pkgconfig files. --Jeff
I should add that I am willing to do the work to setup pkgconfig files + tests (or use a similar tool, if one exists) for cctbx as having such tools would make it significantly easier for others to install my tools that depend on cctbx. --Jeff
On Wed, 05. Sep 08:04, Jeffrey Van Voorst wrote: pkgconfig is very important. Although we use libtool for versioning shlibs in Debian, we don't support *.la files anymore. I also proposed a patch to create pkgconfig files, although I think that a simple replacement builder in SCons would be a better solution for creating pkgconfig files[1]. As for SCons: it's a nice build system, but not as powerful as GNU Autotools. It does not support elementary things like so-versioning yet, being more focused on being cross-platform. But I learned it's very flexible and you can do a lot. Also look here[2]. kind regards Radi [1] http://www.scons.org/wiki/SubstInFileBuilder [2] http://www.scons.org/wiki/UsingPkgConfig
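The substitution-builder idea referenced in [1] essentially boils down to template replacement. A minimal sketch (field names and the .pc layout here are illustrative, not the actual cctbx files):

```python
from string import Template

# Hypothetical .pc template; "$$" escapes a literal "$" so that
# pkg-config variables like ${prefix} survive the substitution.
PC_TEMPLATE = Template("""\
prefix=$prefix
libdir=$${prefix}/lib
includedir=$${prefix}/include

Name: $name
Description: $description
Version: $version
Libs: -L$${libdir} -l$lib
Cflags: -I$${includedir}
""")

def make_pc(**fields):
    """Render a pkg-config file from the template fields."""
    return PC_TEMPLATE.substitute(fields)

print(make_pc(prefix="/usr", name="cctbx",
              description="cctbx shared libraries",
              version="2012.05.08.2305", lib="cctbx"))
```

An SCons builder would do the same substitution at build time and register the generated .pc file as an install target.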
On 9 Aug 2012, at 22:24, Luc Bourhis wrote:
On 9 Aug 2012, at 19:39, Nathaniel Echols wrote:
I actually think introducing -Qnew was a mistake, because there are third-party modules [...] which simply break when true division is forced. [...] I definitely agree that it would be better if everyone continues to use the explicit division styles instead of relying on the Python interpreter's default behavior
I agree with you that using -Qnew has annoying side-effects. However I beg to differ as to the best solution. Imho we should enforce from __future__ import division at the beginning of every single Python module by adding another diagnostic to libtbx.find_clutter.
Moreover, I should add that I have the opposite problem. I need to integrate cctbx-based code into Bruker's Python code that is run without -Qnew. The only easy way is to add the __future__ line. Luc
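The division semantics under discussion can be sketched in a few lines (on Python 2, plain "/" between integers truncates unless -Qnew is passed or the __future__ import is present; on Python 3 true division is the default):

```python
from __future__ import division  # no-op on Python 3; changes "/" on Python 2

# With true division in effect, "/" between integers yields a float:
assert 1 / 2 == 0.5

# The explicit styles keep their meaning under either semantics,
# which is why code written this way runs the same with or without -Qnew:
assert 5 // 2 == 2     # explicit integer (floor) division
assert 5 / 2. == 2.5   # explicit floating-point division
```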
Hi Baptiste,
Le 09/08/2012 03:35, Luc Bourhis a écrit :
Radostan Riedel wrote:
OK to make that clear a little bit. A few patches are really only for packaging they can't and shouldn't go back upstream.
Well, they shouldn't go into the packages. But if they end up being shared with other distros in the future, it could make sense to host them in a branch of your source repository. Time will tell.
Actually we are not opposed to move as many patches as possible to our trunk, providing that they are introduced in as incremental a manner as possible if they affect existing code, and if the authors of the patch quickly react to any breakage of our nightly tests so that after a couple of days they pass with flying colours. This is basically what we ask of all of the cctbx developers. Now fair enough, if you study the cctbx repository log, you will see that some people, especially me, pushed as many as 50 commits in one go. But that's because I used git to mirror my work on a representative sample of the machines used for the nightly tests, running a full build and a full test on each chosen machine, fixing problems along the way, and sending the patch bomb only when all problems were ironed out. Even then, it happened that some bugs surfaced on very old or very new Linux distros I had not tested. Either way, it is almost unheard of that the cctbx trunk was broken for more than a few days in a row and we would like to keep it that way. I am sure Nat would be more than happy to give you access to our test machines so that you can do this sort of testing. The alternative would be that all patches go through one of us. I could volunteer to do that.
we wanted the cctbx to always be run with -Qnew and we actually had to fix the code in quite a few places to make all tests pass with -Qnew. Having those Python dispatchers in the first place, the least intrusive change was definitely to add -Qnew. The alternative, adding "from __future__ import division" to every single Python module, did not appeal to us. Thus that build_py class is definitely necessary.
The thing is, right now, most modules in cctbx don't need an additional __future__ line at all. They run equally well with or without "-Qnew", for one of 2 reasons:
Have you run all the tests without -Qnew to affirm that? I mean just search for the regex \b\d(?!\.)/\d(?!\.) for example and you will see how many potential breakages the omission of -Qnew would produce.
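As a quick illustration of the regex Luc suggests (a sketch, not part of the cctbx codebase), it flags digit/digit divisions while skipping float literals:

```python
import re

# A digit, "/", a digit, where neither digit is immediately followed by a
# decimal point -- i.e. a likely integer-division literal such as "1/2".
pattern = re.compile(r"\b\d(?!\.)/\d(?!\.)")

assert pattern.search("x = 1/2") is not None   # flagged: integer division
assert pattern.search("x = 1./2") is None      # float literal, not flagged
assert pattern.search("x = 1/2.") is None      # float literal, not flagged
```

Such literals change value when -Qnew is dropped, which is the breakage Luc is pointing at.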
* either because they already have the __future__ line, probably because it was already there, and has not been actively stripped,
You are correct about that: some of us had got into the habit of systematically adding "from __future__ import division" before the move to -Qnew.
* or because int division is not used at all. The most common style, from the little I could see, seems to be using explicit constructs in all cases, for example "x//2" (explicit integer division) or "x/2." (explicit floating-point division). This style nicely sidesteps the problem.
the x/2. pattern was the alternative to adding "from __future__ import division" used by some developers indeed whereas most of the x//2 were added during the move to -Qnew iirc.
If we think that such a style will stay the norm in the future, we don't need any workaround.
Believe me, we need __future__ division.
Sure. I'll take a few days off starting friday, but I'll be back in the middle of next week.
Same here. Best wishes, Luc
Le 09/08/2012 19:55, Luc Bourhis a écrit :
Actually we are not opposed to move as many patches as possible to our trunk, providing that they are introduced in as incremental a manner as possible if they affect existing code, and if the authors of the patch quickly react to any breakage of our nightly tests so that after a couple of days they pass with flying colours. This is basically what we ask of all of the cctbx developers.
Now fair enough, if you study the cctbx repository log, you will see that some people, especially me, pushed as many as 50 commits in one go. But that's because I used git to mirror my work on a representative sample of the machines used for the nightly tests, running a full build and a full test on each chosen machine, fixing problems along the way, and sending the patch bomb only when all problems were ironed out. Even then, it happened that some bugs surfaced on very old or very new Linux distros I had not tested.
Either way, it is almost unheard of that the cctbx trunk was broken for more than a few days in a row and we would like to keep it that way. I am sure Nat would be more than happy to give you access to our test machines so that you can do this sort of testing. The alternative would be that all patches go through one of us. I could volunteer to do that.
OK to all that. Regarding the setup.py patch specifically, if it is considered too big, I can probably split it into more digestible pieces.
The thing is, right now, most modules in cctbx don't need an additional __future__ line at all. They run equally well with or without "-Qnew", for one of 2 reasons:
Have you run all the tests without -Qnew to affirm that? I mean just search for the regex \b\d(?!\.)/\d(?!\.) for example and you will see how many potential breakages the omission of -Qnew would produce.
Well, "most" may be overstated; that was just my impression. I did not make a full test run yet, either without -Qnew or with my workaround. Both would be very interesting for sure...
the x/2. pattern was the alternative to adding "from __future__ import division" used by some developers indeed whereas most of the x//2 were added during the move to -Qnew iirc.
OK, so basically this style is a leftover from the past, not a coordinated effort to keep the code working without "-Qnew". That answers my questions.
Believe me, we need __future__ division.
I take your word on this! Then we'll use my "build_py" code. As I said above, I could not yet make sure all tests pass, because some tests are not correctly installed and won't run at all. But I'll try to reach that milestone as soon as possible. Cheers, Baptiste
To sum up, the patches that we would like to give back are: remove-hardcoded-libtbx_build-env options-for-system-libs-installtarget-and-prefix adding-shlib-versioning adding-setup_py All the changes are optional. Although I guess that the install target patch still might need some work. All other patches not listed here are Debian specific, and we need them to install everything properly. I understand that these patches are not ready to import as we patched the latest stable release, but we'd be glad to help port this work into trunk. kind regards
Hi everybody, I'm working with Radostan, more specifically on the distutils integration. I'm the author of the patches 0010-adding-setup_py and 0016-adapt-test_utils-in-libtbx-for-setup_py-test, so I'll discuss only them. Le 08/08/2012 01:31, Luc Bourhis a écrit :
0003-correct-paths-in-dispatcher-creation, 0008-Fix-to-skip-pycbf-build, 0016-adapt-test_utils-in-libtbx-for-setup_py-test, 0018-Fix-to-use-systems-include-path, 0019-Fix-to-skip-build-of-clipper-examples I have put those together because they follow the same philosophy. They make sense only in the Debian environment you are designing, where the cctbx will depend on other packages, which will therefore be installed in standard locations if the cctbx is installed. But in an agnostic environment, where the cctbx dynamic libraries and Python modules are not in standard places, those patches break the build system and part of the runtime system. For example, 0018 assumes there is gpp4/ccp4 somewhere on the header paths: that would require changing the packaging of Phenix to match that. This is so obvious that you can't have missed that. So am I missing something here?
The patch 0016-adapt-test_utils-in-libtbx-for-setup_py-test is meant to allow the caller of the tests to pass the location of the test routines ("builddir" and "distdir") to the test runner. I needed this feature to set "distdir" to the distutils build directory, in order to make sure that all the tests and their data are correctly copied by my setup.py script (conclusion: they are not yet, see below). It would also be necessary in order to run the tests on the installed debian system. It may be, though, that this can be achieved more cleanly by appropriately reconfiguring the pickled Environment object. Radostan told me that this patch no longer applies on the newer nightly builds of cctbx, but I didn't look into this problem yet.
0010-adding-setup_py: pending discussions I don't quite understand your code but this is orthogonal to our existing code.
This is indeed quite orthogonal to your code. The goal of this whole distutils integration is twofold: 1) we need to be able to actually install the python part of cctbx, which is normally just run from the source tree. 2) we need to make it easy for the Debian build system ("dh") to trigger a rebuild for each python version supported by Debian. It makes sense to use distutils because the Debian build system knows how to handle it. So the "build_ext", "install_lib" and "clean" classes are really just part of our "plumbing". They could probably be useful to other distros as well. The "install_data" class is mostly used to reconfigure the pickled Environment object, by replacing all the build-time paths with the run-time paths on the installed system.
What do you need class build_py for e.g.?
The "build_py" class is an experiment at adding "from __future__ import division" lines to python files, in case it would be needed to use them without the "-Qnew" option of python. Indeed, in Debian, cctbx will have to be compatible with the system python, without needing a global option. Whether this "build_py" class is needed actually depends on you: if the cctbx community is committed to keeping cctbx working also without the "-Qnew" option, we don't need it, which is much better for us. The "test" command is a first try at running the tests. Right now only half of the tests work, because distutils only copies the importable python packages, and many tests either are not importable, or need data files. I should be able to solve this by fixing my setup.py script. Other projects I have regarding tests are: * make sure the test runner returns with an error code in case of test failures, so that the Debian build system knows it must abort the build. * make it possible to run the tests on installed systems. Cheers, Baptiste PS: please continue to CC me in replies
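The transformation Baptiste's build_py class would perform can be sketched as a pure function (a hypothetical helper, not the actual patch):

```python
FUTURE_LINE = "from __future__ import division\n"

def add_future_division(source):
    """Prepend the __future__ import to a module's source, unless it is
    already present, keeping any leading shebang/encoding comment lines
    first (a __future__ import must still be the first statement)."""
    if FUTURE_LINE.strip() in source:
        return source
    lines = source.splitlines(True)
    i = 0
    while i < len(lines) and lines[i].startswith("#"):
        i += 1  # skip shebang and coding-cookie comment lines
    lines.insert(i, FUTURE_LINE)
    return "".join(lines)
```

A build_py subclass would apply such a function to each module as it is copied into the build directory, so the installed tree no longer depends on -Qnew.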
Hello, to present myself first: I am the Debian Developer who mentors the work of Radostan and Baptiste. I am also working at the French synchrotron SOLEIL [1]. I am in charge of the diffractometer's calculation library [2] used with the Tango control system [3]. I am also preparing what we call a Debian Pure Blend [4] related to photon and neutron facilities [5] (work in progress). During the last Debian Science Sprint at the ESRF [6], I met a guy from the laboratory where I did my PhD (materials science). He is working with Vincent Fabre Nicolin and uses his pynx software [7], which optionally uses cctbx. This is why I started to take an interest in cctbx. Looking at your mailing list, I discovered that Radostan was interested in the cctbx Debian packaging, so I contacted him and voilà...
I appreciate the effort of yours. Making the cctbx installable with aptitude is great indeed. Then, as you mentioned, it would be great if it became installable with yum on Fedora-like system, etc. Making it installable with distutils may open the possibility to use easy_install or pip to install the cctbx. More of a gimmick imho but the average Python user seems to expect that these days.
thanks, we are doing this work in the spirit of easing as much as possible the installation and maintainability of cctbx for non-specialists and, indeed, also for system administrators. As you may not know, the ESRF has decided to use Debian as the first-choice OS on most of its infrastructure. So packaging the software they are using seems to be important for the long-term maintainability of the facility. Nevertheless, per the Debian social contract [8], we try as much as possible to forward our work upstream and to keep it non-Debian-specific, so that all FLOSS can benefit from our work.
First a general comment: you have been using git in a manner that I find suboptimal. It would have been much easier for us (and much more in the spirit of git) if you had asked us to make a public git repository (I exclusively work with git for the record, using git svn to interact with sourceforge, so I could have provided one in no time), and then forked it. Indeed we would have been able to simply check out your repo into a branch of our public repo, then immediately test your changes, and eventually apply those that pass the trial of fire. Actually, as pointed out in my comments below, we can't even apply your patches because some seem to be missing.
Here I would like to explain why I asked Radostan to work the way it is now. We, as the Debian Science team, use a common infrastructure for our team work (mostly packaging efforts). We package only released (stable) software, so we started the cctbx packaging from the latest released version 2012.05.08.2305. It is important for us to host the packaging repository (every member of the Debian Science team can commit to this repository in case there is a problem with the current maintainer, vacation etc...). Now that we have managed to package this latest stable release (still some work to do), we decided to discuss with you the integration of part of our patch series. We completely agree that it would be a lot easier for all of us if the patch series were generated on top of your latest git repository. As a first contact, it is important for us to know your opinion about the proposed patches before working on the git head of the cctbx repository. Can you give us the URL of this repository?
0002-fix-opengl-header-missing-gltbx: rejected Do you really want to force all cctbx users to install OpenGL? Even if they don't need it because e.g. they run cctbx-based scripts as the back end of a web server?
the packaging of cctbx is done like this (for now): - 2 binary packages per shared library: libxxx, libxxx-dbg - 1 binary package for all the devel files - 1 binary package for all the Python modules and extensions: python-cctbx_2012.05.08.2305-1_i386.deb So yes, for now "apt-get install python-cctbx" also pulls in the OpenGL libraries (<30 MB on my computer). Disk space on a server is no longer a problem nowadays; you can find a 1 TB hard disk for less than 250 euros. We should indeed also split python-cctbx into finer-grained packages, but is it worth the effort? This can indeed be discussed.
0003-correct-paths-in-dispatcher-creation, 0008-Fix-to-skip-pycbf-build, 0016-adapt-test_utils-in-libtbx-for-setup_py-test, 0018-Fix-to-use-systems-include-path, 0019-Fix-to-skip-build-of-clipper-examples I have put those together because they follow the same philosophy. They make sense only in the Debian environment you are designing, where the cctbx will depend on other packages, which will therefore be installed in standard locations if the cctbx is installed. But in an agnostic environment, where the cctbx dynamic libraries and Python modules are not in standard places, those patches break the build system and part of the runtime system. For example, 0018 assumes there is gpp4/ccp4 somewhere on the header paths: that would require changing the packaging of Phenix to match that. This is so obvious that you can't have missed that. So am I missing something here?
As a first step, it should be possible to fix this 0018 patch by providing the right -I arguments during the build. Let's rework this part on our side before going further.
0006-options-for-system-libs-installtarget-and-prefix: needs thorough testing I approve the spirit of it but this patch introduces a truckload of changes and that needs to stand the trial of our nightly tests. Note that you use a couple of new methods, e.g. env_etc.check_syslib, that none of the patches define as far as I can tell.
let's see with Radostan
0007-adding-shlib-versioning: accepted The new build option libtoolize seems properly introduced. Beyond that I must admit I am rather clueless about libtool. Anyway, if configure is not run with --libtoolize, this won't impact us!
great, now you just need to learn about so-versioning :) [9]
0009-build-libann-statically: pending explanations could you explain why you need to build this one statically only?
You can find the explanation on the wiki [10]
0011-fix-missing-python-lib-during-linking: needs tidying up Why don't you append to env_etc.libs_python instead of creating the string env_etc.py_lib? We try to use lists as much as possible in the SConscripts.
I let others answer this question
0012-fix-to-remove-cctbx.python-interpreter: rationale? And trunk has moved on anyway Why do you need to remove cctbx.python? uc1_2_reeke.py has been removed and there is now uc1_2_a.py that features cctbx.python too
This is part of the questions we would like to ask you. We want to use the default Debian Python interpreter, so we need to change every #!/usr/bin/env xxxx.python to #!/usr/bin/env python in all your files. If I remember correctly, this job is also done by distutils [11] if the file is declared as a script. We do not yet know how to deal with the multiple scripts that you are providing in your bin directory; we call them dispatcher scripts (look at the wiki [10]). My concern is with third-party software relying on them being in the PATH. Debian policy says that for each binary in /usr/bin we should provide a man page, so in our case this could be an issue... So for now we mostly worked on the Python modules and on providing the libraries.
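The shebang rewrite described above is mechanical; a minimal sketch (illustrative only, not the Debian packaging script):

```python
import re

def fix_shebang(text):
    """Rewrite a dispatcher shebang such as '#!/usr/bin/env cctbx.python'
    to use the default system interpreter. Only the first line is touched,
    and only when it ends in '.python' (a cctbx-style dispatcher)."""
    return re.sub(r"\A#!/usr/bin/env \S*\.python",
                  "#!/usr/bin/env python", text)
```

distutils performs an equivalent interpreter substitution for files listed under the "scripts" argument of setup() [11].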
0013-fix-to-support-LDFLAGS-in-use_enviroment_flags: not sure This seems done in an orthodox manner. However, this has the potential of wreaking havoc on Phenix on some machines where LDFLAGS is set in fancy ways.
This is also true of the other flags; why do you treat LDFLAGS differently from the others? In that case, as explained by Radostan, Debian needs to tune the build process by providing its own build flags. A trade-off would be to add a config flag that allows (or not) the use of LDFLAGS, e.g. --use-also-LDFLAGS. What is your opinion?
0015-fix-cif-parser-to-work-with-antlr3c-3.2: for Richard's eyes Richard (Gildea) is the expert when it comes to ANTLR
This could be problematic, as Debian provides only 3.2 for now. We should ask for the packaging of 3.4, but, as you told us, you also patched it. Did you forward your changes to the antlr3c upstream? If not, our last-resort solution is to compile it statically for now.
0017-autogenerate-pkgconfig-files: accepted Your business!
OK, let's do it. Thanks for your attention Frédéric PS: please CC me also [1] http://www.synchrotron-soleil.fr/ [2] http://people.debian.org/~picca/hkl/ [3] http://tango-controls.org/ [4] http://wiki.debian.org/DebianPureBlends [5] http://blends.debian.net/pan/tasks/ [6] http://www.esrf.eu/events/conferences/debian-for-scientific-facilities-days-... [7] http://sourceforge.net/apps/mediawiki/pynx/index.php?title=Main_Page [8] http://www.debian.org/social_contract [9] http://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.h... [10] http://wiki.debian.org/DebianScience/cctbx [11] http://docs.python.org/distutils/setupscript.html#installing-scripts
On Wed, 08. Aug 15:37, Frédéric-Emmanuel Picca wrote:
So in a first step, it should be possible to fix this 0018 patch by providing the right -I arguments during the build. Let's rework this part on our side before going further.
To explain why I did this: I intended this to be patched only for Debian. The problem is that I didn't find a way to locate the system include path in a system-independent manner, so I figured it was best to let gcc find it. For example, cbflib installs its files under Linux in /usr/include/cbf/ But you write: #include
There is not really anything else I can do here. But this is a small price to pay to patch it manually after each release.
0006-options-for-system-libs-installtarget-and-prefix: needs thorough testing I approve the spirit of it but this patch introduces a truckload of changes and that needs to stand the trial of our nightly tests. Note that you use a couple of new methods, e.g. env_etc.check_syslib, that none of the patches define as far as I can tell.
To clear that up a little bit more: env_etc.check_syslib only returns True if the --use_system-libs option is set, so it shouldn't break anything. I just put in a second check whether the library is really installed; otherwise it returns False. ccp4 I treated specially because we have two packages in Debian called libmmdb and libgpp4; gpp4[1] is a drop-in replacement for the CCP4 interface by Morten Kjeldgaard. I ran your test system on the extensions linked against mmdb and gpp4 and it worked. I tried to remove as many bundled libraries as possible. [1] https://launchpad.net/gpp4/
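The guard Radostan describes can be sketched in a few lines; this is a hypothetical stand-in for env_etc.check_syslib, using ctypes.util.find_library merely to illustrate the "is it really installed?" probe:

```python
import ctypes.util

def check_syslib(libname, use_system_libs=False):
    """Report a system library as usable only when the system-libs option
    was requested AND the library can actually be found on the system.
    With the option off, the bundled copy is always used (returns False)."""
    if not use_system_libs:
        return False
    return ctypes.util.find_library(libname) is not None
```

The real implementation would also have to verify headers and link against the candidate library, which an SCons Configure context can do.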
participants (12)
- Baptiste Carvello
- Jeffrey Van Voorst
- Johan Hattne
- justin
- Luc Bourhis
- Marcin Wojdyr
- Nathaniel Echols
- Phil Evans
- Picca Frédéric-Emmanuel
- PICCA Frédéric-Emmanuel
- Radostan Riedel
- Richard Gildea