[AMBER-Developers] configure --prefix

From: Hai Nguyen <nhai.qn.gmail.com>
Date: Wed, 25 Jan 2017 15:31:49 -0500

> If I happen to get free time between now and the code freeze maybe I'll
> take another crack at making this happen.

Dan: I have working code in the amber_prefix_Hai branch. Feel free to try it
out and update it. For example:

./configure --prefix=$HOME/try_another_folder gnu
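
The end result should match what Dan describes below: AMBERHOME keeps
pointing at the common source tree, while PATH and LD_LIBRARY_PATH point at
the prefix. Roughly (a sketch only; it assumes the branch installs bin/ and
lib/ under the prefix, and the source path here is hypothetical):

  export AMBERHOME=$HOME/amber16                    # common source tree
  export PATH=$HOME/try_another_folder/bin:$PATH
  export LD_LIBRARY_PATH=$HOME/try_another_folder/lib:$LD_LIBRARY_PATH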

Hai

On Thu, Dec 22, 2016 at 9:44 AM, Daniel Roe <daniel.r.roe.gmail.com> wrote:

> On Wed, Dec 21, 2016 at 10:04 PM, David Case
> <dacase.scarletmail.rutgers.edu> wrote:
> >
> > On Wed, Dec 21, 2016, Dan Roe wrote:
> >
> >>
> >> But I think it is important that these tests happen at 'configure'
> >> time, and it doesn't seem to me to be that difficult....
> >
> > It may indeed be do-able, but it is not just a matter of seeing if
> > "import numpy" throws an error. The configure script would have to
> > compile python extensions in the same way the real code will (which is
> > actually several ways), and make sure they all work. And since only
> > developers will decline the miniconda download, and since they can
> > find out by trial and error if their system python works, writing and
> > maintaining this functionality seems like a lot of work for only a
> > small payback.
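
To make the distinction concrete: the trivial form of the check is a shell
one-liner like the one below, and that is exactly the kind of test David is
saying is not sufficient, since it proves nothing about whether extensions
built against that python will actually load. This is only an illustration,
not anything that exists in configure2:

  # weak configure-time check: numpy imports, but extension building is untested
  python -c "import numpy" || echo "system python is missing numpy"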
>
> I think that some users (maybe installs at HPC sites etc) may also
> want to decline the miniconda download, but it does seem like python
> detection is problematic. As long as the '--with-python' flag works I
> think it's fine anyway. It would be nice to have configure be able to
> test the python ecosystem but if it's too much work then I'll just put
> it on my wishlist.
>
> > For those who want to play with this, configure2 already *has* a
> > check_compatible_python() function. We commented out the call to this
> > function (about line 935) since it was not reliable enough: having
> > configure say a python is OK when it is not is quite annoying and
> > confusing.
>
> Agreed - that's worse from a user standpoint for certain.
>
> > Knowing I'm repeating myself: we *really* don't want users installing
> > packages into their existing python just to install Amber. There is too
> > much danger of creating incompatibilities with other things that require
> > python: that is, we increase the danger that installing Amber breaks
> > something else on the user's computer. [numpy, in particular, exists in
> > several incompatible releases; it is quite easy for program A to require
> > version 1.9 and program B to require version 1.10. We can't solve that
> > problem, but should not make it worse, either.]
>
> This is one of the gotchas of living in the python universe. Advanced
> users can just use the '--with-python' flag. "Normal" users can have a
> built-in miniconda. We just have to make it clear during configure
> time exactly why miniconda is a good idea instead of the vague "well,
> maybe stuff won't work" message of the past.
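
For the record, the advanced-user path is just one extra argument at
configure time, something like the line below (the exact flag syntax is
whatever ./configure's usage message says, and the python path is only an
example):

  ./configure --with-python /usr/bin/python2.7 gnu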
>
> >> I'll have to test '--with-python', but if it works there is not much
> >> of a downside that I can see other than that Amber installation is now
> >> the "python keystone", so I'd better not ever move it or I will break
> >> every Amber install based on it (and it's another configure flag that
> >> has to be specified each time).
>
> I was being a bit facetious about having to specify an extra flag :-)
> - that stuff doesn't come through in email sometimes. I think the
> '--with-python' flag is the correct solution.
>
> >> > [Next thing we know, people are going to want to be able to specify
> >> > a build directory, separate from the sources.....]
> >>
> >> I still want this :-). For separate installs the size can really add
> >> up, mostly because the Amber 16 source tree weighs in at 2.3 GB - the
> >> miniconda install adds 635 MB to that (sizable but not egregious).
> >
> > Has anyone tried to just make a shadow directory (using "cp -as" or
> > lndir), with links to a clean amber tree, then build in the shadow
> > directory?
>
> Unfortunately, this can't work for multiple parallel library/CUDA
> version builds (maybe I'm not understanding what you mean).
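
For anyone who wants to try the shadow-tree idea for a single build anyway,
the mechanics would be roughly as below; this is untested on the amber tree
and the paths are placeholders:

  # link-farm a pristine checkout, then configure/build inside the shadow tree
  cp -as $HOME/amber16 $HOME/amber16-shadow
  # (or: mkdir $HOME/amber16-shadow && cd $HOME/amber16-shadow && lndir $HOME/amber16)
  export AMBERHOME=$HOME/amber16-shadow
  cd $AMBERHOME && ./configure gnu && make install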
>
> > Realistically, I'd recommend bigger disks. You can get 500 GB external
> > SSD drives with USB-3 or USB-C interconnects that are quite fast and
> > will allow you to create all the Amber images you are ever likely to
> > need....
>
> Just because disk space is cheap doesn't mean we should be wasteful. I
> would like to do with Amber what I can do with every other linux
> source code package: have one giant source directory that I can point
> to an install directory with '--prefix', and then that install
> directory contains only what I need to run that version of Amber.
> AMBERHOME can point to the common source, but PATH and LD_LIBRARY_PATH
> point to the install directory. If I happen to get free time between
> now and the code freeze maybe I'll take another crack at making this
> happen.
>
> -Dan
>
> --
> -------------------------
> Daniel R. Roe
> Laboratory of Computational Biology
> National Institutes of Health, NHLBI
> 5635 Fishers Ln, Rm T900
> Rockville MD, 20852
> https://www.lobos.nih.gov/lcb
>
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Wed Jan 25 2017 - 13:00:02 PST