On Thu, Mar 10, 2016 at 9:09 AM, Gerald Monard
<Gerald.Monard.univ-lorraine.fr> wrote:
> Example: Dan recently pointed out to me that SEBOMD tests don't pass on
> Windows+Cygwin. I don't have access to that setup, and it seems to be a
> gfortran/cygwin bug. If cygwin is/must be supported, then I should spend
> some (a lot of?) effort on changing my code to make it work.
> Thus my question: what must be supported? What could be supported? What
> is "who cares"?
I think that the philosophy of Amber has always been that we support
as many platforms as possible within reason. The 'within reason' part
means that if some part does not work on a certain platform and would
be prohibitively difficult to support (e.g. SEBOMD on cygwin, fftw3
with gnu compiler versions < 4.3, etc.), then we disable that
functionality or print an informative error/warning explaining how the
user may proceed. For example, in the case of FFTW3 and GNU < 4.3 the
user is told to specify '-nofftw3' to continue. In the case of
cygwin+SEBOMD we can implement a similar flag for configure
('-nosebomd') or just turn off SEBOMD if '-cygwin' is specified.
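
To make that concrete, here is a rough sketch of what such a check
could look like in configure (purely illustrative -- the variable
names and messages below are made up, not taken from the actual
script):

    # Hypothetical sketch: skip SEBOMD when building under Cygwin or
    # when the user passes -nosebomd; variable names are illustrative.
    if [ "$cygwin" = "yes" -o "$nosebomd" = "yes" ]; then
        echo "Warning: SEBOMD is not supported with this setup; disabling it."
        echo "         (Pass -nosebomd to configure to silence this warning.)"
        sebomd="no"
    fi
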
-Dan
>
> As far as I know, there is no mention in the docs or on the web site
> of things like "you need gcc > 4.0 or intel > 10".
>
>> I suspect that we (and our users) get relatively little benefit from all the
>> work that goes into supporting the Intel and PGI compilers, especially the
>> former, which has a different set of bugs in every release.
>>
>> It would be nice if some kind soul with some free time could run a pmemd
>> benchmark (say jac) comparing Intel vs gnu5 on a somewhat modern chip.
>> Also, is cpptraj time-constrained enough to warrant the extra optimizations
>> that might come from a proprietary compiler? Do we know anything about clang
>> vs gnu for cpptraj?
>>
>
> I don't have experience with gcc5, but on my linux cluster, the Intel
> compiler gives faster results for MD, especially when MKL is used.
> I'm talking here, of course, about things that are not cuda-enabled
> (QM and QM/MM, for example).
>
> G.
>
>> I'm willing to be persuaded: almost all my simulations are on GPUs now, where
>> there is little need for proprietary compilers.
>>
>> ...dac
>>
>>
>>
>
> --
> ____________________________________________________________________________
>
> Prof. Gerald MONARD
> SRSMC, Université de Lorraine, CNRS
> Boulevard des Aiguillettes B.P. 70239
> F-54506 Vandoeuvre-les-Nancy, FRANCE
>
> e-mail : Gerald.Monard.univ-lorraine.fr
> tel. : +33 (0)383.684.381
> fax : +33 (0)383.684.371
> web : http://www.monard.info
>
> ____________________________________________________________________________
>
>
--
-------------------------
Daniel R. Roe, PhD
Department of Medicinal Chemistry
University of Utah
30 South 2000 East, Room 307
Salt Lake City, UT 84112-5820
http://home.chpc.utah.edu/~cheatham/
(801) 587-9652
(801) 585-6208 (Fax)
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers