Sticking with mpif77 is probably the simplest and safest thing to do.
Also, if you look at $AMBERHOME/pmemd/README, a lot of the mpi issues, as
well as other pmemd config issues, are discussed there. I know that does not
cover all of amber, but some of the info is generally useful, like using
'mpif77 -showme' or 'mpif77 -link_info', depending on your mpi. I don't know
how Dave feels about things that bulk up the manual further, but parallel
building is definitely not a trivial issue. There remains, always, the
problem of getting folks to read the bloody thing, though ;-)
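To spell out those wrapper queries (which one works depends on your mpi; none
of them compiles anything, they just print the command line the wrapper would
run):

    mpif77 -showme         # Open MPI: full underlying compile/link line
    mpif77 -show           # MPICH/MVAPICH equivalent
    mpif77 -compile_info   # MPICH: compile line only
    mpif77 -link_info      # MPICH: link line only

If the compiler named there is not the one you intend to build amber with,
fix the mpi install first.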
Regards - Bob
----- Original Message -----
From: "Jason Swails" <jason.swails.gmail.com>
To: "AMBER Developers Mailing List" <amber-developers.ambermd.org>
Sent: Tuesday, March 02, 2010 1:56 PM
Subject: Re: [AMBER-Developers] configure_openmpi
On Tue, Mar 2, 2010 at 11:58 AM, Robert Duke <rduke.email.unc.edu> wrote:
> This is off the top of my head, not recently researched, but in the past
> I have opted for building mpi for linkage to f77 because it is simpler,
> though at the same time not as safe. F90 includes all this external
> interface stuff, encapsulated in module definition files and what have
> you, and actually doing the build using all this stuff gets to be a bit
> more complicated. So assuming you can read a man page for an mpi call,
> it is easiest to just pretend you have f77 and include the mpi headers
> for f77 mpi. It has always been my assumption that not everyone is
> adding a bunch of mpi calls to the code, so the likelihood of disaster
> from misuse of an mpi call is low. The higher software engineering road
> to take here might be to try to actually take "advantage" of all f90 has
> to offer, but due to vendor inconsistencies, advantages are not always
> advantages, and I am presuming we would need some minor code changes to
> actually be using f90 interfaces to mpi. PMEMD actually allows for this
> with the USE_MPI_MODULE define, which is used only for ibm systems, I
> believe, and you have to stick stuff in the pmemd config.h to support
> it. At any rate, I think it is a bit foolhardy for a user to assume that
> mpi is going to map in the correct compiler for you, be it for mpif77 or
> mpif90, and in our instructions for setting up mpi, we should make that
> clear. With the newer shared memory multicore machines, I think
> building/configuring mpi has probably gotten a little easier (except for
> the fiasco of the mpich2 run model), but I think it is still a bit
> optimistic to think that someone can get this stuff up and running
> without knowing or reading anything.
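> To sketch the header-versus-module difference mentioned above (purely
> illustrative; file names are placeholders, and the exact behavior
> depends on the mpi install and compiler):
>
>    # f77-style: the source just includes the mpif.h header; no
>    # compiler-specific module files are involved.  This is the default
>    # route in pmemd.
>    cat > inctest.f90 <<'EOF'
>    program inctest
>      implicit none
>      include 'mpif.h'
>      integer :: ierr
>      call MPI_Init(ierr)
>      call MPI_Finalize(ierr)
>    end program inctest
>    EOF
>    mpif77 -o inctest inctest.f90
>
>    # f90-style: "use mpi" pulls in a compiler-built module and buys you
>    # argument checking, but the module has to match the f90 compiler,
>    # so you go through mpif90 (the USE_MPI_MODULE route in pmemd).
>    cat > modtest.f90 <<'EOF'
>    program modtest
>      use mpi
>      implicit none
>      integer :: ierr
>      call MPI_Init(ierr)
>      call MPI_Finalize(ierr)
>    end program modtest
>    EOF
>    mpif90 -o modtest modtest.f90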
> Regards - Bob
This sounds like a legitimate argument for using mpif77, though I
don't know the details of MPI to anywhere near this level. Perhaps
we should switch back to mpif77? Either choice would be easily workable
if it were documented. In any case, we should probably document it
either way, and at least explain the '-show' flags so people can debug
this sort of thing on their own. I'd volunteer to write the
documentation if nobody else wants to, though others may have more
experience.
I'm guessing that people will read the manual if they don't know how
to compile amber in parallel, so we can provide instructions there
(since they're directed there by INSTALL, etc.).
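For the build-your-own-MPI case, the instructions might sketch something
like this (purely illustrative; the install prefix and compiler names would
be whatever the user's site actually has):

    # Build OpenMPI so that mpicc/mpif77/mpif90 all wrap the compilers you
    # intend to use with amber, rather than whatever configure finds first.
    ./configure --prefix=/opt/openmpi-intel CC=icc CXX=icpc F77=ifort FC=ifort
    # (our configure_openmpi also passes --disable-mpi-f90 here)
    make
    make install

    # Then confirm the wrappers before running amber's configure:
    /opt/openmpi-intel/bin/mpif77 -showme
    /opt/openmpi-intel/bin/mpif90 -showme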
Thoughts?
Thanks!
Jason
>
> ----- Original Message ----- From: "Jason Swails" <jason.swails.gmail.com>
> To: "AMBER Developers Mailing List" <amber-developers.ambermd.org>
> Sent: Tuesday, March 02, 2010 11:16 AM
> Subject: Re: [AMBER-Developers] configure_openmpi
>
>
> On Tue, Mar 2, 2010 at 10:54 AM, Lachele Foley <lfoley.ccrc.uga.edu>
> wrote:
>>
>> The thing I didn't get, and still don't, is why there is a compile
>> instruction that calls mpif77 on code that contains F90-specific
>> internal text. That's what, best I could tell from the error messages,
>> happened with my build: mpif77 complained that it was being asked to
>> compile fortran-90 code. That's what confused me.
>
> Yes, but that's because your mpif77 was truly wrapped around a compiler
> that only compiled fortran77 code (g77). f90 compilers can still
> compile fortran77 code, so mpif77 is typically wrapped around a
> fortran90-capable compiler. Go to a compute cluster and type
> "mpif77 -show": more than likely it will say ifort if it's available,
> or gfortran, or something similar, and it will rarely name a different
> compiler than mpif90 does. Thus, it should have no problem compiling
> fortran90-specific code. The way configure_openmpi was set up, it
> would build mpif77 around ifort, gfortran, or pgf90, based on what was
> specified.
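> To see what "mpif77 -show" reports, for instance (hypothetical output;
> the paths and compiler will be whatever your cluster actually provides):
>
>    $ mpif77 -show
>    ifort -I/usr/local/mpich2/include ... -lmpich
>    $ mpif90 -show
>    ifort -I/usr/local/mpich2/include ... -lmpich
>
> Same underlying compiler for both wrappers, so mpif77 has no trouble
> with fortran90 source. If mpif77 had reported g77 instead (as in your
> case), you would get exactly the "asked to compile fortran-90 code"
> errors you saw.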
>
> I think what Professor Case was saying was that some of the components
> built for mpif90 (the ones not built for mpif77) were not necessary
> for compiling amber, so why not just use mpif77 and skip the
> extraneous parts.
>
> All the best,
> Jason
>
>>
>>
>> :-) Lachele
>> --
>> B. Lachele Foley, PhD '92,'02
>> Assistant Research Scientist
>> Complex Carbohydrate Research Center, UGA
>> 706-542-0263
>> lfoley.ccrc.uga.edu
>>
>>
>> ----- Original Message -----
>> From: Jason Swails [mailto:jason.swails.gmail.com]
>> To: AMBER Developers Mailing List [mailto:amber-developers.ambermd.org]
>> Sent: Tue, 02 Mar 2010 10:44:16 -0500
>> Subject: Re: [AMBER-Developers] configure_openmpi
>>
>>
>>> On Tue, Mar 2, 2010 at 8:04 AM, case <case.biomaps.rutgers.edu> wrote:
>>> > On Tue, Mar 02, 2010, Jason Swails wrote:
>>> >>
>>> >> I was wondering if there is a specific reason why the
>>> >> --disable-mpi-f90 flag was put in configure_openmpi.
>>> >
>>> > Just to save time in a long compile, and to avoid problems that
>>> > might arise from a feature we don't use.
>>> >
>>> >> I've always changed FC from mpif77 to mpif90 simply because
>>> >> it feels more natural....
>>> >
>>> > Well, as a "tester", I'd prefer that you do what users would be doing,
>>> > > so
>>> that
>>> > if there *are* problems, we find them out as soon as possible.
>>>
>>> I did occasionally forget to change to mpif90, so I suppose I did test
>>> mpif77 occasionally :) (though I never built the bundled lam or
>>> openmpi, since I prefer my current MPI; I'm guessing a large number of
>>> users will be in this boat, too, though I'm not sure).
>>>
>>> >
>>> >> the recent change in configure from mpif77 to mpif90.
>>> >
>>> > OK, I give up. Ross doesn't understand this, and Lachele doesn't, and
>>> > you don't. I guess I can't expect users not to be confused. Go ahead
>>> > and update
>>>
>>> Given your explanation above, I don't think it's terribly hard to
>>> understand. A downside of using mpif77, though, is if someone built
>>> their own MPI before trying to compile amber. If someone has the
>>> intel compiler suite and naively runs (configure; make; make install)
>>> without specifying F77, F90, and CC beforehand (like I did when I
>>> first started, and I think like Lachele did when her mpif77 pointed
>>> to g77), the MPI configure script will attempt to use a *real* F77
>>> compiler. Thus, mpif77 pointed at gfortran/g77 while mpif90 pointed
>>> at ifort and mpicc pointed at icc (all by default). mpif77 then
>>> failed to build amber after running ./configure -mpi intel, since
>>> gfortran didn't recognize all the intel flags, but mpif90 worked.
>>> Since this happened to me, I obviously think this would be the issue
>>> more users would have (with their own MPIs rather than the ones we
>>> provide), but again my experience is very limited and I could be
>>> completely wrong.
>>>
>>> There are trade-offs to either choice, and with my limited experience
>>> I'm hardly qualified to suggest a definitive *better* option. I'll
>>> update configure_openmpi to maintain consistency with the current
>>> version of configure, though this can be changed back if it's decided
>>> the alternative is the better option. My main concern was the change
>>> in configure that would make compilation fail with the built-in
>>> openmpi.
>>>
>>> > configure_openmpi.
>>> >
>>> > ....dac
>>>
>>> All the best,
>>> Jason
>>>
>>> --
>>> ---------------------------------------
>>> Jason M. Swails
>>> Quantum Theory Project,
>>> University of Florida
>>> Ph.D. Graduate Student
>>> 352-392-4032
>>>
>>
>
>
>
> --
> ---------------------------------------
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Graduate Student
> 352-392-4032
>
>
>
>
--
---------------------------------------
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Tue Mar 02 2010 - 11:30:04 PST