Re: [AMBER-Developers] pmemd.MPI build broken

From: Ross Walker <ross.rosswalker.co.uk>
Date: Sat, 5 Mar 2016 08:12:44 -0800

> On vacation until Thursday, but I'll get the Intel India team on this. The code was put in precisely because it needs testing on a range of architectures, and I don't have the resources for all the different variations.

So far I have tested:

MPICH 3 with Intel and GNU compilers (CentOS 6 and 7)
Intel MPI with Intel compilers (CentOS 7)

OpenMPI still needs testing, as does PGI (I don't have a copy) and the Cray stack (I don't have access).

OpenMP is now required, which rules out any compiler that doesn't support it. I'd never heard of DragonEgg, but since it has no OpenMP support it looks like it won't be able to build PMEMD. I just googled it, and it appears to still be mostly a hobbyist project, so it's probably not critical that we have it working. We may need a way to skip building this part of AMBER when one chooses such a compiler. OpenMPI, on the other hand, is a must.
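If we add such a skip, a simple configure-time compile probe should be enough. This is only a sketch; FC and OMPFLAG stand in for whatever the configure scripts already use (e.g. gfortran with -fopenmp, or ifort with -qopenmp):

  # Probe: does the compiler accept the OpenMP flag and provide omp_lib?
  printf 'program p\n use omp_lib\n print *, omp_get_max_threads()\nend program p\n' > conftest.f90
  # If this compiles AND links, the OpenMP runtime is available.
  if $FC $OMPFLAG conftest.f90 -o conftest 2>/dev/null; then
      echo 'OpenMP supported - building pmemd'
  else
      echo 'no OpenMP support - skipping pmemd'
  fi
  rm -f conftest conftest.f90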

I'll also look at the possibility of building without OpenMP. It should be possible, but I don't have the bandwidth right now; we may want to look into this at the meeting if I don't get to it beforehand.
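In the meantime, on Jason's question below about the library name: with gfortran the OpenMP runtime is libgomp, and the simplest fix is to add -fopenmp to the link line (it pulls in -lgomp automatically); the Intel equivalent is libiomp5, pulled in by -qopenmp (plain -openmp on older compilers). So the failing link below should go through with something along these lines (flags only, object list unchanged):

  mpif90 -fopenmp ... -o pmemd.MPI ...   # GNU: links libgomp
  mpif90 -qopenmp ... -o pmemd.MPI ...   # Intel: links libiomp5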

Currently, for PME runs the OpenMP thread count is locked to 1 (2 on MIC). That will change in post-release updates once we get our hands on KNL and Skylake hardware, but for now PME runs are single-threaded on the OpenMP side, so one should use mpirun -np X, where X is the number of cores. GB, by contrast, is OpenMP-threaded the whole way through, so one should use export OMP_NUM_THREADS=Y; mpirun -np X, where X is the number of sockets and Y is the number of cores per socket.
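For example, on a hypothetical node with two sockets and eight cores per socket (the counts are purely illustrative; input file names are the AMBER defaults):

  # PME: one MPI rank per core; OpenMP stays at 1 thread per rank
  mpirun -np 16 $AMBERHOME/bin/pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd

  # GB: one MPI rank per socket; OpenMP threads fill each socket
  export OMP_NUM_THREADS=8
  mpirun -np 2 $AMBERHOME/bin/pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd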

Ultimately, if the various issues can't be easily fixed in the next two weeks, we can roll back to the code as of the morning of March 4th, but it would be politically desirable not to have to do that.

All the best
Ross

> On Mar 4, 2016, at 23:19, Jason Swails <jason.swails.gmail.com> wrote:
>
> Oh, because dragonegg doesn't support OpenMP yet. Is there a compiler flag
> to disable OpenMP in pmemd?
>
> On Sat, Mar 5, 2016 at 2:18 AM, Jason Swails <jason.swails.gmail.com> wrote:
>
>> mpif90 -fplugin=/usr/lib64/dragonegg.so -fplugin=/usr/lib64/dragonegg.so
>> -fPIC -O3 -mtune=native -o /home/swails/build_amber/amber/bin/pmemd.MPI
>> gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o
>> mdin_emil_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o
>> inpcrd_dat.o dynamics_dat.o emil.o img.o nbips.o offload_allocation.o
>> parallel_dat.o parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o
>> pme_slab_recip.o pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o
>> fft1d.o bspline.o pme_force.o pbc.o nb_pairlist.o gb_ene_hybrid.o
>> nb_exclusions.o cit.o dynamics.o bonds.o angles.o dihedrals.o
>> extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o runmin.o
>> constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o
>> pmemd_lib.o runfiles.o file_io.o AmberNetcdf.o bintraj.o binrestart.o
>> pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o
>> get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
>> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
>> dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
>> multipmemd.o remd_exchg.o amd.o gamd.o ti.o gbsa.o barostats.o scaledMD.o
>> constantph.o energy_records.o constantph_dat.o relaxmd.o sgld.o emap.o
>> get_efield_energy.o -L/home/swails/build_amber/amber/lib \
>> /home/swails/build_amber/amber/lib/libnetcdff.a
>> /home/swails/build_amber/amber/lib/libnetcdf.a
>> /home/swails/build_amber/amber/lib/libemil.a -lstdc++ -lmpi_cxx
>> gb_parallel.o: In function `__gb_parallel_mod_MOD_gb_parallel_setup':
>> gb_parallel.F90:(.text+0x9f3): undefined reference to
>> `omp_get_num_threads_'
>> gb_ene_hybrid.o: In function
>> `__gb_ene_hybrid_mod_MOD_gb_ene_hyb_force_timode':
>> gb_ene_hybrid.F90:(.text+0x2b69): undefined reference to
>> `omp_get_thread_num_'
>> gb_ene_hybrid.F90:(.text+0x401c): undefined reference to
>> `omp_get_thread_num_'
>> gb_ene_hybrid.o: In function `__gb_ene_hybrid_mod_MOD_gb_ene_hyb_force':
>> gb_ene_hybrid.F90:(.text+0x7928): undefined reference to
>> `omp_get_thread_num_'
>> gb_ene_hybrid.F90:(.text+0x8c6c): undefined reference to
>> `omp_get_thread_num_'
>> gb_ene_hybrid.o: In function `__gb_ene_hybrid_mod_MOD_gb_ene_hyb_energy':
>> gb_ene_hybrid.F90:(.text+0xc579): undefined reference to
>> `omp_get_thread_num_'
>> gb_ene_hybrid.o:gb_ene_hybrid.F90:(.text+0xd76c): more undefined
>> references to `omp_get_thread_num_' follow
>> gb_ene_hybrid.o: In function
>> `__gb_ene_hybrid_mod_MOD_final_gb_setup_hybrid':
>> gb_ene_hybrid.F90:(.text+0x194eb): undefined reference to
>> `omp_get_num_threads_'
>> bonds.o: In function `__bonds_mod_MOD_get_bond_energy_gb':
>> bonds.F90:(.text+0x3d0): undefined reference to `omp_set_lock_'
>> bonds.F90:(.text+0x44f): undefined reference to `omp_unset_lock_'
>> bonds.F90:(.text+0x470): undefined reference to `omp_set_lock_'
>> bonds.F90:(.text+0x4c3): undefined reference to `omp_unset_lock_'
>> bonds.o: In function `__bonds_mod_MOD_bonds_setup':
>> bonds.F90:(.text+0x117f): undefined reference to `omp_init_lock_'
>> angles.o: In function `__angles_mod_MOD_get_angle_energy_gb':
>> angles.F90:(.text+0x5ec): undefined reference to `omp_set_lock_'
>> angles.F90:(.text+0x662): undefined reference to `omp_unset_lock_'
>> angles.F90:(.text+0x683): undefined reference to `omp_set_lock_'
>> angles.F90:(.text+0x6ec): undefined reference to `omp_unset_lock_'
>> angles.F90:(.text+0x70d): undefined reference to `omp_set_lock_'
>> angles.F90:(.text+0x764): undefined reference to `omp_unset_lock_'
>> angles.o: In function `__angles_mod_MOD_angles_setup':
>> angles.F90:(.text+0x112f): undefined reference to `omp_init_lock_'
>> dihedrals.o: In function `__dihedrals_mod_MOD_get_dihed_energy_gb':
>> dihedrals.F90:(.text+0x2847): undefined reference to `omp_set_lock_'
>> dihedrals.F90:(.text+0x28c1): undefined reference to `omp_unset_lock_'
>> dihedrals.F90:(.text+0x28e5): undefined reference to `omp_set_lock_'
>> dihedrals.F90:(.text+0x2948): undefined reference to `omp_unset_lock_'
>> dihedrals.F90:(.text+0x296c): undefined reference to `omp_set_lock_'
>> dihedrals.F90:(.text+0x29c6): undefined reference to `omp_unset_lock_'
>> dihedrals.F90:(.text+0x29ea): undefined reference to `omp_set_lock_'
>> dihedrals.F90:(.text+0x2a44): undefined reference to `omp_unset_lock_'
>> dihedrals.o: In function `__dihedrals_mod_MOD_dihedrals_setup':
>> dihedrals.F90:(.text+0x398f): undefined reference to `omp_init_lock_'
>> extra_pnts_nb14.o: In function
>> `__extra_pnts_nb14_mod_MOD_get_nb14_energy_gb':
>> extra_pnts_nb14.F90:(.text+0x21aa): undefined reference to `omp_set_lock_'
>> extra_pnts_nb14.F90:(.text+0x23f7): undefined reference to
>> `omp_unset_lock_'
>> extra_pnts_nb14.F90:(.text+0x241d): undefined reference to `omp_set_lock_'
>> extra_pnts_nb14.F90:(.text+0x24ce): undefined reference to
>> `omp_unset_lock_'
>> extra_pnts_nb14.o: In function `__extra_pnts_nb14_mod_MOD_nb14_setup':
>> extra_pnts_nb14.F90:(.text+0x859f): undefined reference to `omp_init_lock_'
>> shake.o: In function `__shake_mod_MOD_shake_gb':
>> shake.F90:(.text+0x1c1): undefined reference to `omp_get_thread_num_'
>> pmemd.o: In function `main':
>> pmemd.F90:(.text+0x7a6): undefined reference to `omp_get_max_threads_'
>> pme_alltasks_setup.o: In function
>> `__pme_alltasks_setup_mod_MOD_pme_alltasks_setup':
>> pme_alltasks_setup.F90:(.text+0x1f): undefined reference to
>> `omp_set_num_threads_'
>> collect2: error: ld returned 1 exit status
>> Makefile:103: recipe for target
>> '/home/swails/build_amber/amber/bin/pmemd.MPI' failed
>> make[3]: *** [/home/swails/build_amber/amber/bin/pmemd.MPI] Error 1
>> make[3]: Leaving directory '/home/swails/build_amber/amber/src/pmemd/src'
>> Makefile:22: recipe for target 'parallel' failed
>> make[2]: *** [parallel] Error 2
>> make[2]: Leaving directory '/home/swails/build_amber/amber/src/pmemd'
>> Makefile:28: recipe for target 'parallel' failed
>> make[1]: *** [parallel] Error 2
>> make[1]: Leaving directory '/home/swails/build_amber/amber/src'
>> Makefile:7: recipe for target 'install' failed
>> make: *** [install] Error 2
>>
>>
>> Looks like it needs -lgomp for gfortran, but I'm not sure what the Intel
>> version of the OpenMP library is called. Was this actually tested? Should
>> we really be mixing OpenMP and MPI by default when users don't request
>> -openmp?
>>
>
>
>
> --
> Jason M. Swails
> BioMaPS,
> Rutgers University
> Postdoctoral Researcher


_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Sat Mar 05 2016 - 08:30:04 PST