Changing compiler flags doesn't do anything to indicate to make that .o
files are out-of-date. You need to make sure that after each pmemd.cuda
precision model binary is built, the .o files from src/pmemd/src/cuda are
removed. That should trigger them to get rebuilt with the "next" precision
model and so on.
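A minimal sketch of that idea, modeled loosely on the link rules quoted below (rule and variable names are taken from the quoted Makefile; the exact recipe lines are abbreviated): run the cuda clean as the last step of each link, so the shared object tree is invalidated before the next precision model builds.

```make
# Sketch only: clean the shared ./cuda object tree after each link so the
# next precision model's build starts from scratch.
pmemd.cuda_SPFP$(SFX): $(OBJS) cuda_spfp_libs $(EMIL)
	$(PMEMD_LD) $(PMEMD_FOPTFLAGS) $(LDOUT)$@ $(OBJS) $(PMEMD_CU_LIBS)
	$(MAKE) -C ./cuda clean   # invalidate the .o files for the next model

pmemd.cuda_DPFP$(SFX): $(OBJS) cuda_dpfp_libs $(EMIL)
	$(PMEMD_LD) $(PMEMD_FOPTFLAGS) $(LDOUT)$@ $(OBJS) $(PMEMD_CU_LIBS)
	$(MAKE) -C ./cuda clean
```

Putting the clean inside the recipe, rather than listing a clean target as a prerequisite, matters: make builds each prerequisite at most once per invocation, so a clean target named twice in a prerequisite list only runs once.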
Also note that this is begging for race conditions when doing parallel
make. So you'll need an appropriate .NOTPARALLEL designation for each of
the main pmemd.cuda_PRECISION rules so you don't wind up with the builds
competing with (and killing) each other.
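In GNU make, a bare `.NOTPARALLEL` target (no prerequisites) disables `-j` parallelism for the entire makefile it appears in, which is the blunt but safe option here; something like this in src/pmemd/src/Makefile would serialize the two precision builds:

```make
# A bare .NOTPARALLEL turns off -j for every rule in this makefile,
# keeping the SPFP and DPFP builds from racing over the shared
# ./cuda objects. Sub-makes invoked via $(MAKE) can still run their
# own rules in parallel internally.
.NOTPARALLEL:

cuda: configured_cuda pmemd.cuda_SPFP$(SFX) pmemd.cuda_DPFP$(SFX)
```

This is coarse-grained, but since the two precision models genuinely share state (the same .o files and cuda.a), there is no finer-grained ordering that is safe without separate object directories.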
Another thing you could do is move to "out-of-source" building, so that
each GPU precision model builds object files (and libraries) in a separate
folder that is created inside src/pmemd/src/cuda. This has the advantage
of not having to re-do the compilation of each source code file every time
you make a change (three times...)
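One way to sketch that (the OBJDIR variable and directory names here are hypothetical; the cuda Makefile would need matching rules that place its .o files and cuda.a under $(OBJDIR)):

```make
# Hypothetical per-precision object directories inside src/pmemd/src/cuda,
# so each model's objects and library survive between builds.
cuda_spfp_libs:
	mkdir -p cuda/obj_spfp
	$(MAKE) -C cuda OBJDIR=obj_spfp PREC_MODEL=-Duse_SPFP

cuda_dpfp_libs:
	mkdir -p cuda/obj_dpfp
	$(MAKE) -C cuda OBJDIR=obj_dpfp PREC_MODEL=-Duse_DPFP
```

Each pmemd.cuda_PRECISION binary would then link against its own cuda/obj_*/cuda.a, and nothing gets recompiled unless its sources actually change.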
On Thu, Feb 4, 2016 at 3:13 PM, Ross Walker <ross.rosswalker.co.uk> wrote:
> Hi All,
>
> I am looking for some help with setting up Makefiles for pmemd.cuda.
>
> I am trying to remove the -cuda_DPFP switches etc from configure so that
> there is only a -cuda option which builds all precision models by default.
> I have modified configure2 etc so that this works and that simplifies
> things by removing all the PREC MODEL flags from config.h etc.
>
> so now we have
>
> src/Makefile which has
>
> cuda: configured_cuda
> @echo "Starting installation of ${AMBER} (cuda) at `date`".
> cd pmemd && $(MAKE) cuda
>
> and then src/pmemd/Makefile has
>
> cuda: configured_cuda
> $(MAKE) -C src/ cuda
> @echo "Installation of pmemd.cuda complete"
>
> and then src/pmemd/src/Makefile has
>
> cuda: configured_cuda pmemd.cuda_SPFP$(SFX) pmemd.cuda_DPFP$(SFX)
> @( \
> mv pmemd.cuda_SPFP$(SFX) $(BINDIR)/pmemd.cuda_SPFP$(SFX) ;\
> mv pmemd.cuda_DPFP$(SFX) $(BINDIR)/pmemd.cuda_DPFP$(SFX) ;\
> cd $(BINDIR) ; ln -f -s pmemd.cuda_SPFP$(SFX) pmemd.cuda$(SFX);\
> )
>
> ...
> pmemd.cuda_SPFP$(SFX): $(OBJS) cuda_spfp_libs $(EMIL)
> $(PMEMD_LD) $(PMEMD_FOPTFLAGS) $(PMEMD_CU_DEFINES) $(LDOUT)$@ $(OBJS) \
> $(PMEMD_CU_LIBS) -L$(LIBDIR) $(NETCDFLIBF) $(LDFLAGS) $(PMEMD_FLIBSF)
>
> pmemd.cuda_DPFP$(SFX): $(OBJS) cuda_dpfp_libs $(EMIL)
> $(PMEMD_LD) $(PMEMD_FOPTFLAGS) $(PMEMD_CU_DEFINES) $(LDOUT)$@ $(OBJS) \
> $(PMEMD_CU_LIBS) -L$(LIBDIR) $(NETCDFLIBF) $(LDFLAGS) $(PMEMD_FLIBSF)
> ...
> cuda_spfp_libs:
> $(MAKE) -C ./cuda PREC_MODEL=-Duse_SPFP
>
> cuda_dpfp_libs:
> $(MAKE) -C ./cuda PREC_MODEL=-Duse_DPFP
>
> And this in principle works, in that it builds both the SPFP and DPFP
> targets, correctly moves the executables to $AMBERHOME/bin, and creates the
> link to SPFP for pmemd.cuda.
>
> The problem is that the cuda.a library built in
> $AMBERHOME/src/pmemd/src/cuda/ gets built the first time for
> pmemd.cuda_SPFP, but then when it goes to pmemd.cuda_DPFP, Make thinks the
> library is up to date so nothing gets rebuilt. I tried to address this by
> adding a clean target for the library in pmemd/src/Makefile:
>
> cuda_lib_clean:
> $(MAKE) -C ./cuda clean
>
> which calls the clean target in the cuda directory which is:
>
> clean:
> rm -f *.o *.linkinfo cuda.a *.mod
>
> Calling this directly from $AMBERHOME/src/pmemd/src/ appears to work:
>
> client65-47:src rcw$ make cuda_lib_clean
> /Library/Developer/CommandLineTools/usr/bin/make -C ./cuda clean
> rm -f *.o *.linkinfo cuda.a *.mod
>
> So I then tried to add this as part of the targets that get built for cuda
> in src/pmemd/src/Makefile:
>
> cuda: configured_cuda cuda_lib_clean pmemd.cuda_SPFP$(SFX) cuda_lib_clean
> pmemd.cuda_DPFP$(SFX)
> @( \
> mv pmemd.cuda_SPFP$(SFX) $(BINDIR)/pmemd.cuda_SPFP$(SFX) ;\
> mv pmemd.cuda_DPFP$(SFX) $(BINDIR)/pmemd.cuda_DPFP$(SFX) ;\
> cd $(BINDIR) ; ln -f -s pmemd.cuda_SPFP$(SFX) pmemd.cuda$(SFX);\
> )
>
> But it doesn't seem to call the clean command or clean anything in the
> cuda directory. E.g.
>
> cd $AMBERHOME
> make distclean
> ./configure -cuda gnu
> make install
>
> cd AmberTools/src && /Library/Developer/CommandLineTools/usr/bin/make
> install
> AmberTools14 has no CUDA-enabled components
> (cd ../../src && /Library/Developer/CommandLineTools/usr/bin/make cuda )
> Starting installation of Amber14 (cuda) at Thu Feb 4 11:59:14 PST 2016.
> cd pmemd && /Library/Developer/CommandLineTools/usr/bin/make cuda
> /Library/Developer/CommandLineTools/usr/bin/make -C src/ cuda
> /Library/Developer/CommandLineTools/usr/bin/make -C ./cuda clean
> rm -f *.o *.linkinfo cuda.a *.mod
> gfortran -DBINTRAJ -DEMIL -DPUBFFT -O3 -mtune=native -DCUDA
> -I/Users/rcw/Desktop/amber_master/include -c gbl_constants.F90
> ...
> ...
> So it seems to do the clean the first time, which is good. And then it
> builds the SPFP version...
> ...
> ...
> /Developer/NVIDIA/CUDA-7.5/bin/nvcc -gencode arch=compute_20,code=sm_20
> -gencode arch=compute_30,code=sm_30 -gencode arch=compute_50,code=sm_50
> -gencode arch=compute_52,code=sm_52 -use_fast_math -O3 -Duse_SPFP -DCUDA
> -I/Developer/NVIDIA/CUDA-7.5/include -IB40C -c kPMEInterpolation.cu
> ar rvs cuda.a cuda_info.o gpu.o gputypes.o kForcesUpdate.o
> kCalculateLocalForces.o kCalculateGBBornRadii.o
> kCalculatePMENonbondEnergy.o kCalculateGBNonbondEnergy1.o kNLRadixSort.o
> kCalculateGBNonbondEnergy2.o kShake.o kNeighborList.o kPMEInterpolation.o
> ar: creating archive cuda.a
> a - cuda_info.o
> a - gpu.o
> a - gputypes.o
> a - kForcesUpdate.o
> a - kCalculateLocalForces.o
> a - kCalculateGBBornRadii.o
> a - kCalculatePMENonbondEnergy.o
> a - kCalculateGBNonbondEnergy1.o
> a - kNLRadixSort.o
> a - kCalculateGBNonbondEnergy2.o
> a - kShake.o
> a - kNeighborList.o
> a - kPMEInterpolation.o
> /Library/Developer/CommandLineTools/usr/bin/make -C
> ../../../AmberTools/src/emil install
> ...
> ranlib /Users/rcw/Desktop/amber_master/lib/libemil.a
> gfortran -O3 -mtune=native -DCUDA -o pmemd.cuda_SPFP gbl_constants.o
> gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o mdin_emil_dat.o
> mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o
> emil.o img.o nbips.o offload_allocation.o parallel_dat.o parallel.o
> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o
> pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o
> pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o
> dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o
> runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o
> timers.o pmemd_lib.o runfiles.o file_io.o AmberNetcdf.o bintraj.o
> binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o
> nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
> dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
> multipmemd.o remd_exchg.o amd.o gamd.o ti.o gbsa.o barostats.o scaledMD.o
> constantph.o energy_records.o constantph_dat.o relaxmd.o sgld.o emap.o \
> ./cuda/cuda.a -L/Developer/NVIDIA/CUDA-7.5/lib64
> -L/Developer/NVIDIA/CUDA-7.5/lib -lcurand -lcufft -lcudart -lstdc++
> -L/Users/rcw/Desktop/amber_master/lib
> /Users/rcw/Desktop/amber_master/lib/libnetcdff.a
> /Users/rcw/Desktop/amber_master/lib/libnetcdf.a
> /Users/rcw/Desktop/amber_master/lib/libemil.a -lstdc++
> ld: warning: directory not found for option
> '-L/Developer/NVIDIA/CUDA-7.5/lib64'
> ...
> So it builds the SPFP version. Note that the lib64 directory does not exist
> on OS X; I am not sure if we can avoid this warning.
> ...
> Then, however, it goes and runs the cuda make again with
> PREC_MODEL=-Duse_DPFP. It does not run the clean, and it reports cuda.a as
> being up to date, which is incorrect.
> ...
> /Library/Developer/CommandLineTools/usr/bin/make -C ./cuda
> PREC_MODEL=-Duse_DPFP
> make[5]: `cuda.a' is up to date.
> gfortran -O3 -mtune=native -DCUDA -o pmemd.cuda_DPFP gbl_constants.o
> gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o mdin_emil_dat.o
> mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o
> emil.o img.o nbips.o offload_allocation.o parallel_dat.o parallel.o
> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o
> pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o
> pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o
> dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o
> runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o
> timers.o pmemd_lib.o runfiles.o file_io.o AmberNetcdf.o bintraj.o
> binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o
> nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
> dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
> multipmemd.o remd_exchg.o amd.o gamd.o ti.o gbsa.o barostats.o scaledMD.o
> constantph.o energy_records.o constantph_dat.o relaxmd.o sgld.o emap.o \
> ./cuda/cuda.a -L/Developer/NVIDIA/CUDA-7.5/lib64
> -L/Developer/NVIDIA/CUDA-7.5/lib -lcurand -lcufft -lcudart -lstdc++
> -L/Users/rcw/Desktop/amber_master/lib
> /Users/rcw/Desktop/amber_master/lib/libnetcdff.a
> /Users/rcw/Desktop/amber_master/lib/libnetcdf.a
> /Users/rcw/Desktop/amber_master/lib/libemil.a -lstdc++
> ld: warning: directory not found for option
> '-L/Developer/NVIDIA/CUDA-7.5/lib64'
> Installation of pmemd.cuda complete
> ...
> So here we get a pmemd.cuda_SPFP, which is good, and a pmemd.cuda_DPFP which
> is identical to the SPFP. :-(
> ...
> Then, strangely, :-( it goes and repeats the build again...
> Starting installation of Amber14 (cuda) at Thu Feb 4 12:05:18 PST 2016.
> cd pmemd && /Library/Developer/CommandLineTools/usr/bin/make cuda
> /Library/Developer/CommandLineTools/usr/bin/make -C src/ cuda
> /Library/Developer/CommandLineTools/usr/bin/make -C ./cuda clean
> rm -f *.o *.linkinfo cuda.a *.mod
> /Library/Developer/CommandLineTools/usr/bin/make -C ./cuda
> PREC_MODEL=-Duse_SPFP
> gfortran -DBINTRAJ -DEMIL -DPUBFFT -Duse_SPFP -O3 -mtune=native -DCUDA
> -I/Developer/NVIDIA/CUDA-7.5/include -IB40C -c cuda_info.F90
> ...
> ...
> gfortran -O3 -mtune=native -DCUDA -o pmemd.cuda_SPFP gbl_constants.o
> gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o mdin_emil_dat.o
> mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o
> emil.o img.o nbips.o offload_allocation.o parallel_dat.o parallel.o
> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o
> pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o
> pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o
> dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o
> runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o
> timers.o pmemd_lib.o runfiles.o file_io.o AmberNetcdf.o bintraj.o
> binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o
> nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
> dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
> multipmemd.o remd_exchg.o amd.o gamd.o ti.o gbsa.o barostats.o scaledMD.o
> constantph.o energy_records.o constantph_dat.o relaxmd.o sgld.o emap.o \
> ./cuda/cuda.a -L/Developer/NVIDIA/CUDA-7.5/lib64
> -L/Developer/NVIDIA/CUDA-7.5/lib -lcurand -lcufft -lcudart -lstdc++
> -L/Users/rcw/Desktop/amber_master/lib
> /Users/rcw/Desktop/amber_master/lib/libnetcdff.a
> /Users/rcw/Desktop/amber_master/lib/libnetcdf.a
> /Users/rcw/Desktop/amber_master/lib/libemil.a -lstdc++
> ld: warning: directory not found for option
> '-L/Developer/NVIDIA/CUDA-7.5/lib64'
> /Library/Developer/CommandLineTools/usr/bin/make -C ./cuda
> PREC_MODEL=-Duse_DPFP
> make[4]: `cuda.a' is up to date.
> gfortran -O3 -mtune=native -DCUDA -o pmemd.cuda_DPFP gbl_constants.o
> gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o mdin_emil_dat.o
> mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o
> emil.o img.o nbips.o offload_allocation.o parallel_dat.o parallel.o
> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o
> pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o
> pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o
> dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o
> runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o
> timers.o pmemd_lib.o runfiles.o file_io.o AmberNetcdf.o bintraj.o
> binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o
> nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
> dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
> multipmemd.o remd_exchg.o amd.o gamd.o ti.o gbsa.o barostats.o scaledMD.o
> constantph.o energy_records.o constantph_dat.o relaxmd.o sgld.o emap.o \
> ./cuda/cuda.a -L/Developer/NVIDIA/CUDA-7.5/lib64
> -L/Developer/NVIDIA/CUDA-7.5/lib -lcurand -lcufft -lcudart -lstdc++
> -L/Users/rcw/Desktop/amber_master/lib
> /Users/rcw/Desktop/amber_master/lib/libnetcdff.a
> /Users/rcw/Desktop/amber_master/lib/libnetcdf.a
> /Users/rcw/Desktop/amber_master/lib/libemil.a -lstdc++
> ld: warning: directory not found for option
> '-L/Developer/NVIDIA/CUDA-7.5/lib64'
> Installation of pmemd.cuda complete
>
>
> So not only does it skip the clean target between the SPFP and DPFP
> builds, it also runs the complete build twice for some reason. :-(
>
> Any suggestions on what I am doing wrong here, or how I might write this in
> a better way? I'd like to get this into master ASAP so I can move on with
> other stuff that needs to get done, so any help is greatly appreciated.
>
> All the best
> Ross
>
> /\
> \/
> |\oss Walker
>
> ---------------------------------------------------------
> | Associate Research Professor |
> | San Diego Supercomputer Center |
> | Adjunct Associate Professor |
> | Dept. of Chemistry and Biochemistry |
> | University of California San Diego |
> | NVIDIA Fellow |
> | http://www.rosswalker.co.uk | http://www.wmd-lab.org |
> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> ---------------------------------------------------------
>
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
>
>
> _______________________________________________
> AMBER-Developers mailing list
> AMBER-Developers.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber-developers
>
--
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
Received on Thu Feb 04 2016 - 12:30:05 PST