Hi,
On Fri, Mar 26, 2010 at 03:49:14PM -0700, Ross Walker wrote:
>
> I have now modified the cuda install so that it does not rely on any static
> libraries anymore. Thus one should be able to build with both GNU and Intel
> compilers. With GNU everything works fine; with Intel, however, we get:
>
> ./configure cuda intel
> make cuda
>
> make[3]: Leaving directory
> `/server-home/rcw/cvs_checkouts/amber11/src/pmemd/src/cuda'
> ifort -DCUDA -o pmemd.cuda gbl_constants.o gbl_datatypes.o state_info.o
> file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o
> prmtop_dat.o inpcrd_dat.o dynamics_dat.o img.o parallel_dat.o parallel.o
> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o
> pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o
> pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o
> dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o
> runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o
> pmemd_lib.o runfiles.o file_io.o bintraj.o pmemd_clib.o pmemd.o random.o
> degcnt.o erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o master_setup.o
> pme_alltasks_setup.o pme_setup.o ene_frc_splines.o gb_alltasks_setup.o
> nextprmtop_section.o angles_ub.o dihedrals_imp.o cmap.o charmm.o
> charmm_gold.o -L/usr/local/cuda//lib64 -L/usr/local/cuda//lib -lcufft
> -lcudart ./cuda/cuda.a ../../netcdf/lib/libnetcdf.a
> -L/opt/intel/mkl/10.1.1.019//lib/em64t -Wl,--start-group
> /opt/intel/mkl/10.1.1.019//lib/em64t/libmkl_intel_lp64.a
> /opt/intel/mkl/10.1.1.019//lib/em64t/libmkl_sequential.a
> /opt/intel/mkl/10.1.1.019//lib/em64t/libmkl_core.a -Wl,--end-group -lpthread
> ipo: warning #11043: unresolved gpu_get_nb_energy_
> Referenced in /tmp/ipo_ifortAZkkA7.o
> ipo: warning #11043: unresolved gpu_vdw_correction_
> Referenced in /tmp/ipo_ifortAZkkA7.o
> ipo: warning #11043: unresolved gpu_self_
> ...
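For context: these warnings are the classic Fortran-to-C linkage mismatch.
ifort lower-cases external names and appends one trailing underscore, so the
Fortran objects reference gpu_get_nb_energy_, gpu_vdw_correction_, gpu_self_,
etc., and cuda.a is evidently exporting the names spelled some other way. A
minimal sketch of what the C/CUDA side has to provide for ifort's default
mangling (only the symbol name comes from the log above; the argument list
is made up for illustration):

    /* Sketch: export a symbol the way ifort expects it by default,
     * i.e. lower case plus exactly one trailing underscore.  If this
     * lives in a .cu file (compiled as C++), C linkage must also be
     * forced, or nvcc will mangle the name further.
     */
    #ifdef __cplusplus
    extern "C"
    #endif
    void gpu_self_(double *ene)
    {
        (void)ene;  /* the real routine would run the GPU self-energy code */
    }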
What platform are you building on?
And is all this CUDA stuff in the latest release candidate?
On Mon, Mar 29, 2010 at 12:18:09PM -0400, Volodymyr Babin wrote:
> One could possibly use a macro like
>
> #ifdef ASSUME_JUST_ONE_TRAILING_UNDERSCORE
We already have such a mechanism: CLINK_CAPS, CLINK_PLAIN, and an #else default.
For an example see
amber11/src/sander/mmtsb_client.c
and my blurb on it (from Columbus, but stolen from Amber):
http://archive.ambermd.org/200601/0201.html
I note that configure has been stripped of CLINKs, but they are in
configure_amber.
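For reference, the pattern looks roughly like this (a from-memory sketch, not
a verbatim copy of mmtsb_client.c; the FTN_NAME macro name is mine):

    /* configure defines at most one of CLINK_CAPS / CLINK_PLAIN:
     *   CLINK_CAPS  -> upper case, no underscore
     *   CLINK_PLAIN -> lower case, no underscore
     *   (neither)   -> lower case, one trailing underscore, which is
     *                  the g77/gfortran/ifort default and the case
     *                  Volodymyr's ASSUME_JUST_ONE_TRAILING_UNDERSCORE
     *                  would cover
     */
    #if defined(CLINK_CAPS)
    #  define FTN_NAME(lower, UPPER) UPPER
    #elif defined(CLINK_PLAIN)
    #  define FTN_NAME(lower, UPPER) lower
    #else
    #  define FTN_NAME(lower, UPPER) lower##_
    #endif

    void FTN_NAME(gpu_self, GPU_SELF)(double *ene)
    {
        (void)ene;  /* body elided */
    }

One set of macros in a single header, rather than per-name #ifdefs scattered
through pmemd_clib.c, is probably all the cuda interface layer would need.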
On Mon, Mar 29, 2010 at 12:26:54PM -0400, Robert Duke wrote:
> Okay, I have not messed with the nvidia stuff yet, but there is a
> convention in pmemd already for getting the correct fortran to c linkage,
> embedded in the pmemd_clib.c code. Given that there are potentially a lot
> of fortran --> c calls for cuda, then a macro like Volodymyr
> recommends here might be a pretty good idea, with the macro defined
> in place of the hardcoded names in pmemd_clib.c. The only issue I can
> think of is if there is a c preprocessor out there somewhere that can't
> handle stringizing - that may be why I avoided doing this in the first
> pass. Some folks who use a wider variety of platforms than I do might want
> to comment on that.
Yes, Bob is also mostly using CLINK... in pmemd.
I am not aware of any platform with such a terribly broken C preprocessor;
this includes, over the years, IBMs, HPs, Suns, SGIs, Crays, etc.
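For what it's worth, appending the underscore is token pasting (##) rather
than stringizing (#), and any ANSI C preprocessor handles it; only the
ancient pre-ANSI name/**/_ trick would be affected. A two-line illustration
(same idea as above, one-argument form):

    #define FTN_UNDER(name) name##_
    void FTN_UNDER(gpu_self)(void);   /* declares gpu_self_ */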
Here's my comment that we are too close to release for all this activity...
Scott