On Fri, 2014-11-07 at 11:37 -0700, Thomas Cheatham wrote:
> Anybody have some ideas about this? Basically, "cgroups" are a way to
> create a virtual container, one in which you can restrict memory for a
> sub-process, etc. (for example, to partition a node into two
> independent halves). Thanks! --tom
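(Quick aside on the cgroups part before the CUDA bit: at the kernel level,
"restrict memory to a sub-process" basically means writing a limit into the
memory controller's files and adding the process to the group. Below is a rough
sketch of the moral equivalent in C++, assuming the cgroup v1 memory controller
is mounted at /sys/fs/cgroup/memory; the group name "half_node" and the 32 GB
limit are made up for illustration, and creating the group requires root.)

  // cgroup_limit.cpp: illustrative only -- create a memory cgroup, cap it,
  // and move the current process into it (children inherit the group).
  #include <cstdio>
  #include <sys/stat.h>
  #include <unistd.h>

  static void write_file(const char *path, const char *value) {
      FILE *f = fopen(path, "w");
      if (!f) { perror(path); return; }
      fputs(value, f);
      fclose(f);
  }

  int main() {
      // Make a sub-group under the v1 memory controller (requires root).
      mkdir("/sys/fs/cgroup/memory/half_node", 0755);
      // Cap the group at 32 GB (32 * 1024^3 bytes).
      write_file("/sys/fs/cgroup/memory/half_node/memory.limit_in_bytes",
                 "34359738368");
      // Move this process (and hence its children) into the group.
      char pid[32];
      snprintf(pid, sizeof(pid), "%d\n", (int)getpid());
      write_file("/sys/fs/cgroup/memory/half_node/cgroup.procs", pid);
      return 0;
  }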
Just to add a little to Scott's comment: this seems to be an issue with
the CUDA runtime (RT) in general. As a quick test on my machine, I
started up CUDA-enabled VMD with a single PDB file loaded, and also ran
a small simulation with OpenMM's CUDA platform.
Both programs consumed around 37 GB of virtual memory (not resident
memory, though) on my desktop, which has 16 GB of RAM total. When
pmemd.cuda runs on my machine, it consumes the same amount of virtual
memory. I would try some of the CUDA SDK samples to confirm the issue
there as well, but they don't run long enough to actually monitor their
memory usage.
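(For anyone who wants to check this outside of Amber or the SDK samples: a
minimal sketch of a toy program that just creates a CUDA context and then
sleeps, so there's time to look at the process's virtual size in top or
/proc/<pid>/status. The file name and sleep time are arbitrary.)

  // idle_cuda.cu: initialize the CUDA runtime, then idle so the
  // process's virtual memory size can be inspected externally.
  // Build/run:  nvcc idle_cuda.cu -o idle_cuda && ./idle_cuda
  #include <cstdio>
  #include <unistd.h>
  #include <cuda_runtime.h>

  int main() {
      // cudaFree(0) is a cheap way to force creation of the CUDA context;
      // the large virtual address space reservation appears at that point.
      cudaError_t err = cudaFree(0);
      if (err != cudaSuccess) {
          fprintf(stderr, "CUDA init failed: %s\n", cudaGetErrorString(err));
          return 1;
      }
      printf("Context created; check VmSize for pid %d\n", (int)getpid());
      sleep(120);  // idle long enough to watch memory usage in top/ps
      return 0;
  }

Watching VmSize in /proc/<pid>/status (or the VIRT column in top) while it
sleeps should show the same tens-of-GB reservation, if the behavior above is
anything to go by.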
So it's definitely not just Amber -- it's every other CUDA-enabled
program I tried running on my machine, too. I know this has been
discussed in a few threads in the past, but I couldn't easily find them
in the archives. (Not sure whether they were on amber-dev or amber, to
be honest.)
This was all done with NVIDIA driver 340.32 and CUDA 5.5 on my machine
(although I've observed the same behavior with every other driver
version I've used, which is quite a few).
All the best,
Jason
--
Jason M. Swails
BioMaPS, Rutgers University
Postdoctoral Researcher