Hey Devs,
Is it possible that making a program larger, or adding routines that
declare large (4-8 KB) static arrays, could dramatically affect the speed
of other parts of the code, even if those routines are never called
during a particular run?
I'm trying to figure out how the mdgx simulation engine has suffered more
than a two-fold slowdown over the past year. The internal timing data
suggest that the slowdown is across the board in the mdgx dynamics
routines, even though I have made few if any changes to most of those
routines during the past year. I have back-tracked to at least one
version that gives the performance I expected (80% of the speed of pmemd
on a protein-in-water system). Over the past year, my major focus has
been adding features for parameter optimization and force field building
in mdgx; only recently have I begun to tinker with the dynamics engine
again, and for as long as I've been testing mdgx over the past two weeks
the speed has been more or less constant. It was only tonight that I
realized mdgx had slowed down considerably relative to the other engines,
even as I have been doing things designed to make it cleaner, faster, more
cache-friendly, and above all scalable.
All I can think of at the moment is that, somehow, the parameter-fitting
routines I wrote pushed the size of the program over some tipping point,
which is perhaps causing it to cache-miss a lot even though those other
routines are not being called. I will be going through the code history
to see if I can find particular revisions that crippled the run speed.
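The revision hunt can be automated with `git bisect run`. Here is a toy demonstration in a throwaway repository, where a marker file stands in for the slow revision; in a real hunt the test command would time a short mdgx run and exit nonzero when it falls below a speed threshold (the repo name and commands below are illustrative, not the actual mdgx history):

```shell
# Build a 5-commit repo in which "revision 3" introduces the problem,
# then let git bisect find it automatically.
mkdir bisect-demo
git -C bisect-demo init -q
git -C bisect-demo config user.email dev@example.org
git -C bisect-demo config user.name dev
for i in 1 2 3 4 5; do
  echo "rev $i" > bisect-demo/src.c
  if [ "$i" -ge 3 ]; then touch bisect-demo/SLOW; fi  # marker = slowdown
  git -C bisect-demo add -A
  git -C bisect-demo commit -qm "revision $i"
done
# HEAD (revision 5) is known bad, HEAD~4 (revision 1) is known good.
git -C bisect-demo bisect start HEAD HEAD~4
# The run command exits 0 when the tree is "fast" (no marker), nonzero
# when "slow"; bisect drives the binary search from those exit codes.
git -C bisect-demo bisect run sh -c '! test -f SLOW'
git -C bisect-demo show -s --format=%s refs/bisect/bad  # prints "revision 3"
```

With real timings, the run command needs only to be deterministic enough that a good revision never exceeds the threshold; log2(revisions) test runs then pinpoint the offending commit.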
Does anyone else have an idea?
Thanks,
Dave
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Mon Feb 18 2013 - 00:30:02 PST