Re: [AMBER-Developers] multi gpu gromacs benchmarks

From: Scott Brozell <sbrozell.comcast.net>
Date: Wed, 2 Feb 2022 14:58:09 -0500

Hi,

They want to benchmark a variety of systems. The Amber benchmarks dhfr,
factor9, cellulose, and stmv cover a wide range, from about 24k to 1M
atoms. It's definitely not clear how to get directly comparable Gromacs
benchmarks, but a set with significant overlap in size, performance, or
something else is probably what the doctor ordered.

thanks,
scott

ps
Why did the Gromacs PIs cross the road?
They got beautiful love letters from an Amber PI, but the handwriting
was atrocious, so they went to their doctors to see if they could read
them. The doctors could, and they all fell in love. The end.
Based on a true story.
;0)

On Wed, Feb 02, 2022 at 11:20:50AM -0800, Scott Le Grand wrote:
> So how big a system do you want to simulate? Also, GROMACS and AMBER are a
> bit apples and oranges due to mixed precision force accumulation, no
> neighbor list cheats, and deterministic computation.
>
> I'd stopped harping on determinism, but now that the AI people are puffing
> their chests about it, might as well remind them we had it a decade ago.
>
> On Wed, Feb 2, 2022 at 11:14 AM Scott Brozell <sbrozell.comcast.net> wrote:
>
> > Just wondering whether someone can recommend Gromacs benchmarks that are
> > comparable to our Amber benchmarks. The focus is on multiple GPUs in the
> > context of a project for better communications, CUDA-aware MPI, etc.:
> > "Collaborative Research: Frameworks: Designing Next-Generation MPI
> > Libraries for Emerging Dense GPU Systems."
> > https://www.nsf.gov/awardsearch/showAward?AWD_ID=1931537
> >
> > thanks,
> > scott
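
A minimal CUDA sketch of the fixed-point versus floating-point force
accumulation point quoted above. It is not AMBER's or GROMACS's actual
kernel code, and the scale factor is an arbitrary stand-in; it only
illustrates why accumulating forces as 64-bit integers gives a bitwise
reproducible sum regardless of atomicAdd ordering, while a float
accumulator is order-dependent.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Each thread adds one toy "force" contribution to two accumulators:
    // a float (order-dependent, so not guaranteed reproducible) and a
    // 64-bit fixed-point integer (order-independent, hence deterministic).
    __global__ void accumulate(const double *contrib, int n,
                               float *f_float, unsigned long long *f_fixed,
                               double scale)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        atomicAdd(f_float, (float)contrib[i]);          // float add: non-associative
        long long q = (long long)(contrib[i] * scale);  // quantize to fixed point
        atomicAdd(f_fixed, (unsigned long long)q);      // integer add: associative
    }

    int main()
    {
        const int n = 1 << 20;
        const double scale = double(1ll << 40);         // illustrative scale only
        std::vector<double> h(n);
        for (int i = 0; i < n; ++i) h[i] = 1e-6 * ((i % 7) - 3);

        double *d_contrib; float *d_float; unsigned long long *d_fixed;
        cudaMalloc(&d_contrib, n * sizeof(double));
        cudaMalloc(&d_float, sizeof(float));
        cudaMalloc(&d_fixed, sizeof(unsigned long long));
        cudaMemcpy(d_contrib, h.data(), n * sizeof(double), cudaMemcpyHostToDevice);
        cudaMemset(d_float, 0, sizeof(float));
        cudaMemset(d_fixed, 0, sizeof(unsigned long long));

        accumulate<<<(n + 255) / 256, 256>>>(d_contrib, n, d_float, d_fixed, scale);

        float f; unsigned long long q;
        cudaMemcpy(&f, d_float, sizeof(float), cudaMemcpyDeviceToHost);
        cudaMemcpy(&q, d_fixed, sizeof(unsigned long long), cudaMemcpyDeviceToHost);
        printf("float accumulator:       %.9g\n", f);
        printf("fixed-point accumulator: %.9g\n", (double)(long long)q / scale);
        cudaFree(d_contrib); cudaFree(d_float); cudaFree(d_fixed);
        return 0;
    }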

_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Wed Feb 02 2022 - 12:00:03 PST