Hi Scott,
We ran some multi-GPU benchmarks on GROMACS and learned that binding to the socket is critical. Will be happy to share them post Feb 18.
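
In the meantime, the binding part looks roughly like this (a sketch only, not our exact command; it assumes Open MPI and an MPI-enabled gmx_mpi build, and the rank counts, thread counts, and .tpr name are placeholders):

    # Sketch: 2 ranks per socket, each rank bound to its own socket
    mpirun -np 4 --map-by ppr:2:socket --bind-to socket \
        gmx_mpi mdrun -s benchmark.tpr -ntomp 8 \
        -nb gpu -pme gpu -npme 1 -pin on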
With best regards,
Dhruv
Sent from my iPhone
> On Feb 2, 2022, at 5:26 PM, Scott Le Grand <varelse2005.gmail.com> wrote:
>
> PS: Because gromacs tries to load-balance the CPU and the GPU, you're going
> to have to put more time into coming up with the optimal system for
> measuring gromacs performance than you would for benchmarking OpenMM or
> PMEMD.
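>
> For example, something along these lines (just a sketch, not a tested
> recipe; the .tpr name, step counts, and thread counts are placeholders):
>
>   # Placeholders throughout; reset the timers partway through so the
>   # CPU/GPU load balancing has settled before you start measuring.
>   gmx mdrun -s dhfr.tpr -nsteps 20000 -resetstep 10000 \
>       -ntmpi 1 -ntomp 16 -nb gpu -pme gpu -bonded gpu -pin on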
>
> This is where you will run into the no true benchmark dilemma. But since
> the Amber PIs and the gromacs PIs are playing matchmaker, is there a chance
> you can get the gromacs guys to benchmark their own code by their own
> standards?
>
> I was never able to reproduce any of their numbers back when I worked for
> AWS and that was part of my day job, but I don't doubt at all that they
> can get those numbers. We just didn't have the right set of CPUs and GPUs
> in the same box.
>
>
>> On Wed, Feb 2, 2022, 14:40 Scott Le Grand <varelse2005.gmail.com> wrote:
>>
>> Here's gromacs DHFR... But also, not the same benchmark... Kind of MCU vs
>> DCEU but who am I kidding? Who cares?
>>
>>
>> https://www.gromacs.org/Documentation_of_outdated_versions/Installation_Instructions_4.5/GROMACS-OpenMM#GPU_Benchmarks
>>
>> STMV is probably in this container somewhere.
>> https://www.amd.com/en/technologies/infinity-hub/gromacs
>>
>> Good luck comparing apples and oranges here... No good will come of this
>> IMO...
>>
>> On Wed, Feb 2, 2022 at 11:58 AM Scott Brozell <sbrozell.comcast.net>
>> wrote:
>>
>>> Hi,
>>>
>>> They want to benchmark a variety. So dhfr, factor9, cellulose, and stmv
>>> span a wide range, from 24k to 1M atoms. It's definitely not clear how to
>>> get directly comparable Gromacs benchmarks, but a range that has
>>> significant overlap in size/performance/something-else is probably what
>>> the doctor ordered.
>>>
>>> thanks,
>>> scott
>>>
>>> ps
>>> Why did the Gromacs PIs cross the road?
>>> They got beautiful love letters from an Amber PI, but the handwriting
>>> was atrocious, so they went to their doctors to see if they could read
>>> them. The doctors could, and they all fell in love. The end.
>>> Based on a true story.
>>> ;0)
>>>
>>> On Wed, Feb 02, 2022 at 11:20:50AM -0800, Scott Le Grand wrote:
>>>> So how big a system do you want to simulate? Also, GROMACS and AMBER
>>> are a
>>>> bit apples and oranges due to mixed precision force accumulation, no
>>>> neighbor list cheats, and deterministic computation.
>>>>
>>>> I'd stopped harping on determinism, but now that the AI people are
>>> puffing
>>>> their chests about it, might as well remind them we had it a decade ago.
>>>>
>>>> On Wed, Feb 2, 2022 at 11:14 AM Scott Brozell <sbrozell.comcast.net>
>>> wrote:
>>>>
>>>>> Just wondering whether someone can recommend Gromacs benchmarks that
>>> are
>>>>> comparable to our Amber benchmarks. The focus is on multi-GPU runs in the
>>>>> context of a project for better communications, CUDA-aware MPI, etc.:
>>>>> "Collaborative Research: Frameworks: Designing Next-Generation MPI
>>>>> Libraries for Emerging Dense GPU Systems."
>>>>> https://www.nsf.gov/awardsearch/showAward?AWD_ID=1931537
>>>>>
>>>>> thanks,
>>>>> scott
>>>
>>