Re: [AMBER-Developers] request for Volunteers, part 2

From: Ross Walker <ross.rosswalker.co.uk>
Date: Sat, 24 Mar 2018 22:01:30 -0400

Yes just pulled down a new tree - for some reason it wasn't pulling updates. It looks okay now.

With regards to pre-Kepler cards, I don't think there is a way to check and exit gracefully. In order to call cudaGetDeviceProperties one already has to initialize the GPU context, and I believe, although I have not tested it, that this will fail if the code was not compiled with SM2.0 options and is running on a pre-Kepler card. Thus it likely never gets as far as the cudaGetDeviceProperties call.
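
For reference, if the context did come up, the check itself would be trivial - something along these lines (an untested sketch, not actual pmemd code; the function name and error text are just placeholders):

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    /* Hypothetical pre-Kepler guard; assumes cudaGetDeviceProperties can
       actually be reached on such a card, which is the open question. */
    static void check_min_compute_capability(int device_id)
    {
        cudaDeviceProp prop;
        cudaError_t err = cudaGetDeviceProperties(&prop, device_id);
        if (err != cudaSuccess) {
            fprintf(stderr, "Error: could not query GPU %d (%s).\n",
                    device_id, cudaGetErrorString(err));
            exit(1);
        }
        /* Kepler is SM 3.0; anything below that is unsupported. */
        if (prop.major < 3) {
            fprintf(stderr,
                    "Error: GPU %d (%s) has compute capability %d.%d; "
                    "this executable requires SM 3.0 or later.\n",
                    device_id, prop.name, prop.major, prop.minor);
            exit(1);
        }
    }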

Given that CUDA 9.0 and later don't support pre-Kepler cards, it doesn't make much sense for someone to have them installed on a pre-Kepler machine, so we can likely ignore that case. We could, I guess, still compile with SM2.0 for CUDA 7.5 and 8.0 and then include the cudaGetDeviceProperties check so we can quit with a message - but then the behavior would differ between CUDA versions. It's also a lot of compilation overhead just to support a version check.

On another note I see we have:

    if [ -z "$NVCC" ]; then nvcc="$CUDA_HOME/bin/nvcc"; else nvcc="$NVCC"; fi
    #Note at present we do not include SM3.5 or SM3.7 since they sometimes show performance
    #regressions over just using SM3.0.
    #SM7.0 = V100 and Volta Geforce / GTX Ampere?
    if [ "$volta" = 'yes' ]; then
        sm70flags='-gencode arch=compute_60,code=sm_70 -DVOLTAOPT'
        sm62flags=''
        sm61flags=''
        sm60flags=''
        sm53flags=''
        sm52flags=''
        sm50flags=''
        sm37flags=''
        sm35flags=''
        sm30flags=''
    else
        sm70flags='-gencode arch=compute_60,code=sm_70'
        #SM6.2 = ???
        sm62flags='-gencode arch=compute_62,code=sm_62'
        #SM6.1 = GP106 = GTX-1070, GP104 = GTX-1080, GP102 = Titan-X[P]
        sm61flags='-gencode arch=compute_61,code=sm_61'
        #SM6.0 = GP100 / P100 = DGX-1
        sm60flags='-gencode arch=compute_60,code=sm_60'
        #SM5.3 = GM200 [Grid] = M60, M40?
        sm53flags='-gencode arch=compute_53,code=sm_53'
        #SM5.2 = GM200 = GTX-Titan-X, M6000 etc.
        sm52flags='-gencode arch=compute_52,code=sm_52'

So it looks like one can specify a Volta-specific flag, but that disables compilation for all the other architectures. That's really not a good idea, since most people have mixed clusters, and a Volta-only executable would be an annoyance. People often submit to a queue and ask for the next available GPU - if you happened to land on a non-Volta GPU with the Volta executable, I believe it would just crash. Is there a reason why these Volta optimizations aren't in the regular executable that supports all the various GPU generations? How much difference do they actually make? If it's < 10% or so, it might be worth disabling this in the release version and keeping it as a developer option; otherwise I think it will just cause user (and sysadmin) confusion.
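
If we do keep a Volta-only build option, it should at least fail with a clear message rather than crash. Roughly something like this at startup (just a sketch; I'm assuming here that the -DVOLTAOPT define from the snippet above is visible to the host code, which I haven't checked):

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    /* Rough sketch only: refuse to run a VOLTAOPT (SM 7.0-only) build on a
       non-Volta card instead of crashing later with an obscure error. */
    static void check_volta_build(int device_id)
    {
    #ifdef VOLTAOPT
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, device_id) != cudaSuccess) {
            fprintf(stderr, "Error: could not query GPU %d.\n", device_id);
            exit(1);
        }
        if (prop.major != 7) {
            fprintf(stderr,
                    "Error: this executable was built with Volta-only "
                    "optimizations (-DVOLTAOPT) but GPU %d (%s) is SM %d.%d. "
                    "Please use the standard (multi-architecture) build.\n",
                    device_id, prop.name, prop.major, prop.minor);
            exit(1);
        }
    #else
        (void) device_id;  /* multi-architecture build: nothing to check */
    #endif
    }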

All the best
Ross


> On Mar 24, 2018, at 21:45, David A Case <david.case.rutgers.edu> wrote:
>
> On Sat, Mar 24, 2018, Ross Walker wrote:
>> Just change
>>
>> nvccflags="$sm20flags $sm30flags $sm50flags $sm52flags $sm53flags
>> $sm60flags $sm61flags"
>
> I think you are looking at an old version of configure2.
>
>> Note we should also be sure to enable CUDA 9.1 support in configure2.
>
> Same idea: cuda 9.1 is being tested (see wiki page) and seems to be OK.
>
> Basic idea that needs testing: Is there a way for an executable
> compiled without $sm20flags to figure out it is being run on a pre-Kepler
> GPU, and exit gracefully with a useful error message?
>
> [Note: this is pretty low priority: I'd rather see RC3 tests on
> different GPUs, and compiled with different versions of nvcc.]
>
> ...thanks...dac
>
>

_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Sat Mar 24 2018 - 19:30:02 PDT