Private, profanity-laden personal email attacks and insinuations about my
motives aside, Adrian (I won't post them here, because shame on you, but I
am petty enough to mention them because transparency is everything and what
you wrote ought to be beneath you), spending 8+ months in this code has led
to strong opinions about what happened to it and what ought to be done
about it. I'm happy to discuss those opinions with anyone who's willing to
invest anywhere close to that much effort reviewing the codebase, and I'd
love a genuine collaborator here, but it doesn't seem like anyone shares my
level of interest in refactoring PMEMD out of its situation (Cerutti's
gone, and Taisung seems like he wants to science, and that's OK). I've
already accepted that whatever I build won't be called PMEMD, because I am
going 100% FOSS with it and PMEMD isn't, but it will be a successor to it.
That ought not to be controversial IMO, but alas, everything's
controversial these days, no? NEB was a clear-cut case for a braindead-simple
GPU kernel that anyone with a modicum of interest in CUDA could write, and
there are plenty of examples already in the code on which to base it: AMD,
GAMD, the Scalar Sum, the update, and EField. Its addition to PMEMD
should have been blocked until the author spent the time on it, IMO. Water
under the bridge now, but yeah, shuttle is dead, dead, dead. And if anyone
ends up using what I built, they can build GPU NEB in a day with the API,
even in its primordial state. They can even do it on the CPU, but it won't
be in the codebase; it will be in the model zoo where it belongs, along
with most of the other drive-by additions to PMEMD. That ought to
ameliorate the bitrot. Or not; we'll see.
But hybridizing a GPU codebase with the CPU is everything wrong with
GROMACS: it works fantastically when both the CPU and the GPU are SOTA, but
those results are not typical. The reason I went 100% GPU was so that even
a craptastic ARM chip in an embedded system would deliver 95+% of the GPU
performance of a SOTA CPU driving that same GPU. That lets post-docs and
grad students upgrade their GPUs independently of their workstations to
accelerate their research without breaking the bank, and it cuts down on
e-waste. Plus, GPUs are moving exponentially faster than CPUs, so CPUs
inevitably become decelerators or e-waste. Whatever else you think of me,
that's my story, and I have 20+ patents and six figures of lines of code
out there to back it up.
But by all means, dismiss my entire career. Dismissing expertise and wisdom
is all the rage now. Yikes.
_______________________________________________
AMBER-Developers mailing list
AMBER-Developers.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber-developers
Received on Fri Aug 05 2022 - 13:00:05 PDT