CUDA support for Docking@home
Message boards : Wish list : CUDA support for Docking@home
Guys, we need CUDA support. SETI is very speedy with CUDA; Folding@home is 4,000 points behind SETI, and I started running them together. SETI@home runs 2x to 10x faster than the CPU-only version.
ID: 5092
RE: Docking@home CUDA Support
ID: 5096
I wish I had the resources for this.
ID: 5099
You should really email him and ask for help.
ID: 5100
http://viewmorepics.myspace.com/index.cfm?fuseaction=viewImage&friendID=166877672&albumID=1184535&imageID=30921683
ID: 5101
Possibly there is a new way to quickly tackle the problem: the OpenMM library.
ID: 5726
First, welcome to Number Crunching at Docking@Home.
ID: 5727
http://www.opencldev.com/ OpenCL is cross-platform...
ID: 5923
OpenCL is cross-platform... There is a new version of OpenCL out: OpenCL 1.1.
ID: 5953
It's not that easy to compile the program for CUDA, OpenCL, or anything else; we're having a lot of trouble with that over at Drugdiscovery@home and Hydrogen@home.
ID: 5964
It's not that easy to compile the program for CUDA, OpenCL, or anything else... We know it's not easy, but the boost in computational power from GPUs is enormous...
ID: 5966
It's not that easy to compile the program for CUDA, OpenCL, or anything else... Only *if* the app can benefit from parallel processing. Many can't.
-- Dublin, CA, Team SETI.USA
ID: 6041
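To put a rough number on the point above, here is a minimal Amdahl's law sketch. This is illustrative only: the parallel fractions and the assumed 100x GPU kernel speedup are made-up values, not Docking@home measurements.

```python
# Amdahl's law: overall speedup when only part of the runtime can be accelerated.
# The fractions and the 100x kernel speedup below are illustrative assumptions,
# not measurements from any BOINC project.
def amdahl_speedup(parallel_fraction, kernel_speedup):
    """Overall speedup when only `parallel_fraction` of the runtime is accelerated."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / kernel_speedup)

# An app that is only 50% parallelizable gains less than 2x overall,
# even with a 100x-faster GPU kernel; at 95% parallel it gains about 17x.
print(round(amdahl_speedup(0.50, 100.0), 2))  # 1.98
print(round(amdahl_speedup(0.95, 100.0), 2))  # 16.81
```

In other words, the GPU payoff depends almost entirely on how much of the docking code can actually be moved off the CPU.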
Only *if* the app can benefit from parallel processing. Many can't. OK, but I saw a Michela Taufer session at the NVIDIA GPU Technology Conference; I think she is the Michela of Docking@home, and she is working on a GPU project...
ID: 6522
OK, but I saw a Michela Taufer session at the NVIDIA GPU Technology Conference; I think she is the Michela of Docking@home, and she is working on a GPU project... There are three possibilities:
1) They don't have a GPU-skilled developer, they realized it is impossible to port the code to the GPU, etc., and they have abandoned the "GPU idea".
2) They are working hard on the GPU code and will soon present a GPU app.
3) They are working VERY slowly on the GPU code (the last GPU-related admin post is from 2009).
In the meantime, they could at least tell us something!
ID: 6990
Guys, we need CUDA support. SETI is very speedy with CUDA; Folding@home is 4,000 points behind SETI, and I started running them together. SETI@home runs 2x to 10x faster than the CPU-only version.
ID: 7021
I've been wondering why there isn't CUDA support. If they upgraded CHARMM from version 34a2 (a developmental release from 2007) to the latest version, they would gain not only better performance (not sure about that point) but also the ability to use OpenMM. OpenMM is what Folding@home uses to enable GPU acceleration. Is there any reason to stay on a developmental release from 2007?
ID: 7031
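For what it's worth, here is a minimal sketch of what GPU support looks like with OpenMM's standard Python API, the library the post above mentions. Nothing here comes from the Docking@home code base: the input file name, force-field choice, and run length are placeholders.

```python
# Minimal OpenMM sketch (placeholders, not Docking@home code): load a structure,
# build a system, and pick a GPU platform with a CUDA -> OpenCL -> CPU fallback.
from openmm import Platform, LangevinMiddleIntegrator, unit
from openmm.app import PDBFile, ForceField, Simulation, NoCutoff, HBonds

pdb = PDBFile("complex.pdb")                      # hypothetical input structure
forcefield = ForceField("charmm36.xml")           # CHARMM force field shipped with OpenMM
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=NoCutoff,
                                 constraints=HBonds)
integrator = LangevinMiddleIntegrator(300*unit.kelvin,
                                      1/unit.picosecond,
                                      0.002*unit.picoseconds)

# Try CUDA first, then OpenCL, then plain CPU.
platform = None
for name in ("CUDA", "OpenCL", "CPU"):
    try:
        platform = Platform.getPlatformByName(name)
        break
    except Exception:
        continue

sim = Simulation(pdb.topology, system, integrator, platform)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.step(1000)                                    # short run on the selected platform
print("Ran on platform:", platform.getName())
```

The same script runs unchanged on CUDA, OpenCL, or CPU, which is the portability point raised in the next posts.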
OpenMM is what Folding@home uses to enable GPU acceleration. Is there any reason to stay on a developmental release from 2007? OpenMM also supports OpenCL!
ID: 7039
OpenMM is what Folding@home uses to enable GPU acceleration. Is there any reason to stay on a developmental release from 2007? More than likely a lack of funds/people to do the upgrade. It takes a fair bit of time, knowledge, and cash to upgrade servers. All things that, sadly, D@H seems to be lacking.
ID: 7042
More than likely a lack of funds/people to do the upgrade. I agree with you. But I think that if a GPU client is a "lot of work", they could start with an upgrade of the CPU client and then move on to the GPU...
ID: 7045
I've taken an online course in CUDA and am looking for an online course in OpenCL. I've found that the cheapest compiler that can handle the C or C++ portion of CUDA workunits costs about $400, and that's for the Windows version only. I'm not sure if there is a suitable C or C++ compiler for using CUDA under Linux, but if there is, it is likely to be free.
ID: 7182
I looked up CHARMM on the web. It uses FORTRAN 77, which I've used in the past, but I'm not familiar with the current generation of compilers. The licensing terms appear to make it available only to students and academic researchers, so does UDel offer any online courses that would make me qualify?
ID: 7184