lots of disk writes
Message boards : Number crunching : lots of disk writes
IMO there are really a lot of disk writes, roughly estimated at least one every second. In my "general preferences" the value for "Write to disk at most every" is set to 60 seconds.
ID: 80

I'm pretty sure that's our app. Will check.
> IMO there are really a lot of disk writes, roughly estimated at least one every second. In my "general preferences" the value for "Write to disk at most every" is set to 60 seconds.

ID: 93

> I'm pretty sure that's our app. Will check.

Thanks in advance, you're answering everything quickly, great job.

ID: 105

Found that our app is actually not using the BOINC function boinc_time_to_checkpoint(), which makes sure that checkpoints are only written according to the user preference "Write to disk at most every". Will build that in soon (the usual pattern is sketched after this post). Our app also writes to a logfile, though, and the frequency of log writing will be hard to control. We'll have to see if we can do anything about this.
> I'm pretty sure that's our app. Will check.

ID: 131

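For reference, here is a minimal sketch of the checkpointing pattern referred to above, assuming a simple step-based main loop. The work step and checkpoint file layout are hypothetical placeholders; boinc_init(), boinc_time_to_checkpoint(), boinc_checkpoint_completed(), boinc_fraction_done() and boinc_finish() are the standard BOINC API calls.

```cpp
// Sketch of the standard BOINC checkpoint pattern (hypothetical work loop).
#include <cstdio>
#include "boinc_api.h"

// Hypothetical: persist whatever is needed to resume at `step`.
static void write_checkpoint(int step) {
    FILE* f = fopen("checkpoint.dat", "w");
    if (!f) return;
    fprintf(f, "%d\n", step);
    fclose(f);
}

int main() {
    boinc_init();
    const int nsteps = 1000000;
    for (int step = 0; step < nsteps; step++) {
        // do_work_step(step);  // hypothetical compute kernel goes here

        // Checkpoint only when the client says the user's
        // "Write to disk at most every N seconds" interval has elapsed.
        if (boinc_time_to_checkpoint()) {
            write_checkpoint(step);
            boinc_checkpoint_completed();
        }
        boinc_fraction_done((double)step / nsteps);
    }
    boinc_finish(0);
    return 0;
}
```
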
ID: 771

Any news on fixing this in later Windows apps?

ID: 804

Yes, for sure. We started on this yesterday and will hopefully finish by Monday or Tuesday next week. Be aware that this only makes sure the checkpointing takes place according to the disk-write preference the user sets. Other writes the app does, of course, won't be affected (we would lose important data if we couldn't write our results to file). This is a fix for all platforms, by the way.
> Any news on fixing this in later Windows apps?

____________
D@H the greatest project in the world... a while from now!

ID: 805

> Other writes the app does, of course, won't be affected (we would lose important data if we couldn't write our results to file). This is a fix for all platforms, by the way.

Does the app write the results to a file once after the WU has finished, does it continuously write results to the hard disk, or are there some intermediate results during the crunching of a WU? Can you specify (or give an estimate of) the number of disk writes your app needs to store its results (without checkpointing)?

____________
boinc.be , crunching belgian style...

ID: 809

It continuously updates a logfile (charmm.out), the minimum energy found (minenergy.pdb), the minimum RMSD found (minrmsd.pdb) and the percentage done (percentdone.str). This should not be more than once or twice a second, though. Somebody mentioned 400 writes/sec in another thread, which we have never seen nor been able to reproduce. (A possible way to trim these per-second rewrites is sketched after this post.)
> Other writes the app does, of course, won't be affected (we would lose important data if we couldn't write our results to file). This is a fix for all platforms, by the way.

____________
D@H the greatest project in the world... a while from now!

ID: 812

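As an illustration of how the once-or-twice-per-second rewrites of small status files like minenergy.pdb, minrmsd.pdb and percentdone.str could be reduced, here is a hypothetical helper (not part of the actual app) that only rewrites a file when its contents have changed and a minimum interval has passed.

```cpp
// Hypothetical throttle for small status files: rewrite only when the value
// changed and at least `min_interval` seconds have passed since the last write.
#include <cstdio>
#include <ctime>
#include <string>

struct ThrottledFile {
    std::string path;
    std::string last_written;   // contents of the last successful write
    time_t      last_time = 0;

    void update(const std::string& contents, int min_interval = 60) {
        time_t now = time(nullptr);
        if (contents == last_written) return;          // nothing new to record
        if (now - last_time < min_interval) return;    // too soon, skip for now
        FILE* f = fopen(path.c_str(), "w");
        if (!f) return;
        fputs(contents.c_str(), f);
        fclose(f);
        last_written = contents;
        last_time = now;
    }
};

// Example: ThrottledFile pct; pct.path = "percentdone.str"; pct.update("42.0\n");
```
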
> It continuously updates a logfile (charmm.out), the minimum energy found (minenergy.pdb), the minimum RMSD found (minrmsd.pdb) and the percentage done (percentdone.str). This should not be more than once or twice a second, though. Somebody mentioned 400 writes/sec in another thread, which we have never seen nor been able to reproduce.

Since the app will be restarted at the checkpoint, you should be able to stop the writes to the log file at that point as well - otherwise there would be an overlap in the log, as the same section gets run twice and logged twice. (One way to do that is sketched after this post.)

ID: 822

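One possible way to implement that, sketched here as an assumption rather than anything the project has committed to: record how far charmm.out had grown when the checkpoint was written, and truncate the log back to that offset when resuming, so the replayed section is not logged twice. The function names and checkpoint layout are illustrative only; truncate() is POSIX.

```cpp
// Sketch: keep the log file consistent with the checkpoint so a restart
// does not log the same section twice. Names and layout are assumptions.
#include <cstdio>
#include <sys/types.h>
#include <unistd.h>     // truncate()

// While writing a checkpoint: record the current size of the log file.
void checkpoint_log_offset(FILE* checkpoint, FILE* logfile) {
    fflush(logfile);                            // make the on-disk size current
    long off = ftell(logfile);
    fprintf(checkpoint, "log_offset %ld\n", off);
}

// When resuming from a checkpoint: discard anything logged after it.
void restore_log_offset(const char* logpath, long saved_offset) {
    truncate(logpath, (off_t)saved_offset);     // e.g. logpath = "charmm.out"
}
```
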
> It continuously updates a logfile (charmm.out), the minimum energy found (minenergy.pdb), the minimum RMSD found (minrmsd.pdb) and the percentage done (percentdone.str). This should not be more than once or twice a second, though. Somebody mentioned 400 writes/sec in another thread, which we have never seen nor been able to reproduce.

Does that file contain a list of the evolution of the minimum energy, or does it only contain the minimum value? (Same for the RMSD.) There are several ways to write results to files only at checkpoints, e.g. keeping a string/array variable with logfile entries and result progress and writing it away at checkpoint time (see the sketch after this post). As mentioned before, after checkpointing there is a possibility of entering the same log data twice when BOINC gets restarted in between. 2 writes/sec (I have a dual processor -> 4 writes/sec) is still a huge load on your HD and should be avoided at all costs.

PS: is it the final result you are interested in, or rather the evolution? Because the evolution can quickly be reproduced for interesting final results.

____________
boinc.be , crunching belgian style...

ID: 836

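A minimal sketch of the buffering idea suggested above, assuming the wrapper is C++ and can route its log lines through a small helper; the class and file handling are hypothetical, and the flush is meant to be called from the same place the checkpoint is written.

```cpp
// Sketch: collect log lines in memory and append them to charmm.out only
// when a checkpoint is due, so log writes follow the same disk-write
// preference as the checkpoint itself. Hypothetical helper class.
#include <cstdio>
#include <string>

class LogBuffer {
    std::string pending;
public:
    void log(const std::string& line) {
        pending += line;
        pending += '\n';
    }
    // Call this from the same place the checkpoint is written, i.e. inside
    // the `if (boinc_time_to_checkpoint()) { ... }` block shown earlier.
    void flush_to(const char* path = "charmm.out") {
        if (pending.empty()) return;
        FILE* f = fopen(path, "a");
        if (!f) return;
        fputs(pending.c_str(), f);
        fclose(f);
        pending.clear();
    }
};
```

Entries buffered this way are lost if the task is killed between checkpoints, which is acceptable here since the run resumes from the last checkpoint and regenerates the same log lines anyway.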
Hi,
ID: 886

As long as the file isn't opened/written/closed all the time, and you don't flush() it, all the cache systems should help here. It won't really be 2 physical writes/sec. (Illustrated after this post.)

ID: 923

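To illustrate that point: if the log file is opened once, given a generous stdio buffer, and never explicitly flushed, one or two fprintf calls per second are absorbed by the C library buffer and the OS page cache rather than turning into physical disk writes. The file name is taken from the thread; the rest is a generic sketch, not the app's actual code.

```cpp
// Sketch: keep the log open with a large, fully buffered stdio buffer and
// avoid fflush(), so small frequent writes rarely reach the disk directly.
#include <cstdio>

int main() {
    static char buf[1 << 16];                    // 64 KiB stdio buffer
    FILE* logf = fopen("charmm.out", "a");
    if (!logf) return 1;
    setvbuf(logf, buf, _IOFBF, sizeof buf);      // must precede other I/O on logf

    for (int i = 0; i < 10000; i++) {
        fprintf(logf, "step %d\n", i);           // goes to the buffer, not disk
        // no fflush(logf): data hits the disk only when the buffer fills,
        // the file is closed, or the OS writes back its page cache.
    }
    fclose(logf);
    return 0;
}
```
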
I appreciate the update, Michela; I had been wondering about the progress on this issue myself. Thanks!
> Hi,

ID: 926