

Folding@home distributed computing

Team Guru3D's number is 69411


We have our own Folding@Home team, currently ranked in the top 70 teams in the world. Considering that there are more than 185,000 teams worldwide, that is quite an achievement! Bravo, Guru folders!

What is Folding@Home?

Folding@Home is a distributed computing project managed by Stanford University. Its aim is to study protein folding, misfolding (when proteins do not fold correctly), aggregation, and related diseases. This helps scientists understand many well-known diseases, such as Alzheimer's, Mad Cow disease (BSE), CJD, ALS, and Parkinson's disease (details can be found here). Stanford uses novel computational methods and large-scale distributed computing to simulate timescales thousands to millions of times longer than previously achieved. This has allowed researchers to simulate folding for the first time, and to direct the project's approach toward examining folding-related diseases.

Two hints:

  • Want to join in? Team Guru3D's number is 69411
  • Need help? Here is our Guru3D F@H support Forum

What will your computer be computing (aka folding)?

Basically, you donate CPU and/or GPU cycles to the cause by running an appropriate F@H Client, which you can configure to fold for Team Guru3D (#69411). Stanford's algorithms are designed so that for every computer that joins the project, Stanford gets a commensurate increase in simulation speed. Below is the list of available F@H Clients that you can run on your system. (Remember that putting your PC to work means that it will consume power; please always bear this in mind. A rough cost sketch follows the client list.)

Use Classic Client if: {Windows; Linux 32-bit} [AMD & Intel]
1) System isn't running 24/7
2) You want a set-and-forget client that doesn't need any monitoring
3) You would like to contribute with the least amount of effort
4) Points aren't your priority
5) You want F@H to remain unobtrusive
Note: If you have a powerful system but it runs for fewer than 15 hours a day, you can install multiple Classic Clients (one per CPU)

Use GPU2 Client if: {Windows} [ATI & Nvidia]
1) You have a discrete GPU (Fermi GPUs aren't supported)
2) System is on for 15+ hours
3) Would like to get some more points with some effort
Note: If you fold with an ATI GPU, please use Environment Variables to make folding more efficient.

Use GPU3 BETA Client if: {Windows; Linux (Unofficially); Details} [Nvidia]
1) You have a discrete GPU
2) System is on for 10+ hours
3) Would like to get some more points with some effort
Note: As this Client is in BETA Stage, expect some rough edges.

Use SMP2 BETA Client (normal) if: {Windows; Linux 64-bit; OSX; Details} [AMD & Intel]
1) You have a powerful system
2) Know your way around F@H cores
3) System is on 24/7
4) Would like to contribute significantly to F@H in terms of scientific value and get the advantage of high points
Note: As this Client is in BETA Stage, expect some rough edges.

Use SMP2 BETA Client (bigadv) if: {Windows; OSX; Linux 64-bit (Suspended); Details} [AMD & Intel]
1) You have an extremely powerful system
2) 100% familiar with F@H cores
3) System is folding 24/7
4) Would like to contribute the most to F@H in terms of scientific value plus get the advantage of massive points
Note: As this Client is in BETA Stage, expect some rough edges.
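
On the power note above: here is a minimal Python sketch of estimating what folding adds to an electricity bill. The wattage, duty cycle, and price per kWh below are placeholder assumptions; substitute your own numbers.

def monthly_cost(extra_watts: float, hours_per_day: float,
                 price_per_kwh: float) -> float:
    """Estimated electricity cost of folding over a 30-day month."""
    kwh_per_month = extra_watts / 1000 * hours_per_day * 30
    return kwh_per_month * price_per_kwh

# Example: a GPU drawing an extra 150 W, folding 24/7, at €0.20 per kWh:
print(f"~€{monthly_cost(150, 24, 0.20):.2f} per month")  # ~€21.60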

Please remember that there are no hard and fast rules for which F@H Clients you can or can't use. Installation guides for all the above F@H Clients can be found here. Feel free to experiment with them, as long as your system supports them and you can return the WU before the Preferred Deadline. Below is the relationship between the WUs assigned to you and their deadlines:

Classic, GPU2 and GPU3 BETA Client WUs:
Before Preferred Deadline - You get the assigned Credit
Exceed Preferred Deadline - The WU is reissued; you still get the assigned Credit if you return it before the Final Deadline
Exceed Final Deadline - The WU is useless; you get no Credit

SMP2 BETA Client WUs (normal and bigadv):
Before Preferred Deadline - You get Bonus Credit (the bonus varies from system to system and increases the faster you complete and return the WU)
Exceed Preferred Deadline - The WU is reissued; you get only Base Credit (much less than the Bonus Credit)
Exceed Final Deadline - The WU is useless; you get no Credit
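
For the curious, the bonus scheme is commonly summarized by the quick-return formula sketched below. Treat it as illustrative: the constant k is project-specific, and the point values and deadlines in the example are made up.

import math

def wu_credit(base_points: float, elapsed_days: float,
              preferred_days: float, final_days: float,
              k: float = 0.75) -> float:
    """Sketch of SMP2 credit: deadline rules plus the quick-return bonus.

    The bonus factor sqrt(k * final_days / elapsed_days) is the commonly
    cited quick-return formula; k is project-specific (0.75 is only an
    example value).
    """
    if elapsed_days > final_days:
        return 0.0                       # Final Deadline missed: WU is useless
    if elapsed_days > preferred_days:
        return base_points               # reissued: Base Credit only
    return base_points * max(1.0, math.sqrt(k * final_days / elapsed_days))

# A 1920-point WU (preferred 3 days, final 6 days) returned in 12 hours:
print(round(wu_credit(1920, elapsed_days=0.5, preferred_days=3, final_days=6)))
# -> 5760, i.e. a 3x bonus for the quick return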

For monitoring F@H Clients, you may want to use HFM.NET, as it supports many new features and is actively being developed.

Introduction To F@H Jargon

While browsing our Guru3D F@H Forum or the Official Forum, you may come across some unusual acronyms that are commonly used by F@H donors. Below is a list of the most common ones:

WU - "Work Unit" It is a small time-slice of Protein processing that is downloaded by your F@H Client from the appropriate Server. Your system will process it and once it finishes folding, it will upload the result to the Server and will request for another WU.
Note: The duration to process this WU will vary from an hour to few days depending on the type of WU, F@H Client, system usage and several other factors.
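
To picture that download-fold-upload cycle, here is a toy Python sketch. Every name in it is hypothetical; it only mimics the flow of a real client, which is a native binary.

from dataclasses import dataclass

@dataclass
class WorkUnit:
    project: int
    run: int
    clone: int
    gen: int
    frames: int = 100  # every WU is processed as 100 frames

def fold(wu: WorkUnit) -> str:
    # Stand-in for the science core; the real client spends hours to days here.
    for _frame in range(wu.frames):
        pass
    return f"result of P{wu.project} R{wu.run} C{wu.clone} G{wu.gen}"

def folding_loop(download, upload, wus_to_fold: int = 3):
    # The client's life cycle: fetch a WU, fold it, upload, request the next.
    for _ in range(wus_to_fold):
        wu = download()          # assigned by Stanford's work server
        result = fold(wu)
        upload(result)           # must be returned before the Final Deadline

# Toy stand-ins so the sketch runs end to end:
folding_loop(download=lambda: WorkUnit(2677, 17, 42, 3), upload=print)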

PRCG - "Project Run Clone Gen" It is used to identify a WU that is assigned to you. A simple explanation and a detailed one can be read for further explanation.
Note: For a given Project number (and Protein Name) there are several Run/Clone/Gen's so don't be alarmed if you seem to be processing the same protein again. The Pande Group doesn't assign duplicate WUs on purpose to keep donors busy.
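
As an illustration, a WU's PRCG can be read straight out of the client log. The exact log-line format below is an assumption modeled on classic-client logs.

import re

line = "Project: 2677 (Run 17, Clone 42, Gen 3)"  # assumed log format

match = re.search(r"Project: (\d+) \(Run (\d+), Clone (\d+), Gen (\d+)\)", line)
if match:
    project, run, clone, gen = map(int, match.groups())
    # Many Run/Clone/Gen trajectories share one Project (protein), so seeing
    # the same project number repeatedly is perfectly normal.
    print(f"PRCG = {project}/{run}/{clone}/{gen}")  # PRCG = 2677/17/42/3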

PPD - "Points Per Day" It is calculated by third party applications so please visit this tools list and install an appropriate one.
Note: PPD will vary from WU to WU (even if they belong to the same Project) and will be effected by system usage and other factors.

TPF - "Time Per Frame" Each WU has 100 frames and it is the time taken to finish 1 frame.
Note: This is calculated using third party applications.
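
Because every WU is 100 frames, TPF and the WU's point value are enough to estimate PPD. Here is a minimal sketch of the arithmetic the monitoring tools perform; the example numbers are invented, and real tools also fold in the quick-return bonus for SMP2 WUs.

def ppd(points_per_wu: float, tpf_seconds: float, frames: int = 100) -> float:
    """Points Per Day: one WU takes frames * TPF seconds to complete."""
    seconds_per_wu = frames * tpf_seconds
    wus_per_day = 86400 / seconds_per_wu
    return points_per_wu * wus_per_day

# Example: a 1920-point WU at a TPF of 8 minutes:
print(round(ppd(points_per_wu=1920, tpf_seconds=8 * 60)))  # -> 3456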

A list of further abbreviations is available here.

Now that you have a basic understanding of what Folding@Home does, please be a true Guru and join the cause! We thank you for your help and support.







