


Folding@home distributed computing

Team Guru3D has team number 69411


We have our own Folding@Home team, which is currently in the Top 70 teams in the world. Considering that there are more than 185,000 teams worldwide, that is quite an achievement. Bravo, Guru Folders!

What is Folding@Home?

Folding@Home is a distributed computing project managed by Stanford University. Its aim is to study protein folding, misfolding (when proteins do not fold correctly), aggregation, and related diseases. This will help scientists understand many well-known diseases, such as Alzheimer's, Mad Cow (BSE), CJD, ALS, and Parkinson's disease (details can be found here). Stanford uses novel computational methods and large-scale distributed computing to simulate timescales thousands to millions of times longer than previously achieved. This has allowed folding to be simulated for the first time, and the approach is now being directed at folding-related diseases.

Two hints:

  • Want to join in? Team Guru3D has number 69411
  • Need help? Here is our Guru3D F@H support forum

What will your computer be computing (aka folding)?

Basically, you donate CPU and/or GPU cycles to the cause by running an appropriate F@H client, which you can configure to fold for Team Guru3D (#69411). Stanford's algorithms are designed so that every computer that joins the project gives Stanford a commensurate increase in simulation speed. Below is the list of available F@H clients that you can run on your system. (Remember that putting your PC to work means it will consume power; please always bear this in mind.)

Use Classic Client if: {Windows; Linux 32-bit;} [AMD & Intel]
1) System isn't running 24/7
2) You want a set-and-forget client that doesn't need any monitoring
3) You would like to contribute with the least amount of effort
4) Points aren't your priority
5) You want F@H to remain unobtrusive
Note: If you have a powerful system that runs for fewer than 15 hours a day, you can install multiple Classic Clients (one per CPU core)

Use GPU2 Client if: {Windows} [ATI & Nvidia]
1) You have a discrete GPU (Fermi GPUs aren't supported)
2) System is on for 15+ hours
3) Would like to get some more points with some effort
Note: If you fold with an ATI GPU, please use Environment Variables to make folding more efficient.
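As a sketch only: on Windows these are typically set as system environment variables. `BROOK_YIELD` and `CAL_NO_FLUSH` were the variables commonly suggested in folding forums at the time, but verify the exact names and values in the support thread linked above.

```shell
# Illustrative only: commonly suggested settings for ATI GPU2 folding.
# Check the Guru3D F@H support forum for the current recommendations.
export BROOK_YIELD=2     # yield the CPU instead of busy-waiting
export CAL_NO_FLUSH=1    # reduce CAL flush overhead
```

On Windows, the equivalent is `setx BROOK_YIELD 2` (persists across reboots), or setting the variables through System Properties.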

Use GPU3 BETA Client if: {Windows; Linux (Unofficially); Details} [Nvidia]
1) You have a discrete GPU
2) System is on for 10+ hours
3) Would like to get some more points with some effort
Note: As this Client is in BETA Stage, expect some rough edges.

Use SMP2 BETA Client (normal) if: {Windows; Linux 64-bit; OSX; Details} [AMD & Intel]
1) You have a powerful system
2) Know your way around with F@H cores
3) System is on 24/7
4) Would like to contribute significantly to F@H in terms of scientific value and get the advantage of high points
Note: As this Client is in BETA Stage, expect some rough edges.

Use SMP2 BETA Client (bigadv) if: {Windows; OSX; Linux 64-bit (Suspended); Details} [AMD & Intel]
1) You have an extremely powerful system
2) 100% familiar with F@H cores
3) System is folding 24/7
4) Would like to contribute the most to F@H in terms of scientific value plus get the advantage of massive points
Note: As this Client is in BETA Stage, expect some rough edges.

Please remember that there are no hard-and-fast rules for which F@H clients you can or can't use. Installation guides for all the above F@H clients can be found here. Feel free to experiment with them, as long as your system supports them and you can return the WU before the Preferred Deadline. Below is the relationship between the WU assigned to you and its deadlines:

Classic, GPU2, and GPU3 BETA Client WUs:
Before Preferred Deadline - You will get the assigned Credit
Exceed Preferred Deadline - WU will be reissued; you will still get the assigned Credit
Exceed Final Deadline - WU is useless; you won't get any Credit

SMP2 BETA Client WUs: (normal and bigadv)
Before Preferred Deadline - You will get Bonus Credit (the bonus varies from system to system and increases the faster the WU is completed and returned)
Exceed Preferred Deadline - WU will be reissued; you will get Base Credit (much less than the Bonus Credit)
Exceed Final Deadline - WU is useless; you won't get any Credit
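The deadline rules above can be sketched as a small function. The bonus branch uses the quick-return formula Stanford published for bonus-eligible WUs (base × √(k × final deadline ÷ elapsed)); the k-factor is assigned per project, so the function and its numbers are purely illustrative.

```python
import math

def wu_credit(base_points, elapsed_days, preferred_days, final_days,
              bonus_eligible=False, k_factor=0.0):
    """Illustrative sketch of the WU deadline/credit rules above."""
    if elapsed_days > final_days:
        return 0.0                       # past Final Deadline: no credit
    if elapsed_days > preferred_days:
        return base_points               # WU reissued, Base Credit only
    if bonus_eligible and k_factor > 0:
        # Quick-return bonus: faster completion -> larger multiplier
        multiplier = max(1.0, math.sqrt(k_factor * final_days / elapsed_days))
        return base_points * multiplier
    return base_points                   # non-bonus client: assigned credit
```

For example, a bonus-eligible 100-point WU with k-factor 2.0 and an 8-day final deadline returned in 1 day earns 100 × √16 = 400 points, while the same WU returned after the final deadline earns nothing.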

For monitoring F@H clients, you may use HFM.NET, as it supports many new features and is actively developed.

Introduction To F@H Jargon

While browsing our Guru3D F@H Forum or the Official Forum, you may come across some unusual acronyms commonly used by F@H donors. Below is a list of the most common:

WU - "Work Unit". A small time-slice of protein processing that your F@H client downloads from the appropriate server. Your system processes it, and once it finishes folding, uploads the result to the server and requests another WU.
Note: The time to process a WU varies from an hour to a few days, depending on the type of WU, the F@H client, system usage, and several other factors.
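The download/process/upload cycle can be sketched schematically; all class and function names here are hypothetical, for illustration only.

```python
class FakeServer:
    """Toy stand-in for a F@H work server, just to show the loop shape."""
    def __init__(self, work):
        self.work = list(work)     # pending WUs
        self.results = []          # completed results received back

    def download_wu(self):
        return self.work.pop(0) if self.work else None

    def upload(self, result):
        self.results.append(result)

def fold(server, process):
    """Download a WU, process it, upload the result, repeat."""
    while (wu := server.download_wu()) is not None:
        server.upload(process(wu))
    return server.results
```

A real client repeats this loop indefinitely; here it stops when the toy server runs out of work so the sketch terminates.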

PRCG - "Project, Run, Clone, Gen". Identifies the WU assigned to you. A simple explanation and a detailed one are available for further reading.
Note: For a given Project number (and protein name) there are several Run/Clone/Gen combinations, so don't be alarmed if you seem to be processing the same protein again; the Pande Group does not assign duplicate WUs just to keep donors busy.
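For illustration, the PRCG can be pulled out of a typical client log line with a short script; the exact log format shown is an assumption based on common F@H client output.

```python
import re

# A client log typically reports the PRCG in a line like this (format may vary):
line = "Project: 2677 (Run 33, Clone 19, Gen 38)"

match = re.search(
    r"Project:\s*(\d+)\s*\(Run\s*(\d+),\s*Clone\s*(\d+),\s*Gen\s*(\d+)\)", line)
project, run, clone, gen = (int(g) for g in match.groups())

# Compact identifier often used when discussing a WU on the forums
prcg = f"P{project}R{run}C{clone}G{gen}"
```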

PPD - "Points Per Day". Calculated by third-party applications, so please visit this tools list and install an appropriate one.
Note: PPD varies from WU to WU (even within the same Project) and is affected by system usage and other factors.

TPF - "Time Per Frame". Each WU is divided into 100 frames; TPF is the time taken to finish one frame.
Note: This is calculated by third-party applications.
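As a rough sketch, PPD can be derived from TPF and the WU's credit value (the helper name is hypothetical, and real monitoring tools account for more factors):

```python
def ppd(wu_credit_points, tpf_seconds):
    """Estimate Points Per Day from Time Per Frame.

    A WU has 100 frames, so one WU takes tpf_seconds * 100 seconds;
    PPD = credit per WU * WUs completed per day.
    """
    seconds_per_wu = tpf_seconds * 100
    wus_per_day = 86400 / seconds_per_wu   # 86400 seconds in a day
    return wu_credit_points * wus_per_day
```

For example, a 500-point WU with a TPF of 7 minutes 12 seconds (432 s) takes 12 hours per WU, giving an estimated 1000 PPD.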

A list of further abbreviations is available here.

Now that you have a basic understanding of what Folding@Home does, please be a true Guru and join the cause! We thank you for your help and support.




Tagged as: Folding@home, computing, distributed





Guru3D.com © 2022