IBM Building 120 Petabyte Drive

by Hilbert Hagedoorn on: 08/29/2011 10:01 AM
IBM is working on a 120-petabyte storage unit built from 200,000 hard drives. That's a lot of storage!

Steve Conway, a vice president of research with the analyst firm IDC who specializes in high-performance computing (HPC), says IBM's repository is significantly bigger than previous storage systems. "A 120-petabyte storage array would easily be the largest I've encountered," he says. The largest arrays available today are about 15 petabytes in size. Supercomputing problems that could benefit from more data storage include weather forecasting, seismic processing in the petroleum industry, and molecular studies of genomes or proteins, says Conway.

IBM's engineers developed a series of new hardware and software techniques to enable such a large jump in data-storage capacity. One challenge was finding a way to efficiently combine the thousands of hard drives the system is built from. As in most data centers, the drives sit in horizontal drawers stacked inside tall racks, but IBM's researchers had to make those drawers significantly wider than usual to fit more disks into a smaller area. The disks are packed densely enough that they must be cooled with circulating water rather than standard fans.

The inevitable failures that occur regularly in such a large collection of disks present another major challenge, says Bruce Hillsberg, IBM's director of storage research. IBM uses the standard tactic of storing multiple copies of data on different disks, but it employs new refinements that allow a supercomputer to keep working at almost full speed even when a drive breaks down.

When a lone disk dies, the system slowly pulls data from the other drives and writes it to the disk's replacement, so the supercomputer can continue working. If more failures occur among nearby drives, the rebuilding process speeds up to reduce the chance that yet another failure wipes out some data permanently. Hillsberg says the result is a system that should not lose any data for a million years, without compromising performance.
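The article doesn't detail IBM's actual algorithm, but the adaptive-rebuild idea it describes is easy to illustrate: rebuilds trickle along in the background while the redundancy margin is healthy, and accelerate as replicas of the same data are lost. A minimal sketch, with all names and numbers hypothetical:

```python
# Hypothetical sketch of the adaptive rebuild policy described above.
# Rebuild speed scales with how many replicas of a block remain; this is
# an illustration of the idea, not IBM's implementation.

REPLICAS = 3           # copies kept of every block (assumed)
FULL_SPEED_MBPS = 200  # write speed of a replacement disk (assumed)

def rebuild_rate(replicas_remaining: int) -> float:
    """Return a rebuild rate in MB/s for a block with this much redundancy.

    With only one replica lost, the rebuild runs slowly so the supercomputer
    keeps nearly full I/O bandwidth; as further failures eat into the margin,
    it accelerates toward full speed to shrink the window in which one more
    failure would lose data permanently.
    """
    if replicas_remaining >= REPLICAS:
        return 0.0                # fully redundant: nothing to rebuild
    if replicas_remaining <= 1:
        return FULL_SPEED_MBPS    # last copy left: rebuild flat out
    lost = REPLICAS - replicas_remaining
    return FULL_SPEED_MBPS * lost / REPLICAS

print(f"{rebuild_rate(2):.1f} MB/s")  # 66.7 MB/s: gentle background rebuild
print(f"{rebuild_rate(1):.1f} MB/s")  # 200.0 MB/s: urgent, data at risk
```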

The new system also benefits from a file system known as GPFS, developed at IBM Almaden to give supercomputers faster data access. It spreads individual files across multiple disks so that many parts of a file can be read or written at the same time. GPFS also allows a large system to keep track of its many files without laboriously scanning through every one. Last month a team from IBM used GPFS to index 10 billion files in 43 minutes, easily breaking the previous record of one billion files scanned in three hours.
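The striping idea behind that parallelism can be sketched in a few lines: split a file into fixed-size blocks and place them round-robin across disks, so independent blocks can be read or written concurrently. The block size and disk count below are illustrative assumptions, not GPFS's actual parameters:

```python
# Minimal sketch of file striping as described above: fixed-size blocks
# placed round-robin across disks, so many parts of one file can be
# accessed in parallel. Parameters are assumed for illustration.

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB stripe unit (assumed)
NUM_DISKS = 8                 # disks in the stripe group (assumed)

def block_placement(file_size: int):
    """Yield (block_index, disk_index, offset_in_file) for every block."""
    num_blocks = (file_size + BLOCK_SIZE - 1) // BLOCK_SIZE
    for i in range(num_blocks):
        yield i, i % NUM_DISKS, i * BLOCK_SIZE

# A 32 MiB file lands as one block on each of the 8 disks, so all eight
# drives can serve the file at once instead of one drive serving it serially.
for block, disk, offset in block_placement(32 * 1024 * 1024):
    print(f"block {block} -> disk {disk} at file offset {offset}")
```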

It will hold 24 million HD movies!
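That figure checks out as a back-of-envelope calculation, assuming roughly 5 GB per HD movie (an assumption, since the article doesn't state a file size):

```python
# Back-of-envelope check of the figures above (decimal units assumed).
capacity = 120e15               # 120 petabytes in bytes
movies = capacity / 5e9         # assuming ~5 GB per HD movie
per_drive = capacity / 200_000  # capacity spread over 200,000 drives

print(f"{movies:,.0f} movies")              # 24,000,000 -> 24 million
print(f"{per_drive / 1e9:.0f} GB per drive")  # 600 GB per drive
```

The same arithmetic implies each of the 200,000 drives contributes about 600 GB, a plausible drive size for 2011.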
