The Wild World of Supercomputing
Imagine trying to conduct an experiment that requires reproducing the conditions at the core of a thermonuclear weapon explosion, without actually exploding the weapon. Or replicating centuries of climate change on a scale ranging from a square mile to the entire planet. Or determining the effect of a major earthquake on buildings and underground structures, without waiting for an actual earthquake to hit.
Scientific problems like these can be beyond the reach of experiments, either because the experiment is too expensive, too hard to perform or evaluate, too dangerous, or, as in the case of nuclear weapons testing, against national policy.
When such barriers arise, scientists at Lawrence Livermore National Laboratory and elsewhere increasingly are turning to sophisticated, three-dimensional supercomputer simulations to suggest and verify their theories and to design, complement and sometimes replace experiments.
Powered by the dramatic increase in supercomputer speed over the last decade, today's simulations can mimic the physical world down to the interactions of individual atoms. They can take scientists into the interiors of stars and supernovae and reproduce time scales ranging from trillionths of a second to centuries. Simulations can test theories, reveal new physics, guide the setup of new experiments and help scientists understand past experiments. Within the past few years, simulations have taken their place alongside theory and experiment as essential elements of scientific progress.
"Livermore from its very founding has understood the value of computing and has invested in generation after generation of supercomputers," says Dona Crawford, LLNL's associate director for computation. "The computer, in effect, serves as a virtual laboratory."
"Simulation at the resolution now available represents a revolution in the process of scientific discovery," adds Mike McCoy, LLNL's deputy associate director for computation. "We're augmenting the 300-year-old Newtonian model of —€˜observation, theory and experiment' with —€˜observation, theory, experiment and simulation.'"
Many of the remarkable advances in high-performance computing over the last 50 years have resulted from collaborations between Livermore and its sister Department of Energy laboratories and the private sector. The Livermore Automatic Research Computer (LARC) project, a late 1950s collaboration with Remington Rand, is thought by many to represent the beginning of supercomputing.
Working closely with industry leaders such as IBM, Control Data Corporation and Cray, LLNL shaped and contributed directly to supercomputer architectures, data management and storage hardware. Livermore was the first to specify a computer built with transistors rather than vacuum tubes; developed new technology such as the first practical time-sharing system; and fashioned a number of hardware and software tools that have found their way into the private sector.
That legacy of computing leadership continues today as Livermore prepares to put the current world champion in computing power, IBM's BlueGene/L, to work beginning this summer.
For the past decade, the driving force behind supercomputer development at Livermore has been the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) program. ASC unites the resources of three national laboratories (Livermore, Los Alamos and Sandia), the major computer manufacturers, a host of networking, visualization, memory-storage and other vendors, and researchers from top universities across the country. A common goal of these partnerships is to develop and deploy advanced computing and simulation capabilities that can model nuclear weapons tests, so that NNSA can ensure the safety and reliability of the nation's nuclear stockpile without underground testing.
To characterize nuclear reactions and the aging of materials, ASC has funded the design and construction of increasingly powerful scalable parallel supercomputers composed of thousands of microprocessors that solve a problem by dividing it into many parts. When it's completed later this year, the last of this series, a Livermore supercomputer named Purple, will run simulations at 100 trillion floating point operations per second (100 teraflops), the equivalent of about 25,000 high-end personal computers and enough power to begin to model in detail the physics of nuclear weapons performance.
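As a loose illustration of that divide-and-conquer idea (a toy problem, not ASC or Livermore code), the sketch below splits a numerical integral into slices handled by separate worker processes, using Python's standard multiprocessing module, and sums the partial results at the end:

```python
# Minimal sketch of the "divide a problem into many parts" idea behind
# scalable parallel machines: each worker integrates one slice of a domain,
# and the partial results are summed to form the full answer.
# Hypothetical example code, not from any Livermore or ASC application.
import math
from multiprocessing import Pool

def integrate_slice(args):
    """Midpoint-rule integral of sin(x) over one slice of the domain."""
    start, end, steps = args
    dx = (end - start) / steps
    return sum(math.sin(start + (i + 0.5) * dx) * dx for i in range(steps))

def parallel_integral(a, b, workers=8, steps_per_slice=100_000):
    """Split [a, b] into equal slices and integrate them in parallel."""
    width = (b - a) / workers
    slices = [(a + i * width, a + (i + 1) * width, steps_per_slice)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(integrate_slice, slices))

if __name__ == "__main__":
    # The integral of sin(x) from 0 to pi is exactly 2.
    print(parallel_integral(0.0, math.pi))
```

Real ASC codes divide three-dimensional physics problems across thousands of processors that exchange boundary data as they run, but the basic pattern of partitioning work and combining partial results is the same.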
Still more processing power is needed, however, and that's where BlueGene/L takes supercomputing in a new direction. Unlike traditional supercomputers like Purple, with up to 15,000 powerful enterprise-class processors, the final BlueGene/L configuration will have more than 131,000 low-cost embedded commodity microprocessors like those found in control systems and automobiles, supplemented by floating point units. At one-half of its final configuration, BlueGene/L is already the world's fastest computer based on the industry-standard LINPACK benchmark, with a sustained performance of more than 135 teraflops out of a peak of 180 teraflops. The final configuration will have a peak of 360 teraflops when installation is completed this summer. BlueGene/L consumes very little power (just 2.5 megawatts to run and cool the computer) and occupies only 2,500 square feet of floor space. By comparison, the 100-teraflops Purple machine will require up to eight megawatts to run and cool the system and 7,000 square feet of floor space.
Just as Livermore has worked with a variety of computer manufacturers to develop new advances in supercomputing, those advances have generated new software tools to capitalize on the capability of the hardware, tools that, in turn, have a variety of commercial applications.
"Supercomputing has generated new software for such things as new computer codes, data storage, visualization and file sharing," says Karena McKinley, director of LLNL's Industrial Partnerships and Commercialization Office, "and these have also enabled advances in other software for commercial applications."
The DYNA3D ("Dynamics in 3 Dimensions") computer program, for example, was developed in the 1970s to model the structural behavior of weapons systems. It was later broadly released to research institutions and industrial companies and gained widespread acceptance as the standard for dynamic modeling. The list of companies that have used DYNA3D reads like a "Who's Who" of American industry: GE, General Motors, Chrysler, Boeing, Alcoa, General Atomics, FMC, Lockheed Martin and more. A 1993 study found that DYNA3D generates $350 million a year in savings for U.S. industry by allowing speedier release of products to market and by reducing the need for costly physical tests such as automobile crash tests.
Chromium, another widely used technology developed at LLNL, makes it possible to create sophisticated graphics and visualizations from the output of "commodity clusters," which are dozens or hundreds of interconnected personal computers operating in parallel as a supercomputer. Taking its name from clustered rendering, or Cr (the atomic symbol for the element chromium), this free, open-source software allows PC graphics cards to communicate and synchronize their commands to create single images from their combined data. More than 20,000 copies of Chromium have been downloaded since its release in August 2003, and the software received a 2004 R&D 100 Award from R&D Magazine as one of the year's top 100 technological advances.
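As a rough, hypothetical illustration of that idea (this is not Chromium code or its API), the sketch below has several worker processes each render one band of a simple fractal image, with the bands then composited into a single picture by the coordinating process:

```python
# Minimal sketch of cluster-style parallel rendering: each worker "renders"
# one horizontal band of an image, and the bands are stitched together into
# a single picture, printed here as ASCII art.
# Hypothetical example code, unrelated to Chromium's actual implementation.
from multiprocessing import Pool

WIDTH, HEIGHT, BANDS = 78, 24, 4

def render_band(band_index):
    """Render one horizontal band of a simple Mandelbrot image as text rows."""
    band_height = HEIGHT // BANDS
    rows = []
    for y in range(band_index * band_height, (band_index + 1) * band_height):
        row = []
        for x in range(WIDTH):
            c = complex(-2.0 + 2.8 * x / WIDTH, -1.2 + 2.4 * y / HEIGHT)
            z, count = 0j, 0
            while abs(z) < 2 and count < 40:
                z = z * z + c
                count += 1
            row.append("#" if count == 40 else " ")
        rows.append("".join(row))
    return band_index, rows

if __name__ == "__main__":
    # Render the bands in parallel, then composite them in order.
    with Pool(BANDS) as pool:
        bands = dict(pool.map(render_band, range(BANDS)))
    for i in range(BANDS):
        print("\n".join(bands[i]))
```

Chromium itself works at the level of graphics commands streamed to many GPUs rather than pixels computed in Python, but the end result is the same kind of composition: one coherent image assembled from pieces produced in parallel.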
Thanks to the continuing computing partnerships between government labs, industry and academia, "newer examples of application software are being generated right now," says McKinley. "We can soon expect a whole new generation of these programs for medical simulations, genetic computing, global climate modeling, aerospace and automotive design, financial models and many other domestic applications."
Even BlueGene/L won't be powerful enough to simulate all the complexities of matter at extreme pressures and temperatures. Looking to the future, Livermore hopes to acquire a petaflops (1 quadrillion floating point operations per second, or 1,000 teraflops) supercomputer by 2010.
McCoy, the deputy associate director for computation, calls today's supercomputer simulations "science of scale" because they represent extreme efforts to unlock nature's secrets.
"These simulations are similar to very large experiments in terms of the manpower and investment required before one can do the simulation or —€˜experiment,'" McCoy says. "In this sense, computing at this scale is perfectly aligned with the mission of a national laboratory: to provide and apply apparatus for unlocking nature's secrets that can be found nowhere else.
"The goal is to compute at a level of resolution and with a degree of physical accuracy that gives scientists confidence that the numerical error and inaccuracies in their simulations do not becloud the insights that they will enjoy from studying the results," he says. "This is an exciting time to be at Livermore."
Charles Osolin is a public information officer at Lawrence Livermore National Laboratory.
