Thursday, Jul 31, 2014

Dr. Avinash Kodi is developing the next generation of supercomputers. Photo courtesy of the Fritz J. and Dolores H. Russ College of Engineering and Technology.

Computer scientist developing next generation of supercomputers


With the huge amounts of data being processed for modern research into everything from climate modeling to genomics, developing supercomputers of significantly higher scale is a priority for today's computer scientists and engineers, including Avinash Kodi, associate professor of computer science at Ohio University's Fritz J. and Dolores H. Russ College of Engineering and Technology.

Kodi will spend the next several months developing an exascale computer, which would be 100 times faster than the current fastest supercomputer.

Working with computer chip manufacturer AMD in Austin, Texas, Kodi is helping build a supercomputer that can perform 10^18 floating point operations per second, or one exaFLOP per second. Floating point operations (such as 2.0310 + 4.172) are more difficult for a computer to execute than integer operations (such as 2 + 2), so floating point computation is generally used as the metric for evaluating supercomputer performance.
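
As a rough, hypothetical illustration of that metric, the Python sketch below times a batch of floating point multiply-adds and converts the result into operations per second; real supercomputer rankings rely on standardized benchmarks such as LINPACK, and none of the numbers here come from Kodi's project.

```python
# Illustrative only: estimate floating point throughput by timing a
# large batch of multiply-add operations (2 FLOPs per array element).
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
c = 2.0310 * a + b                  # one multiply + one add per element
elapsed = time.perf_counter() - start

flops_per_second = 2 * n / elapsed  # 2 floating point operations per element
print(f"~{flops_per_second / 1e9:.1f} GFLOP/s on this machine; "
      f"an exascale system targets 10**18 FLOP/s")
```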

Weather forecasting, nuclear dynamics, fluid thermodynamics and biotechnology demand this level of computational power, and several countries, including China, France, Germany and Belgium, have established research labs dedicated to exascale computing research.

"This is going to be the next frontier in computation," said Kodi, who received a National Science Foundation CAREER award in 2010 to support his research.

Providing computing speeds 100 times faster than today's presents several challenges, including power consumption and heat generation. Computing at this level generates enough heat to warm several large buildings, so new chip technology will have to rely on improved processing algorithms that optimize computing power while lowering heat output.

Government agencies have capped the power consumption of these systems at 20 megawatts, but as application sizes increase, so does the demand for computational power. That power can be delivered by increasing the number of computational nodes, the individual processor units that are clustered together to make up the computing system.
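
A quick back-of-envelope calculation, using only the figures quoted above, shows how tight that budget is:

```python
# What a 20 MW cap implies per operation at exascale (figures from the article).
power_watts = 20e6            # 20 megawatt system power cap
flops_per_second = 1e18       # one exaFLOP per second

joules_per_flop = power_watts / flops_per_second
print(f"{joules_per_flop * 1e12:.0f} picojoules per floating point operation")
# -> 20 pJ per operation, covering processors, memory and network,
#    far below what general-purpose systems of the time delivered.
```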

A processor is made of integrated circuits containing millions of transistors. The transistors repeatedly switch on and off during regular use and dissipate heat in the process, heating up the nodes as they work. With supercomputers, thousands of processors working together generate massive amounts of heat, so if there is no cooling or if power is not capped, the transistors will fail and the system will eventually stop working.
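
Most of that heat comes from dynamic switching power, which grows with how much circuitry toggles and how fast it runs. The sketch below applies the standard CMOS dynamic-power relation, P ≈ activity × capacitance × voltage² × frequency, with purely illustrative values; none of the numbers describe a specific chip.

```python
# Standard CMOS dynamic power: P ~ activity * capacitance * voltage**2 * frequency.
# All values below are illustrative assumptions, not measured chip figures.
activity = 0.5            # fraction of the switched capacitance toggling per cycle
capacitance = 1e-7        # aggregate switched capacitance of the chip (farads)
voltage = 1.0             # supply voltage (volts)
frequency = 2e9           # clock frequency (hertz)

power_watts = activity * capacitance * voltage**2 * frequency
print(f"~{power_watts:.0f} W of heat per processor from switching alone")
# Multiply that by thousands of processors and the cooling problem is clear.
```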

"As the system size increases, the underlying network that connects all the nodes should provide scalable bandwidth and more importantly, reduce the power consumption of the network," Kodi said. "I am evaluating network topologies and router architectures that can deliver the bandwidth at the power budget allocated to the network."

Another major challenge is system resiliency, meaning the system has to stay reliable even at enormous scale.

"The system should be available 99 percent of the time and failure rates have to be extremely low," he said. "When there are millions of operational components, device failure is natural."

Programming such a large system in a way that harnesses the computational power while maintaining a balanced system load is another major consideration.
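
The simplest form of that balancing act is static partitioning: splitting the work as evenly as possible across nodes so that none sit idle. The sketch below is a generic illustration of the idea, not the programming model used in Kodi's project.

```python
# Generic illustration of static load balancing: divide work items as
# evenly as possible across compute nodes.
def partition(num_items, num_nodes):
    """Return (start, end) index ranges, one per node, of nearly equal size."""
    base, extra = divmod(num_items, num_nodes)
    ranges, start = [], 0
    for node in range(num_nodes):
        size = base + (1 if node < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

# 10 work items over 4 nodes -> chunk sizes 3, 3, 2, 2
print(partition(10, 4))   # [(0, 3), (3, 6), (6, 8), (8, 10)]
```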

According to the World Future Society, the Obama administration has requested more than $100 million for the development of next-generation supercomputers, which cost about $20 million annually to operate.

"Building exascale computers is very expensive and is mostly used by government agencies such as the Department of Energy," Kodi said. "Therefore, it is critical that such investments are made by the government to build larger and faster machines to support future applications."