Category Archives: HPC

US to Deploy Open Compute-based Supercomputers for Nuclear Security

In the first publicly disclosed deployment by a government agency of computing hardware based on specs that came out of the Open Compute Project, the Facebook-led open source hardware and data center design initiative, the US Department of Energy has contracted Penguin Computing to install supercomputing systems at three national labs.

http://www.datacenterknowledge.com/archives/2015/10/21/us-deploy-open-compute-based-supercomputers-nuclear-security/

Penguin Computing Announces New Penguin Tundra Open Compute Servers Featuring Cavium’s 48 core ARMv8-A ThunderX™ Processors

FRANKFURT: Penguin Computing, the leader in developing open, Linux-based data center solutions for cloud and HPC, today announced the availability of Penguin’s Open Compute Project (OCP) compliant “Tundra” server family based on Cavium, Inc.’s 64-bit ARMv8 ThunderX workload-optimized processors. Penguin provides customized build-to-order server solutions for customers with specialized hardware requirements in the enterprise, financial, federal government, bioinformatics and Internet segments.

http://www.prnewswire.com/news-releases/penguin-computing-announces-new-penguin-tundra-open-compute-servers-featuring-caviums-48-core-armv8-a-thunderx-processors-300111923.html

Penguin Tundra Platform brings Open Compute Project to HPC

The Penguin Tundra cluster platform, based on Open Compute Project rack-level infrastructure, delivers the highest density and lowest total cost of ownership for high performance technical computing clusters. Large-scale HPC deployments will benefit from Tundra, which is designed to accommodate future exascale HPC components such as coprocessors and fabrics. Being an active member of the Open Compute Project community is a natural step for Penguin Computing, an early Linux pioneer that understands the benefits of community-driven solutions.

Evolving Open Compute Project Transforming HPC

Since its creation by Facebook in April 2011, the Open Compute Project (OCP) has been rapidly evolving – and that’s good news for the HPC community. As originally conceived by Facebook, OCP’s charter was to develop open standards for the design and delivery of the most efficient server, storage and data center hardware designs for scalable computing. It’s no surprise that particular emphasis was placed on large data centers focused on huge web workloads. The computational and energy requirements of Facebook’s 334,000 square foot Prineville, Oregon data center were a major motivator for the OCP initiative.

http://www.hpcwire.com/2014/03/03/evolving-open-compute-project-transforming-hpc/

Open Compute Project: The Future of Data Center Infrastructure or Just One Possibility?

The buzz surrounding Facebook’s Open Compute Project is increasing, with some predicting that it will reshape the entire enterprise-vendor relationship and throw a monkey wrench into longstanding sales and distribution channels. But while the initiative’s accomplishments so far are impressive, they are not as earth-shattering as they appear – at least not yet.

http://www.itbusinessedge.com/blogs/infrastructure/open-compute-project-the-future-of-data-center-infrastructure-or-just-one-possibility.html