Category Archives: Ecosystem
It was a long weekend for me here in Bangalore, thanks to a day off for voting in the General Elections on Thursday and a day off for Good Friday. Last week I researched multiple topics, including Massive Scale Computing, Web Scale Computing, and the Open Compute Project. For this week's food-for-thought reading I will go through the Open Compute Project and share what I learnt from it with my readers. The Open Compute Project initiative was announced in April 2011 by Facebook to openly share designs of data center products. The effort came out of a redesign of Facebook's data center in Prineville, Oregon. The leader of the effort is Frank Frankovsky.
Open Compute, or OCP for short, is an organization that was founded by Facebook in 2011. According to the organization’s website, Open Compute’s goal is to “build one of the most efficient computing infrastructures at the lowest possible cost.” Open Compute attempts to achieve this goal chiefly by: 1) eliminating “gratuitous differentiation” by hardware vendors and 2) making designs for hardware and data centers “open source” to foster innovation.
Facebook is leading a revolution in how enterprise hardware is built. About two and a half years ago, it launched the Open Compute Project (OCP) to create "open source" data center hardware. That means hardware vendors like HP, Dell and Cisco, who basically own the $150 billion data center hardware market, no longer control the product designs. Customers like Facebook and Goldman Sachs do. Because customers are the designers, OCP's hardware projects use fewer materials, cost less and perform better than what traditional vendors typically offer.
The launch of two new features into the Open Compute hardware specifications on Wednesday has managed to do what Facebook has been threatening to do since it began building its vanity-free hardware back in 2010. The company has blown up the server — reducing it to interchangeable components. With this step it has disrupted the hardware business from the chips all the way up to the switches. It has also killed the server business, which IDC estimates will bring in $55 billion in revenue for 2012.
Who are the world's biggest server sellers? No one really knows. Venerable research firms like IDC and Gartner will tell you that the server game is dominated by familiar names like IBM, Dell, HP, Cisco, and Fujitsu, but the truth is a bit more complicated. You see, the giants of the internet — the companies that need more servers than anyone — are buying massive amounts of custom-built gear straight from manufacturers in Asia, and they prefer to keep the specifics under wraps. These giants include Google, Amazon, and others.
Remember when “Other” was just a rounding error in market share reports? Now in the server market, it just might be the main event, as Facebook’s Open Compute project, cloud computing, and other trends drive buyers to no-name server vendors instead of IBM, HP, and Dell. Time to short the incumbents?
Server sales are up after a sluggish start to 2012, but growth isn't coming from the incumbent server vendors, say several new reports. The "other" category, which includes Quanta and other companies building custom servers for large cloud companies, had the most impressive growth, suggesting a significant impact for Open Compute designs at the expense of major server makers like Dell and HP.
Baidu (NASDAQ: BIDU), often referred to as the Google of China, has adopted new, low-power servers from Marvell Technologies (NASDAQ: MRVL). A large tech company adopting low-power servers isn’t typically news, but in this case it’s certainly news for the chip sector. Baidu’s new servers are built around ARM Holdings (NASDAQ: ARMH) chip designs, a key shift in server design. ARM dominates the smartphone chip market, providing the designs found in the majority of mobile devices, and now has eyes on the server market.
Nearly two years ago, Facebook unveiled what it called the Open Compute Project. The idea was to share designs for data center hardware like servers, storage, and racks so that companies could build their own equipment instead of relying on the narrow options provided by hardware vendors. While anyone could benefit, Facebook led the way in deploying the custom-made hardware in its own data centers. The project has now advanced to the point where all new servers deployed by Facebook have been designed by Facebook itself or designed by others to Facebook’s demanding specifications.
Facebook just made a potentially game-changing announcement. It got less fanfare than Tuesday’s announcement that it is going into the social search business, but this other announcement may have bigger long-term implications for the technology industry. Put simply, some of the world’s biggest computing systems just got a little cheaper, and a lot easier to configure. As a consequence, the companies that supply the hardware to these systems may have to scramble to remain as profitable. The reason is a Facebook-led open source project.
Facebook and the Open Compute Project (OCP) announced Wednesday that they’ve made huge strides toward the goal of setting standards for the most efficient server, storage and data center hardware available for scalable computing. Facebook launched OCP 18 months ago hoping to crowdsource the problem of creating better hardware for high-scale computing. From its start with one member, Facebook, and 200 participants, the group now has more than 50 member companies and saw more than 2,000 participants attend this week’s Open Compute Platform Summit in Santa Clara, Calif.
In its quest to figure out how best to build efficient, scalable data centers, the Open Compute Project is looking to students and other independent people for ideas. Thus the hackathon at the Open Compute Summit on Wednesday. While most hackathons focus on coding, Facebook and Open Compute are hoping to use tools from companies such as UpVerter and GrabCAD to make the collaborative problem-solving that occurs so easily around code happen in hardware as well. These companies offer web-based collaboration software, with UpVerter letting engineers share circuit designs and libraries and GrabCAD performing a similar service for mechanical designs.
Data centers have a nasty reputation for pollution. Now Facebook has launched a contest to solve one of the computing world’s biggest ecological problems. It wants to make computers out of biodegradable, compostable materials like cardboard, reports Stacey Higginbotham on GigaOM. The idea is part of Facebook’s Open Compute Project. That effort designs custom, eco-friendly servers for the social media giant, launching an entirely new segment of the hardware industry in the process.
CIO — Facebook's state-of-the-art data center in Oregon uses 38 percent less power and costs 24 percent less to run than its older data centers. These figures are astounding, and they should certainly make any cost-conscious CIO sit up and take notice. What's the secret behind these savings? The company honored its hacker roots by custom-designing both the data center itself and the servers (and management tools) inside it, from the ground up. It's akin to what Google has been doing for the past 10 years or so, but the good news is that, unlike Google, Facebook has not kept what it has achieved, or how it has achieved it, a secret at all.
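To get a feel for what a 38 percent power reduction means in dollar terms, here is a minimal back-of-the-envelope sketch. The baseline load of 10 MW and the $0.07/kWh electricity price are assumptions for illustration only; they are not figures from the article — only the 38 percent reduction is.

```python
# Back-of-the-envelope savings from a 38% power reduction.
# Baseline load (10 MW) and electricity price ($0.07/kWh) are
# hypothetical assumptions; only the 38% figure comes from the article.
baseline_mw = 10.0
power_reduction = 0.38       # Prineville uses 38% less power
price_per_kwh = 0.07         # assumed average electricity price

hours_per_year = 24 * 365
baseline_kwh = baseline_mw * 1000 * hours_per_year   # MW -> kWh/year
saved_kwh = baseline_kwh * power_reduction
saved_dollars = saved_kwh * price_per_kwh

print(f"Energy saved per year: {saved_kwh:,.0f} kWh")
print(f"Cost saved per year:   ${saved_dollars:,.0f}")
```

Under these assumed numbers the reduction works out to tens of millions of kilowatt-hours and a seven-figure annual electricity saving, which is why even cost-conscious CIOs outside the web-scale world pay attention to these designs.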
Something extraordinary is happening at Facebook. The company is working on an idea that could disrupt some of the largest enterprise tech companies in the world, like IBM, HP, and Dell. Facebook is leading a project that pushes hardware vendors into a new, open-source way of building servers. It's called the Open Compute Project. Its goal is to do for commercial hardware what Linux did for commercial software — change the way it is designed, built, sold and supported.