Google Reincarnates Dead Paper Mill as Data Center of Future

Joe Kava found himself on the southern coast of Finland, sending robotic cameras down an underground tunnel that stretched into the Baltic Sea. It's not quite what he expected when he joined Google to run its data centers.

In February of 2009, Google paid about $52 million for an abandoned paper mill in Hamina, Finland, after deciding that the 56-year-old building was the ideal place to build one of the massive computing facilities that serve up its myriad online services. Part of the appeal was that the Hamina mill included an underground tunnel once used to pull water from the Gulf of Finland. Originally, that frigid Baltic water cooled a steam generation plant at the mill, but Google saw it as a way to cool its servers.

Those robotic cameras -- remote-operated underwater vehicles that typically travel down oil pipelines -- were used to inspect the long-dormant tunnel, which ran through the solid granite bedrock sitting just beneath the mill. As it turns out, all 450 meters of the tunnel were in excellent condition, and by May 2010, it was moving sea water to heat exchangers inside Google's new data center, helping to cool down thousands of machines juggling web traffic. Thanks in part to that granite tunnel, Google can run its Hamina facility without the energy-sapping electric chillers found in the average data center.

"When someone tells you we've selected the next data center site and it's a paper mill built back in 1953, your first reaction might be: 'What the hell are you talking about?,'" says Kava. "'How am I going to make that a data center?' But we were actually excited to learn that the mill used sea water for cooling.... We wanted to make this as a green a facility as possible, and reusing existing infrastructure is a big part of that."

Kava cites this as a prime example of how Google "thinks outside the box" when building its data centers, working to create facilities that are both efficient and kind to the world around them. But more than that, Google's Hamina data center is the ideal metaphor for the internet age. Finnish pulp and paper manufacturer Stora Enso shut down its Summa Mill early in 2008, citing a drop in newsprint and magazine-paper production that led to "persistent losses in recent years and poor long-term profitability prospects." Newspapers and magazines are slowly giving way to web services along the lines of, well, Google, and some of the largest services are underpinned by a new breed of computer data center -- facilities that can handle massive loads while using comparatively little power and putting less of a strain on the environment.

Google was at the forefront of this movement, building new-age facilities not only in Finland, but in Belgium, Ireland, and across the U.S. The other giants of the internet soon followed, including Amazon, Microsoft and Facebook. Last year, Facebook opened a data center in Prineville, Oregon, that operates without chillers, cooling its servers with the outside air, and it has just announced that it will build a second facility in Sweden, not far from Google's $52-million Internet Metaphor.

The Secrets of the Google Data Center

Google hired Joe Kava in 2008 to run its Data Center Operations team. But this soon morphed into the Operations and Construction team. Originally, Google leased data center space inside existing facilities run by data center specialists, but now, it builds all its own facilities, and of late, it has done so using only its own engineers. "We used to hire architecture and engineering firms to do the work for us," Kava says. "As we've grown over the years and developed our own in-house talent, we've taken more and more of that work on ourselves."

Over those same years, Google has said precious little about the design of the facilities and the hardware inside them. But in April 2009, the search giant released a video showing the inside of its first custom-built data center -- presumably, a facility in The Dalles, Oregon -- and it has since lifted at least part of the curtain on newer facilities in Hamina and in Saint-Ghislain, Belgium.

According to Kava, both of these European data centers operate without chillers. Whereas the Hamina facility pumps cold water from the Baltic, the Belgium data center uses an evaporative cooling system that pulls water from a nearby industrial canal. "We designed and built a water treatment plant on-site," Kava says. "That way, we're not using potable water from the city water supply."

For most of the year, the Belgian climate is mild enough to keep temperatures where they need to be inside the server room. As Kava points out, server room temperatures needn't be as low as they traditionally are. As recently as August 2008, the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) recommended that data center temperatures range from 68 to 77 degrees Fahrenheit -- but Google was advising operators to crank the thermostat to above 80 degrees.

"The first step to building an efficient data center...is to just raise the temperature," Kava says. "The machines, the servers, the storage arrays, everything -- they run just fine at much much more elevated temperatures than the average data center runs at. It's ludicrous to me to. walk into a data centers that's running at 65 or 68 degrees Fahrenheit or less."

There are times when it gets so hot inside the data centers that Google will order employees out of the building -- but keep the servers running. "We have what we call 'excursion hours' or 'excursion days.' Normally, we don't have to do anything [but] tell our employees not to work in the data center during those really hot hours and just catch up on office work."

At sites like Belgium, however, there are days when it's too hot even for the servers, and Google will actually move the facility's work to one of its other data centers. Kava did not provide details, but he did acknowledge that this data center shift involves a software platform called Spanner. This Google-designed platform was discussed at a symposium in October 2009, but this is the first time Google has publicly confirmed that Spanner is actually in use.

"If it really, really got [hot] and we needed to reduce the load in the data center," Kava says, "then, yes, we have automatic tools and systems that allow for that, such as Spanner."

According to the presentation Google gave at that 2009 symposium, Spanner is a "storage and computation system that spans all our data centers [and that] automatically moves and adds replicas of data and computation based on constraints and usage patterns." This includes constraints related to bandwidth, packet loss, power, resources, and "failure modes" -- i.e., when stuff goes wrong inside the data center.
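
To picture what "moving computation based on constraints" means in practice, here is a minimal Python sketch. It is only a schematic of the idea in that description, not Spanner's actual design or interface -- Google has published almost nothing about how the system works, and every name and number below is invented.

```python
# Schematic sketch only -- this is not Spanner's design or API.
# Pick where to run work based on constraints like power headroom,
# packet loss, and failure modes; all names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_power_kw: float   # power headroom left at the site
    packet_loss: float     # observed loss rate to the rest of the fleet
    healthy: bool          # False during a failure mode or a hot "excursion day"

def place_load(sites, needed_kw):
    """Choose a healthy site with enough power, preferring low packet loss."""
    candidates = [s for s in sites if s.healthy and s.free_power_kw >= needed_kw]
    if not candidates:
        raise RuntimeError("no site satisfies the constraints")
    return min(candidates, key=lambda s: (s.packet_loss, -s.free_power_kw))

fleet = [
    Site("site-a", free_power_kw=900.0,  packet_loss=0.001, healthy=True),
    Site("site-b", free_power_kw=1500.0, packet_loss=0.002, healthy=False),  # too hot today
    Site("site-c", free_power_kw=700.0,  packet_loss=0.003, healthy=True),
]
print("shift load to:", place_load(fleet, needed_kw=500.0).name)
```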

The platform illustrates Google's overall approach to data center design. The company builds its own stuff and will only say so much about that stuff. It views technology such as Spanner as a competitive advantage. But one thing is clear: Google is rethinking the data center.

The approach has certainly had an effect on the rest of the industry. Like Google, Microsoft has experimented with data center modules -- shipping containers prepacked with servers and other equipment -- that can be pieced together into much larger facilities. And with Facebook releasing the designs of its Prineville facility -- a response to Google's efforts to keep its specific designs a secret -- others are following the same lead. Late last year, according to Prineville city engineer Eric Klann, two unnamed companies -- codenamed "Maverick" and "Cloud" -- were looking to build server farms based on Facebook's chillerless design, and it looks like Maverick is none other than Apple.

Large Data Centers, Small Details

This month, in an effort to show the world how kindly its data centers treat the outside world, Google announced that all of its custom-built US facilities have received ISO 14001 and OHSAS 18001 certification -- internationally recognized standards that rate the environmental kindness and safety not only of data centers but all sorts of operations.

This involved tracking everything from engineering tools to ladders inside the data center. "You actually learn a lot when you go through these audits, about things you never even considered," Kava says. His point is that Google pays attention to even the smallest details of data center design -- in all its data centers. It will soon seek similar certification for its European facilities as well.

In Finland, there's a punchline to Google's Baltic Sea water trick. As Kava explains, the sea water is just part of the setup. On the data center floor, the servers give off hot air. This air is transferred to water-based cooling systems sitting next to the servers. And Google then cools the water from these systems by mixing it with the sea water streaming from the Baltic. When the process is finished, the cold Baltic water is no longer cold. But before returning it to the sea, Google cools it back down -- with more cold sea water pulled from the Baltic. "When we discharge back to the Gulf, it's at a temperature that's similar to the inlet temperature," Kava says. "That minimizes any chance of environmental disturbance."
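
The tempering step is, at bottom, simple mixing arithmetic. The sketch below is illustrative only -- the temperatures and flow ratios are invented, since Google hasn't published Hamina's figures -- but it shows how blending the warmed return water with enough fresh cold intake pushes the discharge temperature back toward the inlet temperature.

```python
# Illustrative only -- all temperatures and flow ratios are invented;
# Google has not published Hamina's actual figures.

def mix_temperature(flow_a, temp_a, flow_b, temp_b):
    """Temperature of two blended water streams (simple energy balance)."""
    return (flow_a * temp_a + flow_b * temp_b) / (flow_a + flow_b)

inlet_c = 6.0    # assumed temperature of the Baltic intake water, deg C
warmed_c = 16.0  # assumed temperature of water leaving the heat exchangers
warm_flow = 1.0  # relative flow of the warmed return water

# Blend in progressively more cold sea water before discharging to the Gulf.
for cold_flow in (0.0, 1.0, 3.0, 6.0):
    discharge = mix_temperature(warm_flow, warmed_c, cold_flow, inlet_c)
    print(f"cold-to-warm ratio {cold_flow:.0f}:1 -> discharge at {discharge:.1f} C")
```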

According to Kava, the company's environmental permits didn't require that it temper the water. "It makes me feel good," he says. "We don't do just what we have to do. We look at what's the right thing to do." It's a common Google message. But Kava argues that ISO certification is proof that the company is achieving its goals. "If you're close to something, you may believe you're meeting a standard. But sometimes it's good to have a third party come in."

The complaint, from the likes of Facebook, is that Google doesn't share enough about how it has solved particular problems that will plague any large web outfit. Reports, for instance, indicate that Google builds not only its own servers but its own networking equipment, but the company has not even acknowledged as much. That said, over the past few years, Google has certainly been sharing more.

We asked Joe Kava about the networking hardware, and he declined to answer. But he did acknowledge the use of Spanner. And he talked and talked about that granite tunnel and the Baltic Sea. He even told us that when Google bought that paper mill, he and his team were well aware that the purchase made for a big fat internet metaphor. "This didn't escape us," he says.