Google Servers

After reading the article at http://news.cnet.com/8301-1001_3-10209580-92.html I figured it had to be a hoax.  I mean, that just looks like it’s looking for trouble.  However, after about a 20-minute discussion about it with one of the guys working at Google, I am now reasonably convinced that this may very well be for real.

The first question I got asked was why it doesn’t seem legit, and honestly, I had to think about that.  Just looking at the picture at the top, I guess one could simply say “well, it just doesn’t look right!”, or “heck, who would put the power supply in such a weird position?”.  The most applicable answer is simply that the article was released on April 1st.

Well, for one, consider the 12V battery.  DC-to-DC converters exist, and they can definitely do the job properly, so getting 3.3V and 5V isn’t a problem.  This does (as one of the comments on the article also states) eliminate the wasteful inverter/PSU conversion (a UPS generally stores 12V or 24V anyway, or even 96V in some cases).  The point being that going DC -> AC and back to DC is quite wasteful.  So whilst I can’t comment on UPS cost vs battery cost, it does make sense to try and eliminate inverters.
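To put some rough numbers on that, here’s a quick back-of-envelope sketch in Python.  The conversion efficiencies and the 250W load are purely my own assumptions, not figures from the article:

def chain_efficiency(*stages):
    """Overall efficiency of conversion stages applied in series."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Conventional path: battery/UPS -> inverter (DC to AC) -> server PSU (AC to DC)
ups_path = chain_efficiency(0.90, 0.85)   # both efficiencies assumed

# Google-style path: 12V battery on the board, one DC-DC stage for 5V/3.3V
dc_path = chain_efficiency(0.95)          # assumed

for name, eff in [("DC -> AC -> DC via UPS and PSU", ups_path),
                  ("12V battery with DC-DC converters", dc_path)]:
    watts_lost = (1 - eff) * 250          # assuming a ~250W server
    print(f"{name}: {eff:.0%} efficient, ~{watts_lost:.0f}W wasted per server")

Even with generous numbers, the double conversion loses several times more power per server, which is exactly the waste the 12V battery design avoids.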

Hanging the PSU out the back also makes sense, mostly in terms of heat dissipation.  What doesn’t make sense to me is the position of the battery, with the PSU at (what I’m guessing is) the back of the rack and the battery at the front.  Also, power is generally fed from the back, but seeing as these things (according to the diagram showing the container layout) go against the side walls, I’m actually thinking it makes sense: batteries may need to be replaced from time to time, and you probably want all the connectors on one side so you can slide the “server” into position and then just plug everything in.  If the PSU dies the board has probably powered down anyway, so in those cases pulling the server in order to replace the PSU is probably acceptable.  Replacing drives … not sure whether that would actually happen “as is” without powering down and pulling the server in any case.

It’s definitely cheap.  I mean, look at that case, it’s essentially a piece of bent sheet metal!  Velcro keeps the PSU and hard drives in place (SATA is hot-swap by design, so it’s the cheapest hot-swap bay I’ve seen in my life).

The one thing in there that is not cheap is the motherboard.  There are no audio ports on the back, indicating that it is actually a server board, along with 8 RAM slots.  Dual CPUs are also quite handy; generally, more processors in a single box is good for concurrency.

Now think about something like ScaleMP that binds multiple boards into a single system.  It has its overheads, but how much CPU do you really need for processing a single query?  I’m guessing it’s mostly not CPU bound; in large databases it’s usually the disks that end up being your bottleneck (which is why proper indexing and object caches are so important).
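As a rough illustration of that (all the figures below are my own assumptions, just to show the orders of magnitude involved):

SEEK_MS = 8.0            # one random seek on a 7200rpm disk, roughly 8ms (assumed)
CPU_OPS_PER_SEC = 2e9    # one core at ~2GHz, about one simple op per cycle (assumed)

# How much CPU work fits into the time of a single random disk seek?
ops_per_seek = CPU_OPS_PER_SEC * (SEEK_MS / 1000.0)
print(f"One seek costs roughly {ops_per_seek:,.0f} CPU operations")

# A query needing 100 random reads vs one that a decent index narrows to 2:
print(f"100 random reads: ~{100 * SEEK_MS:.0f}ms, with an index: ~{2 * SEEK_MS:.0f}ms")

Millions of CPU operations fit into a single seek, which is why shaving disk accesses with indexes and caches matters far more than adding cores.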

The width of that unit is probably not the standard 19″ either.  The 19″ servers I’ve seen could fit the hard drives next to the board, or even a PSU next to the board (especially the smaller form factor PSUs typically used in branded servers).  Comparing this to a branded server isn’t really fair in my opinion.  Branded servers are built to be sold, and thus need to look good, obviously advertise the brand, and also actually work.  They’re not built to be cheap, they’re built to make money.

On the other hand, Google’s goal is far different.  They really couldn’t care less what it looks like; they need practical, efficient, and CHEAP.  Most of us run one or two servers per task, so we need them to be as reliable as humanly possible.  On the other hand, if you’ve got thousands of servers assigned to the same task and one of them dies, who cares?  You replace it and go on with your life.
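A quick sketch of why that works out at scale (the fleet size and failure rate below are assumptions I’ve pulled out of thin air):

fleet = 10_000   # servers assigned to one task (assumed)
afr = 0.04       # 4% of servers fail per year (assumed)

print(f"Expected failures per day across the fleet: {fleet * afr / 365:.1f}")

# The flip side for those of us running only a couple of boxes:
print(f"Chance a 2-server setup loses a box this year: {1 - (1 - afr) ** 2:.1%}")

At that scale a failure or two per day is just routine maintenance, whereas for a two-server setup any single failure is an incident.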

The one thing I’m pretty sceptical about is cooling inside those containers.  I can imagine it gets pretty hot inside them; even without heat-generating components they’re NOT the coolest of things, and I do NOT want to know what they feel like with 1160 “servers” running inside of them.
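Just to get a feel for the scale of the problem, here’s a back-of-envelope heat calculation.  The 1160 count comes from the article; the per-server wattage and temperature rise are my guesses:

servers = 1160
watts_each = 250             # assumed average draw per server
heat_kw = servers * watts_each / 1000
print(f"~{heat_kw:.0f} kW of heat to remove from a single container")

# Airflow needed for a given temperature rise:
# power [W] = mass flow [kg/s] * specific heat of air (~1005 J/kg.K) * delta T
delta_t = 15                 # allow a 15 degree rise across the servers (assumed)
mass_flow = servers * watts_each / (1005 * delta_t)
print(f"That needs roughly {mass_flow:.0f} kg/s of air (~{mass_flow / 1.2:.0f} m^3/s)")

Somewhere in the region of a few hundred kilowatts per container, which explains the dedicated chillers.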

I can’t comment on the power consumption and heat output, as I know almost nothing about this other than that more heat implies more cooling, so you want things to be as power efficient as possible.  Not an easy task.

This must be the strangest blog post I’ve written yet.  I’d love to see one of these in real life.  With Google data centre(s) now coming to ZA, who knows …

2 Responses to “Google Servers”

  1. Juggernaut42 says:

    I has been fooled. damn.

  2. Johan Kok says:

    Go to google images and have a look at the chillers on the outside of some, while others float in the sea (rather bay).

    We stack 5 2Us vertically in a 6U “chassis” – including a battery (as a last resort backup), a 3KVA+ UPS at the bottom of the rack, and an outside generator. – also no big UPSs – as that would be but one more “single” point of failure.

    As for the “special 12v MB’s” — none of that for us….

    As for disks – we’re dropping the mechanical disks for solid state 2TB+ drives. – expensive, but saving quite a bit of power, and being fast at the same time.