Why local hosting providers are better…

We all hear how it's good to buy our produce locally, but what about buying web hosting services locally? Interestingly, your company will be better off if you host with a local provider, and it has nothing to do with better support and everything to do with search engines and regionality.

Search Engines such as Google or Bing like LOCAL results

When you do an internet search, the search engine knows where you are and in turn will display search results that are close to your region. For example, if you do a search for lawyers, Google will undoubtedly return results for law offices in your area. It does this using IP geolocation. From your IP address, Google has a rough idea of where you are in the world.

The same technology that determines where you are located can be used to determine where a website/company is located. Now let's be honest, search engines know that most websites are hosted outside their served region. However, if your website is hosted from an IP with a geographic footprint of, say, Ohio, and your business is in Ohio and mentions this in the indexable content on your site (mailing address, area code, etc.), the combination of these two things is very positive. It tells Google you are both physically local and cyber local to that area. All of this is good stuff for your ranking.
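If you're curious how that IP-to-region mapping works in practice, here is a minimal sketch using Python's geoip2 package against MaxMind's free GeoLite2 City database. The database path and sample address are placeholders, and this is just one geolocation source, not necessarily what Google itself uses:

```python
# Rough sketch of an IP-to-region lookup, assuming the geoip2 package
# (pip install geoip2) and a downloaded copy of MaxMind's free GeoLite2
# City database. The path and sample IP below are placeholders.
import geoip2.database
from geoip2.errors import AddressNotFoundError

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    try:
        # Swap in your web server's public IP; the documentation-range
        # address below won't resolve against a real database.
        resp = reader.city("203.0.113.25")
        print(resp.country.iso_code)                  # e.g. "US"
        print(resp.subdivisions.most_specific.name)   # e.g. "Ohio"
        print(resp.city.name)                         # e.g. "Columbus"
    except AddressNotFoundError:
        print("No location data for this address")
```

If the region that comes back matches the locality already spelled out in your site's content, that is exactly the kind of agreement the search engines like to see.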

So go ahead and host your website with a local provider!


Cloud Computing and VPS Confusion

I hate terminology. I especially hate terminology when people get things confused and use it improperly. My latest annoyance is the confusion and miscommunication of the terms Cloud Computing and VPS (Virtual Private Server).

The problem is that some hosting providers use the term cloud computing synonymously with VPS, and as a result the public now thinks they are one and the same. People sometimes call me asking if I do Cloud services, and I know they are talking about VPS, but they have been convinced that Cloud services are what they need. They mainly do this because Cloud Computing sounds better than VPS, and much of the mass media lately has started to emphasize “The Cloud” as a viable product for everyone.

What is VPS?

VPS is virtualization: the concept of running multiple server OSes inside a single physical server. If you have a small hosting environment, VPS is ideal. Your VPS will sit on a single server with 10-15 other VPSes that belong to other customers. You and those other customers all share the resources of a single server. VPS is not Cloud, because everything resides on a single physical machine; that machine always provides your VPS with CPU cycles, RAM, and storage.

What is Cloud Computing?

Cloud computing has been around for years. True cloud computing is the concept of a large pool of servers that work together to provide CPU cycles for computation. End-users send computational work into the cloud, the cloud processes it very quickly, and then it returns the computed result. From a fiscal standpoint, you only pay for the brief amount of time the cloud needed to work on your computations.
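As a loose illustration of that model – and only an illustration, since this runs on one machine rather than a real pool of servers – here is a short Python sketch where independent jobs go into a worker pool and you "pay" only for the measured compute time:

```python
# A local-machine analogy for "send work into the pool, get results back,
# pay for the compute time used." A real cloud replaces this process pool
# with a large pool of servers; this only sketches the concept.
import time
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    """Stand-in for a computational job submitted to the pool."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8                       # independent units of work
    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:          # the "pool" doing the work
        results = list(pool.map(crunch, jobs))
    billable = time.perf_counter() - start       # the only window you pay for
    print(f"{len(results)} jobs finished in {billable:.2f}s of pool time")
```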

Cloud computing is obviously not VPS. The majority of VPSes are used to host data, and data hosting has absolutely no need for Cloud computing, since hosting is inherently very light on CPU processing.


Why does my datacenter feel warm?

As soon as November approaches, I start hearing this question more and more. Someone will come in from outside, walk into the datacenter, and immediately say, “It feels warm in here, is something wrong?”

The answer is NO, nothing is wrong.

When you drive home from work, get out of your car, and walk 10 yards to your front door (in 30-degree F weather), as soon as you get inside, the first thing that probably goes through your head is… “Ahh… It's nice and warm inside”. And that is with your house thermostat set to 70 degrees F.

A good datacenter will hold a solid 72 degrees F and 35% relative humidity year-round. So yes, in the winter months, when you come in from the outside cold (30-40 degrees F) and walk into a 72-degree F room that happens to be a datacenter, you should feel warm. But warm is a relative sensation. When it's 80 degrees F outside and you walk into the same datacenter, the first thing you say is… “Ahh… It's nice and cold in here”.

The datacenter is always 72 degrees F, but in the winter that feels comfortably warm to our bodies after we were just exposed to 35-degree F outside temperatures. So relax, it's not “warm” in the datacenter.


Bad Locations for a Datacenter

Because I am a native of Philadelphia, I am very familiar with the area and the history of certain commercial development areas. A few months ago, an outside firm started a data center facility in the old Philadelphia Navy Yard. This is the worst place in Philadelphia to build a telecom facility. Since 1998, many local telecom companies have looked at and passed on the Navy Yard. Why? For starters, there is ZERO fiber optic access there. There aren’t even aerial poles. The copper service that does exist from the local LEC (Verizon) is underground and heavily corroded due to flooding. Oh yes, flooding, there's that too!

The new firm most likely took on a heavy expense demarcing fiber into the building through the Navy Yard grounds. So they effectively have a single fiber entrance. Again, not ideal. Not very carrier diverse either, since all the carriers will be on the same fiber trunks in the same conduit. And loop service access (DS-1 and DS-3 cross connects for MPLS) will be terrible.

It's a shame they didn’t reach out to local telecom experts, since the overwhelming consensus would have been to stay away from the Navy Yard. There are three solid carrier hotel buildings in Philly, all of which have diverse power, multiple diverse fiber entrances, and the list goes on. The Navy Yard was probably picked by this firm because of the low operating cost and tax incentives, but at the end of the day, not being in a truly carrier-diverse building will hurt over time.


Watch out for new email harvesting techniques…

The top two new techniques for email harvesting are Craigslist ads and Facebook friend requests.

If you use Facebook, I’m sure you’ve received a few random friend requests from people you don’t know. These requests are actually email harvesting attempts. If you reply or accept the friend request, they will most likely grab your real email – which was the original intended purpose.

The Craigslist ad technique is similar. If you post an ad on Craigslist, there is a very good chance someone will reply. Not all the replies are from real people, however; in fact, many of the replies are from robots. The response will be generic, for example, “I’m interested, contact me”. If you reply to that email, boom, the robot just grabbed your real email address.


New Datacenter Building Methodologies

Having been involved in multiple data center build projects, I wanted to share some opinions on what I think are the most important core features to focus on.

The first hurdle of any building project is electrical power. Any location with nearby poles will have access to either 7,200 or 13,200 volt 3-phase high voltage. But some areas don't have enough capacity on the pole to support a typical data center load of 1-2 megawatts. And even if the local grid can support it, the power might not be diverse, or might be mostly aerial. Now, some design firms stress having underground diverse power. Underground power is easy if you're in an urban metro; diverse, not so much. I have come to a surprising conclusion, however: it doesn't really matter!

Today’s Generators are nothing like those of the past…

If your facility has a single aerial power feed, that's not too big of a deal these days, especially if you intend to install an N+1 redundant generator infrastructure with a few thousand gallons of fuel. That's enough capacity to run for a few days, and with multiple generators you can survive a significant localized mechanical failure on top of your grid failure. And let's face it, when was the last time an area lost power for more than five days? Modern generators have an amazing performance record and can run for days on end. If you have three 1.5 MW generators for a datacenter with a 1.5 MW running load, let's face it, you're never going dark.
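To put rough numbers on that "few days" figure, here is a back-of-envelope runtime calculation. The fuel-burn rate is an assumed rule of thumb (roughly 7 gallons per hour per 100 kW of load for a diesel genset), so substitute the numbers from your own spec sheets:

```python
# Back-of-envelope generator runtime. The default burn rate (~7 gal/hr per
# 100 kW of load) is a rough diesel rule of thumb, not a spec for any
# particular unit.
def runtime_hours(fuel_gallons, running_load_kw, gal_per_hr_per_100kw=7.0):
    """Hours of runtime for a given on-site fuel reserve and running load."""
    burn_rate = (running_load_kw / 100.0) * gal_per_hr_per_100kw  # gal/hr
    return fuel_gallons / burn_rate

# Example: 1.0 MW running load with 5,000 gallons on site -> roughly 3 days.
print(round(runtime_hours(5000, 1000), 1), "hours")
```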

Location is the most important aspect to a datacenter…

The location criteria are twofold. First, you want your location to be in an area with minimal natural disasters, so the probability of flooding and other conditions that could cause power loss is limited. Second, you want to be very close to diverse fiber optic routes. The number one cause of datacenter outage time is not power or cooling, it's IP access. If your facility sits at the crossroads of multiple major networks and has access to fully diverse fiber routes, you greatly reduce the possibility of an outage. Mechanicals can be solved by buying the right gear and applying good design, but your network quality will always hinge upon your access to fiber.


To ping or not to ping… that is the question!

The other day I had a customer call and complain about high ping latency between our router and his server. I asked, "What are you pinging?" "The default gateway," he replied. Well, there's your problem. Ping one of our servers, and it will look fine. The customer did not understand, and simply wouldn't accept my answer that seeing spikes in ping latency on the Ethernet handoff between his server and my router is normal.

Unfortunately, many people use ping to diagnose problems, but they don't understand exactly how to interpret the results. First, not all latency is bad. Some devices are slow to respond because there is an issue causing a problem. But sometimes, a device is slow to respond because it doesn't feel like responding right away. Huh? It's called priority queuing. When you ping one server from another server, that ping is treated as high priority by the receiving server. The recipient server responds as fast as it can, just as it would for any other request. But when you ping a router, the router couldn't care less about that ping. Routers are designed to treat pings as the lowest-priority request; they will get around to it after they finish the other, more important stuff they're doing. Two routers right next to each other might show 3 ms latency, with intermittent spikes to 20 ms – perfectly normal.

Interpreting ping data is a balance of latency and packet loss. The two routers might show latency spikes, but upon closer inspection there is ZERO packet loss, even after 10,000 pings. Conversely, you could have two routers with stable low latency between them but 3 or 4 percent packet loss. So you have to look at all aspects of the ping result set and the overall environment.
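Here is a small sketch of that "look at both" idea, using made-up RTT samples in milliseconds with None standing in for a lost packet; the sample data is purely illustrative:

```python
# Summarize a set of ping results, weighing latency and loss together.
# RTT samples are in milliseconds; None marks a lost packet. The sample
# data below is made up for illustration.
def summarize(rtts_ms):
    replies = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(replies)) / len(rtts_ms)
    avg = sum(replies) / len(replies) if replies else float("nan")
    worst = max(replies) if replies else float("nan")
    return {"avg_ms": round(avg, 1), "max_ms": worst, "loss_pct": round(loss_pct, 1)}

# Spiky but lossless -- typical of a router de-prioritizing ICMP replies:
print(summarize([3, 3, 20, 3, 18, 3, 3, 3]))
# Low, stable latency but with drops -- the result that actually signals trouble:
print(summarize([3, 3, None, 3, 3, None, 3, 3]))
```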


New Cooling Technologies for Data Centers – Green or Not?

Once again, vendors are ramping up with new and advanced data center cooling technologies; in fact, I have received many calls just in the last two months. There is a common thread: they claim up to an 80% reduction in energy costs. Wow, that's a big savings, is there a catch? Sort of… There are some technologies that can reduce electrical consumption, so it's not an entirely false statement, but there is a big catch – and it's not a "Green" technology by any means. I will explain.

Typical data center providers use Liebert cooling: basic DX (direct expansion). You have a floor unit that contains a compressor and evaporator coil, and heat is rejected to an air-cooled or glycol-cooled condenser. These units use approximately 1.5 kW per ton of cooling, so a 40-ton data center installation will have about 60 kW of electrical usage just for the Liebert AC units. Can we get that down to, say, 10 kW for 40 tons? YES. Here's how…

Evaporative cooling has been around for years. In fact, most large buildings use evaporative cooling instead of air-cooled dry coolers because an evaporative cooling tower takes up much less space. These cooling towers are fairly simple: big fans, lots of airflow, a really big heat-transfer coil (with glycol circulating), and a water source that sprays liquid onto the coil for it to evaporate off. Even in the summer, a 100-ton evaporative cooling tower can easily reduce circulating glycol temperatures from 100 degrees F at the inlet to 60 degrees F at the outlet.

The low-energy cooling technologies being advertised are basically non-DX, non-compressor solutions. They tend to be rack based: right next to the rack cabinet is a coil with glycol circulating from the evaporative cooling tower. The side cabinet sucks hot air from the rear of the rack, cools it across the coil, and supplies the cool air back to the front of the cabinet. The coil is about 60 degrees F or so, and with enough airflow, that will cool an average-size rack. The rejected heat goes to the roof tower and is dissipated through evaporation.

So if this works, and uses less energy, why doesn't everybody do it? Simple. One piece of information has been left out. Evaporative cooling towers use a HUGE amount of water to perform this kind of cooling. Instead of electricity and Freon in a closed DX circuit, they use water and physics, but water is a resource and it's not cheap. A 100-ton tower at max capacity (which is where it would be to get the glycol outlet down to 60 degrees F) will use about 5,000 gallons of water a day. Not only is that a huge waste of water, but you are only shifting cost. Yes, your electric bill will be lower, but your water bill will be insane, somewhere around $2000/month.
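To see the cost shift in rough numbers, here is a quick comparison built from the figures above; the electricity and water rates are assumptions (around $0.12/kWh and $13 per thousand gallons), so plug in your local tariffs:

```python
# Rough monthly cost comparison: 40 tons of DX cooling vs. a low-energy
# evaporative/glycol setup. Utility rates are assumptions; adjust locally.
HOURS_PER_MONTH = 24 * 30

def dx_monthly_cost(tons, kw_per_ton=1.5, rate_per_kwh=0.12):
    return tons * kw_per_ton * HOURS_PER_MONTH * rate_per_kwh

def evap_monthly_cost(fan_pump_kw, gallons_per_day,
                      rate_per_kwh=0.12, rate_per_kgal=13.0):
    electric = fan_pump_kw * HOURS_PER_MONTH * rate_per_kwh
    water = gallons_per_day * 30 / 1000.0 * rate_per_kgal
    return electric + water

print(round(dx_monthly_cost(40)))          # ~$5,184 in electricity
print(round(evap_monthly_cost(10, 5000)))  # ~$864 electric + ~$1,950 water
```

The electric portion drops dramatically, but under these assumptions roughly $2,000 of the saving comes straight back as water.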

It's common sense: if there were a better cooling solution, we'd have it. Data center providers are already using the most efficient systems, since cost is already a major concern. The fact is, cooling is already about as efficient as it can be. These modified systems may work for some people; for example, if you have a huge underground source of well water that is "unlimited," this may work for you. But most datacenters don't have access to unlimited, free, clean, non-brackish water.


Why do so many datacenters advertise Dry Pipe Preaction Sprinklers?

I’ve been seeing this more and more lately, and it's time to clear the air. In the past, dry chemical fire suppression was the standard. Either Halon or FM200 dry chemical gas would suppress the fire by removing all oxygen from the space.

Nowadays, many datacenters are cutting back on dry chemical systems. Instead, they advertise that they have "Dry Pipe Preaction Sprinklers." Sounds good, doesn't it? Well, it's a fancy way of saying they use the building-code-required overhead sprinklers. Preaction simply means that the sprinkler pipes don't have water pressure in them; an action, such as a smoke alarm, has to trigger the building pumps that pressurize the pipes. Dry pipe means that after the system is triggered (either by a real alarm event or planned maintenance), it is drained.

The play on words tricks people into thinking that "Dry Pipe" has some connection to dry chemical – it has none!

Sprinklers are required by code in all buildings… period. The sprinkler heads only open after a temperature fuse breaks – normally around 175 degrees F. Well, if a datacenter gets up to 175 degrees, it's all over. That's why you use a dry chemical like FM200 or Halon to kill the fire immediately, before the sprinkler heads ever open.

The point of the story is… never put your equipment in a datacenter that does not have true dry chemical fire suppression.


The Myth about Mid-West Datacenters

Some articles have been written recently about where the best location for building and running a datacenter is. These reports always pick Midwestern states as the ideal locations due to cost. South Dakota or Kansas is a great place to build a cheap datacenter if cost is the number one concern: labor is cheap, material costs are low, and electricity prices are low. But these reports always leave out something very important. PEOPLE.

Datacenter operations will always be centered on locations with population density: the East Coast corridor, Texas, California, and so on. The surrounding population supports the service. Who needs colocation or datacenter services in South Dakota? The only people who can benefit are those who never need to touch their equipment, or Fortune 500 firms who can afford to fly their technicians out to a remote site. What people don't realize is that most operations that use significant colocation resources (10U and up) need to touch their equipment on a regular basis. They can't ship it off 1,000 miles into the Midwest.

Furthermore, the reduced electricity cost (which is the most significant operational cost of a datacenter) is only temporary. In a few years, electricity prices will start to even out. It's something of an anomaly that in Nebraska you can get electricity at $0.03 per kWh – that won't last long. Midwest locations also do not have the immense, diversified telecom and fiber infrastructure present in major cities. Besides, content users are located in the major cities – content providers and users should be close to each other.
