Repairing Tate Access Floor Tiles

How do you repair floor tiles?

For this article I am referring to the newer style of Tate access floor tiles. The newer style has a single piece of laminate that runs from edge to edge. The older style of tile had the laminate end about a quarter inch short of the edge, with the remaining space filled with a black edging strip that frequently snapped off.

The new style is great, but over time the laminate will start to pull away, especially in data centers with low humidity. It's simple to repair.

The laminate is held in place by contact glue, just like a kitchen countertop's laminate. Contact glue can be loosened and re-hardened with heat.

To reattach your Tate laminate, get a standard clothing iron, the kind with a non-stick bottom. Set the iron temperature to medium and turn off the steam. Obviously, do this repair work outside the datacenter. Place the iron on the tile's laminate surface and slowly move it around. The laminate surface needs to be heated for at least 2 minutes. Once it is properly heated, use a surface roller to apply even pressure over the top of the laminate and press it down hard against the tile's underlying metal frame. Continue to use the roller until the surface has cooled down. At this point your laminate will be 100% re-attached.


Posted in Environmentals, Main

Why does my datacenter feel warm?

As soon as November approaches I start hearing this question more and more. Someone will come in from the outside and walk into the datacenter and immediately say, “It feels warm in here, is something wrong?”

The answer is NO, nothing is wrong.

When you drive home from work, get out of your car, and walk 10 yards to your front door (in 30 degree F weather), the first thing that probably goes through your head as soon as you get inside is, "Ahh… it's nice and warm in here." And that is with your house thermostat set to 70 degrees F.

A good datacenter will hold a solid 72 degrees F and 35% relative humidity year round. So yes, in the winter months, when you come in from the outside cold (30-40 degrees F) and walk into a 72 degree F room that happens to be a datacenter, you should feel warm. But warmth is a relative sensation. When it's 80 degrees F outside and you walk into the same datacenter, the first thing you say is, "Ahh… it's nice and cold in here."

The datacenter is always 72 degrees F, but in the winter that feels comfortably warm to our bodies after we were just exposed to 35 degree F outside temperatures. So relax, it's not "warm" in the datacenter.

Posted in Environmentals, Main

Bad Locations for a Datacenter

Because I am a native of Philadelphia, I am very familiar with the area and the history of certain commercial development areas. A few months ago, an outside firm started a data center facility in the old Philadelphia Navy Yard. This is the worst place in Philadelphia to build a telecom facility. Since 1998, many local telecom companies have looked at and passed on the Navy Yard. Why? For starters, there is ZERO fiber optic access there. There aren't even aerial poles. The copper service that does exist from the local LEC (Verizon) is underground and is heavily corroded due to flooding. Oh yes, flooding, there's that too!

The new firm most likely took on the heavy expense of demarcing fiber into the building through the Navy Yard grounds. So they effectively have a single fiber entrance. Again, not ideal. Not very carrier diverse either, since all the carriers will be on the same fiber trunks in the same conduit. And loop service access (DS-1 and DS-3 cross connects for MPLS) will be terrible.

It's a shame they didn't reach out to local telecom experts, since the overwhelming consensus would have been to stay away from the Navy Yard. There are 3 solid carrier hotel buildings in Philly, all of which have diverse power, multiple diverse fiber entrances, and the list goes on. The Navy Yard was probably picked by this firm because of the low operating costs and tax incentives, but at the end of the day, not being in a truly carrier diverse building will hurt over time.

Posted in Uncategorized

Watch out for new email harvesting techniques…

The top two new techniques for email harvesting are Craigslist ads and Facebook friend requests.

If you use Facebook, I'm sure you've received a few random friend requests from people you don't know. These requests are actually email harvesting attempts. If you reply to or accept the friend request, the sender will most likely grab your real email address – which was the intended purpose all along.

The Craigslist ad technique is similar. If you post an ad on Craigslist, there is a very good chance someone will reply. Not all of the replies come from real people, however; in fact, many of them come from robots. The response will be generic, for example, "I'm interested, contact me." If you reply to that email, boom, the robot just grabbed your real email address.

Posted in Uncategorized

New Datacenter Building Methodologies

Having been involved in multiple data center build projects, I want to share some opinions on what I think are the most important core features to focus on.

The first hurdle of any building project is electrical power. Any location with nearby poles will have access to either 7,200 or 13,200 volt 3-phase high voltage. But some areas don't have enough capacity on the pole to support a typical data center load of 1-2 megawatts. And even if the local grid can support it, the power might not be diverse, or might be mostly aerial. Some design firms stress having diverse underground power. Underground power is easy if you're in an urban metro; diverse, not so much. I have come to a surprising conclusion, however: it doesn't really matter!

Today's generators are nothing like those of the past…

If your facility has a single aerial power feed, that's not too big of a deal these days, especially if you intend to install an N+1 redundant generator infrastructure with a few thousand gallons of fuel. That's enough capacity to run for a few days, and with multiple generators you can survive a significant localized mechanical failure on top of your grid failure. And let's face it, when was the last time an area lost power for more than 5 days? Modern generators have an amazing performance record and can run for days on end. If you have three 1.5MW generators for a datacenter with a 1.5MW running load, let's face it, you're never going dark.
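To put the fuel math in perspective, here is a quick back-of-envelope sketch. The burn rate is my assumption, not a manufacturer spec: roughly 7 gallons of diesel per hour per 100 kW of load is a common ballpark for gensets at full load, and actual consumption varies by unit.

```python
# Rough generator runtime estimate. The ~7 gal/hr per 100 kW diesel
# burn rate is a ballpark assumption, not a manufacturer figure.
def runtime_hours(fuel_gallons: float, load_kw: float,
                  gal_per_hr_per_100kw: float = 7.0) -> float:
    """Hours a diesel generator plant can carry a given load on stored fuel."""
    burn_rate = load_kw / 100.0 * gal_per_hr_per_100kw  # gallons per hour
    return fuel_gallons / burn_rate

# A 1.5 MW running load on 5,000 gallons of on-site fuel:
hours = runtime_hours(5000, 1500)
print(f"{hours:.0f} hours (~{hours / 24:.1f} days)")  # → 48 hours (~2.0 days)
```

So "a few thousand gallons" really does translate to a couple of days of full-load runtime, before any fuel delivery contract kicks in.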

Location is the most important aspect of a datacenter…

This location criterion is twofold. First, you want your location to be in an area with minimal natural disasters, so the probability of flooding and other natural conditions that could cause power loss is limited. Second, you want to be very close to diverse fiber optic routes. The number one cause of datacenter outage time is not power or cooling, it's IP access. If your facility sits at the crossroads of multiple major networks and has access to fully diverse fiber routes, you greatly reduce the possibility of an outage. Mechanicals can be solved by buying the right gear and applying good design, but your network quality will always hinge on your access to fiber.

Posted in Uncategorized

To ping or not to ping… that is the question!

The other day I had a customer call and complain about high ping latency between our router and his server. I asked, "What are you pinging?" "The default gateway," he replied. Well, there's your problem. Ping one of our servers and it will look fine. The customer did not understand, and simply wouldn't accept my answer that seeing spikes in ping latency on the Ethernet handoff between his server and my router is normal.

Unfortunately, many people use ping to diagnose problems, but they don't understand exactly how to interpret the results. First, not all latency is bad. Some devices are slow to respond because there is an issue causing a problem. But sometimes a device is slow to respond because it doesn't feel like responding right away. Huh? It's called priority queuing. When you ping one server from another server, that ping is treated as high priority by the receiving server. The recipient server responds as fast as it can, just as it would for any other request. But when you ping a router, the router couldn't care less about that ping. Routers are designed to treat pings as the lowest priority request; the router will get around to it after it finishes the other, more important work it is doing. Two routers right next to each other might show 3ms latency with intermittent spikes to 20ms – perfectly normal.

Interpreting ping data is a balance of latency and packet loss. The two routers might show latency spikes, but upon closer inspection there is ZERO packet loss, even after 10,000 pings. Or you could have two routers with stable low latency between them, but 3 or 4 percent packet loss. So you have to look at all aspects of the ping result set and the overall environment.
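A minimal sketch of that interpretation step, assuming you have already collected the round-trip times (in ms, with None standing in for a lost packet):

```python
# Summarize a set of ping results. Judging health needs both numbers
# together: latency spikes with zero loss are usually benign (a router
# de-prioritizing ICMP), while steady low latency with a few percent
# loss points at a real problem.
def summarize(rtts):
    sent = len(rtts)
    received = [r for r in rtts if r is not None]  # drop lost packets
    return {
        "sent": sent,
        "loss_pct": 100.0 * (sent - len(received)) / sent,
        "min_ms": min(received),
        "avg_ms": sum(received) / len(received),
        "max_ms": max(received),
    }

# Spiky but lossless -- consistent with control-plane queuing, not a fault:
print(summarize([3, 3, 20, 3, 4, 3, 18, 3]))
```

The same summary run against a lossy path would show modest latency but a non-zero `loss_pct`, which is the case that actually deserves a trouble ticket.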

Posted in Main

New Cooling Technologies for Data Center – Green or Not?

Once again, vendors are ramping up with new and advanced data center cooling technologies; in fact, I have received many calls just in the last 2 months. There is a common thread: they claim up to 80% reduction in energy costs. Wow, that's a big savings – is there a catch? Sort of. There are some technologies that can reduce electrical consumption, so it's not an entirely false statement, but there is a big catch – and it's not a "green" technology by any means. I will explain.

Typical data center providers use Liebert cooling, basic DX (direct expansion). You have a floor unit that contains a compressor and an evaporator coil; heat is rejected to an air-cooled or glycol-cooled condenser. These units use approximately 1.5KW per 1-ton of cooling. So a 40-ton data center installation will have about 60KW of electrical usage just for the Liebert AC units. Can we get that down to, say, 10KW for 40 tons? YES. Here's how…
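The arithmetic behind those numbers is worth spelling out. The ~1.5 kW per ton figure comes from above; the $0.10/kWh electricity rate is my assumption for illustration, since utility pricing varies by region.

```python
# DX cooling power draw and monthly electric cost, using the ~1.5 kW
# per ton figure above. The $0.10/kWh rate is an assumed utility price.
KW_PER_TON = 1.5

def cooling_kw(tons: float) -> float:
    """Electrical draw of a DX cooling plant of the given capacity."""
    return tons * KW_PER_TON

def monthly_cost(tons: float, rate_per_kwh: float = 0.10) -> float:
    """Approximate monthly electric bill for 24x7 cooling (30-day month)."""
    return cooling_kw(tons) * 24 * 30 * rate_per_kwh

print(cooling_kw(40))           # the 40-ton installation's ~60 kW draw
print(round(monthly_cost(40)))  # rough monthly electric cost in dollars
```

At an assumed $0.10/kWh, that 60 kW of continuous cooling draw is on the order of $4,000 a month in electricity, which is why an 80% reduction claim gets people's attention.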

Evaporative cooling has been around for years. In fact, most large buildings use evaporative cooling instead of air-cooled dry coolers because an evaporative cooling tower takes up much less space. These cooling towers are fairly simple: big fans, lots of airflow, a really big heat-transfer coil (with glycol circulating), and a water source that sprays liquid onto the coil to evaporate off. Even in the summer, a 100-ton evaporative cooling tower can easily reduce circulating glycol temperatures from 100 degrees F at the inlet to 60 degrees F at the outlet.

The low-energy cooling technologies being advertised are basically non-DX, non-compressor solutions. They tend to be rack based. Right next to the rack cabinet is a coil with glycol circulating from the evaporative cooling tower. The side cabinet sucks hot air from the rear of the cabinet, cools it across the coil, and supplies the cool air back to the front of the cabinet. The coil is at about 60 degrees F or so, and with enough airflow, that will cool an average size rack. The rejected heat goes to the roof tower and is dissipated through evaporation.

So if this works, and uses less energy, why doesn't everybody do it? Simple. One piece of information has been left out. Evaporative cooling towers use a HUGE amount of water to perform this kind of cooling. Instead of electricity and freon in a closed DX circuit, they use water and physics, but water is a resource and it's not cheap. A 100-ton tower at max capacity (which is where it would be to get the glycol outlet down to 60 degrees F) will use about 5,000 gallons of water a day. Not only is that a huge waste of water, but you are only shifting cost. Yes, your electric bill will be lower, but your water bill will be insane, somewhere around $2,000/month.
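The water bill is easy to sanity-check. The $13 per 1,000 gallons combined water/sewer rate below is an assumption chosen to land near the ~$2,000/month figure; municipal rates vary widely.

```python
# Rough monthly water cost for an evaporative cooling tower. The $13
# per 1,000 gallons rate is an assumed combined water/sewer price.
def tower_water_cost(gal_per_day: float, days: int = 30,
                     rate_per_kgal: float = 13.0) -> float:
    """Monthly water bill for a tower consuming gal_per_day."""
    gallons = gal_per_day * days
    return gallons / 1000.0 * rate_per_kgal

# 5,000 gal/day at max capacity:
print(f"${tower_water_cost(5000):,.0f}/month")  # → $1,950/month
```

That is 150,000 gallons a month evaporated into the sky, which is the part the vendor brochures leave out.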

It's common sense: if there were a better cooling solution, we'd have it. Data center providers are already using the most efficient systems, since cost is already a major concern. The fact is, cooling is already about as efficient as it can be. These modified systems may work for some people; for example, if you have a huge underground source of well water that is "unlimited," this may work for you. But most datacenters don't have access to unlimited, free, clean, non-brackish water.

Posted in Environmentals, Main

Why do so many datacenters advertise Dry Pipe Preaction Sprinklers?

I've been seeing this more and more lately, and it's time to clear the air. In the past, dry chemical fire suppression was the standard: either Halon or FM200 gas would flood the space and suppress the fire chemically within seconds, without water.

Nowadays, many datacenters are cutting back on dry chemical systems. Instead they advertise that they have "Dry Pipe Preaction Sprinklers." Sounds good, doesn't it? Well, it's a fancy way of saying they use the overhead sprinklers required by building code. Preaction simply means that the sprinkler pipes don't have water pressure in them; an action (smoke alarms and such) has to trigger the building pumps, which pressurize the pipes. Dry pipe means that after the system is triggered (either by a real alarm event or planned maintenance) it is drained.

The play on words tricks people into making a connection between "dry pipe" and dry chemical – of which there is none!

Sprinklers are code in all buildings… period. The sprinkler heads only open after a temperature fuse breaks, normally around 175 degrees F. Well, if a datacenter gets up to 175 degrees, it's all over. That's why you use a dry chemical like FM200 or Halon to kill the fire immediately, before the sprinkler heads open.

The point of the story is: never put your equipment in a datacenter that does not have true dry chemical fire suppression.

Posted in Environmentals, Main

The Myth about Mid-West Datacenters

Some articles have been written recently about the best locations for building and running a datacenter. These reports always pick mid-western states as the ideal locations due to cost. South Dakota or Kansas is a great place to build a cheap datacenter if cost is the number one concern: labor is cheap, material costs are low, and electricity prices are low. But these reports always leave out something very important. PEOPLE.

Datacenter operations will always be centered on locations with population density: the East Coast corridor, Texas, California, and so on. The surrounding population supports the service. Who needs colocation or datacenter services in South Dakota? The only people who can benefit are those who never need to touch their equipment, or Fortune 500 firms who can afford to fly their technicians out to a remote site. What people don't realize is that most operations using significant colocation resources (10U and up) need to touch their equipment on a regular basis. They can't ship it off 1,000 miles into the mid-west.

Furthermore, the reduced electricity costs (electricity being the most significant operational cost of a datacenter) are only temporary. In a few years electricity prices will start to even out. It's sort of an anomaly that in Nebraska you can get electricity at $0.03 per kWh – that won't last long. Mid-west locations also lack the immense, diversified telecom and fiber infrastructure present in major cities. Besides, content users are located in the major cities, and content providers and users should be close to each other.

Posted in Main

British Airways is unethical in their online ticket sales

For those readers who expect colocation topics, I am sorry. But when you have a blog, you have the ability to let people know about your experiences. I was so shocked by how British Airways scammed me that I feel compelled to tell the story here.

I recently purchased an international flight via British Airways. The total cost was $846. There were two other carriers (US Airways and Delta) who were $10-$20 cheaper, plus a few carriers who were more expensive. I did a lot of shopping around on the usual comparison sites to find the best price. Ultimately I decided to go with British Airways.

I went to the British Airways site and purchased my ticket online. When the whole process was done, I had my confirmation number; the final total was $846. Then I logged in to select my seats. To my surprise, it would cost a total of $90 to select my departure and return seats (1-stop flights). So in essence, this ticket really cost me $936. What pisses me off is that I could have gone with Delta for $834 and had ZERO seat fees.

British Airways (BA) says you can wait until 24 hours before departure and select seats for free, but we all know that when that time comes, there probably won't be any seats left, especially since they overbook flights.

Nowhere in the order process does BA disclose the seat fee. I even clicked through and read all their terms and conditions; they don't mention seat selection fees anywhere. When I called to complain, they told me to do a Google search of their site for "seating," which returned a page stating the fees. Yeah, that helps – what am I supposed to be, a mind reader? If you don't disclose it in the order process, how am I supposed to know about it?

I am disputing the charge with AMEX to get the ticket refunded so I can fly Delta. As a business owner, it really annoys me how some businesses abuse customers and trick them into higher pricing. I will never purchase a BA ticket for the rest of my life because of this.

Posted in Uncategorized