Get ready, folks! We're about to go behind the scenes with SingleHop's Operations team, focusing on what we do (and what we do differently) when deploying a SingleHop location. Some may think that turning up a new location for SingleHop is a daunting task. However, with the years of automation we've put in place, it is actually very turnkey, which keeps SingleHop agile as a company and able to turn up new locations with ease.
The first phase of a new data center rollout is something I won't cover in depth, as I really want to get into the technical nitty-gritty of it all, but determining the location is #1. We base our newest location on market data and, more importantly, on surveying our customers to identify the "hottest" place to expand to. Our latest data shows that Amsterdam is a prime location for our existing customer base - both in the US and Europe - so stay tuned: SingleHop will be turning up "AMS-1" around May!
So now that we've determined our location, let's go step by step through how we roll these out in an automated fashion:
Step 1: Equipment Purchase
This is a no-brainer. Of course we need hardware for a new data center. However, this is something we've done a number of times, and we have certified that the hardware we use scales and works perfectly with our automation; we essentially take the building blocks we've assembled in the past and get them on the way to the new location. This equipment list consists of our redundant upstream core routers, all our aggregation devices, optics, top-of-rack switches, our service modules (load balancers, firewalls, etc.), and everything else that powers our highly automated network.
At the same time, we're also ordering all the server hardware necessary for our automation platform infrastructure. Although we try to virtualize all we can using VMware, not everything can be virtualized, and some pieces require a bare-metal server. Examples of this include our Linux and Windows Server provisioning servers and our R1Soft backup nodes - high-capacity disk arrays are necessary in all of those applications, so we've found it's best to stick to raw hardware in those components.
To provide quick turnaround on hardware failures, or anything unexpected that may happen to our network, we order plenty of replacement hardware during this step. Typically we order one of each part of the network, and several of each component inside a dedicated server (RAM, hard drives, chassis, motherboards, etc.) - we believe that waiting more than 30 minutes for this type of hardware just isn't satisfactory, so we put them all on site.
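As a rough illustration, the spares policy above (one of each network part, several of each server component) could be modeled like this. The part names and quantities are hypothetical placeholders, not SingleHop's actual bill of materials:

```python
# Hypothetical spares calculator: one spare of each network part,
# a small multiple of each server component. Names are illustrative.
NETWORK_PARTS = ["core_router", "aggregation_switch", "top_of_rack_switch", "load_balancer"]
SERVER_COMPONENTS = ["ram_module", "hard_drive", "chassis", "motherboard"]

def spares_order(server_component_qty: int = 5) -> dict:
    """Return how many of each spare to stock on site."""
    order = {part: 1 for part in NETWORK_PARTS}
    order.update({comp: server_component_qty for comp in SERVER_COMPONENTS})
    return order

order = spares_order()
print(order["core_router"])  # one spare of each network part
print(order["hard_drive"])   # several of each server component
```

The point of keeping this as data rather than a purchase-order spreadsheet is that the same policy can be re-run unchanged for every new site.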
On the miscellaneous side of things, we also order inventory cabinets, workstations, and desks - all the necessary components of having a team onsite 24/7/365.
Step 2: Network Turn-up
Once the hardware arrives at the new site, so does a team of data center technicians, with their sleeves rolled up and steel-toed boots on (because hey, this stuff isn't light!). They work furiously throughout the day to deploy the physical infrastructure of the data center. Cable management, copper and fiber cabling infrastructure, and power whips are the first pieces they work on. While the cabling is installed, the core network infrastructure is put in place. Once the core devices of the network and the 10G carrier links are in place, the technicians get on the phone with SingleHop's Network Operations team to coordinate the turn-up of the network - the first step in truly "lighting up" the new data center. This is very important because without network connectivity our automation layer cannot be enabled, and without that our data center technicians would be looking at a much longer deployment time.
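The dependency described above - no automation layer without network connectivity - can be sketched as a simple readiness gate. The checklist items here are invented placeholders, not SingleHop's actual turn-up procedure:

```python
# Hypothetical turn-up gate: the automation layer is enabled only once
# every physical and network prerequisite reports done.
TURNUP_CHECKLIST = [
    "cable_management",
    "copper_fiber_cabling",
    "power_whips",
    "core_routers_racked",
    "carrier_10g_links_up",
]

def automation_ready(completed: set) -> bool:
    """True only when every prerequisite in the checklist is finished."""
    return all(step in completed for step in TURNUP_CHECKLIST)

physical_only = {"cable_management", "copper_fiber_cabling", "power_whips"}
print(automation_ready(physical_only))           # network not yet lit
print(automation_ready(set(TURNUP_CHECKLIST)))   # ready for automation
```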
Step 3: Automation Turn-up
Once the network and physical infrastructure are good to go, it's time for the DevOps team to step in and get the ball rolling with the automation infrastructure. This is the key component that makes SingleHop so efficient, and, to be frank, about 95% of this process is automated.
The way this works is that we install both provisioning servers from the closest site's provisioning server via our inter-data center transport link. For example, when we turned up PHX-1, we literally netbooted our provisioning server from CHI-2 and installed it from across the country. We do it this way because final touches are still being made on the data center floor, and we'd prefer not to interrupt that work. With our out-of-band IPMI and out-of-band network console devices, we're able to manipulate every piece of the infrastructure from anywhere in the world as if it were sitting right in front of us.
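A toy sketch of the "install from the closest site" idea: given a catalog of existing provisioning sources, pick the nearest one to netboot from. The coordinates below are rough approximations added for illustration, and a real selection would use latency over the transport links rather than raw distance:

```python
import math

# Hypothetical site catalog: existing provisioning sources with rough
# (lat, lon) coordinates. Only CHI-2 and PHX-1 come from the article;
# the coordinates themselves are illustrative approximations.
SITES = {
    "CHI-2": (41.9, -87.6),
    "PHX-1": (33.4, -112.1),
}

def closest_provisioning_source(new_site_coords, sites=SITES):
    """Pick the existing site to netboot the new provisioning server from."""
    def dist(name):
        lat, lon = sites[name]
        return math.hypot(lat - new_site_coords[0], lon - new_site_coords[1])
    return min(sites, key=dist)

# A new build in the Midwest would source its install from CHI-2.
print(closest_provisioning_source((40.0, -90.0)))
```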
Once the provisioning server is ready to go, it is all automation from there on out. We provision the rest of the components that make up our automation platform from that server - rDNS servers, our network automation pieces, recursive DNS resolvers, and things of that nature. DevOps works around the clock to get that piece up, and afterwards we're ready to deploy clients!
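The bootstrap chain above - the provisioning server first, then the components it builds - amounts to a dependency-ordered deploy, which can be sketched with a topological sort. The component names and their dependencies here are assumptions for illustration, not SingleHop's actual tooling:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each component maps to the set of
# components that must exist before it can be provisioned.
DEPENDENCIES = {
    "provisioning_server": set(),
    "rdns_server": {"provisioning_server"},
    "recursive_resolver": {"provisioning_server"},
    "network_automation": {"provisioning_server", "rdns_server"},
}

# static_order() yields each component only after its prerequisites.
order = list(TopologicalSorter(DEPENDENCIES).static_order())
print(order)  # provisioning_server always comes first
```

Encoding the dependencies this way means adding a new platform component is a one-line change, and the deploy order works itself out.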
Step 4: Go Live!
Once we have run through all the fault-tolerance testing of the new facility, on both the network and automation sides, we are ready to go live. Each team member signs off on his or her side of the project, and when everything is signed off, the location is "SingleHop Certified." During the technical rollout of the new data center, we've had marketing collateral put together and order forms adjusted to reflect the new location, and our developers have been working directly with the DevOps team to enable the new facility in our internal management tool.
At the same time, the entire company is pumped and ready to announce the new location! We've had all-hands meetings and department-specific meetings to ensure the entire company is on the same page, and that everyone has clear milestones and goals in place to let our client base know about the new location.