Top Data Center Migration to Cloud Considerations: Interview with John Burke of Nemertes Research

By: Blair Felter on April 29th, 2016


Today our guest is John Burke, CIO and Principal Research Analyst at Nemertes Research. This evening, starting at 5 p.m., John will be presenting "Which Cloud? Workload Placement and Cloud Selection For Fun and Profit" at our Philadelphia Data Center in the heart of Center City. Come join vXchnge and Comcast Business for an evening of information sharing and networking. You will also have the opportunity to tour vXchnge's 70,000+ sq. ft. data center; this state-of-the-art facility features the latest design, redundant technologies, and energy efficiencies. Now let's get a sense of some of the things John will be discussing this evening.

vXchnge: Thanks for joining us, John. Before we start, could you tell us briefly about Nemertes Research and your role?

John: Sure. Nemertes is a research and strategic consulting company. We've been around for close to 14 years now, and our focus is on quantifying the business impact that people realize by deploying emerging technologies: what good do they actually get from doing something new, and how much of it? I'm Principal Research Analyst here, and I focus a lot on cloud and data center topics. The company also has very deep and broad expertise in all things collaboration and unified communications.

vXchnge: And you’re actually in the field doing a lot of qualitative research, interviewing people, you were saying before we hit the record button?

John: Absolutely. Our main mode of research is enterprise benchmarking, so analysts like me get on the phone and talk with IT practitioners, typically for an hour but sometimes two or three hours at a time, and really dig in deep on what they're doing with new technologies, how they're doing it, and what's working for them: what are the pitfalls, what are the success strategies, and so forth.

vXchnge: On Thursday, April 28th, you're presenting "Which Cloud? Workload Placement and Cloud Selection For Fun and Profit" at our Philadelphia Data Center in the heart of Center City. Can you discuss some aspects of application performance that people don't always take into account when they're thinking about putting a service on an external cloud service, or maybe a colocation service?

John: Sure. There are a few things that they don't always take into account when they're thinking about cloud. One is DR strategy, because they're taking traditional, legacy-style applications and pushing them up into cloud infrastructure, maybe doing some low-level high availability on it, so they've got redundancy within the virtual data center they're building. But they may not have redundancy across geographic zones, which they probably did with their traditional approach to DR: a physical failover center, wherever it was located, probably outside what we call the disaster radius, so that the same natural disaster was not likely to hit both. They have to make sure they're taking that into consideration when they're moving things into the cloud, so they can guarantee continued availability, which is the bedrock of performance.
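John's "disaster radius" test is easy to make concrete. The sketch below estimates the great-circle separation between two candidate facilities and compares it against a policy threshold; the coordinates and the 500 km radius are illustrative assumptions, not recommendations.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical coordinates for a primary and a failover site (illustrative only).
primary = (39.95, -75.17)    # Philadelphia area
secondary = (41.88, -87.63)  # Chicago area

DISASTER_RADIUS_KM = 500  # assumed policy threshold; set one that fits your risk model

d = haversine_km(*primary, *secondary)
print(f"Separation: {d:.0f} km")
print("OK: outside disaster radius" if d > DISASTER_RADIUS_KM
      else "Too close: one event could take out both sites")
```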

The second thing is latency. It's very predictable when you're delivering things out of your own data center: you know where users are most of the time, and you know where your stuff is all the time, so you can pretty well predict what kind of performance people should be seeing. Once you move things into a cloud environment, not only do you get a different base for that latency (which data center is your stuff actually in?), but you also have internet access folded in as part of the performance baseline for your end-users. That makes performance more variable and less predictable, and possibly gets in the way of meeting their needs.
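Before committing a workload to a particular location, it helps to sample latency from where your users actually sit. Here is a minimal sketch that times plain TCP connects, so server processing time stays out of the measurement; example.com is just a placeholder for whatever endpoint you are evaluating.

```python
import socket
import statistics
import time

def connect_times_ms(host, port=443, n=20):
    """TCP connect time to host:port, sampled n times, in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connect only; we are measuring the network, not the server
        samples.append((time.perf_counter() - start) * 1000)
    return samples

samples = connect_times_ms("example.com")  # placeholder endpoint
print(f"mean={statistics.mean(samples):.1f} ms  stdev={statistics.stdev(samples):.1f} ms")
```

A standard deviation that is large relative to the mean is the early warning John is describing: the internet is now part of your baseline.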

vXchnge: I know vXchnge talks about this a lot. We’ve talked about it a lot during these podcasts, around the geographical location of that data center. It needs to be close to where it’s going to be used, because the latency of the network is an issue for some of these high-availability applications.

John: Especially when you're putting the internet into the circuit, if you will. Proximity, at least in terms of internet proximity, becomes increasingly important: exceptionally important, pretty much critical. So yes, if your end-user community is mostly in one place and your data is all going to live in another place, and they're not close in internet terms, then you've got an issue.

vXchnge: Right. So these are some considerations people need to think about. Were there others or are those the major ones?

John: Those are the big ones about performance although there’s also always the issue of, how do I need to configure the virtual machines I’m moving things onto in the cloud data center to meet or exceed the performance of the dedicated hardware that I’ve typically had behind my legacy applications in-house? And remember if they’re old applications they may not cohabit well with other stuff. So you need to be taking into account the provisioning of your virtual machines with an eye on performance, not just on how it’s going to look when I run the metrics on the virtual server itself, but how well it’s going to be able to respond to requests and fulfill them for people who are remote from it.
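One way to act on that advice is to benchmark the candidate VM from the outside, under concurrency, and look at percentiles rather than averages. A minimal sketch, with example.com standing in for the migrated application's endpoint and the request counts chosen arbitrarily:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # placeholder for the application under test

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000  # milliseconds

# 50 requests from 10 concurrent clients: a crude stand-in for remote user load.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = sorted(pool.map(timed_request, range(50)))

print(f"p50={latencies[len(latencies) // 2]:.0f} ms  "
      f"p95={latencies[int(0.95 * len(latencies))]:.0f} ms")
```

If p95 degrades as you add workers, the VM (or the path to it) is undersized for remote users, even if the metrics on the virtual server itself look healthy.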

vXchnge: What about connectivity to cloud-homed services? Are there some options out there that data center providers like vXchnge and others are offering that people aren't always necessarily thinking of?

John: Sure, absolutely. It actually ties in nicely to that previous conversation, because another thing you're typically not thinking about with performance is: if I'm moving all the backend systems that this one talks to up into the cloud, then potentially I've got the internet in the link for the East-West conversations between components, as well as the North-South ones from users to applications. So one of the options we see people taking advantage of increasingly is direct connection into cloud service providers: connecting either their own infrastructure in some colocation facility directly across into the edge of their cloud service provider's network inside the same facility, or connecting the edge of their MPLS network into the edge of their cloud service provider's network. That takes all the internet pieces out of the loop and gives you very predictable performance again, as well as potentially some increased security. We don't see a lot of people thinking about it, but we see more people doing it all the time.

vXchnge: So, in other words, if you didn’t do that, you’d be running the application request and then it would go back out to the public internet, jump back to get the data and that’s a silly way to organize it?

John: It could be a very bad way for the users anyway, because it would not just increase the amount of time it took all those back end things to happen before the fulfilled request was sent back to the end-user, it would make it more variable, and variability is very bad for the end-user’s expectations.
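That compounding effect is easy to see in a toy simulation. Every number below is invented for illustration; the point is only that jitter on the East-West legs pushes the tail latency out much faster than the median.

```python
import random
import statistics

random.seed(42)

def leg(mean_ms, jitter_ms):
    """One network leg: Gaussian latency, floored at zero (illustrative model)."""
    return max(0.0, random.gauss(mean_ms, jitter_ms))

def request(east_west_jitter):
    # One user request: a North-South hop plus three backend (East-West) calls.
    return leg(40, 10) + sum(leg(5, east_west_jitter) for _ in range(3))

N = 10_000
private = [request(east_west_jitter=1) for _ in range(N)]    # direct connect: low jitter
internet = [request(east_west_jitter=15) for _ in range(N)]  # internet in the loop

for name, data in (("private", private), ("internet", internet)):
    data.sort()
    print(f"{name:8s} p50={data[N // 2]:.0f} ms  p95={data[int(0.95 * N)]:.0f} ms  "
          f"stdev={statistics.stdev(data):.0f} ms")
```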

vXchnge: Exactly. So, expectations and wide area networks. We had a sort of expectation for the way they were set up and the way they worked, and so forth. But now we have the cloud. How is that changing the WAN, the wide area network?

John: In a couple of ways, one easily predicted, the other maybe less so. The first: the more we make use of Software as a Service, Platform as a Service, and Infrastructure as a Service, the more our stuff is basically out there on the public internet, and the more pressure there is for communications to go directly from branch offices out to those cloud destinations, those internet destinations, rather than back across our private wide area network to our data centers and from there out to the public internet. Cloud is driving an enormous increase in the amount of direct internet access at the branch, and is therefore pressuring IT to deal with security at the branch more robustly than they used to have to, because part of the branch now faces the internet directly. IT has to consciously strike the balance between direct internet access, perhaps for performance reasons, and backhauling internet access through the data center, perhaps for security or compliance-auditing reasons. So there is that.

vXchnge: Do you see, there’s still going to be use cases, I suppose, where companies will say we still want that private WAN in place, as opposed to a cloud solution. Is security the major tipping point for that kind of decision or are there other things as well?

John: Security is a big piece of it: protecting the confidentiality of the proprietary information that they feel is either legally or operationally most at risk in traveling across the internet. Auditability is another piece. They may direct some things back through the data center so that they can, for instance, go through DLP systems or through audit logging, so that they've always got complete information about the conversations that are going on. If the cloud platform in question can't provide that, they have to do it through the data center themselves. And for performance reasons, it may be better to have predictably engineered, MPLS-style performance for quite a while into the future.

There’s still enough going on in terms of development on internet access as WAN, that companies look out and say maybe three years from now I’ll drop my MPLS network, or five years from now, but for the next stretch, I’m going to have that, even if I augment it with internet bandwidth at the branch. And of course, this has given rise to the whole SD-WAN thing as well. Keep your MPLS, add internet links, when you get comfortable with internet links, and you can just pile them on until you feel comfortable, think about dropping your MPLS network and getting all the reliability you need from redundancy at the internet carrier level.

vXchnge: Got it. We haven’t talked about it but that kind of mixed environment where you have your MPLS and your cloud provider as well, does that add layers of complexity, or are the management systems able to give you visibility of both and…

John: It can certainly add a lot of complexity, and so people need to intentionally engage services or buy management tools that will once again hide that complexity from them until it becomes essential that they know about it. So you want that single pane of glass with the dashboards and the roll-ups and the summaries that hide all the ugly details until there’s a red flag and then you can drill down as deep as you need to.
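The "hide the details until there's a red flag" pattern can be sketched in a few lines. The sites, metrics, and threshold below are invented for illustration.

```python
# Roll per-link metrics up to one status per site; show detail only on red.
SITES = {
    "branch-01": {"mpls": {"loss_pct": 0.1}, "inet": {"loss_pct": 0.4}},
    "branch-02": {"mpls": {"loss_pct": 0.0}, "inet": {"loss_pct": 6.2}},
}
LOSS_RED = 5.0  # assumed alert threshold, percent packet loss

def site_status(links):
    worst = max(link["loss_pct"] for link in links.values())
    return "RED" if worst >= LOSS_RED else "GREEN"

for site, links in SITES.items():
    status = site_status(links)
    print(f"{site}: {status}")
    if status == "RED":  # only now surface the underlying per-link detail
        for name, link in links.items():
            print(f"  {name}: loss={link['loss_pct']}%")
```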

vXchnge: In the time remaining, my last question. When it comes to data centers and cloud adoption, has the market moved in any surprising ways, ways that perhaps the industry didn't anticipate even a few years ago?

John: It’s been interesting to see how quickly smaller cloud providers have been either subsumed into larger ones or split off the original business that spawned them, the infrastructure as a service business that was going to be the foundation of their public cloud offering, and morphed rapidly into a bespoke virtual private cloud offering, so it’s not come one come all, it’s all done by contract. It becomes a variation on traditional hosting: It’s got a different kind of interface for the end-users, but still feels a lot like it from a business perspective. So we’ve seen that happen perhaps more broadly and more rapidly than we expected. Another thing has been the slowness with which the enterprise has moved major applications into the public clouds. Over half the companies out there now use public cloud solutions. It’s probably close to 70% now, but it’s still typically just a tiny slice of what they do. Under 5% of the overall workload in IaaS for most large companies, under 1% for very large companies. So the vast bulk of their enterprise workload is still running in their data centers or migrating to SaaS as the applications sort of age out of the portfolio to be replaced by relatively new stuff. But then they’ve got this massive core of legacy stuff that they’ve been carrying along generation by generation, going back to their mainframes, and that’s all still in there. There’s this enormous amount of stuff that has the enterprise still sort of revving its engines and getting ready to start moving out into the cloud. The decision to start pushing that is going to happen pretty rapidly in the next couple of years. People are going to go to a cloud first model. “If I can’t do it with SaaS, if I can’t do it with PaaS, I’ll do it in the public cloud unless there’s a compelling reason why I can’t.” Then we’ll start to see that percentage ramp up quickly to 10, 15, 30, even 50% over the course of the next 2 or 3 years, but it’s still going to be seven to ten years before most companies have most of their stuff in the cloud and settle down to their stable balance of internal versus external.

vXchnge: As a follow-up question to what you’ve just said, is there a risk of, how did they used to put it, paving over the cow paths? In other words, taking a data center app and moving it to a virtual machine on the cloud without taking the opportunity to rethink it, for that cloud service?

John: Absolutely, especially as more stuff gets out there, the impetus grows to get the rest out of the data center so you can shutter it, or at least so you can shrink it down to your go-forward size. I’ve got a data center built for 90% of my workload. I know six years from now I want to have 20% of my workload running in it, and I’m going to keep that in there constantly. I want to get down to the 20% data center as quickly as I can, so once the balance starts to tip the impetus is, get it out there, put a little wrapper around it, we’ll run it on a virtual machine and just let it run until it breaks in some way we can’t fix by tweaking the wrapper sitting around it, in the same way we’ve brought forward legacy applications for so long. We’re going to see a lot of that in the next four years, because re-architecting, rethinking applications from the start, and rewriting them is a significant effort. And it’s going to be directed mostly towards new functionality, not stable well-defined stuff.

vXchnge: Exactly. That’s going to be our final question and answer, John. It’s been a very interesting conversation. I’d like to thank you for joining us.

 

About Blair Felter

As the Marketing Director at vXchnge, Blair is responsible for managing every aspect of the company's growth marketing and inbound strategy. Her passion is finding the topics that generate the most conversations.
