Back in the old days, when the horses escaped from the barn it was a bad thing, since they tended to just wander off. Often, no horses meant no food.
But what if the horses 'wandered off' to join other horses and, instead of leaving, came back with friends: some bigger, some smarter, some really good at coding, some really good at documentation?
The analogy is aimed at the OpenStack project, where NASA and RackSpace have released into open source the basis for cloud management (compute, storage, networking, and more). From my perspective, this was a significant event: not least because the code they released was code they were using every day in production, but also because we can now create parallelism in the innovation around cloud infrastructure management.
Now that the release into the community has happened and an active community has formed, it seems pretty clear that there is no going back. With the 'horses wandering off', what kind of discoveries might we find? Here are a few that my prognostication sees coming.
- Here’s an easy one. For a commercial customer to use OpenStack, we need supported distributions. Citrix has announced their offering at their conference this week.
- Workload schedulers that optimize infrastructure in ever more complex and interesting ways. For a service provider, more VMs per server means more revenue. My back-of-the-envelope math (below) says that for 4 rows of an OpenCompute infrastructure, a 10% improvement in workload density (while maintaining SLAs) could mean $8k/day in increased revenue. For corporate IT, it means better utilization of assets. But the trade-off usually is that packing more workloads onto a server decreases the ability to ensure an SLA is being met. As the underlying technologies move forward, we will see an increasing ability to measure fine-grained performance at the server, network, and storage levels, allowing us to place and adjust workloads based on visibility into potential compromises to SLA guarantees.
- In the past we could design the network to support carefully placed workloads. But with a cloud, we have to design a network where those darn workloads may not only change (radically) but also move around. Doing this at low cost with high availability is a challenge. We are seeing an evolution toward network designs that appear to offer "cheap, fast, and available" – and where I can pick all three. OpenFlow is one example of the innovation.
- Hybrid clouds will be the norm for IT. But the tools and IT processes needed to manage a hybrid environment are not yet the norm. To be sure, some early players have made it work in spite of the state of the art. OpenStack enters the market at a time when there is an urgent need for portability. I'm betting that the community will embrace this requirement and not 'defend the barn door'.
- We have to be able to trust our cloud infrastructures. To date that is accomplished by the "trust me" method. We already have many of the tools in place to validate VMs, validate and recognize users, and establish platform and hypervisor trust. But these are independent tools that are poorly integrated into a whole. I would expect to see the OpenStack community build these connections in interesting and innovative ways.
- Academic research is strongest when there is an open source basis for the experimentation and innovation. The fact that OpenStack is less about the API and more about delivering functionality tells me that we will be surprised at what the research institutions can come up with.
Well, if the horses are already out of the barn, I for one would vote to let them find their friends and continue to build a community.
(Back-of-the-envelope math: the compute farm is 4 rows of 24 racks/row and 30 servers/rack, yielding 2,880 servers. With 12 VMs per server at $0.10 per VM-hour, that drives $82,944/day in revenue.)
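For anyone who wants to check the arithmetic, here is a minimal sketch in Python using only the numbers stated above (the 10% density improvement is the same assumption used in the scheduler bullet):

```python
# Back-of-the-envelope revenue math for the OpenCompute farm described above.
ROWS = 4
RACKS_PER_ROW = 24
SERVERS_PER_RACK = 30
VMS_PER_SERVER = 12
RATE_PER_VM_HOUR = 0.10  # dollars
HOURS_PER_DAY = 24

servers = ROWS * RACKS_PER_ROW * SERVERS_PER_RACK          # 2,880 servers
vms = servers * VMS_PER_SERVER                             # 34,560 VMs
revenue_per_day = vms * RATE_PER_VM_HOUR * HOURS_PER_DAY   # ~$82,944/day

# A 10% improvement in workload density adds 10% more VM-hours.
density_gain_per_day = revenue_per_day * 0.10              # ~$8,294/day

print(f"Servers: {servers}")
print(f"Daily revenue: ${revenue_per_day:,.0f}")
print(f"Extra revenue at +10% density: ${density_gain_per_day:,.0f}/day")
```

That extra ~$8,294/day is the "$8k/day" figure cited in the scheduler bullet above.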