
Enterprise IT Planning for Clouds

June 27, 2009


Over the past few weeks I have been involved in long-term planning discussions with the senior IT management of multiple clients.  While I can’t go into the details of these meetings, a few common general trends emerged in these long-term virtualization strategies.

First, all of them were roughly at the benchmark of having 30% of their compute workloads virtualized and were looking at how to get well beyond that (see my prior post on Breaking the 30% Barrier).  Part of the growth strategy included defining a specific set of applications that are set aside from the next wave of virtualization, typically somewhere between 10%-30% of the overall compute workloads.  General reasons for this (a rough sketch of how such criteria might be encoded follows the list) are:

  • Organizing things into the next logical set of workloads primed for easy virtualization
  • Setting aside the workloads where resistance toward virtualization was being felt, to give the business users more time to warm up to it
  • Workloads that are “too big” to virtualize (typically because of CPU, storage, or IO requirements; some of these are just misconceptions given the current VM scalability limits of vSphere 4)
  • Workloads where the ISV specifically doesn’t support the software running in a VM (this is becoming less and less common as more ISVs actively embrace virtualization or Enterprise customers flatly tell their ISVs, “we’re running it in a VM, get on board”)
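As a rough illustration, here is a minimal sketch of how such exclusion criteria might be encoded during a planning exercise.  The workload fields, thresholds, and the exclusion_reason helper are all my own assumptions for illustration, not vSphere limits or any specific tool’s model.

    # Hypothetical sketch: tagging workloads that are set aside from the next
    # virtualization wave, based on the reasons listed above.  Field names and
    # thresholds are illustrative assumptions, not vSphere 4 limits.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Workload:
        name: str
        vcpus_needed: int        # peak CPU expressed as a vCPU count
        storage_tb: float        # attached storage in TB
        iops: int                # sustained IO operations per second
        isv_supports_vm: bool    # vendor supports running in a VM
        owner_resists: bool      # business owner not yet comfortable with VMs

    # Illustrative "too big to virtualize" thresholds for this planning pass
    MAX_VCPUS, MAX_STORAGE_TB, MAX_IOPS = 8, 2.0, 20_000

    def exclusion_reason(w: Workload) -> Optional[str]:
        """Return why a workload is set aside from the next wave, or None."""
        if not w.isv_supports_vm:
            return "ISV does not support running in a VM"
        if w.owner_resists:
            return "business owner needs more time to warm up"
        if w.vcpus_needed > MAX_VCPUS or w.storage_tb > MAX_STORAGE_TB or w.iops > MAX_IOPS:
            return "perceived as too big (CPU/storage/IO)"
        return None  # candidate for the next wave

    workloads = [
        Workload("erp-db",       16, 4.0, 30_000, True,  False),
        Workload("intranet-web",  2, 0.1,    500, True,  False),
        Workload("legacy-app",    4, 0.5,  2_000, False, True),
    ]
    for w in workloads:
        print(w.name, "->", exclusion_reason(w) or "virtualize in the next wave")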

Second, they are all planning on building a specific internal cloud within part of their infrastructure.  This alone isn’t that surprising.  There are specific use cases where a self-service internal cloud solves a lot of problems for the business users, the most glaring being dev/test scenarios, where lots of dynamic, short-lived workloads are involved, and web hosting, where the business units (typically marketing) need to be able to react faster to market opportunities and popularity spikes from new products and viral marketing activity.

What is surprising is that when IT started to talk about the ideal end-state view of their “non-cloud” virtualized environment…it was essentially a cloud.  As Jian Zhen described recently in The Thousand Faces of Cloud Computing Part 4: Architecture Characteristics, there are a set of architectural characteristics that describe cloud computing:

  • Infrastructure Abstraction
  • Resource Pooling
  • Ubiquitous Access
  • On-Demand Self-Service
  • Elasticity

(Note: Jian Zhen changed his list of characteristics in the above post from his initial The Thousand Faces of Cloud Computing post.)

The enterprises’ long-term vision for their virtualized computing environment includes all of these characteristics with the exception of On-Demand Self-Service and, in some ways, Ubiquitous Access.  On-Demand Self-Service is typically not in their plans because the Enterprises don’t have a key part of it: an internal finance model that allows for chargeback of the resources used (though most seem to be thinking about that).  On-Demand also isn’t as needed in this part of the enterprise environment, as the workloads are the known-size, planned, enterprise applications that are the classic “Enterprise IT”.  Ubiquitous Access is also something that isn’t being thought about by IT for this part of their environment, primarily because access to these workloads is already pre-defined by the workloads themselves: web servers are accessed by web browsers, email is from an email client (whether desktop or mobile), etc.

And yet, all the other things that Enterprise IT strategists are thinking about fall squarely in the realm of “cloud computing”: get the business users to think in terms of capacity and SLAs and abstract all other aspects of the infrastructure from them, then drive up the utilization on that infrastructure to maximize ROI.  Some are still only comfortable driving per-physical-server utilization up to the 50%-60% range, while others are damning the torpedoes and want to get as close to 100% as possible.  Across the entire infrastructure you can never reach 100% utilization, because not all workloads are that consistent; this is where resource pooling and elasticity come into play.
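To make the pooling point concrete, here is a minimal sketch with made-up demand numbers.  It shows why a shared pool can run at a much higher average utilization than servers sized per workload, and also why even the pool stops short of 100%: it still has to cover the combined peak.

    # Minimal sketch of why resource pooling raises achievable utilization:
    # individual workload peaks rarely coincide, so a shared pool sized for the
    # pooled peak needs less headroom than servers each sized for their own peak.
    # The demand numbers below are made up for illustration.

    # Hourly CPU demand (in "cores") for three workloads over eight hours
    demand = {
        "web":   [2, 2, 2, 4, 8, 8, 6, 4],   # spikes during business hours
        "batch": [8, 8, 6, 2, 1, 1, 2, 6],   # spikes overnight
        "email": [1, 2, 3, 3, 3, 3, 2, 1],
    }
    hours = len(next(iter(demand.values())))
    total_demand = sum(sum(series) for series in demand.values())

    # Siloed: each workload gets a server sized for its own peak
    siloed_capacity = sum(max(series) for series in demand.values())
    siloed_util = total_demand / (siloed_capacity * hours)

    # Pooled: one shared pool sized for the peak of the *combined* demand
    combined = [sum(series[h] for series in demand.values()) for h in range(hours)]
    pooled_capacity = max(combined)
    pooled_util = total_demand / (pooled_capacity * hours)

    print(f"siloed: {siloed_capacity} cores, average utilization {siloed_util:.0%}")
    print(f"pooled: {pooled_capacity} cores, average utilization {pooled_util:.0%}")
    # Even pooled, utilization stays below 100% -- the pool must still cover the
    # combined peak, which is where elasticity picks up the residual headroom.

With these numbers the siloed layout averages roughly 58% utilization while the pool averages roughly 92%, which matches the gap the strategists above are chasing.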

Though I do have to argue that Resource Pooling is not the best term for what is meant by this characteristic.  Creating and managing pools of resources is included in it, but I think a more accurate term is Resource SLAs.  The end users of the environment are buying a specific amount of resources as a “guaranteed maximum” or as an “on-average maximum”.  The architecture of the cloud needs to ensure that spikes in resource usage by one user are serviced up to their agreed-upon limit, but also allow IT to “oversubscribe” the environment during the non-spike times.  Mixing guaranteed with on-average workloads then allows the performance spikes of guaranteed workloads to be serviced at the cost of the on-average workloads should no extra capacity be available.
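A rough sketch of that allocation rule follows, assuming just two SLA classes.  The function, names, and numbers are illustrative assumptions of mine, not any VMware mechanism: guaranteed workloads are serviced up to their agreed limit first, and on-average workloads share whatever capacity remains.

    # Sketch of the "Resource SLA" idea: guaranteed workloads are serviced up to
    # their agreed limit first; on-average workloads share whatever remains.
    # Classes, names, and numbers are illustrative assumptions.

    def allocate(capacity, guaranteed, on_average):
        """guaranteed / on_average: dicts of name -> (current demand, agreed limit)."""
        alloc = {}
        # 1. Guaranteed SLAs: always serviced up to the agreed maximum
        for name, (demand, limit) in guaranteed.items():
            alloc[name] = min(demand, limit)
        remaining = max(0.0, capacity - sum(alloc.values()))

        # 2. On-average SLAs: share the leftover, scaled down if it falls short
        wanted = {n: min(d, lim) for n, (d, lim) in on_average.items()}
        total_wanted = sum(wanted.values())
        scale = min(1.0, remaining / total_wanted) if total_wanted else 1.0
        for name, amount in wanted.items():
            alloc[name] = amount * scale
        return alloc

    # 100 units of pooled capacity, oversubscribed against 130 units of limits
    guaranteed = {"erp": (50, 60)}              # spiking guaranteed workload
    on_average = {"dev": (40, 40), "web": (30, 30)}
    print(allocate(100, guaranteed, on_average))
    # -> erp gets its full 50; dev and web split the remaining 50 proportionally

The design point is the one argued above: oversubscription is safe only because the spike of a guaranteed workload is paid for by temporarily squeezing the on-average workloads, not by breaking anyone’s agreed limit.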

It becomes a game of how tight an IT airship you want to run…

Filed Under: Cloud Computing, Technology Ramblings, Virtualization Tagged With: Cloud Computing, Enterprise Cloud, Internal Cloud, IT Strategy, Virtualization

Enterprises Defining the Enterprise Cloud

May 19, 2009

Thanks to a tweet today from cloudmeme, I found the InternetNews.com article VMware CTO: Cloud or software mainframe?, where VMware’s CTO, Steve Herrod, is quoted:

“Some call it a software mainframe, others call it cloud; it depends on when you were born.”

I have heard a number of VMware’s executives use the term “software mainframe” when describing the new VMware Cloud OS, vSphere.  And as Steve indicated in his quote, it’s all about finding a term that resonates with the generation of your audience.

Today I joined a virtualization discussion between the IT management teams of two Fortune 500 companies.  One of the more timely items discussed (with regard to the above article) was the implementation of an internal cloud.  One of the executives described it as a “change-request-less data center”, where the business client is abstracted from the technology and given a transparent view of their own utilization within the data center.  Then an executive from the other company nodded and said, “getting back to a mainframe, a software mainframe”.

They were discussing their interpretations of what it meant for their business to have an internal cloud within their data center infrastructure.  And the reality is that it is starting to look more like an easily expandable, multi-vendor, distributed/modular mainframe enabled by software.

I found it interesting that the key aspect defining an internal or enterprise cloud for these executives was that the transparency of utilization was there (just like for an internet cloud, and just like on the mainframe), but that the chargeback for the cost of that utilization may not be.  Most enterprises don’t have the internal financial systems set up to do chargeback of computing resources.  And for an enterprise to change its internal financial system is a non-trivial task.

However, leveraging virtualization technology to build an internal cloud that has the ability to track and report on utilization (even if the cost is null; a minimal showback sketch follows the list below) puts the enterprise into a very powerful position:

  • quicker servicing of their business customers’ needs
  • a simplified infrastructure which reduces operational expenses via automation and standardized IT offerings
  • the ability to run the existing computing jobs of the business with little or no modification
  • a self-service model which helps reduce operational costs even more
  • transparency to the business clients to help them understand how much of the IT infrastructure they are using and socially drive them to use it more consciously
  • an environment in which the technology can be managed at any time because it is abstracted from the business users
  • and the ability to change their internal financial system in the future without major upgrades to their data center or infrastructure.
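As promised above, here is a minimal showback sketch.  All names, samples, and rates are hypothetical: the point is that utilization can be metered and reported per business unit today with a null rate, and the same data supports real chargeback whenever the financial system catches up.

    # Minimal showback sketch: track utilization per business unit and report it,
    # even while the chargeback rate is still null.  Swapping in a real rate later
    # does not change the data the cloud already collects.  Numbers are made up.
    from collections import defaultdict

    # (business_unit, vcpu_hours) samples as they might come out of the
    # virtualization layer's metering
    usage_samples = [
        ("marketing", 120.0),
        ("finance",    80.0),
        ("marketing",  60.0),
    ]

    RATE_PER_VCPU_HOUR = 0.0   # null today; becomes a real rate when finance is ready

    usage = defaultdict(float)
    for unit, vcpu_hours in usage_samples:
        usage[unit] += vcpu_hours

    total = sum(usage.values())
    for unit, vcpu_hours in sorted(usage.items()):
        share = vcpu_hours / total
        print(f"{unit:10s} {vcpu_hours:7.1f} vCPU-h  {share:5.1%}  "
              f"charge ${vcpu_hours * RATE_PER_VCPU_HOUR:.2f}")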

The software mainframe.  The enterprise cloud.

Filed Under: Technology Ramblings, Virtualization Tagged With: Enterprise Cloud, Software Mainframe, vSphere

About latoga labs

With over 25 years of partnering leadership and direct GTM experience, Greg A. Lato provides consulting services to companies in all stages of their partnering journey to Ecosystem Led Growth.