latoga labs

Alliances & Partnership Advising


Enterprise IT Planning for Clouds

June 27, 2009 Leave a Comment


Over the past few weeks I have been involved in long-term planning discussions with senior IT management from multiple clients.  While I can’t go into the details of these meetings, a few common trends emerged in these long-term virtualization strategies.

First, all of them were roughly at the benchmark of having 30% of their compute workloads virtualized and were looking at how to get well beyond that (see my prior post on Breaking the 30% Barrier).  Part of the growth strategy included defining a specific set of applications to set aside from virtualization in the next wave, typically somewhere between 10% and 30% of the overall compute workloads.   The general reasons for this are:

  • Organizing things into the next logical set of workloads primed for easy virtualization
  • Setting aside the workloads where resistance to virtualization was being felt, to give the business users more time to warm up to it
  • Workloads that are “too big” to virtualize (typically because of CPU requirements, storage requirements, or IO requirements; some of these are really misconceptions given the current per-VM scalability limits of vSphere 4, as the sketch after this list illustrates)
  • Workloads where the ISV specifically doesn’t support the software running in a VM (this is becoming less common as more ISVs actively embrace virtualization, or as Enterprise customers flatly tell their ISVs “we’re running it in a VM, get on board”)
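To put some numbers behind the “too big” bullet above, here is a minimal sizing-check sketch. The per-VM maximums (roughly 8 vCPUs and 255 GB of RAM, if I recall the vSphere 4 limits correctly) and the sample workloads are my own illustrative assumptions, not figures from any client discussion.

# Rough sketch of the "too big to virtualize" check; limits and workloads
# below are illustrative assumptions, not from the original post.
VSPHERE4_MAX_VCPUS = 8
VSPHERE4_MAX_RAM_GB = 255

workloads = [
    {"name": "ERP database", "cpus": 16, "ram_gb": 128},
    {"name": "departmental web app", "cpus": 2, "ram_gb": 4},
]

for w in workloads:
    too_big = w["cpus"] > VSPHERE4_MAX_VCPUS or w["ram_gb"] > VSPHERE4_MAX_RAM_GB
    verdict = "exceeds current per-VM limits" if too_big else "fits in a single VM"
    print(f"{w['name']}: {verdict}")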

Second, they are all planning on building a specific internal cloud within part of their infrastructure.   This alone isn’t that surprising.  There are specific use cases where a self-service internal cloud solves a lot of problems for the business users, the most glaring being dev/test scenarios, where lots of dynamic, short-lived workloads are involved, and web hosting, where the business units (typically marketing) need to be able to react faster to market opportunities and popularity spikes from new products and viral marketing activity.

What is surprising is that when IT started to talk about the ideal end-state view of their “non cloud” virtualized environment…it was essentially a cloud.  As Jian Zhen described recently in The Thousand Faces of Cloud Computing Part 4: Architecture Characteristics, there is a set of architectural characteristics that describe cloud computing:

  • Infrastructure Abstraction
  • Resource Pooling
  • Ubiquitous Access
  • On-Demand Self-Service
  • Elasticity

(Note: Jian Zhen changed his list of characteristics in the above post from his initial The Thousand Faces of Cloud Computing post.)

The enterprises’ long-term vision for their virtualized computing environment includes all of these characteristics, with the exception of On-Demand Self-Service and, in some ways, Ubiquitous Access.  On-Demand Self-Service is typically not in their plans because the enterprises don’t have a key part of it: an internal finance model that allows for chargeback of the resources used (though most seem to be thinking about that). On-Demand also isn’t as necessary in this part of the enterprise environment, because the workloads are the known-size, planned enterprise applications that make up classic “Enterprise IT”.   Ubiquitous Access is also something IT isn’t thinking about for this part of their environment, primarily because access to these workloads is already pre-defined by the workloads themselves: web servers are accessed by web browsers, email from an email client (whether desktop or mobile), etc.
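To make the chargeback point concrete, here is a minimal sketch of what an internal chargeback model could look like. The rates and resource dimensions are made-up assumptions for illustration, not anything any of these clients are using.

# Minimal chargeback sketch with hypothetical internal rates per vCPU-hour,
# per GB-hour of RAM, and per GB-month of storage.
RATE_VCPU_HOUR = 0.05        # dollars per vCPU-hour (assumed)
RATE_RAM_GB_HOUR = 0.01      # dollars per GB of RAM per hour (assumed)
RATE_STORAGE_GB_MONTH = 0.10 # dollars per GB of storage per month (assumed)

def monthly_chargeback(vcpus, ram_gb, storage_gb, hours=730):
    """Return what a business unit would be billed for one VM for a month."""
    compute = vcpus * RATE_VCPU_HOUR * hours
    memory = ram_gb * RATE_RAM_GB_HOUR * hours
    storage = storage_gb * RATE_STORAGE_GB_MONTH
    return compute + memory + storage

print(f"${monthly_chargeback(vcpus=4, ram_gb=16, storage_gb=200):.2f}")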

And yet, all the other things that Enterprise IT strategists are thinking about fall squarely in the realm of “cloud computing”.  Get the business users to think in terms of capacity and SLAs and abstract all other aspects of the infrastructure from them.  And then drive up the utilization on that infrastructure to maximize their ROI.  Some are still only comfortable driving per-physical-server utilization up to the 50%-60% range, while others are damning the torpedoes and want to get as close to 100% as possible.  Across the entire infrastructure, though, you can never reach 100% utilization, because not all workloads are that consistent; this is where resource pooling and elasticity come into play.
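Here is a rough back-of-the-envelope simulation of that last point. The workload shapes are invented, but it shows why a pooled environment still needs headroom for the combined peak and therefore never averages 100%, and why pooling still beats sizing every workload for its own peak.

# Rough sketch, not from the original post: capacity must cover the peak of
# the combined demand, and independent spikes leave headroom the rest of
# the time. Workload shapes are made up for illustration.
import random

random.seed(42)
HOURS = 24 * 30
WORKLOADS = 50

# Each workload idles around 2 "units" of demand and spikes to 10 occasionally.
demand = [[10 if random.random() < 0.05 else 2 for _ in range(HOURS)]
          for _ in range(WORKLOADS)]

aggregate = [sum(w[h] for w in demand) for h in range(HOURS)]
capacity_needed = max(aggregate)              # pool must absorb the worst hour
avg_utilization = sum(aggregate) / (HOURS * capacity_needed)

sum_of_peaks = sum(max(w) for w in demand)    # capacity if nothing were pooled
print(f"pooled capacity: {capacity_needed}, unpooled capacity: {sum_of_peaks}")
print(f"average utilization of the pool: {avg_utilization:.0%}")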

Though I do have to argue that Resource Pooling is not the best term for what this characteristic means.  Creating and managing pools of resources is part of it, but I think a more accurate term is Resource SLAs.  The end users of the environment are buying a specific amount of resources, either as a “guaranteed maximum” or as an “on average maximum”.  The architecture of the cloud needs to ensure that spikes in resource usage by one user are serviced up to their agreed-upon limit, while also allowing IT to “over subscribe” the environment during non-spike times.  Mixing guaranteed with on-average workloads then allows the performance spikes of guaranteed workloads to be serviced at the cost of the on-average workloads should no extra capacity be available.
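A minimal sketch of how that arbitration could work is below; the workload names, limits, and capacity figure are hypothetical, and a real scheduler would obviously be far more nuanced.

# Sketch of the "Resource SLA" idea: during a spike, guaranteed workloads are
# serviced up to their agreed limit and any shortfall comes out of the
# on-average (best-effort) workloads. All numbers are illustrative.
def allocate(capacity, requests):
    """requests: list of (name, demand, sla_limit, guaranteed) tuples."""
    allocations = {}
    # Guaranteed workloads get their demand first, capped at their SLA limit.
    for name, demand, limit, guaranteed in requests:
        if guaranteed:
            grant = min(demand, limit, capacity)
            allocations[name] = grant
            capacity -= grant
    # On-average workloads get whatever capacity is left, in order.
    for name, demand, limit, guaranteed in requests:
        if not guaranteed:
            grant = min(demand, limit, capacity)
            allocations[name] = grant
            capacity -= grant
    return allocations

requests = [("order-entry", 40, 48, True),   # guaranteed maximum of 48
            ("reporting", 30, 32, False),    # on-average maximum of 32
            ("batch-etl", 20, 24, False)]
print(allocate(capacity=64, requests=requests))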

It becomes a game of how tight of an IT airship do you want to run…

Filed Under: Cloud Computing, Technology Ramblings, Virtualization Tagged With: Cloud Computing, Enterprise Cloud, Internal Cloud, IT Strategy, Virtualization

Signs the Internal Cloud Makes Sense

June 19, 2009 1 Comment

With loads of rhetoric flying around the internet on cloud computing these days, it’s refreshing when clients start showing the “that just makes sense” sign. A few days back I was visiting a client to discuss desktop virtualization; this particular client was a member of the desktop team at his company. Desktop virtualization discussions always migrate into the sacred territory of the data center. While discussing the data center components of desktop virtualization, this desktop architect had his “that just makes sense” moment.

While talking about the ESX hosts and the server VMs compared to the desktop VMs, the realization occurred that it doesn’t matter what machine you run the VMs on:

“…in theory desktop VMs and server VMs could run on the same physical server.  All those server resources are just there to run a work load, regardless of what the workload is…”

This is where I leaned back in my chair, smiled and said “That’s cloud computing”.

What was glorious about this moment is that here was a desktop architect realizing the power of an internal cloud back in the data center.  Seeing a sign like this tells me that my next 6 months are going to be very busy indeed.

Filed Under: Cloud Computing, Technology Ramblings, Virtualization Tagged With: Cloud Computing, Desktop Virtualization, Internal Cloud

Virtualization Clouds & Grids

February 12, 2009 Leave a Comment

I had an interesting discussion today with an engineering manager at one of my customers.  We were discussing the capabilities of ESX and the Distributed Resource Scheduler (DRS).  During the discussion I needed to explain how virtualization helps to build clouds, but not grids.  At least not typically.  And not Supercomputers.  At least not yet.

Grid Computing generally refers to breaking up a large, compute-intensive workload into smaller blocks, then scheduling those blocks to run across a set of smaller computer systems (typically small enough that the workload will utilize nearly all, if not all, of each system’s computing capacity).  Widely known examples of this are the SETI@Home project or the search-for-cancer projects that ran as screen savers on your PC when you were not using it.  Typically, in a high performance grid environment you don’t want all the overhead of a traditional operating system, so each system runs a special operating system that just performs the compute job on its block of work and then returns the answer to the centralized scheduler before getting the next block of work.
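As a toy illustration of that pattern, here is a sketch of a scheduler handing blocks of one large job to a set of workers and combining the partial answers. Summing ranges of numbers stands in for a real compute kernel; the block sizes and worker count are arbitrary.

# Toy sketch of the grid pattern: a central scheduler hands out blocks of a
# large job and collects the partial answers.
from multiprocessing import Pool

def compute_block(block):
    start, end = block
    return sum(range(start, end))   # placeholder for the real computation

if __name__ == "__main__":
    blocks = [(i, i + 1_000_000) for i in range(0, 10_000_000, 1_000_000)]
    with Pool(processes=4) as workers:          # the "small computer systems"
        partial_answers = workers.map(compute_block, blocks)
    print(sum(partial_answers))                 # scheduler combines the results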

Cloud Computing, on the other hand, is about turning your compute capacity into an on-demand utility: self-service infrastructure that allows any compute job contained inside of just about any container (operating system).  The computer systems of a cloud environment can be small or big; lately the trend has been toward big.  Users can request access, load and run their compute jobs, and be billed just for what they use.  Need more, add more and pay more.  Need less, remove capacity and pay less.   Some cloud systems are more restrictive, like grids, in that their infrastructure is designed for running certain types of applications (LAMP stacks).  Others are less restrictive and let you run any application.
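The pay-for-what-you-use part can be sketched in a few lines; the capacity unit size and hourly price below are made-up assumptions, purely to show capacity and cost tracking demand.

# Elasticity sketch: the capacity you hold (and pay for) tracks current
# demand instead of being fixed up front. Figures are assumptions.
UNIT_CAPACITY = 8          # e.g. vCPUs per capacity unit (assumed)
UNIT_PRICE_PER_HOUR = 0.40 # dollars per unit per hour (assumed)

def units_needed(demand_vcpus):
    return -(-demand_vcpus // UNIT_CAPACITY)   # ceiling division

for hour, demand in enumerate([10, 10, 40, 90, 25, 6]):
    units = units_needed(demand)
    print(f"hour {hour}: demand={demand} vCPUs -> "
          f"{units} units, ${units * UNIT_PRICE_PER_HOUR:.2f}/hour")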

Virtualization is the foundation of Cloud Computing.  Hypervisors like ESX provide the infrastructure, and management tools layered on top of that add the self-service, automation, and control.  Virtualization can also be the infrastructure on which Grid Computing runs.  The reasons you would do this are the flexibility of managing the underlying hardware, or that the underlying hardware has more compute capacity than the Grid can use for each compute job.  While this is usually technically possible, I haven’t seen it very often.

Once you connect a large storage array, fast and large network pipes (10GigE is getting more popular for this, and faster unified fabrics are not far away), and large computing capacity (CPU and memory), you get something that looks awfully similar to a supercomputer. But one thing is always the case for both Grid Computing and Cloud Computing: the compute workload (whether the whole job or a grid block) is always running on just one physical computer within the environment.

If you have 3 VMs running on an ESX host, and that host only has one CPU core and 1 GB of RAM capacity left, and the next VM to run needs 2 cores or 2 GB of RAM, that square peg won’t fit into the round hole that is available.  With VMware’s DRS, the system is smart enough to either place the square peg into a square hole of the right size, or shuffle around the VMs to turn the round hole into the right sized square hole.  But virtualization can’t split that computing workload up and run it across the compute capacity of two physical systems, or take the CPU from two physical machines, or the RAM from two physical machines, and give it all to the same VM.
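Here is a simplified version of that placement decision, my own illustration rather than VMware’s actual DRS algorithm: a VM fits only if some single host has enough free cores and RAM, because its resources cannot be stitched together from two physical machines.

# Simplified placement check in the spirit of the square-peg analogy above
# (not VMware's actual DRS algorithm). Host sizes are hypothetical.
hosts = [{"name": "esx01", "free_cores": 1, "free_ram_gb": 1},
         {"name": "esx02", "free_cores": 4, "free_ram_gb": 8}]

def place(vm_cores, vm_ram_gb):
    for host in hosts:
        if host["free_cores"] >= vm_cores and host["free_ram_gb"] >= vm_ram_gb:
            return host["name"]
    return None   # no single host fits; splitting across hosts isn't an option

print(place(vm_cores=2, vm_ram_gb=2))   # fits on esx02
print(place(vm_cores=8, vm_ram_gb=64))  # fits nowhere, even though the totals might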

That is the line in the sand that separates Cloud Computing and Grid Computing from Supercomputing.  At least for today.  I wonder how much longer that line will exist?  In the near future you will see virtualization provide more capabilities that used to be the realm of specialized systems.  It won’t be that much longer until you see it step across that supercomputer line.

Filed Under: Technology Ramblings, Virtualization Tagged With: Cloud Computing, Grid Computing, Supercomputing, Virtualization

