latoga labs

Alliances & Partnership Advising

Closed Source Buys Open Source V2.0

August 11, 2009

In case you didn’t notice from the change of tone in my tweets, I have been on vacation with the family for the past week.  Enjoying the scenic grandeur (and at times solitude) of the Pacific Northwest and taking a ton of photos with my new camera (1388 photos to be exact…and 5 movies…).

Today I had the joy of my first day back on the job, dealing with the flood of emails, follow-ups, and catching up that is the price we pay for taking some time off and not reading email.  As if that weren't enough, today VMware (my employer) had to go and announce that we are acquiring SpringSource (and add a few more items to my list to completely dissolve that post-vacation glow! 🙂 ).

After a day dealing with my inbox and urgent items, I took some time out of the evening photo processing to read the Steve Herrod and Rod Johnson blog posts on the acquisition, and to offer a bit of a different viewpoint on it…fresh from vacation and knowing nothing more about this acquisition than what has been publicly stated by others (so I'm safe from saying anything other than my opinion – see the disclosures in the About latoga labs section in the sidebar).

I’ve Been Through This Before

I'm not talking about my employer acquiring a company.  I'm talking about a closed source company acquiring what is essentially an open source company.  Before joining VMware I worked for IONA Technologies (sound familiar?…think CORBA…yes, that IONA!).  I was there when IONA bought LogicBlaze.  What made that acquisition interesting (especially for me, being part of the enterprise sales team at IONA) was that we went from having one closed source product (an ESB) to three products (all ESBs) that competed with each other.  And I was only allowed to sell one of them.

Executing a successful merger is not easy even when the companies are very well matched.  It becomes even more difficult when they have conflicting core values (and revenue models), like closed source development and open source development.  In my most recent experience, the IONA/LogicBlaze merger didn't work as well as it could have because the two sides of the house competed against each other and management turned a blind eye to it while trying to figure out a post-merger revenue strategy.  The funny thing is that a lot of the core value propositions we discussed with clients on that IONA enterprise sales team still hold true today.  Back then virtualization was a huge hidden value savings that I couldn't tap into.  Not any more…

Regardless of the synergies two companies can provide each other technology-wise, traditionally far less focus is placed on the social aspect of merging two companies.  It is that social aspect (like the social aspect of introducing any new technology into a company) that drives the speed and revenue value of the acquisition.  Having been through this before in a rather painful way, I think it is important to call this out.

Why VMware + SpringSource Makes Sense

The good news is that this conflicting-personality issue shouldn't be a problem with the VMware/SpringSource merger.  First, there are no competing technologies between the two vendors.  SpringSource gives VMware access to the higher-level parts of IT (applications and app developers) while the two work together to enable the cloud vision of vSphere.

Second, based upon what Rod Johnson indicated in his blog post, he will be heading up SpringSource as a separate unit within VMware, following the VMware BU organization.  This should mean that SpringSource gets to keep working as it has been, supporting its existing community and customers in that classic open source way, while working with the other VMware BUs to add bigger-picture value by combining SpringSource's technologies with VMware's.

Paul Maritz has indicated in the past the need to move up the IT value stack, and he used the term framework more than once during the vSphere launch.  The ability to leverage the virtualization foundation of vSphere with vApp, and to abstract applications away from operating systems with SpringSource's various build-run-manage products, not only provides a much more open application development environment to compete with Google and Amazon, but also provides a solid migration path for enterprises to move all their web-based Java applications to the private cloud.  Imagine a world where Java app developers can integrate, via the Spring framework, right into the virtualization-based cloud where their apps will be tested/QA'd/run.  Regardless of weather…er…I mean whether…that cloud is an internal cloud or an external cloud.

I see some very clear and interesting developments on the horizon from this acquisition, which I'll try to disclose my opinion on in the future.  And, as can be the case when you put a lot of very smart people together with solid management, I'm sure we'll see some surprises as well.  From the looks of my LinkedIn network, I'll also be reunited with some old colleagues!

Tomorrow will be an interesting day of conversations with my global clients to hear their take on things!

Filed Under: Business Ramblings, Tech Industry, Technology Ramblings, VMware Tagged With: IONA, LogicBlaze, SpringSource, VMware

Convergence of Private Clouds Presentation

July 29, 2009

Last night I presented the following at the SDForum Cloud SIG in Palo Alto (you'll have to bear with the animations, which didn't come through well in the online version…).

Convergence of Private Clouds (embedded presentation)

We had a great turnout considering we are in the middle of summer vacations.  Thanks to Dave, Dave Nielsen & Bernard Golden for coordinating, and to everyone for attending!  (Even though I have my brand new camera waiting to be used, I completely forgot to take photos!  Luckily others did, and I'll update this post with their photos once they have them online.)

My goals for the presentation were, first, to help everyone understand that virtualization lies at the heart of cloud computing.  Second, to explain that private clouds are just the evolution of an enterprise's existing virtualized data center (its internal cloud), with the flexibility to expand the private cloud to external cloud providers' data centers if and when needed.  The key point of clarity here is that an enterprise's data center can be referred to as both an internal cloud and a private cloud, but the cloud that federates the internal cloud with an external cloud should always be referred to as a private cloud.  And my third goal was to detail the VMware components that go into creating a private cloud.

I was glad to see that everyone mostly understood that an enterprise's cloud needs are not the same as an internet application's cloud needs.  Enterprises have to deal with legacy applications that can't (or don't need to) be re-written to become fully cloud aware.  And with a vCloud-enabled private cloud they don't have to be.  But if you have an application that you want to be cloud aware, that flexibility is there.  Enterprises also have demands that require features like HA and Fault Tolerance, and they understand that adding those features may increase overall cost because of the technical requirements those features bring.

There will be a lot of additional cloud-related announcements in the march up to VMworld 2009.  (Dave did a good job of trying to get a scoop on some…)  All the attendees showed great patience with my answers of "coming soon" with regard to more details.  And today they get a small reward with the Rackspace announcement: Rackspace Private Cloud leverages VMware to Extend Enterprise Computing on Demand.

If you’re attending VMworld 2009 in San Francisco in a few weeks, I included a list of a few sessions that help build on the overview that my presentation gives:

  • DE-03 – Introduction to vCloud APIs
  • TA3326 – Building an Internal Cloud-the Journey and the Details
  • TA3901 – Security and the Cloud
  • TA4100 – Internal Clouds: Customer perspective and implementations
  • TA4101 – Buying the Cloud: Customer perspective and considerations on what you should send to an external cloud
  • TA4103 – Engineering the Cloud-The Future of Cloud
  • TA4102 – Unveiling New Cloud Technologies
  • VM2706 – Improved cloud interoperability using virtualization management standards

There are many other cloud-related sessions during VMworld, so make sure you check the schedule.  And register early!  Last year I had clients who registered too close to the show and couldn't get into a number of the sessions they wanted.  Some are hands-on labs, and there are only so many VMs to go around…

And now, some links based upon some of the questions that were asked and items that I promised:

  • My Fault Tolerance: Diamond in the Rough post, with links to additional FT resources
  • VMotion between Data Centers—a VMware and Cisco Proof of Concept (check out VMworld session TA3105 – Long Distance/Data Center VMotion and watch for other announcements on this…)
  • For those preparing for their vSphere VCP exam who have VI3 knowledge, check out the VMware vSphere: What’s New [V4] class.

Filed Under: Cloud Computing, Tech Industry, Virtualization, VMware Tagged With: Private Clouds, SDForum, vCloud, VMware

Fault Tolerance: Diamond in the Rough

July 6, 2009 9 Comments

One of vSphere 4.0's most understated features, in my opinion, is Fault Tolerance.  I truly see this as a capability of vSphere that goes overlooked by most people (especially those who are focused on the cost of a vSphere deployment, as Fault Tolerance is included in vSphere Advanced and higher packages).  Not too long ago, companies paid millions of dollars to achieve a lock-step fault tolerant solution.  Today, with vSphere, you can enable Fault Tolerance on a VM with just the click of your mouse.  I want to clarify the key points on Fault Tolerance that most of my clients ask me about; this won't be a deep technical discussion of Fault Tolerance, as that has been covered by others already and you can find those discussions in the links at the end of this article.

I still find it amazing how often the spectrum of availability solutions gets confused by IT administrators and executives alike.  So, first, a brief refresher on this spectrum (a small illustrative sketch follows the list):

  • Load Balancing: Multiple running copies of an application; a failure may affect the end user.  Load balancing via a network load balancer is the lowest common denominator for availability.  Actually, these solutions are typically used to achieve scale-out of applications that can't scale out on their own.  Load balancers let you run multiple copies of the same stateless (typically REST-based) application.  The nature of the client's connection to the application determines what availability impact a load balancer has.  If a failure occurs between a client's connections, the client should not be affected by the failure.  However, if the failure occurs during a client's connection, the client will most likely be affected in some way, possibly losing their work (REST, stateless short transactions are less affected; non-REST, long connections more so).  By definition, load balancing will increase the utilization of multiple servers…that's what it's designed to do.  I spent five years crafting load balancing solutions for clients back in the late '90s…and yet I still come across confusion here from time to time.
  • High Availability: Single running copy of an application; a failure will affect end users.  High Availability simply means that when a failure occurs, the highly available application is restarted on another server.  In vSphere, this means the environment will power on the VM on another ESX host to ensure a minimal amount of downtime for users of the application.  Typically, the user will be affected by the failure.
  • Fault Tolerance: Multiple (typically two) running copies of an application; a failure will not affect the end user.  Fault Tolerance means that you are running two copies of the application in lock step: whatever instruction gets executed on the primary also gets executed on the secondary.  This doubles the resource utilization in your environment, but ensures that a failure has no impact on the end user.  When a failure occurs, the IP address of the primary system moves to the secondary system and the user continues doing whatever they were doing, because the secondary system was processing the same instruction as the primary when it failed.  By this definition, Fault Tolerance isn't ideal for every application due to the higher cost of resource utilization: if you're running at 80% utilization on your VM prior to Fault Tolerance, you will be running two VMs at 80% once Fault Tolerance is turned on.
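
To make the load balancing point above concrete, here is a quick Python sketch.  The backend names and the failure set are made up for illustration; a real load balancer adds health checks, connection handling, and session persistence.

    from itertools import cycle

    # Hypothetical names: three running copies of the same stateless application.
    backends = ["app-vm-01", "app-vm-02", "app-vm-03"]
    failed = {"app-vm-02"}              # one copy is down; the others keep serving

    pool = cycle(backends)

    def dispatch(request):
        """Round-robin each stateless request to the next healthy copy."""
        for _ in range(len(backends)):  # try each copy at most once per request
            backend = next(pool)
            if backend not in failed:
                return f"{request} -> {backend}"
        raise RuntimeError("no healthy copies left")

    for req in ("GET /orders", "GET /orders", "GET /orders"):
        print(dispatch(req))

Run it and the requests skip the failed copy and keep flowing to the remaining two, which is exactly the behavior (and the limitation) described in the first bullet.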

What makes vSphere's Fault Tolerance feature a diamond in the rough is that this zero-downtime solution is baked into the virtualization infrastructure that you may already own.  For those key applications where zero downtime is valuable, it's there to be turned on at minimal additional cost.  There are some hardware requirements that need to be kept in mind, like an additional network for the FT messages to be passed across (two networks if you want a 100% fault tolerant system), and making sure you have the right type of processor.  But these are similar to the requirements of most comparable solutions.
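
If you prefer to script it rather than click, here is a minimal sketch assuming pyVmomi and a vCenter connection; the hostname, credentials, and VM name are hypothetical placeholders, and it assumes the environment already meets the FT prerequisites above.  In the vSphere API, turning on Fault Tolerance maps to the CreateSecondaryVM_Task call on the virtual machine.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab-only: skip certificate validation while connecting to vCenter.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the VM to protect (hypothetical name) by walking the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "payments-app-01")

    # Enabling FT asks vCenter to create and start the lock-step secondary VM.
    task = vm.CreateSecondaryVM_Task()
    print("Fault Tolerance enable task submitted:", task.info.key)

    Disconnect(si)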

What makes Fault Tolerance a bit rough is the fact that it only supports single-vCPU virtual machines.  If your application needs multiple vCPUs, you're out of luck.  At least for today.  Considering Fault Tolerance is a 1.0 feature, this limitation is understandable.  It's even easier to appreciate when you consider what is happening under the covers to keep the instructions in sync across two VMs; watch the following video from VMware Principal Engineer Doug Scale for the details:

Now imagine the complexity of trying to track, synchronize, and replay the processing instructions for multiple processors.  Going back to my basic Computer Science knowledge from my first year in college, it's obvious to me that supporting multiple processors is orders of magnitude more challenging than supporting one processor.  So you gotta start somewhere!

Taking all this into consideration, there are multiple applications that my clients are looking at as candidates for Fault Tolerance, from mail servers and messaging servers to custom applications where downtime needs to be avoided.  It used to be apps where downtime "needs to be avoided at any cost", but with vSphere Fault Tolerance it has become more like "avoided at a little cost".

What apps do you have that you’re considering Fault Tolerance for?  Tell me about them by leaving a comment.

Additional Resources on vSphere Fault Tolerance

  • Training Lab simulator for vSphere Fault Tolerance (FT) (via VMWARE INFO) –  “See” FT in action thanks to this simulation created by the VMware Training team.
  • Check ESX CPU and VM OS Requirements for vSphere Fault Tolerance (via VM /ETC) – Make sure you meet the hardware requirements for FT…or use the new SiteSurvey utility from VMware, which checks for Fault Tolerance compatibility
  • VMware Fault Tolerance at your home-lab (via Eric Sloof) – after reading the last two, find out how to set this up for testing at home…
  • vSphere Availability Guide (pdf) – for the full skinny on Fault Tolerance and HA in vSphere.
  • How does Fault Tolerance prevent a split brain scenario? (via The VMguy) – Understand what happens when failure does occur.
  • VMware engineers caution IT pros: Use Fault Tolerance sparingly (via SearchServerVirtualization) – a bit of a misleading title, but it reiterates my comments above: FT is different from HA, has specific use cases, and does use additional resources.
  • Don’t forget to search the VMware Knowledge Base for the latest articles on Fault Tolerance

Filed Under: Virtualization, VMware Tagged With: Fault Tolerance, FT, VMware, vSphere


About latoga labs

With over 25 years of partnering leadership and direct GTM experience, Greg A. Lato provides consulting services to companies in all stages of their partnering journey to Ecosystem Led Growth.