
RedMonk


Do Operating Systems Matter? Part 1

A month or two back, I had a conversation with a vendor who I won’t name here (given that I’m at VMworld, I should probably say that it wasn’t VMware) on the subject of application and service provisioning via a grid type application. A mouthful, I know. Essentially, the demonstration we were given centered around how the application permitted the drag and drop connection of a variety of resources: MySQL database to JBoss application server to Apache web server, and so on. Interesting, to be sure, despite the fact that I’ve seen similar demos and similar promises from vendors over the years.

The surprising thing was that the conversation got quite heated as I pushed for more information on what operating systems the individual applications were running on. Those of you that have interacted with me in person will probably realize that I don’t really get contentious easily. I cannot recall, in fact, a similarly antagonistic briefing in my career to date. Ultimately, the dispute – in my view – boiled down to a central and fundamental disconnect: I believed that the operating system mattered, while the vendor in question did not. Vehemently, did not.

My principal argument was simple: ISVs, in my opinion, still write to operating system layers. As much as Java and J2EE have simplified platform compatibility questions, a majority of vendors, I argued, support certain operating systems and not others. Red Hat and SuSE, for example, and not Debian (much to my chagrin) or Fedora. The Oracle Unbreakable Linux decision, in that sense, can be seen as a fundamental validation of this assumption.

In case the link between the above argument and the original cause of the dispute is non-obvious, think about it this way. Being able to create applications ad-hoc with drag and drop tools is useful, no doubt, but my question was what happens when something breaks. If you have, as an example, an issue with MySQL and contact them for support, it seemed likely that they might ask what operating system you’re running on. And that an answer of “it doesn’t matter” was not going to get you terribly far. To some degree, it’s similar to my reservations concerning running dynamic languages on top of the JVM: it’s not a bad idea, but when something goes wrong the uniqueness of your environment can be a significant problem.

But clearly more research was needed here. To help address the question, I set out to contact a handful of vendors to get a feel for their support policies with respect to virtualized or otherwise unique environments. The survey – using the term loosely – was designed not to be comprehensive, but to provide some feedback on how a handful of vendors handle support requests of this type. Each vendor was asked the following relatively simple questions:

1. Does [vendor] support running on supported operating systems on top of the following types of virtualization platforms?
a. Native virtualization (e.g. VMware)
b. Paravirtualization (e.g. Xen)
c. Operating system-level virtualization (e.g. Solaris Containers, FreeBSD Jails)

2. Does [vendor] support running on supported operating systems on top of grid type environments (e.g. Amazon’s EC2, Sun’s network.com)?

3. Does [vendor] support running within the context of a meta-operating system (e.g. 3Tera, which hosts a variety of “guest” operating systems within one “meta” operating system)?
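The taxonomy behind question 1 drives which support answers apply, so it is worth pinning down. As a minimal sketch only, a support triage tool might classify a platform name into one of the survey’s categories; the platform table, category labels, and function name below are purely illustrative assumptions, not any vendor’s actual support matrix:

```python
# Hypothetical mapping from virtualization platforms to the survey's
# three categories. Illustrative only; a real support matrix would be
# far larger and vendor-specific.
CATEGORIES = {
    "native": {"vmware"},                                  # full/native virtualization
    "paravirtual": {"xen"},                                # paravirtualization
    "os-level": {"solaris-containers", "freebsd-jails"},   # OS-level virtualization
}

def classify_platform(name: str) -> str:
    """Return the survey category for a platform name, or 'unknown'
    if the platform isn't in the (illustrative) table above."""
    normalized = name.strip().lower()
    for category, platforms in CATEGORIES.items():
        if normalized in platforms:
            return category
    return "unknown"
```

Grid environments like EC2 and meta-operating systems like 3Tera (questions 2 and 3) would fall outside this table entirely, which is precisely why they warranted their own questions.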

Generally, all of the vendors were cooperative. A couple of requests fell through the cracks, but most of the vendors contacted provided responses. The quickest response took about 48 hours to process; the longest was just short of three weeks; the mean was probably a few days. While that’s obviously attributable mainly to communication inefficiencies, I did find it both interesting and noteworthy that few of the vendors are prepared to answer support questions regarding increasingly complex virtualized environments off the top of their head. Nor should they be expected to, of course, because production deployments of virtualized environments are just in the process of becoming mainstream, as even the VMware folks have allowed.

Anyhow, here are the responses received thus far. Incidentally, if you’re a vendor and would like to be included, just ship over your answers.

  • Covalent:
    1. a. Yes; We have a number of customers running Apache and Tomcat in this environment
      b. Possibly, but not officially at this point; working with one customer to evaluate this now
      c. Yes on Solaris Containers; we have supported this for some time; we do not support the BSD operating systems

    2. Not currently; We have not had a request for this environment so far
    3. Not currently; Same as above … Has not been requested to date
  • DB2:
    General support statement: we support environments that support the same versions of the operating system that DB2 runs on, whether that’s AIX, RHEL, Solaris, SuSE, or Windows.

  • SourceLabs:
    1. Yes – with focus on paravirtualization (Xen)

    2. Yes
    3. Yes
  • Sun:
    1. a. Yes, for example, we support the customers running VMware on our systems and Solaris 10 running on VMware. So does VMware.
      b. We will, on SPARC we will have Logical Domains, on x86 we will have a Xen version of Solaris. In addition we plan to support 3rd party Xen solutions like XenSource’s XenEnterprise and Novell’s SLES 10 Xen.
      c. Absolutely, Solaris Containers as part of Solaris 10 is definitely a big push for us and we have many happy customers running Solaris Containers in production.

    2. Yes
    3. More information required
  • WebSphere:
    Has a virtualization support statement posted here. General response:
    In general, as long as binary compatibility is maintained (it looks and acts as a normal OS would), then we’ll support it. The note about performance is a good one, app server performance can be dramatically affected by resource allocation (memory, processor caching, etc.) — this is why we put this note in our support statement…On #2 and #3, we don’t see enough action/traction here to have an official support statement, but, again, as long as binary compatibility is provided for, it wouldn’t necessarily be an issue.

  • Zend:
    1. a. Yes
      b. Should work, but we’ve never tried it.
      c. Yes, provided all of the necessary software/libraries are available in the container/jail

    2. That was never tested; my gut feeling is that we can expect some incompatibilities that would require tailored solutions.
    3. Again, it wasn’t tested; my gut feeling is that it would require some tweaks to get working.

    Generally, all of Zend’s software runs in user mode, with no special drivers or kernel patches necessary. As such, it should work fairly well with virtualized environments. Some tweaks may be required to get it to work on some more ‘exotic’ virtualized environments, because of certain assumptions made as to whether the software is running on a single server or not.

My conclusion, based on the above? If support matters, operating systems matter. If the subject is production systems, then operating systems matter.

What’s equally clear, however, is that VMware’s contention, made here at VMworld, that the role of the operating system is changing is accurate. This is reflected not only in the technical innovation seen around virtualization from both a hardware and software perspective, but by the complexity of the support questions facing ISVs. The WebSphere folks are more than justified in their response that they haven’t seen substantial demand for virtualization solutions such as those provided by either Amazon or 3Tera, but I think it’s inevitable that we’ll see more of that.

Disclaimer: IBM, SourceLabs, Sun, and Zend are RedMonk clients, while Covalent is not.

Categories: Virtualization.

  • http://blogs.sun.com/bmc Bryan Cantrill

    I guess I’m a little amazed that this argument is still being had — if only because we in Solaris have provided so much evidence now that the operating system actually does matter. (Not to toot our own horn, but if OS’s don’t matter, what the hell is the WSJ’s problem giving an operating system its top innovation award? And not just its top software innovation award — though that too — but its top innovation award, period.) To me, this is the ultimate vindication that operating systems do matter — and that innovation in the operating system has the power to provide unique value in information technology. There will of course be laggards that continue to rephrase the well-worn arguments of commoditization in the OS, but they are just that: reflections in the rear view mirror of a zeitgeist that’s been left behind.

  • http://duckdown.blogspot.com James

    Operating systems matter and don’t at the same time. If you ask me about what operating system matters most in corporate America I would say Solaris, as it provides capabilities that Linux doesn’t have. Even MS beats Linux in several areas.

    If you are talking about operating systems to enterprisey folks, they aren’t really worth talking about for more than a minute as they are commodities and the value to us is a lot higher up the stack.

  • http://blogs.sun.com/bmc Bryan Cantrill

    I guess I would counter that the fact that OS’s aren’t perceived as having value further up the stack is really a failure on the part of operating systems to realize their potential to add value up the stack. That is, the perceived failure of operating systems is actually due to a lack of imagination (or understanding) on the part of OS implementors more than an abstract limitation of the idea of an operating system. To be honest, Linux has exacerbated this because instead of expanding the definition of the operating system (which must be done to be able to add that higher-level value), Linus and co. have actually contracted it: unlike most operating systems that have come before it, Linux doesn’t recognize the system libraries and utilities as being a part of the operating system. (They rely on the distribution for this — which dramatically limits the value that Linux itself can ever provide up the stack.)

    For our part in Solaris, we have been busily working to add value higher and higher in the stack. As a concrete example of this, I would point to our recent support for JavaScript in DTrace — but there is still much that can be done (in many disjoint areas) to allow operating systems to add value high in the stack of abstraction. Point is: the Operating System matters — even if certain operating systems don’t see it that way. ;)

  • http://www.michaeldolan.com Mike Dolan

    Ok, since we’re pushing corporate agendas now, AIX, APV, z/OS, z/VM, Linux, and i5/OS. OSs, hypervisors, libraries, etc – let’s throw them all in with anti-contraction and see what happens. A kernel does not equal an OS and ‘Linux’ as an OS is much more than a kernel.

    The above comment belongs on /. – I wouldn’t expect to see it here and it’s disappointing.

    Funny… claiming an OS that is trying to be like Linux is more creative than Linux… funny, near Redmond-ish. What do customers think? Last I checked Linux outsold Solaris in 2004… and has been growing 9x faster since. There’s a reason.

  • http://blogs.sun.com/bmc Bryan Cantrill

    You write:

    A kernel does not equal an OS and ‘Linux’ as an OS is much more than a kernel.

    That’s only true if one subscribes to the Larry Ellison definition of Linux (namely, whatever you want it to be). As Linux defines itself, however, it is technically just a kernel. For example, from the Gentoo documentation:

    What do you think of when you hear the word “Linux”? When I hear it, I typically think of an entire Linux distribution and all the cooperating programs that make the distribution work.

    However, you may be surprised to find out that, technically, Linux is a kernel, and a kernel only. While the other parts of what we commonly call “Linux” (such as a shell and compiler) are essential parts of a distribution, they are technically separate from Linux (the kernel). While many people use the word “Linux” to mean “Linux-based distribution,” everyone can at least agree that the Linux kernel is the heart of every distribution.

    This is more than just nomenclature: in order to be able to deliver value up the stack (to get back to the original discussion), one often needs to be able to deliver technology that straddles the user/kernel protection boundary. In the Linux model, this delivery is essentially impossible: one can deliver the kernel portion into Linux, but the user-level portion of any technology must be left to the distributions. It’s my opinion that this is the wrong model — I have a much more expansive view of the operating system than does Linus, because I believe that the operating system can and must deliver value at higher levels of abstraction.

  • http://www.3tera.com/hotcluster.html Bert Armijo

    Most of the OS’s we use in our daily lives we have no interaction with directly. We’re sheltered by a user interface (cell phone), or the OS is embedded so deep we never see it (car transmission). This doesn’t mean they don’t matter, but rather that they can do their job without imposing themselves on us.

    Conversely, the OS’s we typically use on our servers have required constant attention. Sys admins seem to be constantly building, installing or patching something on every machine. In this way, these OS’s have mattered a great deal – they’ve driven the cost of operating a data center into the stratosphere.

    Just because the OS is used in the data center doesn’t mean it has to demand so much attention. Consider the number of OS’s in the data center with which we have far less interaction. Cisco could probably switch the base OS in their switches and routers and you’d never even know, because your interaction is strictly through the CLI. Likewise F5 or APC. You treat all of these as appliances. Your interaction ends at the UI.

    Can WebSphere or DB2 be run like an appliance? They can today. And consider the advantages of doing so. The vendor could choose the OS that best meets the needs of the application. They could configure the OS specifically to run the application and know that nothing else would be running. And, possibly most importantly, no one could change the configuration. I’ve marketed and supported a LOT of complex technology products and I humbly maintain that the support effort for something like WebSphere or DB2 could be eased by at LEAST an order of magnitude if delivered as an appliance.

    Evidently others agree, because this trend is already taking hold on the desktop for just these reasons.

  • http://www.redmonk.com/sogrady stephen o’grady

    Bryan: count me as somewhat surprised as well. even virtualized environments need to run on top of something. and frankly, virtualized environments are not perceived by many of the folks i speak with as a first tier production deployment environment. they will be, i think, but aren’t there yet. in other words, reports of “death of the os” are slightly exaggerated.

    James: i’m not sure i buy that. if i try to pitch, say, BSD into many enterprisey clients i’m going to get massive pushback. and windows v linux fights in large enterprise shops are often vicious. OS’s may be perceived as commodities, but it’s not really playing out that way.

    Bryan / Mike: maybe you guys can agree to disagree. all i ask is that you respect one another’s position.

    Bert: “I’ve marketed and supported a LOT of complex technology products and I humbly maintain that the support effort for something like WebSphere or DB2 could be eased by at LEAST an order of magnitude if delivered as an appliance.”

    you think customers are ready to receive either WebSphere or DB2 as an appliance? i don’t, not yet anyway.

    i also think the comparisons between car or cell phone operating systems are a stretch. those are devices with predefined purposes; application servers and databases are general purpose platforms. apples and oranges, in other words.