
IBM's New zEnterprise – Quick Analysis

IBM launched a new mainframe this morning here in Manhattan, the zEnterprise – a trio of towers bundling together the mainframe (the new z196) along with stacks of Power/AIX and x86 platforms into one “cloud in a box,” as Steve Mills put it. As you can guess, the pitch from IBM was that this is a more cost-effective way of doing computing than running a bunch of x86 commodity hardware.

(To be fair, there was actually little talk of cloud, perhaps 1-3 mentions in the general sessions. This wasn’t really a cloud announcement, per se. “Data-center in a box” was also used.)

TCO with zEnterprise

zEnterprise launch

The belief here is two-fold:

  • Consolidating to “less parts” on the zEnterprise is more cost effective – there’s less to manage.
  • As part of that, you get three platforms optimized for different types of computing in one “box” instead of using the one-size-fits-all approach of Intel-based commodity hardware.

Appropriately, this launch was in Manhattan, where many of IBM’s mainframe customers operate: in finance. The phrases “work-load” and “batch-management” evoke cases of banks, insurance companies (Swiss Re was an on-site customer), and other large businesses that need to churn through all sorts of data each night (or in whatever period, in batch), dotting all the i’s and crossing all the t’s in their daily data.

IBM presented much information on Total Cost of Ownership (TCO) for the new zEnterprise vs. x86 systems (or “distributed” for you non-mainframe types out there). The comparisons went over costs for servers, software, labor, network, and storage. Predictably, the zEnterprise came out on top in IBM’s analysis, costing just over half as much in some scenarios.

Time and time again, having fewer parts was a source of lower costs. More favorable software costs based on per-core pricing came up as well. In several scenarios, hardware ended up being more expensive with zEnterprise, but software and labor were low enough to make the overall comparison cheaper.
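To make the shape of that argument concrete, here’s a minimal sketch with purely hypothetical numbers – none of these figures come from IBM’s analysis – showing how per-core licensing and labor can swamp a hardware premium over a three-year bill:

```python
# Hypothetical three-year TCO comparison. Every number here is invented for
# illustration; none of these figures come from IBM's presentation.

def tco(hardware, cores, per_core_license, admins, salary, years=3):
    """Total cost of ownership: hardware + per-core software + labor."""
    software = cores * per_core_license * years
    labor = admins * salary * years
    return hardware + software + labor

# A consolidated system: pricier box, far fewer licensed cores and admins.
consolidated = tco(hardware=1_500_000, cores=64, per_core_license=5_000,
                   admins=4, salary=100_000)

# A spread of commodity servers: cheaper boxes, many more cores and hands.
distributed = tco(hardware=800_000, cores=200, per_core_license=5_000,
                  admins=8, salary=100_000)

print(f"consolidated: ${consolidated:,}")  # $3,660,000
print(f"distributed:  ${distributed:,}")   # $6,200,000
```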

Fit to purpose

Another background trend that comes into play with the zEnterprise platform is specialized hardware, for example in the areas of encryption, data analytics, and I/O-intensive applications. The idea is that commodity hardware is, by definition, generalized for any type of application and, thus, misses the boat on optimizing per type of workload. If you’re really into this topic, see the RedMonkTV interview I did with IBM’s Gururaj Rao, now embedded above.

While mainframes might be an optimal fit for certain applications, Rao also mentions in the interview that there are many innovations coming out of the distributed (x86) space. Rather than pass up on them because they don’t run on the mainframe, part of the hope for zEnterprise is to provide a compatible platform that’s, well, more mainframe-y to run distributed systems on. What this means is that you’d still run Unix or Linux as a platform, but those servers would be housed on zEnterprise-hosted blades; they’d be able to take advantage of the controlled and managed resources, and be managed as one system.

On the distributed note, a couple of analysts asked about Windows during the Q&A with Software and Systems Group head Steve Mills: would zEnterprise run Windows Server? The short answer is: no. The longer one is about lack of visibility into source code, not wanting to support an OS that “drag[s] in primitives from DOS,” and generally not being able to shape Windows to the management IBM would want. Mills said he “doesn’t really ever expect to manage Windows” on zEnterprise.

During that same Q&A, Mills also alluded to one of the heterogeneity-in-one-box scenarios that the zEnterprise seems a good technological fit for: many mainframe-based applications are fronted by client and middleware tiers for their user-facing layers – 3-tier applications, if you will – with the mainframe acting as the final tier, the system of record, to use the lingo. You could see the web/UI tier running on x86, the middleware and integration on AIX/Power, and the backend on z.

The Single System, for vendors, for buyers

Steve Mills at the zEnterprise launch

Everything around the server has become more expensive than the server itself.
–Steve Mills, IBM

The word “unified” comes up frequently in the zEnterprise context. Indeed, the management software for zEnterprise is called “Unified Resource Manager.” As with other vendors – Cisco, Oracle, and private cloud platforms to some extent – the core idea here is having one system that acts like a homogeneous platform…all the while being heterogeneous underneath, smoothed over with interfaces and such.

For vendors, the promise is capturing more of the IT budget, not just the components they specialize in (servers, database, middleware, networking, storage, applications, etc.). There’s also the chance to compete on more than just pricing, which is a nasty way to go about making money from IT.

For buyers, the idea is that by having a single-sourced system, there’s more control and integration on the box, and that increased control leads to more optimization in the form of speed, breadth of functionality, and cost savings. There’s also reduced data center footprint, power consumption, and other TCO wing-dings.

Sorting out the competing scenarios of heterogeneous vs. unified systems isn’t a simple back-of-the-napkin affair: there are so many apples-to-oranges comparisons that it’s tough to balance anything but the final bill. Part of the success of non-mainframe infrastructure is that its dumb simplicity makes evaluating it clear and straightforward: x86-based servers have standardized computer procurement. Layering in networking, software, storage, management, and all that quickly muddies it back up, but initially it’s a lot more clear-cut than evaluating a new type of system. And it certainly feels good to buy from more than one vendor rather than put all your eggs in one gold-plated basket.

Nonetheless, you can expect vendors to increasingly look to sell you a single, unified system. To evaluate these platforms, you need a good sense of the types of applications you’ll be running on them: what you’ll be using them for. There’s still a sense that a non-mainframe system will be more flexible, agile even…but only in the short term, after which all that flexibility has created a mess of systems to sort out. Expect much epistemological debate over that mess.

More

  • Patrick Thibodeau at Computerworld covers many of the details: “the zEnterprise 196…includes a 5.2-GHz quad processor and up to 3TB of memory. That’s double the memory of the preceding system, the z10, which had a 4.4-GHz quad processor.”
  • Timothy Prickett Morgan gets more detailed on the tech-specs and possible use-patterns: “there is no way, given the security paranoia of mainframe shops, that the network that interfaces the mainframe engines and their associated Power and x64 blades to the outside world will be used to allow Power and x64 blades to talk back to the mainframes.”
  • Richi Jennings wraps up lots of coverage.

Disclosure: IBM, Microsoft, and other interested parties are clients. IBM paid for my hotel and some meat from a carving station that I ate last night.

Categories: Conferences, Enterprise Software, Quick Analysis.


Links for July 19th through July 22nd

Disclosure: see the RedMonk client list for clients mentioned.

Categories: Links.

Numbers, Volume 51

Flash Mobile Dreams

  • Flash 10.1 is expected to really start taking off on tablets in the second half of the year, when it is preinstalled.
  • Adobe is hoping Flash 10.1 will be on 9 to 10 percent of smartphones this year.
  • By 2011, Flash 10.1 should be on a third of smartphones.
  • By 2012, Adobe plans to have Flash 10.1 on more than half of all smartphones shipped assuming no major market share changes.

Clear coverage

Clearwire LLC [aka “Clear”] says it now covers 51 million people in 44 cities in the US [though they only report 971,000 customers], following the launch of its mobile WiMax service in seven new cities today.

The Clear service offers average data downloads of 3 to 6 Mbit/s, with peaks of 10 Mbit/s, over its mobile WiMax network.

I’ve been using Clear for a while now. I’m waiting until I visit some more 3G-only cities before writing a review (so far, the 3G coverage doesn’t work). In the meantime, I post speed checks from time to time.

I predict this prediction will change

IT analyst company Gartner says that the dollar value of global IT spending in 2010 will be less than it previously thought, due to the devaluation of the euro and Europe’s sovereign debt crisis.
It had predicted that worldwide IT spend would reach $3.4 trillion in 2010 – a 5.3% increase from 2009. But it has now cut this figure by 3.9% to $3.35 trillion, which represents a 4.7% yearly increase.

Why pay just once for cloud when you can pay twice?

Meanwhile, Talisker comes as research points to the biggest opportunity for cloud being behind the firewall, with customers running their own services rather than relying on service providers. IDC found 55 per cent of CIOs prefer private to public, with private clouds accounting for $11.8bn in server revenue by 2014 compared to $718m for public.

Pinboard.in a year later

Here are some of our vital signs, one year in:

  • 3.5 million bookmarks
  • 11.2 million tags
  • 2.5 million urls
  • 187 GB of archived content
  • 99.91% uptime (6 hours offline)

I’ve used Pinboard.in for a while now and I love it.

Don’t forget about Microformats, they’re successful!

Originally brainstormed in September 2004, and rapidly adopted by numerous tools, sites, large and small, the number of pages published with one or more hCards recently crossed the 2 billion mark a few days ago according to Yahoo Search Monkey, making it the most popular format for people or organizations on the web.

Your new SSO Overlords

Facebook now leads by nearly three to one with 46% of all social network logins. The closest competitor across all sites is Google with 17%. Twitter follows behind Google with 14%, barely leading Yahoo’s 13%.

The numbers switch around when we start breaking them down into different categories. Facebook becomes even more dominant, increasing to 52% when we look at entertainment websites, with Twitter and Myspace jumping into second and third place. For B2B websites, the distribution is a bit more even overall, with Facebook taking 37% of the pie and Google, Yahoo and Twitter all coming in with around 18%.

Disclosure: Microsoft and Adobe are clients.

Categories: Numbers.

Links for July 16th through July 19th

Barton Springs

I’m off on vacation for most of this week, so don’t expect too much.

Disclosure: see the RedMonk client list for clients mentioned.

Categories: Links.

OpenStack – an open source cloud platform

Rackspace announced the OpenStack project today, open sourcing much of the software it uses to run its own cloud. I spoke with Rackspace’s Jonathan Bryce on the topic to get an in-depth overview, discuss Rackspace’s intentions, and explore the operational future of OpenStack.

This is a big announcement in the cloud world, further widening the technologies that are available to start crafting public and private clouds. The nature of Rackspace as not a software company is also interesting to watch here, as well as what partners do with the project.

Transcript

Michael Coté: Well, hello everybody, I’m here in the Austin Rackspace offices, in the Austin City Limits conference room, as you can see. It’s, perhaps, one of the more three-dimensional conference rooms I’ve ever been in. It’s very exciting, and this is, as always, Michael Coté from RedMonk, and I’m joined by a guest. Would you like to introduce yourself?

Jonathan Bryce: Sure. I am Jonathan Bryce. I am the Co-Founder of the Rackspace Cloud, and I’m currently leading up the technology end of a new venture that we are just getting started.

Rackspace has been doing hosting, managed hosting, dedicated hosting for about a decade, and a few years ago we started a cloud initiative to do virtual servers, cloud storage, Platform as a Service, and the big news that we’ve just released is that we’re actually going to be open sourcing most of the software that runs our cloud system – specifically, our cloud servers product line and our cloud files product line.

We’re going to be giving all of the source code away and opening it up under a new organization that’s called OpenStack.

Michael Coté: What does that mean, for “cloud software” to go Open Source? Do you think — I mean to the — you were, kind of, alluding to this a little bit, but a lot of what a cloud is, is the hardware and everything that is running. So, what’s the part that’s the software you can Open Source?

Jonathan Bryce: Well, most of the clouds out there – public, private, enterprise, you know, whatever – under the hood are built with a lot of Open Source components like Linux, KVM, or Xen or different Open Source hypervisors. But the missing piece for a lot of that is an orchestrator, a controller that can make tens of thousands of physical devices, the network, all of the different components work together so that you can provision and manage virtual servers in multiple locations across all of this hardware.

So we’ve built a system that does that. Our cloud is very large. It has hundreds of thousands of cores. Our storage system has billions of files, petabytes of data, and there’s a lot of software that glues it all together and makes all of those other technologies work.

Up to this point, there have been a few Open Source projects that do similar things, but all of the ones that have really been at scale have been proprietary systems running in other people’s datacenters – like ours, and Amazon’s, and Google’s. So this, I think, is the big announcement and the big change for the industry: we’re going to have what’s kind of a carrier-grade controller for clouds that is now going to be available, and it’s going to be open, and it’s going to be contributed to by a lot of different players, not just us.
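(As a rough illustration of what that controller looks like from the outside, here’s a sketch of the kind of provisioning call such an API exposes. The endpoint, auth header, and JSON fields are placeholders loosely patterned on the Rackspace Cloud Servers style of API, not the literal OpenStack interfaces:)

```python
# A rough sketch of the kind of call a cloud controller exposes: ask for a
# server, get back an ID you can poll and later delete. The endpoint, token
# header, and JSON fields below are illustrative placeholders, not the
# actual OpenStack API.
import requests

API = "https://compute.example.com/v1.0/servers"   # placeholder endpoint
HEADERS = {"X-Auth-Token": "…", "Content-Type": "application/json"}

def create_server(name, image_id, flavor_id):
    """Request a new virtual server; the controller picks the physical host."""
    body = {"server": {"name": name, "imageId": image_id, "flavorId": flavor_id}}
    resp = requests.post(API, json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["server"]["id"]

def server_status(server_id):
    """Poll the controller for the server's current state (e.g. ACTIVE)."""
    resp = requests.get(f"{API}/{server_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["server"]["status"]
```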

Michael Coté: To that point of it being carrier grade, which is – it’s kind of fun; it’s sort of like, outside of the enterprise datacenter, instead of using the word “enterprise” you always use “carrier grade” to mean, like, hardcore real stuff, right? But to that point, how long have you guys been using this software internally? What’s the maturity of the Open Source cloud stack?

Jonathan Bryce: The software that we are running has been in development internally for about four years. It’s been running in production for almost that whole time. We have tens of thousands of companies from all over the world that are using it in our environment. So it is something that — it’s been battle tested and it has scaled and I’ve seen a lot of demand and a lot of success in the deployments that we’ve already done with it.

So this is not, kind of, just a thin layer of virtualization control that’s meant for a test lab of ten servers or 20 servers. It really is meant for that, kind of, large provider type of scale.

Michael Coté: It’s not for like your Beowulf cluster in the closet?

Jonathan Bryce: Right.

Michael Coté: I mean you guys are not really a software company necessarily. So what’s like the motivation for — I mean this is a very software company, sort of, thing to do, to Open Source something. So, why are you guys doing this?

Jonathan Bryce: Well, a big reason why we are doing this is because we’re not a software company. We believe that the best technologies out there in the last decade have been driven forward by Open Source, whether they were Open Source systems themselves or whether Open Source provided a real competitor to an existing entrenched closed source player. Open Source has really propelled innovation, and what we see right now in the cloud is that it’s a huge opportunity, it’s a huge market shift, but it’s being held back a little bit by the fact that a lot of the cloud technology out there is proprietary.

It’s either closed source, commercial, very expensive, or it is only run in a provider’s datacenter. We looked around and we didn’t see anything that met our needs and thought here’s an opportunity for us to really help push this forward, to get a lot of people involved, to use our scale and success to accelerate the adoption of the cloud technology.

Some companies who — you know, you mentioned software companies that Open Source some of their products, sometimes that happens when those products are maybe on the decline or if they need a marketing boost to generate interest and there’s always a little bit of a conflict of interest there and as a software company how do I give away my software and then I also make money off of it? Lots of people have done it and made a lot of money.

For us, though, it’s actually a much simpler decision, because, for us, the software is a piece of the overall service that we deliver. But really what Rackspace is about is operating software at scale, doing it really well, really efficiently, really reliably and then offering great support and great — just an overall awesome experience on top of it.

So, the software, to us, is not a real advantage competitively. We built our company using Linux, using MySQL, and Postgres, and JBoss, and Apache and all of these freely available systems. What we did that set us apart from the competition is we delivered it in a way that it was just a superior experience. When we look at the cloud, what we see is a lot of competing proprietary systems and there isn’t really a true competitive market.

We want to help move the cloud space to that. Where you can compete on support and a premium experience or you can compete on cost or you can compete on operational efficiency or you can integrate it in to your enterprise application, your ERP/CRM system, or whatever needs to be, but really move it to a true competitive market.

Michael Coté: When you open source something there’s lots — hopefully there’s other people who are working with you and collaborating. Like who are some of these other companies that are interested in working with you or partnering?

Jonathan Bryce: In our announcements, we’ve talked about a number of them. Companies like Dell and Citrix. A bigger partner who has really kind of been a surprise to us, but has been an amazing piece of this is actually NASA.

When we started to think about this, we went and we looked at some of the Open Source — actually all of the Open Source options out there to see if any of them would meet our needs and none of them would meet our specific needs. We have a specific set of needs, because of the scale and the type of offering that we run on top of it and so we thought okay, well this is it, we’re going to Open Source our code and really take this on ourselves.

A few weeks before we were getting ready to do all of this, the software that powers NASA’s Nebula Cloud was actually Open Sourced and we saw it, and we looked at it, and we said, “Wow, this is really awesome.” We always said that if you could find something better then we’d love to partner up and work together with that and so we’ve been able to do that and it puts us — actually it gives us a head start, it pushes us forward and helps advance the whole thing and I think it really shows the power of Open Source.

Because immediately NASA has access to more technology that we have – our storage system, for instance, that they were missing before. We have access to technology that they’ve been running inside of the federal government, and so this is really the value and the promise of Open Source.

There are a lot of other players who have been our partners for a long time and who are well known in the cloud space, companies like RightScale, and Cloudkick, JungleDisk. Other companies who have built on top of clouds that are these proprietary public clouds, this is great for them and they’re all excited about it because now it opens up a whole new market for enterprises who are going to be running this in their own datacenters, for other private cloud installations, and I think it really is just going to be an awesome opportunity for the industry as a whole.

Michael Coté: Do you anticipate that people will try to use the software to compete directly with you guys running public clouds?

Jonathan Bryce: Absolutely! Some of the other people who have already been involved in this are our direct competitors and we all see that cloud is a shift that’s going to happen and instead of beating each other up in the first few years as the technology gets ironed out, let’s work out this technology together and compete the way that we always have, which is with standard x86 hardware stacks and Open Source software stacks so that we can all have a much bigger pie to eat from, so to speak.

Michael Coté: That’s right.

Jonathan Bryce: The pie analogy is always a funny one to me, but a much bigger pie out there of customers who want this type of technology.

Michael Coté: You’ve mentioned several times that the differentiation you guys will have is basically the service you provide and, kind of, the whole package of things. So, whenever a company open sources something there are always the cynical people looking for the part that’s closed source or whatever. And nowadays there’s the “open core” model, essentially, where you’ve got an Open Source stack of software and then there is a set of software, usually plug-ins or extensions, that is kept closed source, and then that’s the part that’s monetized, if you will. Are you guys using an open core model, or what does it look like?

Jonathan Bryce: No, we want a system that is truly open. And one of the reasons is cloud is all about scale and a lot of the open core software components or products, projects, whatever you want to call them, they follow this open core model where you can run a basic version of the system and then when you want to do something that’s really heavy duty then you pay for the extra features that let you do that.

Well, from my point of view the reason that you do cloud in the first place is because you have some heavy duty needs. Cloud is all about scale, it’s all about giving people access to computing scale that they just didn’t have access to before.

So an open core model for a cloud system doesn’t make a whole lot of sense to me to have a product that is trimmed down and works on a small set of servers or on a small number of clients, whatever it may be and then you pay to get to the full scale, it just doesn’t seem to apply to cloud. So that’s not the model that we’re going after.

We will monetize this by running it in a hosted model, so, by operating it and running it reliably and efficiently for our customers. And I think also we will be providing commercial support for this. This is something that fits right in with fanatical support and the way that we’ve supported other Open Source products for years and years and years and now that we are developing it, it makes even more sense for us to do support on top of it.

But the core software that you need to run this and to run it at scale, it’s going to be what we are running for our hosted versions and it’s all going to be out there. With the exception of integration points, I shouldn’t say it’s all going to be out there, because we have to tie into our billing system, our authentication system, those hooks are really the only things that we’re not open sourcing from the product.

Michael Coté: So, then the other question, I mean, especially when you are open sourcing something is, I guess it’s the whole area of where is it going to live. Like where are you going to host it and what site is it going to be at, where would people go to get it and everything?

Jonathan Bryce: Both of these initial projects are going to be under an umbrella organization that is called OpenStack and so OpenStack will live on the internet at OpenStack.org.

You can go there and get some basic information now, and as the project matures and moves along we’re going to continue to fill out that website. But the source code and the project management are going to take place on Launchpad.

Launchpad is something that we chose because it gives us some extra capabilities over something like GitHub, which people are probably really familiar with. It gives us the ability to do very complicated planning and long-term product management kinds of things.

So lifecycles and in addition to just basic bug requests and it also is built around managing a really large community.

Michael Coté: Another thing that you mentioned briefly was private cloud users of OpenStack, and what do you anticipate the opportunity there is going to be?

Jonathan Bryce: When we talk to larger companies about using our cloud, they are kind of like, “Whoa! We’d like to really see it, really set it up, and test it. We are a little concerned about security – sometimes a lot concerned about security. We’d like to run it inside of our datacenter, or we’d like to run it in your datacenter, but behind our firewalls.”

All of these kinds of concerns around lock-in and security come up when we talk to these big enterprises. To this point we’ve had an offering which is a private cloud based on VMware at Rackspace. But we have companies who want something that is more of a commodity cloud. They’re doing things where they don’t need all of the enterprise benefits of VMware and they don’t want to pay the enterprise premium for that and so, again, this is another one of those things where the market is going to be deciding how they want their cloud.

So, this now gives them an option: when those customers ask, we can say, great, we have been leading this OpenStack project, and you can access the compute and the storage systems and run them. You can get support from us on them, and they can set it up so that they have truly commodity, elastic clouds internally in their datacenters, or in our datacenters, or in a competitor’s datacenter, or wherever they need to set it up for their business requirements.

Michael Coté: So, we’re here at the Rackspace Austin office and you guys have been here for a while, as you were reminding me when you were giving me the tour – it was funny, we were going on the tour and [you said] like “oh, this used to be our workout space there.” I mean you guys have expanded so much that it’s just cubicles and everything. But you guys are based in San Antonio, and – was it a couple of years ago now that you IPO’ed?

Jonathan Bryce: It was August of 2008.

Michael Coté: Right and so — I mean since then you guys have been expanding quite a lot.

Jonathan Bryce: Yeah, well we have. Obviously, our headquarters is in San Antonio. We’ve had this Austin satellite office for a while that we’ve continued to grow, and it’s many times larger now than it was when we started just a few years ago.

We also have staff in datacenter locations and various places – Dallas, Chicago, Virginia, Hong Kong. And then we have another office that’s staffed pretty well over in the U.K. that has kind of all the functions of the business in it.

One of the things that we really want to do is find good talent and so we’ve become a little bit more flexible in where we look for that good talent. But also I think that all of this OpenStack stuff it opens up new opportunities because we’re hiring. We’re hiring people to come work full-time on what we think is going to be the ubiquitous cloud computing system and it’s an exciting chance to get to work on code like this.

So, we’ve got openings to work on it in OpenStack and our hosted cloud offerings within Rackspace and we need developers and we need marketers and we need just good people everywhere, we want good Rackers. So, yeah, I mean I think that we have a lot of positions open and we’re continuing to hire and expand in different places.

Michael Coté: Over the years – mostly bumping into people at South by Southwest, as your shirt attests to – it’s been interesting to see what was otherwise a, you know, normal traditional hosting company really get – for lack of a less corny word – hip to Open Source and start to drink the Kool-Aid with it.

Anyway, it’ll be fun to see how the OpenStack stuff plays out, and I appreciate you taking all this time to go over it with us.

Jonathan Bryce: Yeah! Thanks for talking with me.

Disclosure: Rackspace is a client and sponsored this video.

Categories: Cloud, Open Source, RedMonkTV, Systems Management.

Links for July 14th through July 16th

Disclosure: see the RedMonk client list for clients mentioned.

Categories: Links.

Links for July 9th through July 14th

San Antonio Riverwalk

Disclosure: see the RedMonk client list for clients mentioned.

Categories: Links.

Sorting out Microsoft's clouds – Quick Analysis

Watching the World Cup at Sarah & Brady's

Microsoft expanded its cloud offerings today, answering the call for “private cloud.”

Our strategy is to provide the full range of cloud capabilities in both public and private clouds.
Robert Wahbe

After today’s announcements, Microsoft has at least three cloud options for you: a public cloud that’s mostly a platform as a service (Azure), a private cloud in limited release (Windows Azure Appliance), and an outline for building a private infrastructure as a service cloud (“Private Cloud Deployment Kit”).

This is all notable as Microsoft has, until now, really only been known for the first, Azure, which provides a bundle of services for developing applications in several programming languages. Azure remains the only one of these clouds that’s widely, if not generally, available.

I’m a bit unclear on the “Private Cloud Deployment Kit,” and so far there’s not enough Google juice on whatever solid pages are up to find anything. While there’s a whole slew of .docx’s and .pptx’s on a Microsoft cloud site, the “solution” nature makes narrowing down a specific offering a bit, well, enterprise-y. Which is surprising coming from Microsoft, who’s usually very good at not being so.

Evaluating Private Clouds

For private cloud, saving money is your main concern because you’re still worrying about everything.

For the newly announced Windows Azure Appliance, Microsoft is pairing its Azure software offerings with three hardware partners: Dell, HP, and Fujitsu. While they don’t call it a “private beta,” the “limited production release” makes it effectively so, in the Web 2.0 sense at least. This means you’ll need a special relationship with Microsoft (or one of its partners) to get the Windows Azure Appliance.

Would it be worth it? It’s difficult to tell yet. Once the pricing and final specs are out, you could conceivably compare it to other offerings. For a private cloud, the only thing that really matters is pricing and TCO.

With a private cloud you’re still: managing your cloud, paying for and doing any geographic dispersal (and managing the on-the-ground government hijinks there), and getting stuck on upgrade cycles, hung up on your own fears about upgrading versus staying with what works….

In summary, with a private cloud you’re not getting the advantages of having someone else run the cloud infrastructure.

Clearly, if a private cloud is better than some calcified mess you’re in, then sure. But, the question at the back of your mind should always be: why not make it public cloud? I’m pretty sure there’ll be many legit reasons for several years to come – but things are murky at this point – maybe if you come up against them, you could share them and we could start cutting through the fog.

Nicely, Microsoft’s cloud-based desktop management offering Intune is good context here. Imagine if running all that desktop management infrastructure was no longer your concern. Intune is gated for just small businesses at the moment, but it’s clearly something that’d be appealing to enterprises.

¡Dale Gas!

A cloud announcement is always welcome from Microsoft – they should keep them coming!

Nonetheless, what’s commendable is getting the offering out there. Microsoft has been taking half-steps when it comes to getting cloud offerings into the hands of potential users and customers. The technology has seemed to be there, but the go-to-market has been timid. You can easily chalk it up to internal fears about “cannibalizing” on-premise sales, but the gusto with which Bob Muglia spoke on the topic of cloud at the recent Microsoft TechEd indicated that at least he (the leader of all this within Microsoft) was anything but timid.

(Tim Anderson does raise a good point about Microsoft’s small business product though in his piece on Aurora: it seems like most of those businesses would do better just to go straight to the cloud. I’m sure there’s some interesting trending and reporting Spiceworks could do on that point.)

Helping companies get over Cloud FUD

The biggest problem for cloud appears to be fears over regulations and legalities – it’s like using credit cards on the web in 1994.

We could be on the [public] cloud by the end of the week…. It’s just from compliance reasons we can’t be on there. If you know anyone at the FTC or SEC, please call them up and educate them!
Chris Steffen of Kroll Factual Data

Enterprises are still hung up on running their own clouds. Little wonder with all the regulation and security FUD that’s easy to kick up. There’s really no one out there helping large companies get over and/or defeat that FUD.

For vendors, it’s great. They can charge twice for cloud: once for private cloud transformation, then again for public.

I’m always a fan of “private cloud” meaning “optimizing how IT is delivered internally.” However, enterprises and cloud vendors aren’t pushing public cloud enough. Now, it could be the case that, technologically, it just doesn’t work. Increasingly, though, it seems like Steffen’s “lawyers” are the primary motivation creating the “private cloud” market, which seems like a bit of a bummer.

More

Disclosure: Microsoft is a client, as are many others in this space.

Categories: Cloud, Quick Analysis.


All about the DMTF with Winston Bumpus – IT Management & Cloud Podcast #076


Winston Bumpus of the DMTF joins us to give us an update on what the standards body has been doing in IT Management and cloud computing.

Download the episode directly right here, subscribe to the feed in iTunes or other podcatcher to have episodes downloaded automatically, or just click play below to listen to it right here:

Show Notes

  • Winston and I did a video several years ago.
  • Update on DMTF standards, like SMASH, OVF.
  • What’s the DMTF’s cloud standards involvement at the moment? Interoperability, defining things, security – APIs, packaging.
  • John asks about things that seem missing for cloud management and such.
  • We speak about CIM, the models of IT stuff from the DMTF.
  • Whatever happened to AMS (Application Management Specification)? dev/ops people are bumping up against this a lot.
  • We talk about the possible pull to have developers more involved in infrastructure that cloud and dev/ops brings.
  • Winston tells us about US government work in cloud that the DMTF has been helping with.
  • John hits up more on OVF.

Transcript

As usual with these un-sponsored episodes, I haven’t spent time to clean up the transcript. If you see us saying something crazy, check the original audio first. There are time-codes where there were transcription problems.

Michael Coté: I just noticed that when Winston introduces himself, the audio very conveniently cuts out. So when you hear the introduction of the guest in this episode and there’s no sound, just insert in your mind the name Winston Bumpus who, as he says, is the President of the DMTF, and we’re really grateful to have him on this episode. It was a fun discussion. So with that enjoy the episode.

Well, hello everybody. It’s the 9th of July 2010, and this is the IT Management and Cloud Podcast, Episode Number 76. As we mentioned last episode, we have a guest with us this week, but first, as always, I’m one of your co-hosts, Michael Coté, available at peopleoverprocess.com. I’m joined by the other co-host as always.

John Willis: John M. Willis from Opscode, mostly at john@opscode.com.

Michael Coté: And then would you like to introduce yourself guest?

Winston Bumpus: Yeah, this is Winston Bumpus, President of the DMTF, The Distributed Management Task Force, and my other full time job is I’m Director of Standards Architecture here at VMware, and I’m – go ahead.

Michael Coté: I was just going to say I’ll have to put up a link to the video, but we were joking before we were recording this – it’s been a few years since we talked last, and I remember the last time we talked we did it as a video, and we were just kind of going over the various – I think at that time it was SMASH that had been released, or something like that, and that was kind of the opportunity, but we were just going over some of the standards and things that the DMTF had done. I was hoping that we’d kind of get an update and have a few questions here and there about what you guys’ involvement in IT and cloud stuff is.

Winston Bumpus: Terrific, terrific. So I can give a quick thumbnail sketch, and then maybe you guys get some questions in and we can go from there – does that make sense?

Michael Coté: Yeah, that would be fantastic.

John Willis: Sure.

Winston Bumpus: Great. So, perfect. So yeah, certainly the SMASH stuff, which is server management – probably a couple of years ago when we talked, that was something important – is now implemented in lots of servers, really trying to make management for servers standardized. It’s a place where customers – the management interface is not a place customers think you need to innovate; it’s what you do with the management information that’s important. So servers were a key piece, and then we extended that into the desktop with a cleverly named set of standards called DASH. So we had SMASH for the servers and then DASH for the desktops, and now that’s embedded in various chipsets from companies like Broadcom, and Intel, and AMD.

So you can remotely manage and do power management and do other remote management of desktops, and a lot of the enterprise desktops have that embedded today, all using very similar technology. Beyond that, certainly virtualization is really changing the landscape of enterprises, and so we’ve expanded into that space. One of the key standards that’s come out of there is a thing called OVF, the Open Virtualization Format – it’s metadata that you associate with a virtual machine image, or multiple virtual machine images, that allows you to have information about the resources that are required to deploy it, and things like that.

I liken it to – a good analogy is that it’s kind of the MP3 of the data center: I’ve got an image the way I like it, and I can rip and burn an OVF of that particular image or group of images and deploy it some place else. So that was really around virtualization, and you can see, as we move into the cloud, it’s becoming a key building block to be able to move workloads in infrastructure as a service.
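(To make “metadata you associate with an image” a bit more concrete, here’s a toy sketch: a simplified, OVF-flavored descriptor and a few lines of Python that read it. The element names are illustrative stand-ins, not the actual DMTF envelope schema:)

```python
# A toy OVF-style descriptor and a reader for it. The XML below is a
# simplified, illustrative stand-in for a real OVF envelope (the actual spec
# uses DMTF namespaces and CIM resource-allocation elements); it is only
# meant to show the idea of metadata travelling with the image.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<Envelope>
  <References>
    <File id="disk1" href="appserver-disk1.vmdk" size="536870912"/>
  </References>
  <VirtualSystem id="appserver">
    <Name>Example appliance</Name>
    <VirtualHardware cpus="2" memoryMB="4096"/>
  </VirtualSystem>
</Envelope>
"""

def summarize(descriptor_xml):
    root = ET.fromstring(descriptor_xml)
    for vs in root.iter("VirtualSystem"):
        hw = vs.find("VirtualHardware")
        print(f"{vs.get('id')}: {hw.get('cpus')} vCPU, {hw.get('memoryMB')} MB RAM")
    for f in root.iter("File"):
        print(f"  needs disk {f.get('href')} ({int(f.get('size')) // 2**20} MB)")

summarize(DESCRIPTOR)
# appserver: 2 vCPU, 4096 MB RAM
#   needs disk appserver-disk1.vmdk (512 MB)
```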

So that’s kind of where we’ve been and then certainly moving the cloud space building on those technologies and we’ll be talking about that.

Michael Coté: I’ve noticed that you’ve been kind of on the cloud circuit of late, if you will, speaking at a lot of cloud conferences and things like that. I think especially over the past year or so there has been much discussion about standards for the cloud. Why don’t we just jump into that – virtualization is an obvious building block for doing cloud standards, but what is the DMTF seeing or helping with as far as cloud standards and things like that? What’s the involvement you guys are having?

Winston Bumpus: So, about a year or so ago we launched an activity called the Open Cloud Standards Incubator, and the reason we call it an incubator is that we didn’t really have a set of specs to work on; we really needed to come together as an industry.

So a large group of companies came together – AMD, CA, Cisco, Citrix, EMC, Fujitsu, HP, Hitachi, IBM, Intel, Microsoft, Novell, Rackspace, Red Hat, Savvis, SunGard, Sun Microsystems, and VMware – because when you talk about standards, the constituency is a big deal. I was talking to somebody earlier today about some standards that were worked on years ago in another organization, and it was a good idea, but it just didn’t fly because you didn’t have the right constituency.

So I think having the right players engaged is important. So we started this incubator, and the process of it is this kind of rapid rolling out – getting together so we don’t spend a lot of time on how we do it; we get right into the work of looking at what we need to do. And the first thing that we really needed to do was agree on definitions, use cases, and architectures. We thought that was an important piece. I used to say that if you had 50 people in the room you’d have 52 definitions of what cloud was.

So we really needed to come together as an industry and focus on that, and then, beyond that, the key problems that we’re trying to solve – we think there are three key issues to be solved in cloud interoperability.

One is portability: how do you move from one cloud to the next? The second one – and maybe it’s the most important; we see a lot of discussion on it – is security. That’s one of the issues that a lot of the folks who are looking at clouds are concerned about: if I move into this public cloud, do I have enhanced security concerns? And the third thing is interoperability: what are the interfaces, so that it can tie into existing business infrastructure?

So the work in the DMTF is really focused around APIs, packaging formats, and the security work, and with that we are actually working with our alliance partners and one of the new organizations that’s cropped up over the last year or so, called the Cloud Security Alliance – I don’t know if you’ve heard of them.

Michael Coté: Right, I think we remember them.

Winston Bumpus: Yeah, but the cool thing about what they are doing is they’ve got a really good group of – you know, again, constituency is important. They really got a lot of the key cyber security guys engaged in this activity, and in addition they’re saying, let’s not go off and build new standards; let’s talk about what the best practices are around cloud security that people should implement, because they feel the technology exists and there are lots of good standards already there. So it really is about enabling best practices.

So the DMTF has several alliances, and that’s one we formed last year to be able to leverage that expertise as we are trying to solve the security issues, in addition to the interoperability and portability issues that we are working on.

John Willis: Cool. So Winston, I’ve been following the DMTF – we were talking before the podcast about how long we’ve both been doing this. I remember first getting involved with the DMTF back in the mid ’90s, with the Management Information Format, and then kind of being somewhat loosely involved – never in the group, but definitely following it pretty hard through the transition to object formats and the transition from desktop management to distributed management.

Winston Bumpus: Sure.

John Willis: And just watching – you know, my excitement has always been kind of the name of this podcast; the IT management guys were the clicking clacker of IT management. So I look at all of what’s going on in the DMTF – in fact, I’ve probably had a five-year hiatus from it. When I left I was playing a lot with WMI and Pegasus and CIM and stuff like that, and then I kind of took a five-year dive into kind of WebOps distributed computing, and then OVF kind of brought me back in.

So in preparation for this, I’ve been following OVF a little bit. I looked back at the DMTF, and one of the things I kind of saw – it seems to me, and don’t take this as disrespect for the DMTF – is that we are missing some critical pieces; there were some pieces that just kind of either got stuck in the mud or – because when I think about the problem today, I think about the problem being this lifecycle, from a management standpoint, of provisioning, configuration management, systems integration, kind of the automation lifecycle, key provisioning, and all the components that really go into that.

And when I look at what appears, from a surface level, looking at their website and not spending a whole lot of time in the DMTF, it seems like there is a reasonable amount of focus on provisioning, with OVF and images, but it seems like there is not much on configuration, system integration, and then the manageability – if I go back and look at CIM and WBEM, I don’t see all that clearly; it seems to me there should be a bigger picture. Am I making sense?

Winston Bumpus: Yeah, yeah you are, and there are two pieces. One of which is our website – I would probably say it’s probably hard to find OVF on it today, you have to dig a little bit, so we’re actually doing a website redesign; that’s a small niche, that’s the icing. I think there are some things that we’re doing, particularly around CMDB federation – that work is in the DMTF and ongoing – which I think is the higher-level abstraction piece. And as far as that goes – you know, we spend a fair amount of time there.

Certainly, the work we’ve done on CIM, the Common Information Model, has been a major piece of the underlying technology; even OVF is based on it, and all the rest of the technology leverages it. CIM is really the common information – the object model that we started work on; we actually started work on it in ’96, John, so that’s how long ago we started down that road, and we actually started doing web-based management in ’98. So we’ve been at –

John Willis: Right I remember that.

Winston Bumpus: We’ve been at it a while, and CIM today has grown to somewhere around 1,500 classes, some 4,000 properties, and I really look at it as the DNA of IT infrastructure. Every Windows and Linux machine, the hardware, virtualization, OVF – all of the pieces we’ve got are building upon that information model for management. It’s a lot about the definitions, relationships, methods, and things that are in it.

So we spend a bunch of time on that. We’ve also gone beyond the original POX version of CIM XML. In 1998, when we were doing web services – I mean, we were doing web services before SOAP: how do you leverage XML and HTTP to provide management? That was the whole concept behind web-based enterprise management. And so we’ve gone beyond that and worked on our SOAP-based protocol, WS-Management, which has gotten a lot of traction – like I said, it’s embedded in chipsets, both in desktops and servers, and it’s probably in every Windows machine, certainly since Vista, on WS-Management.

CIM has been in Windows since Windows 98 – a lot of people don’t realize it – and there are a lot of Linux distributions today that implement it as well. Now, moving up the stack, you know, certainly we’re tying that stuff to the CMDB work and the virtualization profile work that we’ve done, in addition to the cloud stuff we’re working on.

So we’re not designing – you know, I don’t think it’s the job of the DMTF to design a complete management system, but we certainly are trying to define the models, the data that we can agree upon, to provide interoperability in this space, and I think we’ve done a good job. On millions of machines today – I think you’d be hard pressed to find an x86 machine that didn’t have some piece of its management infrastructure built upon DMTF technology.
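(One small illustration of that ubiquity: on Windows, CIM-modeled data is exposed through WMI, and you can poke at it from Python with the third-party wmi package. A sketch, assuming that package and a Windows host:)

```python
# Sketch: reading CIM-modeled data on a Windows box through WMI.
# Assumes the third-party "wmi" package (pip install wmi) plus pywin32,
# and a Windows host; treat it as illustrative rather than a tested script.
import wmi

c = wmi.WMI()  # connect to the local machine's WMI/CIM repository

for os_ in c.Win32_OperatingSystem():   # CIM-derived OS class
    print("OS:", os_.Caption, os_.Version)

for cpu in c.Win32_Processor():         # CIM-derived processor class
    print("CPU:", cpu.Name)
```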

Michael Coté: In the discussions that John and I have, that’s one of the reasons that the DMTF comes up. Well, I guess one of the reasons is that we’re sort of – well, one of us is old enough, but we’re kind of experienced enough that we actually remember when all of the CIM stuff came out, and we get excited about, like, weird models of things like that.

But there is like a – and John works at a company that’s part of this, Opscode – there is this interesting kind of DevOps cloud sort of thing going on now, where – and Cisco speaks a lot to this with their Unified Computing stuff, and other people are beginning to as well – there is a re-emphasis on sort of model-based configuration management and automation and things like that, where people do get obsessed with that kind of modeling effort that CIM was trying to do, I guess, ten years ago now, if not a little over that, and it is interesting.

There is a continuous cycle of us reinventing stuff in IT, but it does seem like this is a good time for CIM to come back – like you were saying, with that ubiquity of modeling and everything that already exists – as a way of saying, here are the models that exist for IT, and then trying to figure out how to fit that into the way this kind of DevOps mentality or cloud mentality of people are trying to manage their IT.

Winston Bumpus: Right, and looking at CIM as a – it is an information model; it was always intended to be. I have often said that you don’t have to use every word in the dictionary to write a book, and CIM is really kind of the dictionary, or a blueprint, of all the technology, and I think we are still moving up the layers of abstraction to new model abstractions as we move into cloud. Look at OVF: OVF doesn’t implement CIM like 00:15:39 did, or any other – you know, the infrastructure, Pegasus or any of those other CIM object managers.

It is really a level of abstraction that represents those concepts at a much higher level, even in some of the cloud APIs. I think one of the things that’s really interesting is that OVF has been embraced not only in the VMware vCloud API, but in other cloud APIs – the one in OGF, for example, the OCCI, and others. And so all that stuff is based on CIM, and when you take a look at what is going on in SNIA as well – the Storage Networking Industry Association – all of that management infrastructure, and where they are moving into the cloud space, again, is built on those same fundamental building blocks.

So we are on a journey, and I think the things that we work on, John, are – you know, we need people to come in and say, this is something important. We don’t sit at the DMTF and say, what standard are we going to create today? It is driven more by customers coming in and saying, we’ve got a problem. Like in the case of SMASH, it was a group of Wall Street customers that just said, you know, we are not going to buy products that have different management interfaces; you guys go figure it out and come back when you have got it done. Those are really the driving forces here.

When somebody has got a problem, you need to go solve it and there is a bunch of people willing to go work on it.

Michael Coté: Yeah, it is us industry analysts who are just supposed to make stuff up out of thin air, so I am glad you are not getting into our business.

John M. Willis: Exactly.

Michael Coté: Sorry I interrupted you there John, go ahead.

John M. Willis: Yeah, no problem. So, the thing about – I was thinking about one of the things I was so excited about with CIM, you know, when we were making that kind of transition from, like, a MIF model to an object model: the difference, the way I wish you could describe it, is that what the objects and the abstraction were about wasn’t what’s on the machine, which was more of the MIF model; it’s about what we can do with the machine, you know what I mean?

I wanted to just give a little background. I spent a lot of blood and time in the Tivoli world trying to implement CIM-like architectures, and I always felt that there was – it seemed to be – and I agree it’s not the DMTF’s job – it’s like, if you look at what Microsoft did, they went really far with WMI and the idea, and they kind of stopped short. But I always thought that maybe CIM or WBEM would basically be the – have a way of showing how to use it, you know what I mean, tool set stuff.

How do I manage a system? That’s the place where I’ve always been a bit frustrated; it’s just, like, I wish I would have seen more adoption from a CIM perspective.

Winston Bumpus: You know, one of the things that we did there, John, that we didn’t have in the earlier days – and maybe again this is probably a function of the website – is we have actually gone and created all these things called profiles, and the profiles – again, they would be hard to find just looking from the outside – the profiles actually talk about, so, you know, let’s take a power supply, and I want to do power management, or I want to do power management on my system.

So, there is a power management profile, and in that power management profile there are a lot of use cases. So it is like – a use case in the power management profile is, how do I monitor the wattage of a power supply? How do I change power states on a power supply? That is what the profile does.

So, for each one of these – there is one for virtualization, for example, called the virtualization system profile. It says, how do I pause a virtual machine, how do I suspend it, how do I resume it, how do I start and stop it? I mean, these are very high-level examples, but there are profiles for each one of these things now, and I think there are probably 40 or 50 profiles out there that are basically, how do you take the model and the protocols and actually do something useful – and you might want to take a look at those, because those are –
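(Those lifecycle verbs map onto what most hypervisor APIs expose. As a loose analogue – not a DMTF/CIM profile implementation – here’s what pause/resume/start/stop look like through the libvirt Python bindings, assuming libvirt is installed and a guest named "example-vm" exists:)

```python
# A loose analogue of the virtualization system profile's lifecycle verbs,
# expressed with the libvirt Python bindings. Not a DMTF/CIM implementation;
# "qemu:///system" and "example-vm" are assumptions about the local setup.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.lookupByName("example-vm")   # look up the virtual machine

if not dom.isActive():
    dom.create()        # start it ("how do I start it?")

dom.suspend()           # pause the guest in place
dom.resume()            # and resume it
dom.shutdown()          # ask the guest OS to shut down cleanly

conn.close()
```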

John Willis: Yeah, I know – admittedly I have a lot more research to do, and one of the things I was going to do is try to invite myself into some of these discussions, because I’m very interested in where this goes – this is the right way to do it; I’ve always felt it is the right way. Now, onto another interesting thing: there was a specification – I’m thinking you’re the only other person on the planet that remembers this, and I’m hoping you are – so let me back off for a second.

Winston Bumpus: Okay, I can almost guess what it is but go ahead John.

John Willis: Yeah, all right, let me do the set-up, and that is – it’s funny, in this new DevOps movement there are a lot of themes about culture and, clearly, automation, but one of the things I think DevOps has really been driven by is self service, the ability for developers to really kind of control their destiny, right?

I mean the idea that they can get instances with APIs, and with products like configuration management and automation they can actually go the last mile and deploy and integrate stacks. But one of the things that the DevOps community seems to be bringing up a lot – and I’m like, ah, I think this has happened before – is the idea that developers – you know exactly where I’m going, but let me say this – that developers should be able to define the manageability of their applications. The voice is getting louder, and louder, and louder, and I’m like, this has been done before, and it was called the Application Management Specification. It somehow died an ugly death.

Winston Bumpus: Well, so here’s the issue – one of the books that I co-authored, around 1999, and at that time it was probably the only book written on the subject, was called “Foundations of Application Management”. The title is almost an oxymoron, because applications really aren’t manageable – and I know the AMS stuff that you were working on at Tivoli, which is that whole Application Management Specification, which is the whole concept of, let’s instrument applications at the time they’re manufactured for manageability, rather than trying to bolt stuff on later on and figure out how to do it, right?

John Willis: Right.

Winston Bumpus: This is — you know and stuff like ARM, which is the — ARM which is –-

John Willis: Well, I’ve never been a big fan of ARM, but it works. AMS was going, it’s like: here are the things I know need to be modeled about myself.

Winston Bumpus: Yeah, yeah.

John Willis: Here are the events that are very important to me, here are my inventory signatures. It was rudimentary back then, but if you look at what we’re doing in WebOps and DevOps, I mean, they’re doing this kind of stuff with Cucumber; they’re kind of defining it on their own now.
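(As a rough illustration of what John is describing – an application shipping its own management description at build time instead of having it bolted on afterward – here is a hypothetical sketch. The manifest structure and field names are invented for illustration; they are not part of AMS or any DMTF standard.)

```python
# Hypothetical sketch of an application describing its own manageability, in the
# spirit of the old Application Management Specification (AMS): the app declares
# the events it emits, its inventory signature, metrics, and health check.
# The manifest structure and field names are invented for illustration.
import json

MANAGEMENT_MANIFEST = {
    "application": "orders-service",
    "version": "1.4.2",
    "inventory_signature": ["orders-service.jar", "orders-schema.sql"],
    "events": [                          # events the app promises to emit
        {"name": "order.failed", "severity": "error"},
        {"name": "queue.depth.high", "severity": "warning"},
    ],
    "metrics": ["orders_per_second", "queue_depth"],
    "health_check": {"http": "/healthz", "interval_seconds": 30},
}

def describe() -> str:
    """Return the app's management description so tooling can discover it."""
    return json.dumps(MANAGEMENT_MANIFEST, indent=2)

if __name__ == "__main__":
    print(describe())
```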

Winston Bumpus: You know, what I think is happening in the IT space – and even if you look at your desktop – is that there was a time when describing all the applications and the relationships between them was somewhat possible, but now the complexity is such, and I have to say particularly in the Windows environment – I tell people you don’t install software, you compile it into the OS – that it makes it really difficult to separate out and manage those individual components.

So I have to tell you, that’s part of the reason I think I emotionally punted on this issue, and why OVF is so charming: we get a new layer of abstraction. It’s like, we’ve got all these thousands of components, we put them together, they work, it’s perfect – great, dip it in lacquer, and now let’s deal with managing that, rather than dealing with the complexities of all the components in an application, in the platform itself, and in the OS.

John Willis: Well yeah, so I think that’s the difference though, that’s the defining line. So the application developer never worries about the underlying infrastructure, right?

Winston Bumpus: Right, right.

John Willis: That’s seamless. But I was a firm believer in AMS, and I spend my travel dealing with this DevOps community – I’m pretty heavy into it – and I mean, the crime is, people are doing this now at the app level. The developers want that last piece: they can automate everything about what they’re doing, except they can’t automate the manageability of their apps, and that’s still –-

Winston Bumpus: Do you know what that piece was, John? This is certainly an area that I would — because I was chair of the software MIF work that we did early on in ’93; that’s how I got involved in DMTF. I said, we’re doing this software management, maybe we can create a standard software MIF. And when CIM came along, I immediately ramped up the application management group – I think it was called the application workgroup – to do the CIM model.

Certainly there’s still a CIM model of the application, but the software developers themselves never had a strong desire to instrument software for management; they assumed the platform was going to take care of it. The platform, once it got the application, didn’t know enough about the application to do it. And the tools, which are the third leg of the stool, didn’t really have the expertise to do it either.

So AMS was a great concept, but there was never buy-in from the ISVs on it. I’ve had discussions in the past in the DMTF about how we could attract or get interest from the ISVs, but they were too busy trying to get that next product out the door ahead of their competitors, and the customers they were dealing with were willing to accept that it didn’t have the stuff embedded. But I feel it’s an unsolved problem out there, and it’s worth fighting the challenge; maybe there is a different way, and a different constituency, that may be interested in solving it.

John Willis: Well, I think the problem still exists, and I think, as you said, AMS was premature, right? We didn’t have the technology; the idea of developer self-service didn’t exist. I firmly believe – if they made me king for a day, I would dust off AMS. Clearly there would be a lot of things that would have to be re-factored, but when I go around telling people and explain this [inaudible 00:26:42], I’m like, you know, there was actually a standard for this, and we actually addressed all these concerns. I think it would be interesting if the DMTF dusted that off and said, we had this thing. This is a growing space – at least I see it hard in the WebOps, large-scale, new type of fast-growing infrastructure datacenters, the Flickrs, the Twitters, those guys – and it would be a nice way to drive DMTF into that world.

Winston Bumpus: Yeah, and to be honest, John, it would really take two or three companies – or a group of two or three guys from different companies – that would say, gee, let’s go work on this problem, to drive everybody there. It’s hard for a platform vendor to drive that; it’s really something that needs to be driven by the ISVs. So that’s just my sense.

Michael Coté: Yeah, and I think that’s one of the – it’s not really a side-effect, but one of the exciting effects of all the cloud stuff and DevOps is that we’re kind of emerging from this drought where developers were trained not to care about infrastructure, where it’s not cool for developers to care about all this sort of stuff. Personally – blame is the wrong word – but I attribute it to the kind of write-once-run-anywhere mentality of Java and .NET and everything, where there are various runtimes and programming languages that “protect you from the infrastructure” and you’re supposed to be ignorant of all that.

And then with the high-scale web application stuff that cloud computing emerges from – it’s funny, cloud computing is all about knowing the infrastructure that you have and programming that infrastructure, or, as people say, treating it as code. It does seem like a rare opportunity for things like AMS to sort of get traction again, because there is a certain level of maturity you get to in IT management where you start to sit around and think: if we could only get those damn developers to write their stuff so it’s easier to manage, things would run a lot better.

John Willis: Yeah exactly.

Winston Bumpus: Well, maybe, and the truth is, if we can really get the development platforms to integrate that in at build time —

Michael Coté: Exactly. And to the point of what you’re saying about having ISVs do it: there needs to be demand from everyone that translates into tools doing it and people supporting it, because no matter how nice some theoretical efficiency seems – what was it you quoted some time ago? – unless it makes money, saves money, or does something for the government, no one is really going to care about it in IT. So it needs to fit one of those, and it does seem like the promise of the cloud stuff – we’ll see if it plays out – is one of the first two things. I don’t think the government is involved at the moment, but at least it’s good motivation to do things.

John Willis: I have to say –-

Michael Coté: I mean, I should say the government is involved, but I don’t think the government is mandating a bunch of cloud technologies, beyond using them for their own purposes and things like that.

Winston Bumpus: Well, the CIM work – just kind of an update. People have been following the [inaudible 00:29:53] mantra of moving the government to the cloud, with [inaudible 00:29:58] the first CIO across all the government agencies. The DMTF has been working pretty closely with them on some of the standards, and they even announced on May 20th this thing called SAJACC, the Standards Acceleration to Jumpstart Adoption of Cloud Computing.

Michael Coté: And hopefully it will be MC’d by Wheel of Fortune’s Pat Sajak, that would be fantastic.

Winston Bumpus: Exactly, I think it would be perfect. I was trying to think of an acronym for PAT, to make it PAT SAJACC. But they’re really looking at this whole program of looking at what standards are available, where the gaps are, and then actually road-testing these things to say, yeah, these things really work, and then putting them out there.

So it’s really kind of interesting to see the government being gutsy enough to take a lead on some of this stuff. There’s certainly what’s happening with NASA and the Nebula Project, which Chris Kemp has been driving from NASA, but a lot of the government agencies are all moving down this road – Apps.gov, which has been put up with lots of samples of things.

We are on the edge, and it’s going to take some guts and some innovation to move to the next layer, but it’s a pretty exciting time. So my “make money, save money, or government regulation” isn’t too far off, and certainly OVF is one of the things they are pretty excited about. But we’ve still got the app side, which I think is an issue.

One of the issues that’s come up recently – and I’ll just share this with you because it’s kind of interesting – is license management. That’s another whole issue. As we start moving workloads around in the cloud, how do we track and monitor usage – not necessarily enforce it, but how do we track it? We’ve got all these licenses that we’ve deployed, and what’s the license model we need to be tracking? I think customers are starting to get pretty nervous about moving stuff around and not being able to make sure they’re in compliance. So there are lots of little challenges we have yet to solve.

John Willis: I wanted to ask one more question about OVF and play devil’s advocate a bit. I think OVF really is the primary standard right now for what’s going on in the cloud, and it’s obviously got a lot of adoption. But one of the things I say a lot when I’m presenting about operations and new operations infrastructure is that operations is not about cloud, it’s about a cloudy world.

And what I mean by that is, the reality is that there are a lot of flavors of services – bare metal isn’t going away anytime soon, you have different versions of virtualization, and then you have the extreme hypervisor abstractions like the Amazons and some of the other cloud players. And I’m wondering if OVF didn’t dive too deep into a point-in-time view of the way the world looked, focused on the virtual instance – kind of a coupled version of an instance – where many of the cloud providers today are very much decoupled in their services: their storage and other services, like key models and different things like that, aren’t coupled to the instance.

Do you agree that maybe there needs to be some type of divergence now, and where does OVF go to address that, if you believe it’s a potential issue?

Winston Bumpus: Yeah, and there have been lots of discussions, as you can imagine – everything from using OVF for bare-metal provisioning, which is certainly one extreme, to discussions about where SLAs fit.

So you can do extensions with OVF, and some people have started to put SLA-like things – Service Level Agreement-like things – in OVF. Like, not only do I need this many processors or this much memory, but I want to get a certain kind of response time or something. So the question is really where all of that belongs, and there are starting to be discussions that maybe this stuff needs to be external and referenced from OVF rather than being part of OVF. So that’s one piece of the equation.

The other piece is that OVF is really about deployment, but people are starting to use information in OVF further down the lifecycle – you talked about lifecycle management. OVF can represent three or four different virtual images that might make up a web server, a database server, and some middleware. You can deploy that, and then you can say this is the sequence I want to start them up in and, well, maybe that’s the sequence I want to take them down in. And then you cross the line: people are using it for that. They’ve crossed the line from deployment to taking some of the information in OVF and using it for run-time management.

So there are lots of questions yet, because it’s moving in lots of directions, and there are also questions about the granularity of the apps and configuration the image actually represents, and about managing that. Now that I’ve deployed it, I want to apply some patches or do some configuration management – how much do I have there?

So the discussion is moving in all kinds of directions. I think we’re trying to move fairly carefully on that, but at the same time the extension mechanism is letting people experiment outside of its original intent.
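(To ground that OVF discussion: here is a minimal sketch of reading the start-up ordering out of a simplified, hypothetical OVF envelope fragment with Python’s standard XML library. A real descriptor carries much more – disks, networks, a virtual hardware section per system – and has to validate against the DMTF schema; this only illustrates the deployment-versus-lifecycle information being discussed.)

```python
# Minimal sketch: reading start-up ordering from a simplified, hypothetical OVF
# envelope fragment. Real OVF descriptors are far richer and schema-validated.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

DESCRIPTOR = f"""<Envelope xmlns="{OVF_NS}" xmlns:ovf="{OVF_NS}">
  <VirtualSystemCollection ovf:id="three-tier-app">
    <StartupSection>
      <Info>Start the database first, then middleware, then the web tier</Info>
      <Item ovf:id="db"  ovf:order="0" ovf:startDelay="30"/>
      <Item ovf:id="mw"  ovf:order="1" ovf:startDelay="15"/>
      <Item ovf:id="web" ovf:order="2" ovf:startDelay="0"/>
    </StartupSection>
    <VirtualSystem ovf:id="db"/>
    <VirtualSystem ovf:id="mw"/>
    <VirtualSystem ovf:id="web"/>
  </VirtualSystemCollection>
</Envelope>"""

def startup_plan(xml_text: str):
    """Return (vm id, start delay) pairs in the order the VMs should be started."""
    root = ET.fromstring(xml_text)
    items = root.findall(f".//{{{OVF_NS}}}StartupSection/{{{OVF_NS}}}Item")
    items.sort(key=lambda i: int(i.get(f"{{{OVF_NS}}}order")))
    return [(i.get(f"{{{OVF_NS}}}id"), int(i.get(f"{{{OVF_NS}}}startDelay", "0")))
            for i in items]

if __name__ == "__main__":
    for vm_id, delay in startup_plan(DESCRIPTOR):
        print(f"start {vm_id}, then wait {delay}s")
```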

John Willis: Yeah, and I mean, I also have a horse in this race, but I think that’s the important thing about configuration management. There are two schools of thought: there’s the school of thought that you can solve all your cloud problems with images, and there’s the school of thought that you solve cloud problems – or even virtual-instance problems, cloud or not – with just enough operating system, and you use configuration management, or desired state, and system integration to glue things together.

That way it’s always a defined-state update, and you can blow away systems at any point. So I think that’s a piece that clearly needs to be baked in at some point.
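(A minimal sketch of the desired-state idea being described – declare what a node should look like and converge it idempotently – written as plain Python rather than any particular tool’s DSL. The resources, paths, and helper names are invented for illustration.)

```python
# Minimal sketch of desired-state configuration: declare the end state, check
# what is actually on the node, and only change what differs, so the same run
# can be repeated (or a fresh node rebuilt) at any point.
# The resources, paths, and helper names are invented for illustration.
import os
import subprocess

DESIRED_STATE = [
    {"type": "package", "name": "nginx"},
    {"type": "file", "path": "/etc/motd", "content": "managed node\n"},
]

def ensure_package(name: str) -> None:
    # Idempotent: installing an already-installed package is a no-op.
    subprocess.run(["apt-get", "install", "-y", name], check=True)

def ensure_file(path: str, content: str) -> None:
    # Only rewrite the file if it differs from the declared content.
    current = open(path).read() if os.path.exists(path) else None
    if current != content:
        with open(path, "w") as f:
            f.write(content)

def converge(resources) -> None:
    for r in resources:
        if r["type"] == "package":
            ensure_package(r["name"])
        elif r["type"] == "file":
            ensure_file(r["path"], r["content"])

if __name__ == "__main__":
    converge(DESIRED_STATE)
```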

Michael Coté: It’s the blobs versus the scaffolding approach; it’s a nice divide to have between the two. Well, I think that was the overview and update I was looking for, and John was good about hitting up some of the ongoing questions we’ve always had. So unless there’s any other update on what the DMTF is up to, I think that’s a good way to close it out.

Winston Bumpus: I think it’s terrific. I appreciate the time, and it’s always good talking to you, Michael and John. There’s a lot of good stuff going on in DMTF; if anybody listening to this wants to get involved, go to dmtf.org, and if you have any questions you can always pop me an email at president@dmtf.org. I’m glad to answer any questions or provide any help I can.

Michael Coté: Yeah, it sounds like there’ll be an exciting new website sometime in the future, as you referenced several times.

Winston Bumpus: Yeah, we may come up with two.

Michael Coté: Well, we really appreciate you being on; we’ve been looking forward to it, so that’s fantastic. Thanks so much.

Winston Bumpus: Great. Thank you very much.

Michael Coté: And with that, we’ll talk with everyone next time.

Disclosure: OpsCode, where John works, is a client. Be sure to check the RedMonk client list for other relevant clients.

Categories: IT Management Podcast.

Media Synching – a squiggly can of worms

Old iPod

Ever since Napster made us (well, not me, of course!) all into casual pirates, entertainment and technology companies have been wrestling with whether to lock down digital media or keep it as freely copyable as possible. Now, with many people toting around multiple devices – phones, MP3 players, laptops, desktops, work machines, etc., not to mention “traditional” devices like stereos and TVs – the need to move your media freely between devices is felt more painfully than ever.

That ability to watch “my media” anywhere, at any time, has yet to be fully realized for the mass market. If you’re a geeky enough person with some spare hardware, bandwidth, and media rippers, you can do extremely well – but that’s a far cry from the ease of pushing the power button on your TV, inserting a movie into your DVD player, or touching play on your iPhone.

Erica Naone at the Technology Review recently asked me about these issues in relation to a story on Libox. In addition to the part she quoted, I replied with the following:

The core issue is that the easier it is to sync media, the easier it is to share media, and the easier it is to pirate media. That said, with DRM encoded into the media, the point becomes somewhat moot. But, thanks to the “information wants to be free” crowd and the annoyances of DRM (like not being able to burn a CD with music you think you “own” and can do anything with), there’s been sufficient pressure to remove DRM from popular digital media distributors like iTunes and Amazon. Without DRM, making music syncing easy is a nightmare for traditional businesses that are built around using copyright to make money. Companies who own the copyright on media want to extract as much money as possible out of each asset; trying out new business models is scarier than doing “what works.”

That said, as a consumer it’s incredibly annoying that things like the iPod and iPhone have this one-to-one relationship with your desktop. You can’t really sync an iPod to more than one computer, and forget doing it over the air. For as innovative as Apple is in this space, they still have a quaint lock-in to the desktop: you have to plug your iPhone or iPod into a machine with iTunes running on it to sync music, and you can only pull media from one such iTunes instance. It’d be an obviously handy feature to be able to sync from the cloud instead of a desktop (it may be slower, but the convenience would seem to win out over time), and I’m hoping Android pressures everyone (e.g., Apple) to do so.

Categories: Quick Analysis, The New Thing.
