MMS 2007: IT modeling with SML, and the current state thereof

The model of a system or service is created from a collection of definitions from one or more SDM
models defining the individual components. The basic structure for this model is typically specified
during the development process, by architects, software developers, and from feedback from IT
operations and support staff. This base definition serves as the skeleton on which all other information
is added. In addition to the structure, the system model will also be able to contain deployment
information, installation processes, configuration information, events and instrumentation, automation
tasks, health models, performance and capacity models, operational policies, and so on. However, this
is just the starting point: other information can be added by operations staff, vendors, and management
systems throughout the life of a distributed system. Potentially any development or management tool
may contain authoring tools for creating and extending SDM models; likewise, any application may
express its configuration, desires and constraints in an SDM model.

It is also important to note that SDM does not replace instrumentation such as CIM or WMI; it simply
provides a layer of model above the instrumentation, describing the desired state, interconnected
relationships and management policies of the distributed system.

“Understanding the SDM to SML Evolution”

Last week at MMS 2007, and this week following up with some reading, I caught up with Microsoft & friends’ IT management modeling efforts: most recently, the Service Modeling Language (or SML) and its predecessor, the System Definition Model (SDM).

In brief, my take-aways are:

  • SML is still very much in development as a standard. While Microsoft is using SDM to represent IT run-time requirements in Visual Studio Team System, the migration of SDM into the (recently) open-standards-track SML will come sometime in the future, definitely not now. That said, sure, SDM is there as a data model now if you’re interested, but it’s a Microsoft data model.
  • There seems to be a genuine, in-action effort by Microsoft to make its IT data model into an open standard, whatever that data model may end up being. Not only has Microsoft been pursuing SML as an industry-led standard, they’re trying to get SML run by the W3C (the reasoning being that the W3C would seem to be the best place to develop XML/WS-* standards). Sam Ramji also points out that Eclipse COSMOS is doing something with SML.
  • So far, SML and SDM seem to come from the school of complex, WS-* think rather than stripped-down, lean REST approaches. The funny thing is, once you actually dig up an example of using the data model and strip away all the SOAP envelopes and the weird ways of referencing and describing relationships to other IT assets, the XML is quite simple. After all, most IT assets (as JMX MBeans show in their own OO-complicated way) are just bags of name/value pairs, with types if you’re snooty. As always, all that SOAP-y ceremony and its reliance on tools over text editors results in complexity. (A quick sketch of that stripped-down XML follows this list.)
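
To make that point concrete, here’s a rough, entirely hypothetical sketch of a single IT asset once the envelopes and reference machinery are stripped away. This is not actual SDM or SML syntax, just the bag-of-name/value-pairs shape that’s left over:

```xml
<!-- Hypothetical, stripped-down asset record: not real SDM or SML syntax -->
<asset id="web-01" type="WebServer">
  <property name="operatingSystem" type="string">Windows Server 2003</property>
  <property name="memoryMB" type="int">4096</property>
  <property name="ipAddress" type="string">10.0.1.15</property>
  <!-- relationships boil down to "this thing points at that thing" -->
  <relationship type="hostedOn" target="db-cluster-03"/>
</asset>
```

Wrap that in an envelope, a handful of namespaces, and a reference scheme, and you have the spec-sized version of the same information.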

Data Models in IT Management

Having developed IT management software while at BMC, I know the pains of coming up with a way of representing, or modeling, the IT landscape. The problem is the wide variety of “raw things” out there in hardware, but especially in software. More so than coming up with some way — now, always in XML — to represent a server, a business application, or a transaction, the key to a model is having more than one product use that model towards the goal of interoperation between different systems.

IT management is, almost by definition, about making a bunch of different pieces of IT work together. At the very least, this environment demands the ability to normalize all those different IT assets into one conceptual framework. The ultimate goal is establishing one view of IT, or as few views as possible, allowing you to more easily answer the question, “is IT helping me make money or taking away money?”

That normalization is (theoretically) made possible by jamming everything in the IT world into one, common data model. Every machine, no matter its make and model, has some amount of memory, some amount of storage, a collection of applications running on it, network connectivity, and the cascading hierarchy of attributes associated with each of those things. And then there are “services,” transactions, and other metaphoric (or “logical”) IT assets to keep up with. As an industry, we lack that common data model. Many companies and standards bodies have tried, but the efforts have tended to create more ways of representing IT rather than fewer.
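
By way of illustration, here’s a hypothetical sketch of that common model idea for a single machine. The element names are made up, but the cascading hierarchy of attributes is the point:

```xml
<!-- Hypothetical common model for one machine; names are illustrative only -->
<machine name="app-server-07" vendor="Dell" model="PowerEdge 2950">
  <memory totalMB="8192"/>
  <storage>
    <disk device="C:" sizeGB="146" raidLevel="1"/>
  </storage>
  <network>
    <interface name="eth0" ip="10.0.2.31" speedMbps="1000"/>
  </network>
  <applications>
    <application name="OrderEntry" version="3.2"/>
  </applications>
</machine>
```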

I would argue, in typical naive code-monkey fashion, that coming up with a simple, low-fat way of representing any given IT asset is relatively simple. What’s difficult is getting everyone to agree on that spec and continually keep up with it. More dangerous is the temptation to put everything in the spec, following a big bang approach, rather than working on the simple things first — just representing a physical computer, a service, or even an application — and releasing versions of the spec incrementally.

The worst danger is coming up with and implementing a standard in a closed manner, finally emerging after the big bang and laying out The Almighty Thud.

Avoiding these traps is even more pressing for IT management vendors now, in the world of CMDBs. In this context, the goal of a CMDB is doing the normalization described above, and CMDBs have been sold accordingly. Over the next few years, as CMDBs are not only put in place but put through the mucky sloggery of day-to-day life, we’ll have an emperor-has-no-clothes situation if each CMDB is simply another proprietary data silo that has to be integrated with other systems. That is, instead of being the single source of truth, it’ll be just another wild-eyed piece of vendor IT in the wilds, shimmying its way into quarterly numbers with Vision and Road-maps.

(Of course, some would argue that there’s plenty of nude emperors mobbing the village alleys at the moment.)

What is SDM and SML?

[Image: SDM Boxes]

Microsoft’s go at an IT management data model started some years ago with the System Definition Model, or SDM. SML is the evolution of SDM into an industry standard and, it’s hoped, an open one. That is, Microsoft developed its own, in-house model for IT assets and is rolling it into a standard.

Key to, if not at the center of, Microsoft’s overall IT management vision/plan (the Dynamic Systems Initiative, DSI) is the idea of modeling all of IT both ahead of time (a priori) and in real-time. Old hands will see that part of this is, essentially, the idea of a CMDB (the database that you store records of all your IT assets in): if you’re going to manage a chaotic, sprawling landscape of IT assets, you need to meticulously keep track of just what’s in that landscape. Indeed, SDM and SML will provide the core Configuration Items (the IT assets or “things”) for Microsoft’s CMDB.

What’s different in Microsoft’s approach with SDM is the idea of defining these models ahead of time, during development. Microsoft is big — big! — on getting developers to define these data models and associated artifacts when they’re developing software. In this case, the deliverable is a “Management Pack” that not only contains the data models that represent the structure of the software, but also the “knowledge,” or best practices, ideal configuration, and even management tasks (how to recover from bad states).

At the moment, IT architects who are working in the Microsoft world can use SDM to define the data center requirements for any given piece of software; that is, express the operational requirements for code. When a developer is working in Visual Studio Team System, they can refer to and build to that SDM model. Internally, of course, all Microsoft products are supposed to be developed with Management Packs.
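
As a purely illustrative sketch (this is an invented stand-in, not the actual SDM or Management Pack schema), the kind of thing such a model expresses might look like:

```xml
<!-- Invented, illustrative sketch of operational requirements and "knowledge"
     attached to an application; not the real SDM or Management Pack format -->
<applicationModel name="OrderEntryService">
  <deploymentRequirements>
    <requires os="Windows Server 2003" minMemoryMB="2048" iisVersion="6.0"/>
  </deploymentRequirements>
  <healthModel>
    <monitor name="QueueDepth" warnAbove="500" criticalAbove="2000"/>
  </healthModel>
  <recoveryTask trigger="QueueDepth.critical" action="RestartService" service="OrderEntrySvc"/>
</applicationModel>
```

The appeal is that the people who wrote the code fill in the warning/critical numbers and the recovery steps up front, instead of leaving operations staff to guess at them after the fact.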

This is worth calling out because, from my view, almost all IT management vendors have long given up on instrumenting ahead of time. Most effort is put into working with the management data and interfaces that are available (SNMP, WMI, JMX, SSH screen-scraping [yow!]), or retrofitting applications with virtual machine magic (Wily, Identify, and AppVision). Low-level systems management is very much a lowest common denominator affair: good luck!

As long-term readers can probably imagine, I’m quite skeptical of the idea that developers will take the time and effort to create Management Packs ahead of time. Don’t get me wrong, I understand and appreciate the value of doing so: it only takes a few weeks of doing third-level support for an application developer to get the fever for instrumenting the crap out of their application. The problem is getting developers to appreciate the suffering they’d avoid by doing that extra work.

The PowerShell Honey Pot

As a side note, Microsoft has a great asset in PowerShell for getting over that hump. While it may be extra work, PowerShell is fun — a shiny object — and the draw of the new and cool is irresistible. As we’ll get into below, PowerShell commandlets are a breath of fresh air compared to the complexity of SML and, I assume, Management Packs.

Co-creation of Management Packs

One interesting pivot here is to consider what role collaborative systems management could play in defining Management Packs. Users would upload the Management Packs they’d created to a central site, after which other users could browse them, pull them down, use them, and even refine them. Open source data modeling, run through with co-creation, if you will.

From a completely different domain, Adobe’s kuler is an extremely simplified version of this co-creation idea around color swatches, while, back in the IT management domain, Klir is offering early, but fleshed out, versions of the idea for thresholds and reports. Many other people I’ve spoken with are “working on” similar things, so hopefully we’ll see more examples soon.

Openness Driving Simplicity

When it comes to any standard, the problem is also making the standard simple enough for developers to both understand and implement, with or without tools. As with most XML and WS-*-inspired spec efforts nowadays, those are the aspects I’m most worried about when it comes to SML. I’ll let the first pass at SML speak for itself, though: go check it out.

Now, at the moment, SML is only the first part of the overall goal of providing a model for IT. The SML 1.0 spec, and its friend the SML Interchange Format spec (SML-IF), simply lay out the structure for describing how a pile of IT assets are related. If you’re looking for how to represent a storage device, say, you won’t find it.
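
To give a feel for what that wiring-up looks like, here’s a loose sketch of the kind of inter-document reference SML deals in. The element names and namespace below are approximations rather than text lifted from the spec; the point is simply that an element in one document points at an element in another, with XML Schema and Schematron rules riding along to constrain what’s allowed:

```xml
<!-- Loose, illustrative approximation of an SML-style reference;
     names and namespace are placeholders, not copied from the spec -->
<WebApplication xmlns:sml="urn:example:sml">
  <Name>OrderEntry</Name>
  <!-- the application points at the host it runs on, defined in another document -->
  <Host sml:ref="true">
    <sml:uri>http://example.org/model/servers.xml</sml:uri>
  </Host>
</WebApplication>
```

SML-IF then bundles a pile of such documents (schema definitions, rules, and instances) into one package so they can be shipped between tools. What a server or a storage device actually looks like inside those documents is left to whoever defines the models.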

Presumably, that’s the next thing to pull out of the in-house world of SDM into the open-house world of SML. I’ve been told that the next step is releasing CML, the Common Model Library. CML would contain more of the actual semantics of IT assets, rather than a scheme for packaging and wiring them up as SML 1.0 does.

Microsoft tends to develop standards behind closed doors, often with other vendors, and then release them into the wilds. The explanation here, of course, is that it’s more efficient that way. I can only really shrug at that idea; there’s truth to it, but it overlooks one of the (painful) features of open standards development: you’ll be told “no” more often than when you’re behind closed doors. While that sounds nutty, the result, as the OpenID and microformats folks (there’s wide cross-over between the two groups) are showing us, is hopefully a spec that’s developed more rapidly and actually does what more people, including end-users, want in the end.

My hope is that the SML crew — it’s more than just Microsoft now — will continue opening up the spec, down to the lowest levels, as fast as they can. It is at the W3C, after all. Maybe not the coolest standards body of late, but an open standards body with a good rep, no matter what the RDF and WS-deathstar haters or Web 2.0 cool kids may say. The question is: will that be fast enough to ensure simplicity over complexity?

Disclaimer: Microsoft is a client, as are Adobe, BMC (Identify), and Sun (JMX).


Categories: Conferences, Systems Management.


6 Responses

  1. This entry states that SML is unduly complex and related to SOAP. Complexity is relative – writing a "hello world" program may be complex for someone who has no background in programming but trivial even for a novice programmer. SML is based on XML Schema and should not be complex for anyone who understands XML Schema. It is not related to SOAP – you don't need to know SOAP to understand SML and the SML specs do not use any SOAP concepts.
    SML is a general-purpose modeling language and is not meant to define any IT-specific models. The CML (Common Model Library) effort will create a standard library of SML-based common models for IT services and systems.

Continuing the Discussion

  1. […] I mentioned kuler in passing. Of all the Adobe SaaS apps, it’s both the simplest and coolest. (OK, maybe […]

  2. […] like self-flagilation (read: are a programmer) but not really what we’d like to have. While SML and SDM look to be baroque XML, it’s a nobel effort. What’s depressing for me is how slow and how opaque the process […]

  3. […] that Microsoft was going nuts over the general idea of an IT management modeling specification with their standardizing of SDM to SML. Also, it should be noted, the DMTF’s CIM found wild success in Redmond: it’d be […]

  4. […] and whitepapers around the Common Model Library (CML), which is a further evolution of the SML family of IT Management data models. There’s a large cross-vendor effort, similar to the CMDBf, but there doesn’t seem to […]

  5. […] Hearing that William worked on SML, we launch into a discussion of the evolution of SML, CML, and it’s origins in Microsoft’s SDM, MoF, and then CIM. As background, I tend to write-up posts on that topic when at Microsoft events like MMS or TechEd, for example, this one from MMS 2007. […]