While collaborative, UI-driven Web 2.0 technologies get most of the attention in the “Enterprise 2.0” discussion, there’s plenty of (seemingly “boring”) value in enterprises simply sharing their meta-data with each other across their firewalls. In the simplest case, that means sharing raw numbers to establish industry benchmarks. To pick one example, take software usage. How many people are using IBM’s ESB or MuleSource’s ESB? In what ways are they using them? Did they stop using them?
Imagine a scenario of an enterprise architect trying to pick which identity management system to use. There’s really no Amazon you can go to and see how different users of an IdM system rated the experience (though there are attempts in other areas). Instead, we have industry analysts, peer groups, the press, and Google Blog Search to figure it out. We don’t even have the raw numbers to know how many instances are installed. Sure, vendors can collect such stuff for you, but that’d be like relying on a publisher to tell you if a book they’re selling is good or not. Worse, a music label or movie studio.
Tracking Open Source Usage
Along these lines, Chris Kanaracus wrote up OpenLogic’s Open Source Census today. We’ll see what data OpenLogic puts together and, more importantly, how many enterprises decide to participate by installing the OSS Discovery agent.
But, at least in the niche of open source software usage, the idea here is to pool together the collective intelligence about product use from all enterprises to help those and other enterprises make build/buy decisions instead of (more likely: in addition to) relying on vendors, analysts, and other middle-men in the process of doing IT procurement.
To me, the new thing here is tracking usage, not just downloads, of open source software. Right now, downloads are all we really have and they’re not indicative enough of the success of any given piece of open source software. You can download a piece of software once, for example, MySQL, and then spin up 50 instances. Spread that across different companies, projects, lines-of-business, etc., and things get weird fast. Or, you could download a copy of MySQL, find it doesn’t work for your needs, delete it, and that download still counts as a “positive” tick in the download metrics.
When trying to evaluate the “worth” of a piece of open source software, then, more important than downloads is the number of instances actually in use and, if you’re lucky, how long they’ve been in use and when they’re taken out of commission. These kinds of stats get you to notions like “Linux just works, end of discussion,” which is the kind of industry-wide assumption (read: marketing and branding, never mind the facts positive or negative) that very few open source products have attained.
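To make that distinction concrete, here’s a minimal sketch of the kind of aggregation an instance-level census enables. All of the data, field names, and reporting format below are hypothetical; this isn’t how OSS Discovery actually works, just an illustration of why active-instance counts tell a different story than a download tally.

```python
from datetime import date

# Hypothetical per-instance check-in records: one row per discovered
# instance, with a first-seen date and an optional retirement date.
instances = [
    {"product": "MySQL", "first_seen": date(2007, 6, 1), "retired": None},
    {"product": "MySQL", "first_seen": date(2007, 6, 1), "retired": None},
    {"product": "MySQL", "first_seen": date(2007, 9, 1), "retired": date(2008, 1, 15)},
]

# One download can fan out into many instances (or into none at all).
downloads = {"MySQL": 1}

def active_count(records, product, as_of):
    """Count instances installed on or before `as_of` and not yet retired."""
    return sum(
        1 for r in records
        if r["product"] == product
        and r["first_seen"] <= as_of
        and (r["retired"] is None or r["retired"] > as_of)
    )

print(downloads["MySQL"])                                   # 1 download...
print(active_count(instances, "MySQL", date(2008, 4, 1)))   # ...but 2 live instances
```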
Keeping A Gun Trained on Your Foot
There’s an incredible cultural problem here that creates much of the money-making potential in the enterprise software world: the people in enterprises seem unwilling to share basic IT data with each other. The idea of installing an agent behind your firewall that surveys your IT assets and then sends it up into the cloud has problems six ways from Sunday for most of those said enterprise people. You just don’t let any information about your corporation beyond the firewall. It’s the Enterprise 2.0 equivalent of privacy policy freak-outs in the Web 2.0 world.
Instead, why not pay analysts lots of money to tell you that information? Or just fly blind with no data?
Tongue pulled back out of cheek, total information lock-down carries a very pragmatic cost. The benefits of applying all this “2.0” thinking to enterprises usually come from relaxing exactly that kind of information hoarding, if only between groups inside an enterprise. Spreading that collective knowledge, even for something as simple as usage numbers for open source software, would be an incredibly novel and valuable asset if only enterprises would let it happen. They’re not the only barrier, of course: large vendors, if that knowledge disfavors them, would no doubt rather keep the status quo in effect as well.
Telemetry Help From SaaS
This is also the kind of telemetry, as Stephen likes to call it, that SaaS-based enterprise software could and should start using. Companies like RedMonk client FiveRuns, who do SaaS-based IT management, are storing all of their customers’ IT assets up in the cloud. So are Splunk, Paglo, the SaaS version of Sun’s xVM, Canonical’s Landscape, and SourceLabs. From a different domain, Genuitec’s Pulse pools similar data on Eclipse usage. And there are plenty of other outfits sitting on such pooled usage data.
Peering through that data to find positive and negative patterns in IT is the abstract idea of what collaborative IT management is all about. As a simplistic example: “60% of the people who use XYZ storage arrays also use ABC storage management software.” Mint, from the consumer world, has a similar use case: comparing your spending to others in your region.
I assume Wesabe does the same.
Now, if you had actual spending data mixed with usage data to compare across enterprises, you’re talking about all sorts of high-priced middle-man disintermediation for enterprises.
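As a back-of-the-envelope illustration of that kind of co-occurrence stat (the product names and inventories below are made up), it’s really just a ratio computed over pooled, anonymized per-company product lists:

```python
# Toy pooled inventories: which products each (anonymized) company runs.
pooled = {
    "company-1": {"XYZ arrays", "ABC storage mgmt", "MySQL"},
    "company-2": {"XYZ arrays", "ABC storage mgmt"},
    "company-3": {"XYZ arrays", "OtherTool"},
    "company-4": {"ABC storage mgmt"},
    "company-5": {"XYZ arrays", "ABC storage mgmt", "OtherTool"},
}

def also_use(data, base, other):
    """Of the companies running `base`, what fraction also run `other`?"""
    base_users = [products for products in data.values() if base in products]
    if not base_users:
        return 0.0
    return sum(other in products for products in base_users) / len(base_users)

print(f"{also_use(pooled, 'XYZ arrays', 'ABC storage mgmt'):.0%} of XYZ users also run ABC")
# -> 75% with this toy data
```

The hard part, of course, isn’t the arithmetic; it’s getting enterprises to contribute their inventories to the pool in the first place.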
People making IT procurement decisions have a tough job because there’s so little “raw” data out there. Most of it comes in the form of silver-tongue-laced PDFs and sales calls. It’s either that or frustrated bombast that’s hard to weigh against context-driven blow-ups of some chunk of enterprise software. Again, I have no idea if OpenLogic’s survey is going to be both in-depth and accessible enough to help out the industry as a whole, but someone wanting to profit from jamming “2.0” at the end of “enterprise” needs to tackle this problem.
Disclaimer: Sun, IBM, MuleSource, FiveRuns, and Splunk are clients.
OooOOO this fits in well to the OS community tools stuff I wanna talk about at drupal dev day at communityone.
Can I bend your ear a bit there?
cheers!
Silona
Silona: sure thing, bend away!