So I am talking to Dan Sholler about industry analyst firms and the quality of the data we provide.
I made some points about the rise of the analyst aggregator (and the associated fall of the glass tower proprietary maven, replacement obsessiveness disorder notwithstanding) here, based on the idea that analyst skillsets will need to change, just as they are changing in journalism. We can reach out to new sources of information more effectively using social software tools, in order to include “citizen analysts” in our worldviews when the quality of their information is strong. It’s not just that we can; we must.
Assessing information quality is surely a key analyst role, perhaps the key analyst role: assessing the quality of information, whether we create it or someone else does, and whether it is quantitative or qualitative. Sometimes we’re sleuthing or fishing when vendors brief us. We have to be prepared for enterprise users to lie to us on occasion (no reference wants to admit they were gamed by a vendor in a purchasing situation).
We have to prove our ability to parse. Every day. All analyst firms should. We need to be able to distinguish between propaganda and the real deal, otherwise we become no better than partisan hacks. It is the analyst’s role to analyze, sift and question, which is one reason I get so worried about firms that go to market half-cocked. If we can’t trust you to assess the quality of your own information, how can we trust you to assess the quality of data provided to you by vendors or other parties? We must be able to recognize rhetoric (and bullshit) when we see it. Sometimes it’s a hunch, a blink. That’s often the time to dig in.
I agree with James Surowiecki that diversity is essential to better information quality. The Wisdom of Crowds argument is that aggregating many independent, diverse judgments beats relying on any single expert, and a toy simulation makes the point quickly.
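A minimal sketch, with invented numbers: average many noisy but independently biased estimates and the crowd beats the typical individual; give everyone the same lean (groupthink) and scale stops helping.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0   # the quantity the crowd is estimating
N = 500              # number of independent estimators

# Each estimator is noisy, but the errors are diverse: they don't all
# lean the same way.
estimates = [TRUE_VALUE + random.gauss(0, 25) for _ in range(N)]

crowd_error = abs(sum(estimates) / N - TRUE_VALUE)
avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / N

print(f"average individual error: {avg_individual_error:.1f}")  # ~20
print(f"crowd (mean) error:       {crowd_error:.1f}")           # ~1

# Groupthink breaks the effect: add the same bias to everyone and the
# crowd error no longer shrinks, however many estimators you add.
biased = [e + 30 for e in estimates]
print(f"groupthink crowd error:   {abs(sum(biased) / N - TRUE_VALUE):.1f}")  # ~30
```

The gain comes entirely from the errors being independent; correlated errors are exactly the “strange distortions” that come up below. Which brings us back to Dan.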
But first a couple of stories. I recently shared a car with someone whose opinion I greatly respect. We were talking about the analyst business as he drove, and he said, paraphrasing: “What really annoys me about the big firms is that we brief an analyst, get them up to speed so they really understand our strategy, and then we see someone else in the same company quoted on the subject who knows nothing about it and is slamming us. The problem with these guys is that they don’t share any information.”
In another conversation, I was actually a little surprised when an analyst relations professional pointed out that most analysts don’t like to read each other’s research, even within one firm. But he was right. I thought about it, and I know I have worked in environments where that’s the case. But if we don’t read each other’s research, how can we build any kind of company view? With what process? This, to me, is a big problem for the notion of scaling analyst “wisdom”. What is the mechanism for doing so? I don’t know, but one thing that makes it easier is that tecosystems is probably my favourite feed, especially now that Stephen is in Denver and separated from me by seven, rather than five, hours.
One final story. I am talking to the EMEA marketing director of a major content management company. He says: how can I take the quadrant seriously when a competitor is rated in the top right for ability to execute, and it hasn’t made a profit in more than five quarters?
Dan, though, argues, in response to a question about the value of scale in the analyst business:
Scale matters for analysts, because a good portion of what customers ask for is, in effect, a collaborative filtering exercise. As you know, for those filters to be accurate (even one that is mediated by a smart person) they require large scale input, or they are subject to all kinds of strange distortions.
I agree with Dan here in many respects. A collaborative filtering exercise would create real benefits. Unfortunately for the big firms, I don’t think this is usually what they do, or how they work.
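To make the term concrete, here is a minimal sketch of the kind of collaborative filtering Dan describes: predict how one client would rate a vendor by weighting other clients’ verdicts by how similarly they rated the vendors they have in common. The clients, vendors, ratings and the similarity measure are all invented for illustration.

```python
# client -> {vendor: rating on a 1..5 scale}; all data invented
ratings = {
    "client_a": {"vendor_x": 5, "vendor_y": 2},
    "client_b": {"vendor_x": 4, "vendor_y": 1, "vendor_z": 5},
    "client_c": {"vendor_x": 1, "vendor_y": 2, "vendor_z": 2},
}

def similarity(a, b):
    """Agreement over co-rated vendors, in [0, 1]; 0 if no overlap."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    avg_gap = sum(abs(a[v] - b[v]) for v in shared) / len(shared)
    return 1.0 - avg_gap / 4.0  # ratings run 1..5, so the max gap is 4

def predict(target, vendor):
    """Similarity-weighted average of other clients' ratings for vendor."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == target or vendor not in r:
            continue
        w = similarity(ratings[target], r)
        num += w * r[vendor]
        den += w
    return num / den if den else None

# How would client_a, who has never used vendor_z, likely rate it?
print(predict("client_a", "vendor_z"))  # 3.8: leans on client_b's verdict
```

Note how little input the prediction rests on: with only two informants, one gamed reference swings the whole score. That is the small-sample distortion Dan warns about, and it is why the filter needs scale and diversity of input to be trustworthy.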
I also don’t believe that all the views aggregated in order to avoid “strange distortions” must come from within one company. On the contrary, we scale diversity by aggregating information from outside RedMonk.
That is one problem with an appeal to economies of scale. It’s all just so much hoo-ha without mechanisms to deliver value across silos. And everything I hear from enterprises, vendors and other analysts is that Gartner doesn’t systematically join the dots, and doesn’t have mechanisms for avoiding single-source strange distortions and groupthink. Of course, Gartner isn’t transparent about the methodologies it uses, so perhaps we’re all wrong.
Let’s go back to my analogy in open source. IBM has huge economies of scale in its development processes, but it is learning a great deal from smaller teams using other approaches and mechanisms to drive scale. Apache isn’t a big organization, but it is a powerful mechanism for scalable development through its licensing and governance structures. Apache harnesses the wisdom of crowds effectively. Another organization with an effective mechanism for wise development is the Eclipse Foundation. Apache scales individual contribution, while Eclipse scales corporate participation (a rough distinction based on their membership structures). Both create massive value in coordination, collaboration and collective action. Eclipse is also now seeking to ratchet up individual participation in order to help with marketing efforts.
The thing about big organizations is that they often try to solve problems by throwing resources at them, rather than by coming up with smarter solutions. Top-down, resource-heavy approaches also tend to enforce groupthink (this is the problem we’re solving, this is the problem we’re solving, this is the problem we’re solving…). What was it our customers wanted again?
So just what is the mechanism for filtering? Prove me wrong. How do you share knowledge? How do you take what the consultants who actually talk to enterprises learn and share it with those who mostly work with vendors?
If there is one thing I hate in my business (and vendors are culpable here, in seeing us all as RFP creators or valueless), it’s the shorthand of “customer” meaning enterprise. It’s just so much bull. Who is the customer? Often a vendor. Working with vendors can actually be more fun, because their challenges may be bigger.
But back to mechanisms for avoiding strange distortions. I still want to know what yours is. We need to avoid the Warren Harding Error: just picking tall men.
Who builds the Magic Quadrant: all of Gartner, or one specialist? That is not an academic question. Wouldn’t it be better, anyway, to aggregate the views of every analyst in the market covering a technology? That way we could provide better data to our customers and consult to that. Perhaps this idea will gain currency through an association. I doubt it, though; canvassing at Gartner and Forrester events hardly seems the ideal way to get in touch with a range of analysts…
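As a sketch of what cross-firm aggregation might look like (analysts and scores invented): pool every analyst’s rating for a vendor, then use a trimmed mean so that a single shill, or a single grudge, cannot swing the consensus.

```python
def trimmed_mean(scores, trim=1):
    """Drop the `trim` highest and lowest scores, then average the rest."""
    s = sorted(scores)
    if len(s) > 2 * trim:
        s = s[trim:-trim]
    return sum(s) / len(s)

# One vendor's score as seen by analysts across many firms (invented)
scores = {
    "analyst_1 (big firm)": 4.5,
    "analyst_2 (big firm)": 4.0,
    "analyst_3 (boutique)": 3.5,
    "analyst_4 (boutique)": 1.0,  # outlier: a grudge, or a bad briefing?
    "analyst_5 (citizen)":  3.8,
}

print(f"raw mean:     {sum(scores.values()) / len(scores):.2f}")   # 3.36
print(f"trimmed mean: {trimmed_mean(list(scores.values())):.2f}")  # 3.77
```

A trimmed mean is the crudest possible distortion filter; the point is not that it is the right mechanism, only that aggregation across firms needs some explicit, inspectable mechanism at all.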
If the answer to the question “what is the benefit of scale?” is “we have scale”, then the big firms really are in trouble. Will Gartner fall into Silver Lake Partners’ hands? [Note to Silver Lake: Gartner is not actually a “technology company operating at scale”. It’s not even a technology company, which means your investment strategy is more diffuse than you claim. If you were managing my funds I might have some questions about that…]
Can Gartner maintain its independence in the face of VC interference? Other notable brands have been destroyed under the weight of shareholder interference: smaller firms that morphed into white paper companies, because when you have more mouths to feed and shareholders to placate, it gets harder and harder to turn away money offered for shilling.
That’s the disadvantage of scale, and that’s why RedMonk can put integrity first and making money second. We turn away dollars and pounds all the time. Because we can. The advantage of being a micro-persuader, perhaps.
[For transparency’s sake I should say we’ve been doing some consulting with Eclipse to help with RSS/community-building efforts. Apache, on the other hand, has never paid us a bean :-)]
Jon Collins says:
May 26, 2005 at 6:19 pm
Perhaps all of these issues are symptoms of a greater problem. As an IT analyst myself, I have to hold my hands up as playing a part, while recognising that any influence I can bring to bear to solve it is likely to be small (the downside of scale). I believe the IT analyst community as a whole is doing the IT industry, and its customers, a disservice. There are some great people working for analyst firms large and small, and they are doing some great work, but somehow the overall effect is lost as we try to apply old, functionality-driven models to what is a far more amorphous, business- and architecture-driven set of requirements. Today’s bandwagons cross many segments of functionality, but analysts often still work in a siloed fashion, seeing only a small part of the picture and trying to make sense of it. It’s like one of those quiz shows where they show a zoomed-in picture and the contestants have to guess what they are looking at. No wonder, perhaps, that they don’t want to share their findings.
Somewhere in the none-too-distant past, possibly slap in the middle of the dot-com boom, IT analysts started talking slightly more than they were listening. There’s only a limited time left before vendors realise as a community that the IT analyst industry is treading water at best, and failing to deliver useful insight at worst. The mutterings from marketing VPs are becoming more and more vocal: that they are being charged too much and benefiting too little. If this erosion of credibility continues unchecked, it is only a matter of time before the fall.
I agree as well: insight is nothing without independence. The dependencies are not just financial, however. I think that some companies are held back not only by their financial structures, but also by their heritage.