tecosystems

Mashup Camp: Day 1

[Photo: Checking Out the Session Board, originally uploaded by sogrady]

The transit time from Maine to California is long (even when you are not delayed by almost four hours). Much longer than I remembered. It’s not London to California long, but long enough that I watched the entire All Star Game (the only time I’ll ever root for Mariano Rivera; there’s no time in which I’ll root for Derek Jeter), finished the last three quarters of Sebold’s The Lovely Bones (fairly good, and quite similar in some respects to Case Studies) and still had time to catch part of the Daily Show and the Colbert Report (if there are two more important people on TV right now, I’d love to know who they might be). All of which is to say that I’m always hopeful that spending the money and taking the hours out of the schedule is justified by the conference on the other end, but worried that it might not be. Worried, that is, unless the event is Mashup Camp.

While this iteration of David Berlind and Doug Gold’s unconference for developers/by developers is different from its February counterpart, it’s no less valuable for that. Indeed, in some respects it’s even more useful than its predecessor.

Where Mashup Camp 1 was characterized by a sort of breathless enthusiasm, the sequel is exuding an air of maturity. Not the kind of cynical, sell-out maturity that got the Boston LinuxWorld killed, but the kind of maturity that says if we’re going to make a serious go of this, some of us need to actually make a bit of cash. Otherwise, you know, the whole dot-com thing. The kind of maturity that leads to sessions like this morning’s, led by EVDB’s Bryan Monroe (very intelligent guy), dealing with the opportunities and pitfalls associated with monetizing APIs and feeds.

My biggest complaint with Mashup Camp, in fact, is that there are too many sessions of interest. In the 10 AM slot alone there were no fewer than four sessions I wanted to attend (I ended up having an interesting chat with John Herren and some other folks about the role of PHP in mashups). When that’s a conference’s most significant issue, you know the organizers – Berlind and Gold, along with every single attendee – are doing something right.

But most of you are probably more interested in what’s been said here than in whether or not I think it’s of value. It’d be impossible to cover all of that effectively, given the breadth and scope of the conversations I’ve had today and the variety of mashups I’ve seen, but let me discuss a few things that I believe are significant:

  1. What Are Folks Using:
    Mashup Camp devotes a significant portion of each day to an event called “speed-geeking.” Basically, the developers in the audience grab a table and demo their best efforts to groups of campers for five minutes at a time, and then it’s on to the next table. This Mashup Camp, I asked every participant a simple question: what language did you use to build your application? The small sample size means the answers aren’t terribly meaningful even in aggregate, but the caliber and influence of the developers in question make them worth noting anyway. PHP was the undisputed winner in terms of popularity at Mashup Camp 1, and I was interested to see whether it could duplicate that success.

    The answer so far is no. There’s a long way to go, as I’ve seen fewer than half of the available mashups, but so far none of the projects has been based on PHP. That’s sure to change tomorrow, but my scorecard at the halfway mark reads as follows:

    • C++ (2)
    • C# (3) – Mono folks take note
    • Java (2)
    • JavaScript (many)
    • Perl (2)
    • Python (2)
    • Ruby (2)

    No runaway winners, to be sure, but some interesting data points. The Perl and Python entries (TurboGears was the most popular framework there) were unsurprising, as were Ruby and even Java (WeatherBonk kicks ass). The C++ and C# did surprise me a little bit, until I considered the nature of those mashups, which brings me to point #2.

  2. Desktop Mashups:
    I’ve got another excessively long post half written about some desktop thoughts, touching on a similar topic, that I’ll try to finish before the end of the week. One of the interesting data points for me here has been the focus on desktop applications. There was even a session called “Mashdowns” designed precisely around that topic (couldn’t make it). But consider, for example, the mashup demoed by the folks from Strike Iron (more on them in another post; they’re significant for a couple of reasons). Speed Geeking station #20, manned by Strike Iron’s David Nielsen, featured a C++ rich client built into Microsoft Excel. It called a series of services from Strike Iron’s catalog in sequence, routed the information right back into Excel, and finally messaged results calculated from the returned information to a cell phone via yet another Strike Iron service. This blending of network services with client-side components is not new at all; applications have been doing similar things for years. But the plethora of available services and the creativity of some of the resulting combinations take the concept to the next level. The desktop could well be the next area of opportunity for mashup developers seeking to break new ground.
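
    To make that call-chaining pattern concrete, here’s a minimal sketch in Python. Every function, name, and value in it is a hypothetical stand-in (this is not Strike Iron’s actual API); the point is simply the shape of the flow: call one hosted service, feed its output into the next, then route the final result back to the client and on to a phone.

```python
# Hypothetical sketch of the call-chaining pattern described above.
# Every function is a stand-in for a hosted service call; none of the
# names or data come from Strike Iron's real catalog.

def lookup_customer(zip_code: str) -> dict:
    """Stand-in for a first hosted data service (e.g. a company lookup)."""
    return {"zip": zip_code, "company": "Example Co."}

def fetch_weather(zip_code: str) -> dict:
    """Stand-in for a second hosted service fed by the first one's output."""
    return {"zip": zip_code, "forecast": "72F, clear"}

def write_to_spreadsheet(rows: list[dict]) -> None:
    """Stand-in for pushing results back into a client like Excel;
    here we simply print CSV-ish lines."""
    for row in rows:
        print(", ".join(f"{k}={v}" for k, v in row.items()))

def send_sms(phone: str, message: str) -> None:
    """Stand-in for a messaging service that routes the final result onward."""
    print(f"SMS to {phone}: {message}")

def run_mashup(zip_code: str, phone: str) -> None:
    # 1. Call the first service.
    customer = lookup_customer(zip_code)
    # 2. Feed its output into the next service in the chain.
    weather = fetch_weather(customer["zip"])
    # 3. Route the combined result back into the client...
    write_to_spreadsheet([customer, weather])
    # 4. ...and message a calculated result to a phone via another service.
    send_sms(phone, f"{customer['company']}: {weather['forecast']}")

if __name__ == "__main__":
    run_mashup("04101", "555-0100")
```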

  3. Go Forth and Build Me THE Feed Repository:
    As I discussed with 411Sync’s Anu Nigam over beers at the cocktail hour, I’m seeing a fairly straightforward yet significant opportunity in mashups, Web 2.0, call it what you will, that is (IMO) begging to be delivered on. I discussed this briefly in a recent podcast, but let’s describe it in simple terms. Let’s say that I’m a business provider, and I have a vague idea that I’d like to build a mashup that involves, say, maps, weather data, and my own personal (maybe even proprietary) value add.

    For the sake of argument (as I’ll be tackling this in my next post), let’s say that’s up-to-the-minute plane departure information. Where do I start? Where would I look for feeds? How would I choose between two or more feeds that offer what seems at first glance to be the same data? What are the terms of these feeds? What are the service levels associated with them? What restrictions are placed on my usage? How does the definition of commercial use vary from one to another? Who might I approach if I want to pay for a certain level of service? Are there how-tos that help me get up and running with this? Code snippets available? An integrated services offering for me if I’m unwilling or unable to do the work myself?

    These questions, as far as I’m aware, can be answered through no single resource – and some can’t be answered at all. We have resources that address a subset of these concerns; Strike Iron, for example, maintains a services library replete with offerings from the likes of D&B, and, as covered previously, John Musser’s programmableweb could be – in time – an excellent answer to all, rather than some, of these questions.

    But whoever delivers on that vision, I’m convinced, would be in an excellent position to monetize the coming wave of mashup investments, which are heralded by applications like IBM’s evolving PHP-based QEDWiki. Think about it. You could a.) monetize traffic via ads, b.) monetize providers (in several different ways) for hosting their feeds/APIs, or c.) monetize premium services such as documentation or service/support/deployment. It’s distinctly non-trivial technically, and would be a difficult-to-manage marriage of community and commercial interests, but the potential returns would come from volume, if not margin. Incidentally, I think SWiK, if Alex should decide to open source it, would make an absolutely ideal platform for such a repository. (A rough sketch of the kind of lookup such a repository might support closes out this item.)

    And if any of our existing or prospective clients would like to talk through the nature of the opportunity, I’m more than happy to do so.
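
    For what it’s worth, here’s what that lookup might feel like. Everything below (the feed entries, providers, terms, pricing descriptions, and the find_feeds() helper) is invented for illustration; no service offering this exists today, which is precisely the point.

```python
# Hypothetical sketch of a unified feed/API repository: one place to ask
# "who offers this data, on what terms, at what service level, for how much?"
# All entries and providers below are invented for illustration.

from dataclasses import dataclass

@dataclass
class FeedListing:
    provider: str         # who publishes the feed/API
    topic: str            # e.g. "maps", "weather", "flight departures"
    commercial_use: bool  # are commercial mashups permitted?
    sla: str              # stated service level, if any
    pricing: str          # how the provider charges, if at all
    docs_url: str         # how-tos, code snippets, terms of service

CATALOG = [
    FeedListing("ExampleMaps", "maps", True, "99.5% uptime",
                "free with ads", "https://example.com/maps"),
    FeedListing("WeatherFeedCo", "weather", True, "best effort",
                "per-call", "https://example.com/weather"),
    FeedListing("AirDataInc", "flight departures", False, "none stated",
                "custom deal only", "https://example.com/flights"),
]

def find_feeds(topic: str, commercial: bool = False) -> list[FeedListing]:
    """Return listings matching a topic, optionally filtered to those that
    permit commercial use: the question a mashup builder actually has."""
    return [f for f in CATALOG
            if f.topic == topic and (not commercial or f.commercial_use)]

if __name__ == "__main__":
    for listing in find_feeds("flight departures"):
        print(listing)
```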

  4. API Economics Inconsistency:
    One of the reasons that I think the above offering would find such an eager audience is the simple fact that the economics of feeds and APIs are fundamentally broken at the current time. One of the unfortunate conclusions I came to during the API and Pricing discussion led by Bryan this morning is that virtually no one has any idea what the hell they’re doing with respect to monetizing feeds. eBay, when I spoke with their developer relations staff a little while back, wasn’t too bad, but generally speaking the economics of feeds are chaos. There are no consistent, accepted best practices for a.) when to charge, b.) what you can charge for, c.) how to charge, or d.) what levels of service are required, even within a single vendor’s offerings, let alone from one vendor to another. Right now, most of the discussions that involve significant feed usage amount to one-off, custom deals, which is fine for an immature market but a significant throttle on growth.

    If I’m a mashup developer, as I said in the session today, I don’t want to have to project my costs using volume-based metrics for one feed, user-based metrics for another, monthly license fees for yet another, and no fees (and thus no guaranteed service) for yet another (see the back-of-the-envelope sketch at the end of this item). It makes my business ugly, unpredictable and ultimately unsustainable. We’re trying to get away from actuaries, not propagate them.

    I’d love to say that I have the answers and here’s how it should work, but that’d be a lie. Nor would I try and make the case that all feeds should be governed by a single economic model. But some simplification is necessary, clearly. Unfortunately, what I think it’s going to take is trial and error. A lot of it.
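
    Here’s that back-of-the-envelope illustration of why the mixed models hurt, with invented rates and pricing schemes standing in for real feeds: projecting a monthly bill means modeling per-call fees for one provider, per-user fees for another, a flat license for a third, and nothing at all (and no SLA) for a fourth.

```python
# Illustrative only: four feeds, four unrelated pricing models, one forecast.
# None of these rates, fees, or providers are real.

def volume_priced(calls_per_month: int, per_call: float = 0.002) -> float:
    """Feed A: pay per API call."""
    return calls_per_month * per_call

def user_priced(active_users: int, per_user: float = 0.50) -> float:
    """Feed B: pay per active user of *my* application."""
    return active_users * per_user

def flat_license(monthly_fee: float = 500.0) -> float:
    """Feed C: flat monthly license regardless of usage."""
    return monthly_fee

def free_no_sla() -> float:
    """Feed D: free, but with no guaranteed service to build a business on."""
    return 0.0

def projected_monthly_cost(users: int, calls_per_user: int) -> float:
    calls = users * calls_per_user
    return (volume_priced(calls) + user_priced(users)
            + flat_license() + free_no_sla())

if __name__ == "__main__":
    # The same mashup under three growth scenarios; costs scale on
    # entirely different axes depending on the provider.
    for users in (1_000, 10_000, 100_000):
        print(users, "users ->", round(projected_monthly_cost(users, 30), 2))
```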

With that I close the books on Day 1 at Mashup Camp. More on tomorrow’s action whenever I get around to it.