RedMonk

Innovations in Requirements Management for Better Feedback and Better Software

Making sure your software does what users want is one of the most difficult tasks in software development. Managing those “requirements” usually comes under the rubric of “requirements management.” Thanks to Agile development and now new, cloud-based delivery, there are all sorts of interesting ways to improve the process of making sure users get what they want. To me, it all comes down to getting fast feedback by using what I like to call “frequent functionality”: pushing small, new features out there to see how users like them.

View more presentations from Michael Coté

As you may recall, I did a short talk on this topic last month. The slides are above and you can see the replay as well, including two other talks from vendors on the topic. In my part of this ADT Supercast on requirements management, I quickly outline some of the new ways and tools to help with requirements management and also get into some new and better ways to do software development and delivery in general.

Disclosure: some companies mentioned in the presentation are RedMonk clients.

Categories: Agile, presentations, Programming, The Analyst Life.

IBM Pulse 2011 – The Tivoli with two minds – Trip Report

Steve Mills talking up Smarter Planet at IBM Pulse

  • Pulse is two conferences: how IBM helps companies manage computers and how IBM helps companies manage the world’s infrastructure. Bridging those two can be difficult.
  • Often, IBM emphasizes the end result and the “big picture” solution instead of showing demos and speaking to exact technologies used to solve the big problems. Balancing both establishes better credibility than just doing one or the other.
  • IBM gave insight on how cloud sales are going: 1,000’s of engagements, mostly private cloud, and, IBM claims, non-x86 systems (which IBM has to sell) can be a better fit.
  • For the first time, IBM started talking about “dev/ops,” speaking to the benefits of having development and operations work more closely together moderated by new technologies and tools. They’re looking towards OSLC and some in-beta provisioning and image management software to help.
  • Tivoli has been doing much work beyond the data center, helping utilities and other companies manage their non-computing infrastructure.

In recent years, Tivoli has been of two minds: classic Tivoli managing data centers and IT, and then the new Tivoli looking to manage the world’s infrastructure, not just its computers. After picking up MRO Software and going all “Smart Planet” at the corporate level, Tivoli speaks much more in public about wastewater, building, and city management. One presumes that there’s a better market in managing the “Smart Planet” than there is in just the data center.

The annual Pulse conference, then, always leaves some part of the audience flummoxed, asking who this conference is for: sysadmins, building managers, IT bosses, power companies, BSM dashboarders, or utilities? As you can imagine, it drives us IT analysts crazy: we’re used to working on narrow products and lines of business – at the very least, technologies. IBM instead wants us to take a more “big picture,” solution-driven approach to looking at them: the details of the exact customer use of IBM (and partners!) matter less than the fact that IBM was involved and played a major role in improving the business.

Digging into the Smart Planet

Just as a math teacher would demand of you, a company must show its work when it comes to impressive solutions.

Schiphol is not really buying software. They’re buying performance and we took up the challenge of IBM to design and deliver performance.
Amsterdam Airport Schiphol Case Study Video

IBM has a bit of a presentation problem with this, really: they want so much to talk “high level” and not get into the gorpy details of the technologies used. They have the exact opposite of the problem Microsoft has: all technology and product with not enough business. The thing is, as a technology company you have to open up the can of worms and spill it out to gain credibility: we trust that you can assemble technologies together to solve big problems only if you tell us about the technologies. The more high-level and the less technology-based, the vaguer it is and the harder it is to take seriously without a lot of extra leg-work hunting down those missing items.

For any given Smarter <insert noun with a big budget here>, the technology story is there and should be easy to tell. In Tivoli land it’s this: imagine if everything had an IP address and was sending and receiving data on a network of a bunch of “things” that are all working together towards some goal(s), just like a computer network. Garbage trucks, rooms in buildings, pipes in utilities, and power meters on homes – all of these things (can) exist in networks now, and Tivoli knows how to manage networks of things (starting with computers). Here are some screenshots and demos of that management in action…

Structurally, that’s just like a data center problem: lots of data flying around that you need to track; domain knowledge to understand what the data is and means in itself and in the overall context; analytics to report over the state of things that let you decide what to do next; and, eventually, a fancy dashboard with the Three Colors (Green, Yellow, Red; Good, Get Ready to freak out, FREAK OUT!). This pattern applies to computer networks, to telco networks, clouds, buildings, city-as-networked-system, and whole planet-systems if you’re lucky enough to get that contract.
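That pattern is simple enough to sketch. Here is a hypothetical, minimal version of the Three Colors reducer (all metric names and thresholds are invented for illustration); the same reducer works whether the “things” are servers, water mains, or power meters, with only the thresholds (the domain knowledge) changing:

```python
def status(value, warn, critical):
    """Map a metric reading onto the three dashboard colors."""
    if value >= critical:
        return "RED"      # FREAK OUT
    if value >= warn:
        return "YELLOW"   # Get ready to freak out
    return "GREEN"        # Good

# Invented readings from very different "networks of things":
readings = {
    "datacenter_cpu_pct": {"value": 91, "warn": 70, "critical": 90},
    "water_main_psi":     {"value": 64, "warn": 80, "critical": 95},
    "smart_meter_errs":   {"value": 12, "warn": 10, "critical": 50},
}

dashboard = {name: status(r["value"], r["warn"], r["critical"])
             for name, r in readings.items()}
print(dashboard)
# {'datacenter_cpu_pct': 'RED', 'water_main_psi': 'GREEN', 'smart_meter_errs': 'YELLOW'}
```

The domain knowledge and analytics in a real BSM product are vastly richer, of course, but structurally everything funnels down to this kind of reduction before it hits the dashboard.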

Instead of having to be Indiana Jones to ferret this out, it’d be great to just see Mr. DNA explaining it, gnarly SKUs and all. If you dig even shallowly into the details of the oft-cited DC Water win, there’s plenty of fascinating technology detail. Imagine what you could show with a 10 minute demo in a keynote!

Products, Statements, and Meaty Propositions

More analyst action at #ibmPulse

Nonetheless, amongst the passionate stories about running Swiss trains on time and tracking beef from field to feast, there were some definitive pointers to the technologies you’d expect:

  • BSM – A Business Service Management demo showed how “everything” is pulled together to manage a theoretical airport (I don’t think they realized that “airport” and “BSM” were all but trademarked by BMC back in the BSM heyday) – it was very nice and much welcome as, really, the only demo in the keynotes.
  • Product Announcements – A rapid succession of bullet points on the second-day keynote slide introduced a slew of new and updated IT management offerings. This press release seems to round them up, including a new beta offering for cloud management that looks nice (but that’s difficult to find outside of the press release).
  • 1,000’s of cloud customers – In that same press release, there’s some “cloud wins” chest thumping: “IBM has helped thousands (!!) of clients adopt cloud models and manages millions of cloud based transactions every day in areas as diverse as banking, communications, healthcare and government, and securely tap into IBM cloud-based business and infrastructure services.”
  • Private Cloud is where the money is – And when it comes to those clouds, IBM said time and time again that the revenue was all in private cloud at the moment. One snarky analyst said that when faced with the notion from IBM that their mainframe (System z) is the best way to run a “cloud” he likes to challenge them to start charging Amazon prices. Indeed, at several points, claims were made that a z was at least comparable in price to cloud pricing – be a nice study to hunt down. (In the meantime, check out this RedMonk overview/interview of the newest z IBM has available.)
  • How do companies decide on public vs. private cloud? “What we came to see: if that processes that are not critical to my business, [companies] don’t want that in the public cloud,” IBM’s Robert LeBlanc said, instead they want private cloud.
  • What is a “private cloud”? But then what makes for a private cloud? As Tom Rosamilia said in another part of the analyst sessions: the cloud ends up giving us a new method of delivery and what people charge for [those IT services].
  • Image Management – When you look at how IBM would like to do cloud, they’re largely (only?) still on images instead of the model-driven automation (see one discussion via Amazon CloudFormation here) we see from Puppet and Chef.
  • Warming up to dev/ops – There was a strong (enough) theme of dev/ops: the word was uttered much. While there weren’t really products ready to go to support a dev/ops like way of delivering software, there was much speaking to it. “It’s time for ops to insert itself and really be heard,” Harish Grama said.
  • The Developer Land-grab – During an executive panel at the analyst event, there was much “dev/ops love” as it were. Neeraj Chandra even spoke to the developer land-grab saying that “not so long ago, QA was the enemy” and in the same way that they were brought into the team, it seems like getting operations involved as part of the overall team is a good idea.
  • My advice to IBM was to get out in front of this dev/ops now – they had a tragically long lag time with cloud, and there’s a tremendous amount of room in the enterprise-y space to get at least one IBM “evangelist” out there talking about dev/ops and cloud at a technical, developer friendly level.
  • A Standard for dev/ops – IBM really wants OSLC to work out as a model to do all of this. Indeed, the work of the likes of Tasktop and Rational’s Team Concert can be impressive. When one analyst asked about getting more industry heavy-weights involved to make it standardized, it was pointed out that Oracle is involved, Microsoft won’t ever join anything (the implicit rhetorical question being: so who cares?), and that HP should sign up. See the sort of “how we’re doin’” dashboard for the group for more.
  • Get rid of your IT – The idea of consolidation was a sort of theme throughout: consolidation onto the higher end boxes IBM has to sell (POWER, System z [mainframe], and x if you really want that). In a characteristically bombastic and plain-spoken keynote, Steve Mills summed it up: “The fewer boxes you have means the fewer power supplies, which means less energy and less labor. So doing more computing with fewer boxes is a good thing fundamentally. You can’t deny it. It’s arithmetic. Anybody who thinks otherwise is not thinking clearly.” Later on, he added: “Now, obviously, if you’re not a company that makes one of these, you don’t like them. So, if you’re only in the Intel server business, you don’t like these things and you have a lot of disparaging things to say about these systems. Duh, of course that’s what HP is saying, of course that’s what Dell is saying, of course that’s what Sun/Oracle is saying, of course they’re saying that. Duh, doesn’t mean you have to listen to it. People say stupid crap every day.” You can put that last sentence on a button and start wearing it at conferences. (Also, see TPM’s recent overview of IDC’s server market-share estimates.)
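The image-versus-model distinction in the bullets above can be made concrete. A model-driven tool in the Puppet/Chef/CloudFormation style declares desired state and computes the actions needed to converge on it, where image management just ships an opaque snapshot. A hypothetical sketch (all resource names invented):

```python
# A declarative model: what each node *should* look like.
desired_model = {
    "web-01": {"packages": ["nginx"], "services": ["nginx"]},
    "db-01":  {"packages": ["postgresql"], "services": ["postgresql"]},
}

def converge(node, actual, model):
    """Return the actions needed to bring `actual` in line with the model."""
    want = model[node]
    actions = []
    for pkg in want["packages"]:
        if pkg not in actual.get("packages", []):
            actions.append(("install", pkg))
    for svc in want["services"]:
        if svc not in actual.get("running", []):
            actions.append(("start", svc))
    return actions

# A bare node needs work; a node already matching the model needs none.
print(converge("web-01", {"packages": [], "running": []}, desired_model))
# [('install', 'nginx'), ('start', 'nginx')]
```

The appeal over golden images is that the model stays readable and diff-able as it evolves, while an image is a black box you have to rebuild and redistribute for every change.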

Categories: Companies, Conferences, Enterprise Software, Systems Management, The Analyst Life.

Lotus pulling in consumer tech – Press Pass

I talk with the press frequently. They thankfully whack down my ramblings into concise quotes. For those who prefer to see more, I try to publish slightly polished-up versions of conversations I have with press in this category, Press Pass.

As part of her story “Experimenting on Themselves,” Erica Naone asked me about Lotus’ work to adopt consumer technology to enterprise use. Here’s my reply:

Lotus has sucked in virtually all of the concepts from the Web and Enterprise 2.0 worlds – I like to think of what they have as your own version of the web, for behind the firewall. It’s true that many of their features swirled around in IBM research for awhile (and that there’s more in there), but they’ve upped the pace of commercializing such efforts relative to their past performance, it seems. Of late, they’ve done pretty well bringing consumer technology into the enterprise.

Most of what they have is around collaboration – white-collar folks (“knowledge workers”) working together in teams, hashing out a decision, a business program, what goes in some document, etc.

IBM is also building up offerings in the analytics and tracking space to help companies do business in more social-friendly ways on the web, a lot of “b2c” (business to consumer) stuff to use an Internet Bubble 1.0 term. Some of that stuff runs around in the WebSphere group, but it boils down to tracking everything you can about your customers and applying analytics to get better customer service (make them happy with what they’ve bought already and keep them from leaving you) and/or sell them more stuff. I call the idea “better junk mail”, but that’s a pretty cynical take on it. Getting a more intimate relationship with your customers is always handy, and hopefully you do it to study how to serve them better not just send them coupons to trick them into spending money they wouldn’t have otherwise.

Also, the LotusLive offering is nice looking: for as long as RedMonk and others have been trying to encourage the elder companies to go SaaS, it’s nice to finally see the likes of IBM doing it. The success of Google Apps, Salesforce, and other SaaSes shows that people want SaaS for certain types of applications, and Lotus has done a good job finally catching up to that. The muddy fight here is always over pricing (as it should be: why bother going SaaS if it’s not both easier and cheaper?) and there’s a continuous back and forth as different players try to prove they’re cheaper, expose their competition’s “hidden costs,” and the usual shenanigans.

Disclosure: IBM and Salesforce.com are clients.

Categories: Collaborative, Enterprise Software, Press Pass.

Links for February 25th through March 4th

Friday BBQ Time

Disclosure: see the RedMonk client list for clients mentioned.

Categories: Links.

Links for February 23rd through February 25th

@cote, on EightBit

Disclosure: see the RedMonk client list for clients mentioned.

Categories: Links.

Building a Hybrid Cloud

View more presentations from Michael Coté

What’s the deal with “hybrid clouds”? To use the NIST definition, it’s just mixing at least two different types of clouds together (public and private, two different public clouds, or a “community cloud”). For the most part, people tend to mean mixing public and private – keeping some of your stuff on-premise, and then using public cloud resources as needed. It’s early days in any type of cloud, esp. something like hybrid. Nonetheless, I got together with RightScale and Cloud.com recently to go over some practical advice and demos of building and managing hybrid clouds, all in webinar form.
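That common public/private mix boils down to a placement decision: fill private capacity first, then “burst” overflow workloads out to a public cloud. A toy illustration (workload names, sizes, and capacities are all invented):

```python
def place(workloads, private_capacity):
    """Assign each workload to the private cloud until it fills up,
    then overflow the rest to public -- the common hybrid pattern."""
    placement = {}
    used = 0
    for name, size in workloads:
        if used + size <= private_capacity:
            placement[name] = "private"
            used += size
        else:
            placement[name] = "public"
    return placement

jobs = [("billing-db", 40), ("web-tier", 30), ("batch-analytics", 50)]
print(place(jobs, private_capacity=80))
# {'billing-db': 'private', 'web-tier': 'private', 'batch-analytics': 'public'}
```

Real hybrid management tools layer on policy (data sensitivity, latency, cost per hour), but the core decision – on-premise until you run out, public as needed – looks like this.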

The recording is available now for replay. I start out with a very quick definition and then some advice and planning based on what RedMonk has been seeing here. The RightScale and Cloud.com demos are nicely in-depth, and the Q&A that follows was primarily very specific, technical questions. Viewing the recording is free (once you fill out the registration form, of course). You can also check out my slides over in Slideshare (or above).

A question for you, dear readers: have you been doing any “hybrid cloud” work? What’s worked well, and what hasn’t worked so well?

Disclosure: Cloud.com is a client.

Categories: Cloud, presentations, Systems Management.

The Ground Floor – Back of the Envelope #001

Austin Winter

It’s a brand new podcast, all about the money side of the technology world. In this first episode, co-host Ed Goodwin (@egoodwintx) and I cover the very basics of the investing world and how it relates to looking at business in the technology sector:

In addition to clicking play above, you can download the episode directly or subscribe to the Back of the Envelope podcast feed (in iTunes or wherever) to have this episode automatically downloaded for your listening pleasure.

After talking with my old friend Ed in email for awhile about the financial side of things, I realized he’d be a great discussion partner for a podcast and on a topic I don’t touch upon much here: finances, investing, and how that set looks at all of the technology companies and happenings we here at RedMonk spend our time steeped in.

If there’s any topics you’d like to see us cover in future episodes, leave a comment below or otherwise contact me. Tell us what you think of this podcast idea and the first episode!

Since Ed works in a highly regulated job as a portfolio manager, his lawyers require this exciting disclaimer, which you’ll get to hear my friend Charles Lowell read at the beginning of the episode:

This podcast is for entertainment purposes only. The content and opinions expressed in this podcast are merely the opinions and observations of Mr. Goodwin and Mr. Cote. Michael Cote is a technology analyst who may have conflicts of interest concerning the companies mentioned. Ed Goodwin is an investment adviser to various funds that may have a financial interest in any companies mentioned. This podcast should not be construed as investment advice of any kind. Both Mr. Goodwin and Mr. Cote may be buying or selling any of the securities mentioned at any time; either for themselves or on behalf of clients of theirs. The content herein is intended solely for entertainment purposes only. This podcast is not a solicitation of business; all inquiries will be ignored.

Seriously, don’t rely on this podcast for investment advice. Ever.

Now sit back and enjoy the show.

Categories: Back of the Envelope, Podcasts.

Toad for Cloud Databases – Brief Note

Brief notes are summaries of briefings and conversations I’ve had, with only light “analysis.”

The venerable Toad database tool line launched a “cloud” version last year, allowing users to work with NoSQL and cloud-based databases such as SimpleDB, Cassandra, SQL Azure, and Hadoop, among others. In the relational database world, Toad has always been a good choice for messing around with databases, so it makes sense for Quest Software to extend into the NoSQL world.

While I still don’t feel like there’s massive “mainstream” adoption of NoSQL databases, interest in new types of databases (“NoSQL” for imprecise shorthand) is certainly high and there’s enough “real” use in the wild. RedMonk has certainly been fielding a lot of inquiries on the topic, as well as writing in-depth research notes on selecting NoSQL databases for various clients.

Thus far, Toad for Cloud Databases has 2,000+ “active users,” which is pretty good given the level of “real” NoSQL usage we’ve been anecdotally seeing at RedMonk. As Christian Hasker (Director of Product Management) said, Hadoop tends to lead the pack, and then there’s a “sharp drop-off” to other database types.

In addition to tooling, Quest is building itself up as a “trusted voice” in the NoSQL-hungry world with community efforts like the NoSQLPedia, which actually has been doing a good job cataloging all the new databases, as in their survey of distributed databases.

For Quest, it of course makes sense to chase tooling here. They’ve maintained a huge install-base for their relational database tools, and as new types of databases emerge and become popular, keeping their community (paying customers and non-paying users) well-tooled is important. Also, applying my cynical theory of “make a mess, charge to clean up the mess,” the rest of Quest has and could have plenty to sell when it comes to managing all those “cloud databases” in the wild. As an early, non-Quest example of a janitor here, we’ve been talking with Evident Software of late about the NoSQL support (for example, Cassandra) in their ClearStone tool for application performance monitoring.

Disclosure: Cloudera is a client, as are some other “NoSQL” related folks.

Categories: Brief Notes, Cloud.

Links for February 16th through February 22nd

WebEx meetings in progress now

Disclosure: see the RedMonk client list for clients mentioned.

Categories: Links.

Beyond Jeopardy! with IBM Watson – Quick Analysis

Packed Watson watching at IBM Austin

Seeing a computer play two humans at Jeopardy! is a lot more entertaining than I thought it’d be. I’d been ignoring most of the hoopla around Watson figuring it for a big, effective PR campaign on IBM’s part. It is that for certain, and good on them for doing it. I’ve been more interested in what practical and “work-place” applications the technology behind Watson has, and I got a little bit of that along with some other interesting tidbits at a Watson event this week at IBM’s Austin campus.

The Technology Used

In addition to IBM PR and AR reaching out to me, the Apache Software Foundation sent me info on the Hadoop and UIMA software being used by Watson:

The Watson system uses UIMA as its principal infrastructure for component interoperability and makes extensive use of the UIMA-AS scale-out capabilities that can exploit modern, highly parallel hardware architectures. UIMA manages all work flow and communication between processes, which are spread across the cluster. Apache Hadoop manages the task of preprocessing Watson’s enormous information sources by deploying UIMA pipelines as Hadoop mappers, running UIMA analytics.

The ASF press release is actually jammed with a lot of “how it works” info.

Additionally, Watson runs on POWER7 machines with Linux, one of IBM’s exotic (but revenue pulling – $1.35B last quarter by TPM’s estimates) platforms. I was wondering why the team chose POWER, and though I didn’t get a chance to ask, one of the IBM’ers I was sitting next to said that the cooling ability of POWER machines meant they could pack more of them into the Watson cluster(s).

Here’s a brief hardware description from an overview whitepaper:

Early implementations of Watson ran on a single processor, which required two hours to answer a single question. The DeepQA computation is embarrassingly parallel, however, and so it can be divided into a number of independent parts, each of which can be executed by a separate processor. UIMA-AS, part of Apache UIMA, enables the scale-out of UIMA applications using asynchronous messaging. Watson uses UIMA-AS to scale out across 2,880 POWER7 cores in a cluster of 90 IBM Power® 750 servers. UIMA-AS manages all of the inter-process communication using the open JMS standard. The UIMA-AS deployment on POWER7 enabled Watson to deliver answers in one to six seconds.

Watson harnesses the massive parallel processing performance of its POWER7 processors to execute its thousands of DeepQA tasks simultaneously on individual processor cores. Each of Watson’s 90 clustered IBM Power 750 servers features 32 POWER7 cores running at 3.55 GHz. Running the Linux® operating system, the servers are housed in 10 racks along with associated I/O nodes and communications hubs. The system has a combined total of 16 Terabytes of memory and can operate at over 80 Teraflops (trillions of operations per second).
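The “embarrassingly parallel” scale-out the whitepaper describes is a classic scatter-gather: each candidate answer can be scored independently, and the results are merged at the end. A toy sketch – the scoring function is a stand-in, nothing like the real DeepQA analytics, and threads stand in for UIMA-AS messaging:

```python
from concurrent.futures import ThreadPoolExecutor

def score_candidate(candidate):
    # Stand-in for an expensive, independent evidence-scoring task;
    # the real system runs UIMA analytics per candidate.
    return candidate, sum(ord(c) for c in candidate) % 100

def answer(candidates, workers=4):
    # Scatter: fan the independent scoring tasks out across workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = list(pool.map(score_candidate, candidates))
    # Gather: pick the highest-confidence candidate.
    return max(scored, key=lambda pair: pair[1])

print(answer(["toronto", "chicago", "what is toronto?"]))
```

In Watson itself the fan-out crosses 2,880 cores via JMS messaging rather than threads in one process, but the structure – independent tasks, then a merge – is the same.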

During Q&A an audience member asked if Watson could do better if it took longer to answer questions. In the game, of course, Watson is trying to answer questions as quickly as possible. The answer was, yes. And, in fact, Watson already does this: it’s actually running two processes to answer a question:

  1. The first is a quick process that favors speed instead of accuracy. This fast process is used by Watson to see if it should buzz in at all.
  2. The second is a longer process that favors accuracy and is the process used to actually answer questions.

So, presumably, at the start of each question, Watson spins up these two processes, handing the real answer off to the one that gets a few more seconds.
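That two-lane scheme – a quick pass to decide whether to buzz, a slower pass for the actual answer – can be sketched like so. Everything here (the heuristics, the confidence numbers, the threshold) is invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

BUZZ_THRESHOLD = 0.5  # invented; buzz only if the quick pass is confident

def fast_pass(question):
    # Cheap heuristic: just enough signal to decide whether to buzz.
    return 0.8 if "this city" in question.lower() else 0.3

def slow_pass(question):
    # Thorough pass: gets the extra seconds, produces the real answer.
    return ("Toronto", 0.93) if "this city" in question.lower() else ("?", 0.1)

def play(question):
    with ThreadPoolExecutor(max_workers=2) as pool:
        fast = pool.submit(fast_pass, question)
        slow = pool.submit(slow_pass, question)  # kicked off immediately
        if fast.result() < BUZZ_THRESHOLD:
            return None  # don't buzz in; the slow result is discarded
        return slow.result()[0]  # buzz, and answer with the slow pass

print(play("This city's airport is named for a WWII hero"))
# Toronto
```

The key point is that both lanes start at the same time, so by the time the fast lane says “buzz,” the slow lane has already banked most of its thinking.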

There’s a 6-page whitepaper on the POWER7 (and software) angles of Watson over at IBM; tragically, you have to lead-gen your way into it, but it’s worth the typing if you’re interested.

Open Source

What I find interesting here is the big reliance on open source software for this impressive Big Data application. The innovations are interesting on their own, but from a “how do I apply this to my situation?” perspective, in theory, the fact that it’s open source opens the possibilities of using the underlying technology to a wider set of people, if only because it’s cheaper than proprietary options.

For the IBM Systems & Technology Group (STG, who produces and sells all the hardware IBM has), it’d be gravy: why spend all that money on software when you can spend it on hardware? (To be fair, for some time now and especially with Software Group [SWG] head-honcho Steve Mills running both STG and SWG, IBM would prefer to collect on both types of -ware.)

It’s part of what John Willis would call “The Cambrian Cloud Computing Explosion.” In my words: there’s an excess of technological innovation at affordable prices (the big difference) out there just waiting for business demand.

Applications beyond Trivia

In addition to the technologies used, the most commonly asked question around Watson has been what other uses it might have. As one of the professors at the Austin event said, what they wanted to do was have a system where “you give a question, and it comes up with a specific answer, not just a [list of documents like Google].” That should remind people of what WolframAlpha is trying to do (in fact, see an in-depth comparison).

Dealing with unstructured text (much of what we humans produce) has always been difficult. Getting “computers” to understand the nuance in human questions has also always been hard – I can barely understand my UK-dialected fellow English speakers at times, so I wonder how a computer gets by. Part of what Watson does is demonstrate advances in both of those areas. The costs for this initial run (and those that have come before it) are high, for sure, but watching that thing zoom through oddly phrased questions on TV is pretty amazing.

The IBM folks sent along some possible applications post-Jeopardy!:

Making better decisions – Companies relate to the problem of data overload. Potential applications for Watson are:

  • Healthcare and Life Sciences – Diagnostic Assistance, Evidence-Based, Collaborative Medicine. More, as quoted by Michael Cooney: “… a doctor considering a patient’s diagnosis could use Watson’s analytics technology, in conjunction with Nuance’s voice and clinical language understanding solutions, to rapidly consider all the related texts, reference materials, prior cases, and latest knowledge in journals and medical literature to gain evidence from many more potential sources than previously possible. This could help medical professionals confidently determine the most likely diagnosis and treatment options.”
  • Tech Support, Help-desk, Contact Centers – Enterprise Knowledge Management (looking stuff up, documenting it) and Business Intelligence – Watson’s analytics ability generates meaningful and actionable insights from data – in real time.

Healthcare is the most cited industry for applications that I’ve come across. As an analyst presentation on Watson said, providers could ask Watson questions like “What illness presents the following symptoms…?” And check out more from Mike Martin on the healthcare angle.

A post from Louis Lazarus over at “Citizen IBM” about using Watson in the non-profit sector adds some more possible uses:

It’s not hard to imagine how the technology could be used to help triage health patients, or field phone calls placed to municipal quality-of-life hotlines, or assist teachers in helping to score complex essays on tests, or help provide information to disaster survivors.

Check out this IBM video for some more possibilities discussion.

Injecting UX into AI

Several people have suggested that part of what’s special here is the interface – how humans use the technology. Coming up with just one, or a handful, of definitive answers over a massive body of content is no doubt helpful – going to Wikipedia when you know a topic is generally faster than simply searching Google (esp. considering all the spam-crap it’s loaded up with on general topics).

In the health-care sector, as one Enterprise Irregular said, doctors often find themselves in Wikipedia instead of the better, official references simply because it’s easier to take out your iPhone and look up the topic there. This is one of the under-appreciated aspects of “the consumerization of IT”: realizing that if you make your users’ lives easier (focus on UX and usability), the overall software will be more valuable because (a.) users will use it, and (b.) they’ll be more productive using it. Speed is a feature here (how many times has someone at a call center told you “the computer is being slow, please wait”?) but honing workflows to be helpful is too. And when it comes to helping find the answer instead of a pile of crap from a knowledge base, that’s huge.

Getting your hands on it

The question, as with any whiz-bang technology, is a depressing one: so, how much is that gonna cost me? Hopefully, the open source angle helps drive down the cost, but the hardware needs are still high. Part of the reason to build Watson on POWER7, IBM says, was that the systems are commercially available, as opposed to the custom-built machine used for their previous AI, Deep Blue. Perhaps there’s some help from cheap cloud infrastructure, but I’d wager you’d be sacrificing speed.

It’s fun to watch that polite flat screen beat humans at buzzing in, but it’ll be even more interesting watching the technology be industrialized for the mainstream.

Also, you can check out my quick debriefing recording of the event.

Update: Ideas from John Arley Burns

An old friend of mine, John Arley Burns, suggested some possible uses over on Facebook:

  1. a google labs plugin that returns watson search results alongside normal results, maybe a watson tab
  2. watson was not connected to the internet – connect it to a webcrawler and let it give you answers
  3. watson’s search results, instead of being a list of sites like google, will be a list of hypotheses for the answer, in order of descending confidence, as the reasoning tab on the TED lecture showed
  4. i was disappointed that it was getting the information electronically instead of via understanding what was being said – hook watson up to a speech processor so it can crawl audio content as well
  5. hook it up to a visual pattern recognizer – IBM already has one of these – and let it crawl images and videos so it can begin to form semantic constructs around them as well
  6. put it on the cloud for long-running questions you could submit in batch jobs, such as, here’s all my research data, i want you to tell me how many nanotubes i should use for this circuit layer
  7. give it long-running backend goals at low priority, as with SETI@home, that serve a socially useful function
  8. allow it to rank importance in recent semantic hypotheses, so that important new items it has with high confidence can be placed on an always-updated news page: what’s watson learning now
  9. feed it news wires so that it can answer time-dependent questions about current and just-now events
  10. connect it to incoming data feeds at all air control towers so that it can reason where probable collisions or bad weather encounters may occur, and automatically warn pilots
  11. connect it to flight schedules, stock prices, pipeline meters, so that it can form a current world view of the instantaneous state of reality
  12. allow it to improve itself by testing program hypotheses, evaluating if they cause its answers to be more or less correct, faster, higher confidence, and then updating to new code if it performs better than previous code (using genetic algorithms, perhaps)

Disclosure: IBM is a client, as is ASF and Cloudera.

Categories: Companies, Enterprise Software, Ideas, Marketing, Open Source, Quick Analysis, The New Thing.
