tecosystems

Ten Things About AI

Since the term artificial intelligence was coined in the summer of 1956 at the Dartmouth workshop, the field has had a surprisingly uneven history. It’s generated volumes of breathless headlines, but also disappointment to the degree that AI teams over the years were forced to rebrand themselves as anything but AI to escape the stigma of a term that had come to be associated with overpromising and under-delivery.

While the field made progress and indeed startling breakthroughs along the way, innovation was typically limited to machine-friendly spaces – games, for example, that operated according to prescribed and narrowly defined rules. With rare exceptions like Watson’s appearance on Jeopardy, AI successes took the form of inhuman performances in specialized niches.

For several years now, however, there have been rumblings about secretive work being conducted at large, well-resourced technology players by teams of high-end artificial intelligence engineers. AI’s long history of boom and bust hype cycles meant that a lot of this chatter went ignored, but every so often rumors would surface that would make even the most jaded of industry observers raise an eyebrow. Machine interfaces that might well pass a Turing test, for example, and that could plausibly end up triggering a need for Gibson’s Turing heat. The purported abilities of the AI tech being worked on seemed to explain, to a degree, the otherwise inexplicable presence of large teams of ethicists on these projects. Ethicists, it should be noted, who in many cases were subsequently either ignored, fired or both.

It was thus with the lack of real concern for wider societal implications that is typical of the technology industry that large language model-trained offerings began to emerge, and emerge quickly. The first signs of a potential step change in the breadth of AI abilities arguably arrived in 2021, building on LLM work begun three years before. DALL-E, capable of generating digital images from text prompts, debuted in January of 2021. GitHub’s Copilot, functioning as a virtual pair programmer, followed six months later. Midjourney and Stable Diffusion, both generative AI applications, debuted a year later within a month of one another.

But it was ChatGPT, released upon the public this past November, that really set the world ablaze. The question that virtually the entire technology industry – along with the rest of the world – has been grappling with since is: what happens now?

With all due respect to ChatGPT and its industry counterparts, all of which would likely answer that question quickly and confidently, it’s not answerable at present. It can be useful, however, to look at the current generation of AI through other, smaller questions to begin to assess its possibilities and its portents. There are many more, but here are ten areas worthy of exploration, and the questions they raise.

The Stakes

Nine days before ChatGPT was made available, an open source founder called with a basic question: is all of this AI momentum – finally – for real? Are we on the cusp of a major industry shift? The answer to both was affirmative. ChatGPT was and is, by some estimates, the fastest growing application in history. That’s not an accident.

Unlike previous generations of AI, which were generally either thinly disguised linear regression or hyper-specialized for narrow use cases and irrelevant to everyday users (with obvious exceptions like Alexa, the Google Assistant or Siri), LLM-trained models were incredibly versatile. While clearly imperfect, they were immediately useful day to day, to the point that holders of office jobs – be they designers, application developers, marketers or even baseball executives – began to wonder whether said jobs were secure, as we’ll come back to.

Fast forward five months, and virtually every technology company pitch that RedMonk sees at this point features AI. A decade ago, containers generally and Docker specifically swept through the industry. Almost overnight, containers went from a technical curiosity to a standard component of every enterprise messaging deck. Whether they were actually relevant to the technology in question didn’t matter; everything was Docker-washed. At RedMonk, we had never seen a technology grow as quickly as containers, but the pace of AI adoption makes the spread of containers look like a gradual rollout.

There are many reasons for this, but the simplest is AI’s surpassing versatility: there’s little that it is genuinely unfit for, or off limits, at this point. Which means that the stakes – for everyone – could not be higher.

The Costs

A month ago, an executive responsible for an AI team described a stark change in the funding environment pre- and post-ChatGPT. Prior to its introduction, the AI team had to beg, borrow and steal to try and get the GPUs it needed. After, executive leadership was ready to throw money at the team to accelerate its work. The problem, by that point, was availability.

Demand for GPUs has soared, which has had the predictable effect on the cost and difficulty of procuring them. “We’re constrained on GPUs” is the current industry refrain. Not even those few select companies capable of manufacturing their own chips are immune from the economics of running large scale AI infrastructure. There are few hard numbers on precisely how much more expensive it is to field a conversational AI query versus a traditional web search, but the estimates not only agree that it is more expensive, most imply that the delta is significant – potentially 10X.

The costs associated with building and operating these services at scale have implications for consumers and providers alike. Consumers may face rough transitions from free to paid services in the months ahead absent heavy subsidies from other revenue sources (read: advertising). This obviously has the potential to disadvantage those not in a position to afford paid services. Most software providers, meanwhile, are unlikely to develop competitive technologies themselves, and are therefore poised to become increasingly dependent on large platform providers, as we’ll come back to.

For all of the promise of AI, it seems clear that insufficient attention is currently being paid to the costs of the technology, and therefore to its implications.

The Money

Many years ago, when the first iteration of Docker the company was near its zenith in industry importance and visibility, Lumen – then CenturyLink – released a project called Panamax. Its function was in the same vein as Kubernetes: the management of containers. With many such projects emerging at that time, this was not unusual; what was unusual was the fact that Docker had reportedly given the CenturyLink team its blessing for the project.

This was unexpected because it implied that Docker either did not see or did not wish to own the revenue opportunity associated with the management of containers. Given that the history of similar markets like virtualization suggested that the money would not be in the substrate but in the management of same, it was a curious decision at the time – though ultimately an irrelevant one, because Kubernetes ended up sweeping away the competition.

Just as when containers originally emerged, one of the most important questions to be answered in AI at present is where the money comes from. Unlike with containers, however, there aren’t particularly close revenue model analogs to examine and replicate.

At present, industry players are struggling to project what the revenue and pricing models will be, both for consumers and industry players. Who gets paid what, and by whom? Clearly the owners, operators and originators of the LLMs have products in demand and are therefore in a position to monetize them heavily – OpenAI is exhibit A in that regard. But what of those who would like access to those models? How will the pricing models play out? What might they look like in future as products become more and more dependent on LLM functionality? And as if that was not tangled enough, these conversations are heavily complicated by questions around data access, ownership and moats.

The Control

One of the existential fears for smaller firms partnering with AI providers to gain access to LLM-trained functionality, conversational or otherwise, is that such a partnership may represent a Faustian bargain. According to this argument, it amounts to trading a short term gain in functional capabilities for long term irrelevance as the data fed from the smaller provider to the larger AI platform ultimately makes the former obsolete.

On the one hand, the logical conclusion of this thesis – that one company would or could, simply by amassing enough data, own virtually every market – seems implausible. In practice, the more functionally diffuse organizations become, the less competent and competitive they become in any one market. That’s without getting into the natural aversion enterprises have to working with, and being subject to, virtual monopolies.

But on the other, the long history of platforms demonstrates conclusively that if you’re operating an application or service on top of someone else’s platform, and your app or service becomes popular enough, the platform will attempt to replicate or absorb your app or service. If for no other reason than because the parent platform’s users demand it.

Control, therefore, along with data ownership and protections, is and will be a core question for AI partnerships moving forward – as AWS seems to be acknowledging with its offerings. And somewhat uniquely, it’s likely to be a top of mind concern because it holds on an individual level as well; this is not a concern unique to providers. Individual users engaging with generative AI tools are increasingly wondering whether, in so doing, they are providing the means to train the AI that replaces them.

As the saying goes, “it’s not paranoia when they really are out to get you.”

The Trust

Beyond, and arguably more important than, the question of whether various ecosystem parties can trust one another, one of the core questions facing AI offerings – at least those whose output is complicated and not instantly parsed like code or text – is whether or not the AI itself can be trusted.

When Copilot was first announced, reactions were skeptical. As previously described:

Even amongst developers that have no moral qualms about the service, you see comments like “it’s interesting, but I’m not paying them what I pay Netflix.” Which is a perfectly cromulent decision if you don’t trust the product at all. If you do trust the product, however, that’s nonsense: $10 is a relative pittance for a pair programmer that’s always available. It seems probable, then, that some of the initial hype will give way to more meager returns than would otherwise be expected for something so potentially transformative on an individual basis.

There was and is, therefore, a trust gap for generative systems. Even in a best case scenario, they are prone to basic, fundamental factual errors. And the worst case scenarios – machines that go the HAL 9000 or Skynet route – no longer seem all that far-fetched.

The good news for providers is that usage of these systems is accelerating in spite of these errors, in part because errors are both less likely and less problematic in basic tasks such as generating web scaffolding. Usage is also growing, in many cases, because users feel they have no choice. Much like performance-enhancing drugs will spread amongst even those athletes who don’t wish to use them, for fear of being at a competitive disadvantage, so too is AI now perceived as a necessity in many scenarios from academia to the workplace.

The bad news is that the trust gap remains, and it is not clear that it can be remedied, either within the systems themselves or with their users. While the sheer utility of the systems guarantees their usage moving forward, the last mile of trust will be a difficult one to cover.

The Legal

If the issues with trust and generative AI are complicated, the legal uncertainties around these platforms are fractally more so. The entire concept of generative AI, in fact, has multiple legal challenges against it currently pending, while some of the early decisions have been less than clarifying.

Then there are the issues with the licenses, which range from open source to open(ish) to proprietary. Then there are the issues with the non-obvious dependencies on non-open or less than open assets. Then there are the issues of who, precisely, is potentially at legal risk. Providers, potentially. Users of these systems? It seems unlikely, but as we are constantly reminded in the traditional open source world, all it takes is one overly aggressive and litigious party to cast a pall over a given market.

The tl;dr of the above is that there is little that can be conclusively stated about the legal situation with generative AI at present other than to say that it’s a mess. Too many things are in flux, certain matters are currently in flight in more deliberately paced court systems, and the stakes, as mentioned, could not be higher. Which is why, as is pointed out here, it is likely that some of the players will be asking forgiveness rather than permission.

The best place to follow the varying legal challenges and issues, incidentally, is this newsletter.

The Abstraction

As Grady Booch has said, “the entire history of software engineering is one of rising levels of abstraction.” This trend has perhaps been most notable of late in the struggle of developers to digest and traverse the ever expanding catalog of so-called cloud primitives.

AI, and more specifically conversational AI, would seem to offer the potential for an ultimate high level abstraction. If LLM-based models can generate source code, why not direct and manage infrastructure? Or handle analytical or database queries? And so on.

The question facing the industry at present, then, is not whether generative AI can serve as an abstraction over various primitives, but what the appropriate level of abstraction is. How much specificity and guidance is required? What can be left to the AI, and what has to remain solidly in the hands of humans?

The market is, as might be expected, aggressively experimenting with this question at present. As yet, as with so many questions in this market, there is little in the way of industry consensus. It seems probable, however, that the next layer of abstraction – or at least one of them – is going to consist of an AI intermediary.
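
To make the shape of that experimentation concrete, here is a minimal, hypothetical sketch of what an AI intermediary over an infrastructure primitive might look like. The generate() function below is a placeholder for whatever LLM API a given provider actually exposes – it is not any particular vendor’s interface – and the human review step is deliberately left in place, because where that line sits is precisely the open question.

```python
# Minimal, hypothetical sketch of an "AI intermediary" as an abstraction layer.
# Nothing here corresponds to a specific vendor API; generate() is a placeholder
# for whatever hosted LLM endpoint a provider actually exposes.

def generate(prompt: str) -> str:
    """Stand-in for a call out to a large language model."""
    # A real implementation would call a hosted model here; this placeholder
    # simply returns a marker so the sketch runs end to end.
    return "# (model-generated configuration would appear here)"


def request_infrastructure(description: str) -> str:
    """Turn a plain-English request into a candidate infrastructure definition."""
    prompt = (
        "Produce an infrastructure-as-code snippet for the following request. "
        "Return only the configuration, with no commentary.\n\n" + description
    )
    return generate(prompt)


if __name__ == "__main__":
    candidate = request_infrastructure(
        "a small object storage bucket with versioning enabled"
    )
    # Deliberately printed for human review rather than applied automatically;
    # how much of this loop can safely be handed to the model is the open question.
    print(candidate)
```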

The Jobs

There are multiple schools of thought when it comes to AI and jobs. What isn’t in dispute is that AI has the potential to impact office workers in ways that are likely without precedent. In one camp are those who remind us that the introduction of the PC or the spreadsheet did not put office workers out of work any more than the introduction of ATMs eliminated human bank tellers, as Dr. James Bessen covers. As one person put it on Twitter, there are a large number of people who are learning about Jevons Paradox in real time.

There is another camp, however, that is somewhat less optimistic about the jobs, at least – and more concerned still about the societal implications:

The resulting labor disruption is likely to be immense, verging on unprecedented, in that top-right quadrant [e.g. software engineers, tax preparers, call centers, contracts] in the coming years. We could see millions of jobs replaced across a swath of occupations, and see it happen faster than in any previous wave of automation. The implications will be myriad for sectors, for tax revenues, and even for societal stability in regions or countries heavily reliant on some of those most-affected job classes. These broad and potentially destabilizing impacts should not be underestimated, and are very important.

Replacing a human with a machine – even assuming the work is equivalent – feels more easily done on paper than in practice, as that assertion tends to assume minimal transactional value beyond the work itself. But there is no question that change is coming, and coming quickly. As a species, humans are no strangers to change and disruption; it’s not clear, however, that any have previously come as quickly as what we’ve seen over the last two years.

Where previous generations of parents who worked in factories, farming or other industries disrupted by industrial automation have encouraged their children to go into safer, less threatened white collar occupations, it’s no longer clear what professions will remain untouched by the rise of the automation of intelligence. There are real opportunities for developers to remain front and center, retaining their prominence as king and queenmakers, as a colleague argues. But the threats are equally real.

Long term, for developers and so many others who work behind a desk, whether the impact of generative AI is as a complement to humans or a replacement for them is, arguably, one of the most important questions for societies moving forward.

The Landscape

One of the questions the industry has been pondering for well over a decade is how best to compete against AWS. With the company dominant in the industry’s most important technology market and showing no signs of slowing its relentless pace of development, rival executives have wondered how and whether AWS could ever be dethroned from its perch.

The answer to that question is still unknown, but the industry has come to the first juncture in many years in which AWS has been put on its back foot. As expected, this did not arrive via competition in the company’s core markets but from another quarter entirely. It was never probable that someone was going to beat AWS by having better compute instances or a faster managed database service. Instead, AWS has been attacked on a relatively lightly defended flank via a model that – somewhat incredibly, given the size and scope of its portfolio – it had no initial answer for. Two weeks ago, finally, the company responded with, unsurprisingly for AWS but in contrast to its peers, a set of primitives in the form of hosted models and compute instances for developers to work with. It is positioning itself as neutral ground for companies concerned about making Microsoft, primarily, stronger by feeding it data. A position that is somewhat ironic given that AWS’ competitors have been stoking fears amongst various industries, most notably retail, about making Amazon stronger.

While it’s unusual to find AWS in a reactive rather than proactive posture, it is at least as surprising that Google was as well, given the company’s history and reputation in the AI space. It invented the basis for the LLMs that are being used to compete with it today, after all. One simple way to understand Google’s reluctance to embrace LLMs is through an Innovator’s Dilemma lens. Given the preeminence of search within the company’s revenue profile, a technology that might or might not deliver higher quality search results but would cost up to ten times as much per query might be perceived as a mixed blessing at best. Throw in some reasonable ethical concerns and Google, too, was caught short. To the point that an obviously rushed and mildly bungled PR response lopped six percent off its market cap, while Microsoft – whose bots were exhibiting symptoms consistent with severe mental illness – gained five percent. This in spite of the fact that, at least by some accounts, Google’s latest iteration of its Bard engine is offering superior performance in some coding tasks to OpenAI and Microsoft’s latest and greatest. Google’s hand here has clearly been forced. The company has had technologies similar to ChatGPT on hand for some time, but between the cost and the knowledge that they weren’t ready for prime time, kept them under wraps. Then the OpenAI and Microsoft partnership took off running, and Pichai and Google’s executive leadership team had no choice but to follow. The question for Google at this point is less about the technologies and more about whether it can build a business around them as quickly as Microsoft and OpenAI can – and whether it can counter Microsoft and OpenAI’s first mover advantage.

Which, in turn, brings us to Microsoft. What Nadella and co seem to have perceived is an opportunity to occupy the high ground for the next big wave in the industry. The company’s behavior clearly telegraphed its urgency. The products in question, while incredibly impressive in so many areas, were at the same time clearly not fully baked upon their release. The dismissal of its ethics and society team, meanwhile, sent its own message. With its extended stake in OpenAI and the assets GitHub brings to the table, both on the product development side and in the singular corpus of data it represents, Microsoft is in a relatively strong position moving forward. It would be stronger yet if some of the branding and go to market overlap with GitHub in particular were resolved in the latter’s favor, but at present Microsoft has extensive capabilities in AI, unmatched data on the code side, and broad and deep footprints with developers and office workers alike via the product suites of GitHub and Microsoft respectively.

The great AI race has begun, in other words. What remains to be seen is how the ecosystems line up around the large players.

The Outcome

Dating back to the industrial revolution, the arrival of many if not all new technologies has been accompanied by a wave of hand wringing and doom crying. Jay Rosen, for example, was interviewed years ago about Victorian opposition to the telephone, some of which was based on the fear that it would fundamentally disrupt the process of courtship. Which of course it did; society just learned to accept it.

What’s interesting and different here is that it’s not usually the creators of the new technologies prophesying doom. Yet that is precisely the case now, with many of those who have been instrumental in the creation of these technologies moved to speak out against them, from open letters to op-eds.

It’s natural to want to refer back to the Pessimist Archive’s litany of foolish overreactions to new technologies for comfort, but when even the nominal optimists are warning of potentially unprecedented job loss, it’s difficult to watch these new technologies without a measure of trepidation.

But with Pandora’s Box now open, what’s left to us is to try and direct the development of the existing technologies in ways that complement the average human, not render her irrelevant.

Disclosure: Amazon, GitHub, Google, IBM and Microsoft are RedMonk customers. Midjourney, OpenAI and Stable Diffusion are not currently RedMonk customers.
