This iteration of the RedMonk programming Language Rankings is brought to you by Amazon Web Services. AWS manages a variety of developer communities where you can join and learn more about building modern applications in your preferred language.
As part of RedMonk’s analysis about language rankings, here’s a visualization that tracks the movement of the top 20 languages over the history of the rankings.
More commentary about notable movement among languages can be found in the primary analysis, and further notes about our data sources are available here.
You can track a specific language’s ranking by following its horizontal progression over time. You can review the Top 20 languages of any given iteration by reading the respective data points vertically from top to bottom.
Wherever data points are clustered, there was a tie and multiple languages share that rank.
If a language was previously on the chart but is no longer visible, it means the language is no longer in the RedMonk Top 20. (While they are no longer included in this specific visualization, rest assured they are still active and vibrant communities.)
Languages that break into the RedMonk Top 20 are seen as new entrants to the chart. (Just like the languages that drop off the top 20, these didn’t ascend from nowhere. They were previously rising in the ranks before becoming top 20 languages.)
Why do you create these rankings?: These rankings attempt to correlate trends between language usage and discussion around a language. We don’t proclaim our rankings to be precise, statistically-valid measurements of popularity; instead we see them as an attempt to aggregate trends across two major developer communities.
How do you create these rankings?: Please see the full analysis for a complete description of the process, but at a high level we measure traction as seen via GitHub pull requests and Stack Overflow discussion.
Why GitHub and Stack Overflow? That’s going to over-represent/under-represent certain communities.: Agreed, these measures are imperfect. More specifically:
We use GitHub and Stack Overflow first because of their historic import and second because of their public exposure of the data. We encourage interested parties to perform their own analyses using other sources.
Don’t incumbent languages have an inherent advantage here? Indeed they do, as the metrics from GitHub and Stack Overflow are accretive. While rates of growth will be fastest for new projects with a smaller base, from a cumulative perspective new language entrants are behind from the day they are released. Displacing the most popular languages is a significant and uphill battle.
Has your process been consistent over time? We’ve tried our best to keep things as consistent as possible, but we had to adapt to changes in data availability from GitHub in January 2014 and again in January 2017. You’ll notice there is higher than typical change in those periods; the linked posts above may be helpful for those trying to sort out change due to process and change due to adoption trends.
CSS is not a language. This is inevitably raised every iteration of this analysis. There is probably someone who wants to debate this with you in the comments below or on Twitter / Reddit / Hacker News. While we mostly stay out of the debate on this particular topic these days, we welcome this grand tradition.
Related links:
– January 2024 analysis
– What’s Going On With Language Rankings?
– Prior Analysis: January 2023 analysis
– Prior Analysis: January 2023 rankings over time
This year marks the much-anticipated return of Monki Gras, the conference about craft and tech culture, after a bit of a hiatus. Happening March 14-15 in London and run by our very own James Governor (Monkchips), Monki Gras 2024 is all about Prompting Craft. Or, in prompt form, we aim to:
>_create a warm, inclusive tech conference about craft, AI and social, with prompt engineering as key theme. have really good food and incredible beer.
I, for one, am beyond excited to have Monki Gras back on the calendar this year. As I have written previously, I had the opportunity to speak at Monki Gras and its sister conference, The Monktoberfest, before I was even a monk. It is one of the reasons I work at RedMonk–I never have to worry about missing out on a ticket!–and an event that without fail brings together truly brilliant and delightful people.
However, it has been a few years since we all last gathered in London. And so, for those of you who have never experienced a Monki Gras before–and especially for those of you who might be on the fence about attending this year–here are a few things you should know about Monki Gras 2024.
While generative AI has been the talk of the town for well over a year now, this year’s Monki Gras promises its own brilliant take on the space. From the conference homepage:
At Monki Gras we will “go meta”. We’re always interested in what one discipline can learn from another. “Prompting Craft” is a proxy for an event which looks at AI adoption beyond just LLMs and GPT. We don’t want to get too bogged down in “prompt engineering” – the first accepted talk was actually about how a parent prompts her young children to do things. Of course kids can be even more unruly than the large language models we use. Inputs and outputs, what can you do with a prompt? We’ll also consider governance, economic and cultural aspects of the new technologies – because tech is always adopted in a social context.
That’s what the conference will be about – Prompting Craft.
As AI has become a hot topic beyond the tech industry per se, I find this broader view to be both apt and intriguing.
To quote James in a previous conference preview:
Our talks are going to be amazing.
Seriously, how can the talks NOT be amazing with a speakers list like this?
And while we haven’t published talk titles ahead of time, at least one speaker has dropped a hint about their talk topic:
And, quite frankly, I can’t wait to see whatever brilliant talk comes out of this rockstar collaboration:
I can’t tell you how excited I am to be a part of this amazing event. If you are near the London area or are traveling for KubeCon, we’d love to see you there! Grab your ticket at https://t.co/5lGZCTQZ5L https://t.co/avnBlq2bfP
— Farrah C (@FarrahC32) February 26, 2024
James is an absolute foodie, so when he promises “really good food and incredible beer,” it is no idle promise. Past Monki Gras attendees have been treated to cheese mountains, afternoon tea, craft (of course) coffee, gourmet food trucks, and really, really good beer (I am partial to past conference selections from The Kernel).
And it looks like James has been doing his research for this year’s event as well:
Gluten free cloudy IPA thinking about #monkigras during dryjanuary pic.twitter.com/MYSDRXvbCw
— Grumble Bundle (@monkchips) January 26, 2024
Monki Gras is a single-track conference that makes a point of allowing ample time for attendees to just chat. And while I don’t have access to the attendee list, based on social media chatter I would totally get on a plane and cross an ocean just to chat with some of the folks who will be in attendance:
Hey folks, I’ll be at @monkigras 🇬🇧 and @KubeCon_ 🥖next month.
If you’re there, let’s catch up! 🍻🥐🍷🧀
— Tracy Miranda (@tracymiranda) February 23, 2024
Who else here is going to @monkigras??
— DormAIn 🧟♀️ (aka “part of the problem”) (@DormainDrewitz) February 22, 2024
I just got my ticket for Monki Gras 2024: Prompting Craft 🙌 https://t.co/MU9p38BiVi
— Danilo Poccia (@danilop) February 27, 2024
Confession time: because Monki Gras is taking place a little later in the year than it has in the past (mid-March instead of late January/early February), yours truly committed to a conflicting event and will not be able to attend this year. And let me tell you: the FOMO is very, very real.
But what to do if you can’t make it (like me)? I plan to live vicariously through the social media posts of all of my much wiser colleagues who did not double-book themselves and will be at the event. ICYMI, Rachel Stephens will be speaking. Stephen O’Grady will be there as well, along with a few other monks.
I am also handling my FOMO by aggressively blocking my calendar for Q1 of 2025 so that I don’t miss Monki Gras next year.
Many thanks to our Monki Gras 2024 sponsors, including AWS, Civo, Deepset, the CNCF, Neo4j, MongoDB, Akamai, Griptape, Pagerduty, and the amazing Betty Junod (who personally sponsored a round of delicious beer)
Want to know more about Monki Gras? Here are some resources on Monki Gras past:
As many are aware, our latest analysis of the programming language rankings has been delayed. While a number of factors have contributed to this, including schema changes and query issues, the biggest single factor in the delay between this ranking and its predecessor was our attempt to explain some serious anomalies discovered in the data. We can’t yet definitively identify the source of those anomalies, but our current hypothesis is that they are manifestations of the acceleration of AI code assistants. We’ll continue to monitor the impact of these tools on both the industry and the underlying data that these rankings are built on.
In the meantime, however, as a reminder, this work is a continuation of the work originally performed by Drew Conway and John Myles White late in 2010. While the specific means of collection has changed, the basic process remains the same: we extract language rankings from GitHub and Stack Overflow, and combine them for a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion and usage in an effort to extract insights into potential future adoption trends.
The data source used for the GitHub portion of the analysis is the GitHub Archive. We query languages by pull request in a manner similar to the one GitHub used to assemble the State of the Octoverse. Our query is designed to be as comparable as possible to the previous process.
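The combining step described above can be illustrated with a small sketch. To be clear, this is a hypothetical illustration only: the actual RedMonk queries and any weighting are not reproduced here, and the function name, sample languages, and ranks below are invented for the example. The sketch averages each language’s rank across the two sources and assigns final positions using standard competition ranking, which is how tied languages come to share a rank (e.g. 6, 6, 8).

```python
# Illustrative only -- not the actual RedMonk methodology or data.
def combine_rankings(github_rank: dict, stackoverflow_rank: dict) -> list:
    """Average each language's rank across the two sources, then assign
    final positions with standard competition ranking: tied languages
    share a rank, and the following rank is skipped (1, 2, 2, 4)."""
    languages = github_rank.keys() & stackoverflow_rank.keys()
    # Lower average rank means more traction across both communities.
    scored = sorted(
        ((github_rank[l] + stackoverflow_rank[l]) / 2, l) for l in languages
    )
    ranked, position, prev_score = [], 0, None
    for i, (score, lang) in enumerate(scored, start=1):
        if score != prev_score:  # new score -> new rank position
            position = i
            prev_score = score
        ranked.append((position, lang))
    return ranked

# Invented sample ranks for illustration.
gh = {"JavaScript": 1, "Python": 2, "TypeScript": 3, "CSS": 4}
so = {"JavaScript": 2, "Python": 1, "TypeScript": 4, "CSS": 3}
print(combine_rankings(gh, so))
# JavaScript and Python tie at 1; CSS and TypeScript tie at 3.
```

Note how the tie handling produces the clustered positions (6/6 then 8, 19/19) visible in the rankings that follow.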
With that description out of the way, please keep in mind the other usual caveats.
With that, here is the first quarter plot for 2024.
1 JavaScript
2 Python
3 Java
4 PHP
5 C#
6 TypeScript
6 CSS
8 C++
9 Ruby
10 C
11 Swift
12 Go
12 R
14 Shell
14 Objective-C
16 Scala
17 Kotlin
18 PowerShell
19 Rust
19 Dart
The top 20 languages cohort was not devoid of movement this run, but it was more static than not, as we’ve come to expect in recent years. There was no movement in the top 5 languages, and fewer than a third of the top 20 languages moved at all. For better or for worse, these metrics reflect an environment resistant to change.
The last analysis of these numbers considered the possibility that coding assistants could ease and speed the uptake of new languages by lowering barriers to entry and shortening the ramp-up time with new, unfamiliar languages. If that’s occurring at present, however, it’s not observable in the data. We see some new language entrants, and we have a list of languages of interest from our qualitative research, but to date these rankings offer minimal evidence of any AI-fueled acceleration of new language adoption.
With that, some results of note:
TypeScript: ever since it became the first new entrant to our top 10 since Swift (which managed the feat for one brief quarter), TypeScript has been the language to watch. Its upward trajectory has slowed over the course of its ascent, as these metrics are accretive in nature, but this quarter’s run continues a period of slow but steady growth as it moves up one spot from seven to six. With an ever-increasing industry focus on security as well, it’s not implausible that the language has further growth yet in front of it. It’s arguable that some of that growth came at the expense of our next language, in fact.
C++: in the initial incarnation of these rankings, C++ debuted at 7. It climbed as high as 5 at times, and had recently settled back into seventh place. For this run, C++ dropped to eighth place for the first time in the history of these rankings. It’s important to note at this point that top 10 languages are, relatively speaking, enormously popular, having achieved a ranking that dozens if not hundreds of other languages would envy. All of that said, it’s worth asking whether C++ can sustain its back-end popularity in the face of the influx of more modern and accessible lower-level languages like Go or Rust.
Dart / Kotlin / Rust: speaking of Rust, the notable thing about it as well as Dart and Kotlin, as has been true in recent quarters, is that there is no news. All three of these languages have outperformed their various peers to achieve entry into our top 20, but none have been able to capitalize on their newfound popularity to resume their upward trajectory in the manner of TypeScript. The incumbents ahead of them have proven hard to displace, and there is also emerging competition for some of the languages from the likes of Zig as we’ll come back to.
Swift: even in a long list of languages that have been static in their progress, Swift stands out, having not moved one spot in either direction in six years. To put that in context, the last time Swift wasn’t 11 in our rankings, Google’s transformer paper was merely one quarter into its journey towards upending the technology industry. On the one hand, as noted above, this means Swift is enormously popular as a language. On the other hand, its ranking, and the lack of objective evidence to the contrary, suggests that the efforts to make Swift more of a force for server-side applications have failed.
Bicep (86), Grain, Moonbit, Zig (97): as with the Dart/Kotlin/Rust grouping above, these languages are grouped here not because they are technically similar, but because they appear on the languages-of-interest list mentioned above, each for its own reason: Zig because it has attempted to learn from the languages that preceded it, from C++ to Rust; Grain and Moonbit because they are optimized for WebAssembly; and Bicep because it comes up with surprising frequency – and with a mildly surprising ranking – for a cloud DSL. Only two of these languages are currently ranked, but we’re watching all of them to see whether these or any other new languages begin to emerge.
Credit: My colleague Rachel Stephens wrote the queries that are responsible for the GitHub axis in these rankings. She is also responsible for the query design for the Stack Overflow data.
I have made the case that in 2024 frontend developers are the Newest New Kingmakers, and therefore deserve a place beside more established Kingmakers hailing from the backend and IT operations spheres.
Evidence of the frontend’s rise is everywhere. Client-side interactivity and the UI are where much of the most exciting software innovation is happening (lately I’ve been following what Stepan Parunashvili calls the Database in a Browser, with Carl Sverre’s SQLSync being a particularly original approach). The frontend’s reputation has also benefited from its engineers becoming valued consumers of managed cloud services intended to handle backend ops. All of this has spurred a renaissance in the frontend’s image, so that practitioners working at the top of the stack now vocally embrace “frontend” as a worthwhile and hip segment of the developer community.
This pro-frontend sentiment is a very recent occurrence. It’s significant that Vercel’s messaging around “The Frontend Cloud” is less than two years old. Even five years ago many developers I know shuddered at the thought of committing themselves to the label “frontend,” and most aimed to assume the title “full-stack” (a distinction I expand on below). How should we account for the rise of the Frontend Kingmaker? In this second installment of my three-part series, I discuss the history of the frontend and web development. I will also cover the contentious idea of the full-stack developer, which until 2019 threatened to subsume the sovereignty of frontend development as a viable profession. I also touch on the recent SPA Wars, which have gone far toward saving developers working at the top of the stack from stagnating or becoming pigeonholed as React developers.
So let’s talk about the frontend’s journey to its newly celestial rank.
Tim Berners-Lee at CERN (Image credit: CERN)
I know, I know, but hear me out.
Tim Berners-Lee’s “hypertext project” for CERN shared documents written in HTML. Without interactive elements, hypertext offered a pure frontend experience. Sure, when it launched in 1990 Berners-Lee also performed operations labor by hosting his website on a computer famously adorned with the warning label: “This machine is a server. DO NOT POWER IT DOWN!!” And yet, into the present it is hypermedia—text and images—that distinguishes the WorldWideWeb from the sort of internally-facing computation that corporate mainframes handle.
How the web looked and performed was top of mind in those early days. The HTML Working Group was founded in 1994 to standardize the language, an effort the W3C took over from 1997 to 2015. 1996 witnessed not only the publication of the CSS 1 recommendation, with the CSS Working Group founded the following year, but also the creation of JavaScript at Netscape (later standardized as ECMA-262). Of course, innovation has only accelerated from there. If the internet is a vehicle for sharing media, then all SaaS, serverless, and web-based services are heirs to the sort of client-side, user-driven experience in which the frontend has specialized.
The frontend experienced an auspicious beginning, but you would be excused for not realizing it until very recently.
Since the 2010s, there has been a persistent stereotype that frontend development is a stepping-stone to real, serious engineering work. Practitioners laboring at the top of the stack are perceived to have intern or junior-level skills. Moreover, the code that frontend engineers write is often discounted as trivial or, worse, optional, particularly when it comes to accessibility and QA. For too many, the frontend is window dressing while the backend is essential. The frontend is also frequently gendered feminine. As a 2017 lawsuit against Google brought by three women developers, plaintiffs Kelly Ellis, Holly Pease, and Kelli Wisuri, notes:
Google pays backend engineers more than frontend and fast-tracks them for promotion. On the teams Ms Ellis worked with and observed at Google, almost all backend software engineers were men. Almost all female software engineers, however, were frontend engineers. The skills required to perform these jobs are equal or substantially similar.
One reason vendors historically ignored frontend developers is that they were perceived to have little involvement in purchasing decisions. Sales consultants continue to discount the frontend’s importance for this reason, with the Boston Consulting Group, for instance, recently claiming that “cloud-native application developers and DevOps engineers have particular clout in purchasing decisions.” It is true that the greatest spend in an engineering department tends to be on infrastructure (servers, databases, hosting). But it is shortsighted for the providers selling these products and services to target only backend and operations engineers. As more and more sophisticated infrastructure and operations abstractions appear in the marketplace, the pool of developers responsible for selecting and managing ops is diversifying (more on this in a follow-up post).
But the nadir of the frontend engineer is in large part a consequence of the rise of the “full-stack developer.” The notion that frontend developers were the personae non gratae of an engineering department in terms of decision-making and resourcing sway emerged in 2008, when Randy Schmidt, currently a senior project manager at Burns & McDonnell, identified the “Full-Stack Web Developer” as “someone that does design, markup, styling, behavior, and programming.” This sense of the phrase displaced full-stack’s previous definition from the 70s and 80s, which referred to those practitioners responsible for the operation of both hardware and software. Today, “full-stack” is used exclusively to refer to engineers tasked with developing for web as well as mobile, and possessing mastery of both frontend and backend skills.
Throughout the 2010s, the full-stack/frontend/backend distinction became central to how hiring managers and engineering leadership formulated their teams. Frontend has come to designate developers who write JS, CSS, and HTML, as well as any code touching the UI, which includes APIs, GraphQL, CMSs, browser testing, and WCAG compliance. Backend covers server-side code, which generally means areas like database maintenance and security, while full-stack developers can do it all. It takes a very small leap to understand why pay disparities and sexism attached themselves to these distinctions.
Between 2010 and 2020, no one wanted to be a frontend engineer (ok, except for me, my friends, and loads of talented developers who realized the frontend is where it’s at). For pragmatic reasons, many engineers working at the top of the stack instead positioned themselves as “full-stack.” Of course, even today the full-stack role persists. At a Christmas party last year I asked a woman if she was a frontend engineer, probably after spending several minutes chatting about front-endy things, but she looked genuinely horrified at the accusation: “There are no frontend engineers at our company. Only full-stack.” Mea culpa. There are clear historical reasons to shy away from the label, particularly among women developers and those looking to secure the highest salary, but the full-stack experiment has failed and suspicion of the frontend is waning.
i've been saying this for a long time, but was told last week that it qualifies as a hot take.
for whatever it's worth, though, i don't believe it's possible to be a legitimate full stack engineer for applications of even moderate complexity in 2023.
— steve o'grady (@sogrady) March 7, 2023
Companies benefited inordinately by having developer jacks-of-all-trades available to work at every layer of the stack. It’s like having two developers for the price of one! But the expectations placed upon full-stack developers have proven unrealistic and often unpleasant for workaday practitioners. For this reason, the full-stack role has received significant backlash from within these communities. As Laurie Voss succinctly explains:
You can’t learn the whole stack. Nobody can. Maybe it was possible in 1990, the day after the web was invented, but I’m not even sure about that.
Beyond its impossibility, the insistence of so many employers upon hiring full-stack developers has led to burnout, anxiety, and unnecessary cognitive load. A decade ago, “discouraged developer” Tim Bray voiced his frustration with the idea of a full-stack developer, who is expected to master the front and back of the stack as well as IT operations:
But there is a real cost to this continuous widening of the base of knowledge a developer has to have to remain relevant. One of today’s buzzwords is “full-stack developer”. Which sounds good, but there’s a little guy in the back of my mind screaming “You mean I have to know Gradle internals and ListView failure modes and NSManagedObject quirks and Ember containers and the Actor model and what interface{} means in Go and Docker support variation in Cloud providers?” Color me suspicious.
The full-stack role undermines the tinkering, exploratory spirit that has long invigorated the profession of software development. DevOps engineers are expressing a similar sentiment to their disillusioned full-stack counterparts. Tired of trying to master the skills required for both development and operations, many DevOps practitioners lobby to resegregate these roles into separate domains in order to lighten the mental strain imposed by assuming mastery of both. It is no wonder that things began to look up for the maligned frontend engineer in the 2020s, heralded by Chris Coyier’s “The Great Divide,” which established the frontend’s renewed identity and its mastery of client-side interactivity and the UI.
I will leave an extended analysis of the frontend’s future for a later post, but I want to round out this history-focused analysis with the most recent challenge to the sovereignty of the frontend role: the rise of single page applications (SPAs).
For many aspiring frontend (and full-stack) developers looking to write JavaScript professionally, learning React—the most popular JS framework—remains the most direct path to employment. Advice on job boards and podcasts often returns to this contentious theme. Indeed, whether folks hoping to reskill into the tech field ought to learn vanilla JS, or jump straight into React, continues to be hotly debated. Although learning a JS framework is a pragmatic approach, particularly for individuals looking to maximize their earning potential quickly, the suggestion that frontend === framework narrows and condenses the frontend’s range.
It is imperative for folks invested in developing apps with rich interactive UI experiences to keep the frontend profession from being condensed into mere framework maintenance. Developing and designing for the frontend is a broad and creative endeavor. If frontend engineers are reduced to React developers, their collective identity will suffer.
React may be a large and vocal player in the frontend space right now, but the SPA’s continuing dominance is uncertain. Many apps are adopting HTML-first architectures that eschew heavy JS wrappers. Static site generators (SSGs) and server-side rendering (SSR) have also seen a resurgence in popularity owing to their performance benefits. Moreover, issues relating to client-side interactivity, including hydration, caching, signals, and islands, continue to generate excitement among the web developers I follow. Innovations and improvements in these areas in particular have drawn many of the brightest minds to take part in, and invigorate, this space apart from the JS frameworks that threaten to suck all the oxygen from the room.
To wrap up this history and analysis of the Frontend Kingmaker, the renewed respectability of developers working at the top of the stack has been significant. No longer a dirty word, the title of “Frontend Engineer” is now worn with pride. Although many web developers will continue to eschew the title as a means of professional survival, owing to its long tenure of being perceived as lesser, as I have endeavored to show, the times, they are a-changin’.
Header image: “A King eating Pancakes” created with Dall-E
Each of these incidents acts to dilute the term open source, and thus weaken it.
Some would excuse, if not actively condone, this behavior because, when it comes to the question of what open source AI is, the answer is that we don’t know yet. It is not clear, at present, precisely what the term open source means in the context of AI. There is no industry consensus, and the primary, underfunded defender of the term is still working on a definition.
The implicit assertion of those who would defend describing as open source assets that objectively are not is that the blame should go not to bad-actor authors, but rather to the OSI. If only its definition had been available, the reasoning goes, the parties deliberately and willfully misusing the term open source would have been more respectful.
This position ignores some obvious challenges. Most obviously, defining open source with respect to AI is an enormous industry challenge. It is not clear, for example, that copyright – the fundamental intellectual property mechanism open source licensing is based on – can be applied to embeddings and other abstruse, numerical portions of released projects. And while the open source definition was designed in an era where the source code was all that mattered, it is but one small piece of an AI model. What, then, should a definition in an AI era require of project authors to ensure the same rights to an end user? How encompassing should it be? And what are the downstream implications of that? A project trained on massive datasets stretching across the internet, as but one example, is clearly not going to be able to convey that as part of its release.
But it’s not just that defining open source is difficult. Those who would blame the OSI for the repeated misuse of the term open source with respect to AI models are ignoring a simple truth: that while we can’t yet say what open source is, precisely, with respect to AI, it’s easy to tell what it is not.
It is true that we do not yet understand what the scope of an open source AI license might be, and whether it touches on training data or whether weights, parameters, and embeddings are sufficient. We can say with confidence, however, that licenses imposing artificial use restrictions based on the user counts and revenue mentioned above will not qualify for this definition.
It is possible, therefore, to be respectful of the term open source and its specific meaning even in the absence of a definition that applies to models. And it’s possible to do so in a manner in which full credit is still received for making assets open rather than keeping them private and proprietary. We know this is possible because this is precisely what Google has done with Gemma.
Released last week, Gemma is a pair of small but high-performing models from Google intended to compete with the likes of Meta’s LLaMa. Like LLaMa, Gemma is an open model. Unlike Meta, however, which falsely claimed that LLaMa was open source, Google was careful to state that while Gemma is open, it is not open source.
Their reasoning is as follows:
We’re precise about the language we’re using to describe Gemma models because we’re proud to enable responsible AI access and innovation, and we’re equally proud supporters of open source. The definition of “Open Source” has been invaluable to computing and innovation because of requirements for redistribution and derived works, and against discrimination. These requirements enable cross-industry collaboration, individual innovation and entrepreneurship, and shared research to happen with exponential effects.
However, existing open-source concepts can’t always be directly applied to AI systems, which raises questions on how to use open-source licenses with AI. It’s important that we carry forward open principles that have made the sea-change we’re experiencing with AI possible while clarifying the concept of open-source AI and addressing concepts like derived work and author attribution.
The gist, in other words, is that while we don’t yet know what open source AI is, we do know what it isn’t.
This articulation and branding is important – vitally so – for the long-term health of the term open source and, thereby, the industry. But note that it comes at no cost to Google. There is no ambiguity or uncertainty about whether the model is open and available: it has been described and received as such. “Open model” conveys precisely what it needs to, and makes no promises it cannot fulfill. Unfortunately, the press has not yet internalized the difference between open and open source that Google so clearly articulated, and took it upon itself to apply to Gemma the term open source that Google itself so assiduously declined to use.
Unfortunate as that may be, Google should be commended for its behavior here, for doing the right thing by open source and for providing a clear path that with luck, others may follow.
Open is good. The industry succeeds and is driven forward when groundbreaking new models are released and made available. But for the health of open source and the industry as a whole, it’s important to choose our words carefully and to understand that while open is good, open is not open source.
Disclosure: Google is a RedMonk customer. Meta and the OSI are not currently RedMonk customers.
In recent years the frontend has undergone a renaissance. Frontend developers—individuals who have traditionally focused on writing HTML, CSS, and JavaScript code, but increasingly work on everything touching UI, which includes APIs, build tools, interaction, GraphQL, accessibility, design, and QA—are seeing new frameworks, services, and tools revolutionize both the ways they work, and the apps they build. Today, the proliferation of JS frameworks offers developers a veritable cornucopia of options, while a wave of managed services caters to their need for backend infrastructure. Vercel’s branding as “the Frontend Cloud” acknowledges this subset of developers’ power as both consumers and industry movers. I have also written about several BaaS companies targeting frontend devs. Illustratively, Paul Copplestone, CEO of Supabase, does not skip a beat when identifying the specific market he hopes to nail down:
Definitely JAMstack… Eventually we’ll target more established full stack developers, even people who really enjoy Postgres already.
All of this hype and attention has done wonders for the perception and identity of frontend practitioners. These developers are no longer ashamed of the title, or unilaterally hung up on questions of whether “frontend web developers have a bad reputation for having poor abilities?” Although some stigma persists, frontend engineers are in ascendance and therefore increasingly disinclined to hide behind the label “full-stack.” In 2024, it would seem that developers working at the top of the stack, and the vendors catering to them, are thriving.
In that spirit, I argue it is time we at RedMonk expand on Stephen O’Grady’s now canonical 2013 concept of The New Kingmakers by formally acknowledging the role of frontend developers. Software engineers possess a tremendous amount of power and influence in organizations, but those working at the top of the stack are largely absent from O’Grady’s book. Recent trends in the platform and developer tooling space suggest that this segment of the developer population can no longer be ignored. Of course, the frontend-developer-as-Kingmaker is not, in fact, novel, as I will explain in a follow-up post, but the history of sidelining is real.
This blog-based coronation acknowledges the frontend developer’s elevated status. Although the desktop isn’t going anywhere fast, as IoT and mobile devices improve and evolve (this is Apple Vision Pro’s release month, after all), the frontend will continue to be where innovation happens. Moreover, API-driven development enables top of the stack developers to leverage backend services easily and affordably. New solutions in the data management and compute space are addressing significant unresolved problems. Solutions and protocols like GraphQL, WebAssembly, and WebSockets all look to disrupt business-as-usual for data fetching, storage, and use.
The backend dev and the frontend dev pic.twitter.com/yvsAa5BluL
— 𝐃𝐚𝐦𝐢 ✨ (@damimoonbb) August 23, 2022
The frontend developer’s image and popular appeal have also undergone a makeover. These engineers have a hand in the design sphere, needing to be able to navigate the Adobe suite and Figma to ensure the design and UX team’s mock-ups are translated into pixel perfect code for desktop and mobile devices. For this reason it is no great wonder that in the popular imagination the frontend developer is closely aligned with designers. The stereotype persists that frontend engineers carry themselves accordingly, with fashionable haircuts and clothes. Vercel has made this hipster aesthetic core to its branding. At Render ATL last year you could see the Vercel folks a mile away in their uniformly sleek black tee shirts. Among others, Supabase—who works with Vercel—has joked repeatedly about this curated lewk. Far from the frumpy developer stereotype of yore, frontend developers have an identity all their own, and they won’t be diminished or ignored.
Dressed like they’re about to tell me what’s new in Next.js 15 pic.twitter.com/ynYhQKgRMl
— Supabase (@supabase) November 10, 2023
I date this frontend renaissance to the 2019 publication of Chris Coyier’s “The Great Divide.” In this influential CSS-Tricks post, Coyier surveys the state of frontend and speaks about this domain’s plasticity. The label frontend is diffuse: while some focus on “HTML, CSS, design, interaction, patterns, accessibility, etc.” others specialize in JavaScript: “modern frameworks, fancy build tools, and interesting data layer strategies… React as a UI library, Apollo GraphQL for data, Cypress for integration testing, and webpack as a build tool.” Part of this separation came out of the web’s snowballing complexity, and particularly the Server/Client Two-Step, which seeks to combine client-side interactivity with server rendering’s recognized performance. But what Coyier’s post makes crystal clear is that the frontend is enough. Developers don’t need to feel bad about working at the top of the stack or else claim membership within the full-stack guild in order to demonstrate their ability. Frontend engineers not only perform serious, necessary work; their domain is evolving rapidly and involves some of today’s most exciting and disruptive tech.
The result of frontend engineering’s renewed respectability in the tech space has been significant. No longer a dirty word, “Frontend Engineer” is a title devs now wear with pride. I’ve made the case that in 2024 the frontend developer deserves space beside more canonical Kingmakers from the backend and IT operations spheres, but how did this transition come about and what does it mean for the future of the software industry? This post is the first in a series on this subject. The second will historicize the frontend Kingmaker and assess the relevance and persistence of the term “full-stack engineer.” Next, I will examine the future of frontend by situating this domain within a larger conversation concerning cloud, the rise of abstractions, and exercises in software packaging versus primitives. There is still a lot to say about the importance and evolution of the frontend, and this necessary conversation is far from settled.
From its birth through the first few years following the turn of the century, technology was most properly considered as a centralized industry. Dominated by a smaller number of large players and their monolithic, relatively homogenous platforms, it was predictable in its progression and characterized by a steady, stately pace. There were disruptions and revolutions, of course – the personal computer era being perhaps the most notable of these – but overall, the landscape was largely coalesced around a small number of vendors and their respective technologies.
Around the turn of the new millennium, however, a variety of macro market pressures were building. What would come to be called open source, for one. The internet, for another. Twelve years after one of the original internet pioneers was born, meanwhile, it gave birth to yet one more market-moving factor: the cloud. These and a myriad of other pressures, trends and developments combined to start history’s pendulum on a reverse trajectory.
Seemingly overnight, the wider technology market developed not only a tolerance of but appetite for specialized technologies – specialized technologies that could only come from a more decentralized, distributed technology industry. So it was that the small stable of big tech companies gave way to a new, far more crowded landscape of players. As but one example, what had been staid, heavily centralized markets like databases blew up into the fragmentation that was NoSQL. One software category populated by a small number of relevant players became a half dozen or more categories each of which had four or more relevant vendors of its own.
As Ashlee Vance quoted Andrew Feldman, then CEO of SeaMicro (acquired by AMD in 2012) in 2010:
“There is a foment happening. It’s a bubbling of ideas and technology.”
None of the above should be controversial, or particularly open to dispute. No one can credibly question the assertion that the market, on the whole, swung towards specialization. The only real question was when the pendulum would swing back in the other direction. When, the question was, would the market again advantage general purpose offerings, with consolidation as its direct, inevitable consequence?
Based on the evidence at hand, the answer appears to be now.
Two and a half years ago, it was noted in this space that the database market – the same market that was aggressively and forcefully decentralized beginning around 2009 – was beginning to consolidate. In a pattern consistent with convergent evolution, specialized database players began to add general purpose, adjacent features to expand their addressable market and to satisfy customers seeking to reduce their vendor overhead. General purpose relational databases, meanwhile, increased their ability to compete with their more specialized counterparts by adding specific areas of capability – the ability to ingest and operate on JSON, for example.
That was an interesting development for the database market, certainly, but if it was an isolated development its significance would be limited. Instead, however, this type of functional consolidation is rippling outwards and impacting category after category. The APM, logging and observability markets, for example, have been aggressively merging for years. Developer toolchains, once made up of independent, best-of-breed components from version control to issue tracking to CI/CD to vulnerability scanning, are driving towards native, pre-integrated experiences offered by a single party. Developers individually, meanwhile, have been clamoring for Heroku-like platforms that abstract all of the underlying plumbing away, leaving them with only the simple task of pushing code to it. CDNs become more general purpose, clouds become better CDNs. One-time areas of high visibility and focus like operating systems, virtual machines or container orchestration platforms are gradually fading from sight, obscured beneath multiple layers of abstraction and consolidation. Where RedMonk’s language rankings were once somewhat dynamic, characterized by growth, sometimes rapid, of late they have been more static and reflective of minimal movement.
From individuals to markets, the inevitable outcome of having too many choices is a desire to make fewer choices. Consolidation and centralization are real, and they are here, observable in most markets today with one notable exception which we’ll come back to.
Some will likely argue that, like every other aspect of life today, fragmentation was a zero interest rate phenomenon. And inarguably the availability of free money contributed to a world with more startups. But to point to ZIRP as the sole or even primary driver of fragmentation and decentralization is reductive, and ignores the reality that this trend was always inevitable independent of interest rates, because history demonstrates conclusively that pendulums that swing in one direction will inevitably, at some point, swing in the other.
The question facing us, then, is not whether the pendulum has started to reverse its course, but what that swing means. The implications are wide, but a few conclusions seem obvious:
Speaking of AI, however, that is the notable exception to this trend mentioned above. If anything, fragmentation in the AI world is expanding, not contracting. New models arrive by the day, as do new use cases and the vendors that would target them. This is likely to continue in the near term, but there are already signs that AI is poised to be impacted by precisely the same trends currently being experienced by other software categories. AI is new, so its period of frothy experimentation is tolerated, but there will come a time when customers, and even most developers, tire of having to continually make choices about models and otherwise, and default to whatever general purpose platform they’ve imprinted on.
Fourteen years ago, this was the observation regarding the arrival and onset of specialization.
As to when the pendulum will shift back towards general purpose, all I can tell you is that it will at some point. The inevitable result of an explosion of choice is a reactionary market shift away from it. All this has happened before, and all of this will happen again.
Today, in 2024, that same question remains. When will the pendulum shift back to specialization? The answer is that we don’t know, but we can confidently state that it will. It always does.
This week it’s The State of Open Conference 2024 at The Brewery, London. It was great last year. I heartily recommend you attend. As I said on twitter at the time:
The UK now has its own OSCON. The event will happen again and will go from strength to strength.
Attendees and speakers were a who’s who of open source, open hardware and open culture people generally. The open data track was particularly lively. There were so many of my friends there, it really felt like my people had all congregated in London for the day. The event also felt very inclusive, in terms of both speakers, but also attendees. It reflected London’s rich diversity.
The speaker list is extremely impressive again this year.
But the real reason I think SOOCon24 is so important is the focus on policy, governance and open source sustainability. Open source is under a great deal of pressure right now. VCs are encouraging their portfolio companies to adopt “business source” licenses, which are not actually open source. Why does this matter? As my colleague Stephen O’Grady argues:
A world in which non-compete licensing grows at the expense of open source is problematic enough. A world in which vendors blur the definition of open source such that regular users can no longer differentiate between the two is much, much worse.
Pedantic as it may seem, then, the question of whether something is actually open source really does matter, as those who would redefine the term will find out if they get their way.
This movement has also bled into the current AI explosion. What is “Open” AI? That’s something we need to work out – and major market players are casually calling things open source, which frankly aren’t. Another area of governance and policy under scrutiny is regulation of AI – we can’t just leave this as the era of “You Only Live Once.” Controls will be necessary, and governments are scrambling to put them in place. At SOOCon24 the organisation behind the conference, OpenUK, will be capturing opinions and data to feed back to the UK government about regulation going forward. I believe we’re going to see AI Bill of Materials requirements regulated at national level.
It’s a pivotal time, and these discussions are vitally important – that’s why they need a home. We’re literally talking about the economic foundations of the digital economy, the means of production which have served us pretty well these past couple of decades, and the opportunities for making and learning which have made tech such a transformative success. Authors and creators need stable foundations to work on. Copyright and licensing matters. Back to Stephen:
Instead of the embarrassment of riches of open source projects we have today that developers may take up and use for whatever they choose without reference or restriction, we’d have a world dominated by projects carrying varying, conflicting usage restrictions that would render the licenses incompatible with one another and not usable by some.
I am glad Amanda Brock and team are pulling this event together, for all of the reasons outlined above, and I look forward to seeing you there. I believe there are a few tickets available.
If you’re interested in AI and prompt engineering, and all of the craft, sustainability and social angles, you should also check out my conference Monki Gras 2024: Prompting Craft. March 14th and 15th, Shoreditch London. Tix here.
Members of the Cloud Native Computing Foundation (CNCF)’s Environmental Sustainability Technical Advisory Group (TAG) addressed this issue on stage in the KubeCon Chicago keynote. The TAG’s mission is to “advocate for, develop, support, and help evaluate environmental sustainability initiatives in cloud native technologies.”
I enjoyed the topic on stage enough that I invited Niki Manoledaki, Software Engineer at Grafana Labs and member of the Environmental Sustainability TAG to join me for a deeper dive.
The entire conversation was enlightening, but in particular I left my time with Manoledaki curious to know more about the Kepler project.
Kepler stands for Kubernetes-based Efficient Power Level Exporter. It’s a sandbox project in the CNCF, donated in 2023 after being “founded by Red Hat’s emerging technologies group with early contributions from IBM Research and Intel.”
Understanding the carbon intensity of cloud workloads is particularly challenging because most cloud providers do not expose power usage metrics through the hypervisor in their virtual machines. The Kepler project is designed to help end users to better “estimate power consumption at the process, container, and Kubernetes pod levels.”
Because there is no direct way to measure the power that a VM consumes in a cloud environment, Kepler uses eBPF to probe CPU performance counters and Linux kernel tracepoints. These metrics are correlated to the appropriate pod using cgroup stats, then fed into machine learning models that estimate workload power usage where concrete measurements are not available. The results can then be exported as Prometheus metrics.
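The estimation step Kepler performs can be thought of as ratio-based attribution: a node's measured or modeled power draw is divided among workloads in proportion to their share of resource usage. A toy sketch of that idea in Python (the function, pod names, and numbers here are invented for illustration, not Kepler's actual API or data):

```python
def attribute_power(node_power_watts, usage_by_pod):
    """Split a node's power draw across pods in proportion to resource usage.

    usage_by_pod: mapping of pod name -> resource usage (e.g. CPU time deltas
    gathered via eBPF probes and correlated to pods through cgroup stats).
    """
    total = sum(usage_by_pod.values())
    if total == 0:
        return {pod: 0.0 for pod in usage_by_pod}
    return {pod: node_power_watts * use / total for pod, use in usage_by_pod.items()}

# Hypothetical counters: pod A used 300 CPU-ms, pod B 100, system overhead 100.
estimates = attribute_power(50.0, {"pod-a": 300, "pod-b": 100, "system": 100})
print(estimates)  # → {'pod-a': 30.0, 'pod-b': 10.0, 'system': 10.0}
```

The real project layers trained power models on top of this basic proportioning to handle cases where no hardware counters are exposed at all, but the intuition is the same: energy is apportioned by share of utilization.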
This paper is an excellent resource for people wanting to better understand more specifics about Kepler.
Kepler utilizes a BPF program integrated into the kernel’s pathway to extract process-related resource utilization metrics. Kepler also collects real-time power consumption metrics from the node components using various APIs … Once all the data that is related to energy consumption and resource utilization is collected, Kepler can calculate the energy consumed by each process. This is done by dividing the power used by a given resource based on the ratio of the process and system resource utilization.1
In short, no.
While it’s good to start to build some transparency into cloud compute power usage, we still have a long way to go on the path to sustainable cloud computing.
Per my conversation with Manoledaki, runtime energy accounts for only a small component of total energy use.
With carbon monitoring, the issue is we have scope one, scope two and scope three emissions. Scope one are direct emissions, scope two are indirect emissions, and scope three are everything else in the supply chain in the manufacturing process. And we know that in the cloud, it’s estimated that 70 to 90% of the carbon emissions are scope three. So they are from the supply chain of software from the manufacturing. And this is data that we as cloud users are lacking to a certain extent. So yes, runtime energy consumption is a small part of the equation.
In addition to checking out the Kepler project, there are many potential starting points for those interested in better understanding their energy use in the cloud.
Related: Read more about why RedMonk thinks eBPF is interesting
Disclaimer: AWS, Google Cloud, IBM, Intel, Microsoft, and Red Hat are RedMonk clients. Grafana Labs and the CNCF are not, though CNCF paid my travel expenses to their conference.
Neither the video nor this accompanying piece were sponsored, and as always this work reflects RedMonk’s own opinions.
The results of my HeadshotPro experiment were both humorous and somewhat disconcerting.
Using the website was fairly straightforward: pay $29, answer some questions, upload some photos, get some headshots.
I was asked to define (via dropdown) my:
Then I chose backgrounds and clothing styles. (I went with brick wall/white top and white background/blue top, hoping that I would land somewhere in neutral territory.)
The system requires you to upload at least 12 images and has various constraints and recommendations around quality and composition. This was slightly difficult for me primarily because my camera roll is almost entirely pictures of my kids. But I pulled together images, all from the last couple years.
I should have taken a screenshot of the input step, but here’s a flavor of my input photos.
HeadshotPro returned ~120 possible pictures to sort through. You select your favorites of the batch and they provide fully rendered, non-watermarked versions of your top choices. (I’m purposely sharing the watermarked versions here because while I appreciate all you weirdos reading this, I absolutely don’t trust you.)
In the process of sorting through them I encountered the standard AI issues. Six fingers. Weird rendering of photo details on things like shirt collars or jewelry. But beyond any of these specific issues, there were many uncanny valley moments of feeling like I was looking at something that was close to but not quite me.
But also: holy problematic encoding of societal beauty standards, Batman!
Look at how absurdly thin they rendered “me” in some of these photos
Notice how there’s completely unnecessary, AI-generated sexualization
These professional headshots come with generated cleavage and nipples visible through clothing, even though the instructions explicitly tell you not to upload prompt images with exposed skin. Why?
Look at how long and lustrous the hair is
And notice that the AI clearly leaned into longer and blonder hair as the desired outputs even though I definitely shared prompt photos that also included my current darker, shorter hair.
Notice how the skin in any of the above photos shows no evidence of age at all
There are no pores or wrinkles or rosacea in AI-ville. In fact, there is no aging at all. After I got these results I called my mom and had her dig up my high school senior picture so I could compare them. And because I love you, internet, I am going to share the results.
This is 17-year old me compared with an AI generated version of what I ostensibly look like now. That’s the same skin! I know I definitely spend more than is reasonable on moisturizers, but the amount of anti-aging in this photo is absurd. This AI is depicting what I looked like half a lifetime ago.
Given that my goal was to see if I could get an updated professional headshot using recent-but-not-professional photos, seeing my 17-year old self reflected back in the results was not super helpful.
I know that we are societally conditioned to want to look younger, thinner, and more polished. You can argue that the AI is merely reflecting these societal standards back to us.
These concerns are not new, and in fact come up with every technology wave. See, for example, pieces on the danger of TikTok’s Bold Glamour Filter (2023), the problem with Snapchat filters (2018), 25 years of how Photoshop changed the way we see reality (2015), etc etc.
Over and over again we wring our hands about the societal impact of a given technology, and the answer always seems to be that the problem is not the technology itself but how people use it. Yes, and no.
We build these systems knowing that there is an interplay between the world as it is and the world we’re creating, and we have to consider what we’re encoding into these AI outputs (via the training data used, how it’s labeled, how models are weighted, how the algorithm responds to adversarial feedback, and so on).
In the grand scheme of things it’s absolutely not a big deal that my experiment with headshots didn’t pan out. But it is worth thinking about what happens as AI applications move from being a toy we experiment with to something that becomes load-bearing in significant ways. What do we need to be noticing, considering, and changing about AI outputs now if we want to ensure future outputs reflect the society we want?
I leave you with a quote from Cathy O’Neil’s Weapons of Math Destruction:
“Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.”
Related Content
“Mobile app development is broken,” Jenna Bilotta, SVP of product and user experience at LaunchDarkly, told me recently. I’ll identify these fractures, and how vendors like LaunchDarkly are addressing them shortly, but I want to begin this post by acknowledging a sentiment that we hear repeatedly at RedMonk. While mobile’s difficulties are material and speak to the industry demand for practitioners with deep domain knowledge, they also signal a significant transition in this space. Many of the developers I know are pointing to the Progressive Web App (PWA) as the likeliest solution. PWAs are web-based applications with significant interactive capabilities. They eliminate the friction of downloading a native mobile app from the app store by allowing users to complete tasks requiring compute and data management all from the browser.
Progressive web apps are not new, but with modern browsers’ expanding capabilities they are becoming an ever-more viable alternative to native apps. Consider this recent exchange between two Hacker News users. In response to one contributor’s claim that Meta’s UI framework for developing mobile apps, React Native, “is fantastic for getting out basic applications that may need camera, map, browser, and storage, etc capabilities,” another user responds:
The thing is, though, browser-based APIs let you do all of that now, in a way that is usually a lot simpler to access. In these cases it’s often a lot easier to just build a Progressive Web App … With the big exception of games, there are a vanishingly small number of apps that really require native functionality these days, given what you can do in the browser.
With technologies like WebAssembly (WASM) making it easier than ever to shift executables client-side, and htmx giving developers “access to AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML,” the developer community seems optimistic that mobile development will be relegated to only the heaviest applications (namely games). Most everything that can be accomplished on a native mobile app will soon be fully available on the browser. Of course, the important word here is “soon.” PWAs may become the default in 5 or 10 years, but today the mobile app market remains vibrant—for good or ill.
The remainder of this article contextualizes the challenges faced by mobile developers beside the promise of PWAs. The tyranny of mobile development may not be going anywhere soon, but at the start of 2024 it is useful to consider this domain’s challenges beside a browser-based alternative that many consider to be the future.
Apps that work natively on smartphones and tablets need to accommodate a range of devices, operating systems, and platforms. This diversity means that unlike traditional software development, which targets a limited number of platforms, mobile apps must work on all the things. Moreover, applications must function at various screen sizes and resolutions. This range increases the testing and optimization efforts required, making the development process a slog while keeping device farms like BrowserStack in business.
PWAs serve to alleviate the device, operating system, and platform variety issue since browsers—the platform for accessing these apps—are designed to be interoperable. In terms of logistics, this means that the labor mobile app developers currently devote to the task of ensuring their apps run on different devices will shift from mobile to browser development teams.
Chart from Embrace’s 2023 State of Mobile Experience Engineering Report
The approval process for the Apple App Store and the Google Play Store is famously complex and tedious. These platforms have strict guidelines and policies that can be onerous to meet. Moreover, any violation of these requirements may result in the app’s removal.
Both Apple and Google have an app review process to assess the functionality, design, and content of submitted apps. These reviews can be time-consuming, and developers may face delays if their app requires multiple iterations or if there is a backlog of submissions. What is more, the different app stores have specific requirements and guidelines that developers must follow. For example, iOS apps must adhere to Apple’s Human Interface Guidelines (HIG), while Android apps need to align with Google’s Material Design principles. Unsurprisingly, adapting the app to meet these platform-specific requirements can be time-consuming.
Developers must not only ensure that their apps conform to often stringent specifications; the apps must also be performant. Thorough testing is needed to identify and fix any bugs, but guaranteeing compatibility with various devices and operating system versions adds additional complexity to this process. Of all the possible faults, Android developers are particularly incentivized to avoid Application Not Responding (ANR) errors which, to quote Sentry’s docs, “are triggered when the main UI thread of an application is blocked for more than five seconds.” An app’s store rating depends in part upon the number of ANRs it experiences. High ANR rates are a big deal for developers because the Google Play Store penalizes non-performant apps by making them less discoverable in the store. In response to these high stakes a number of vendors have stepped in to address these challenges. While Sentry touts their “mobile crash reporting and app monitoring,” LaunchDarkly is targeting mobile devs with their feature flag product by promising to make “mobile releases safe and predictable.” An entire ecosystem of third party tooling has appeared to assist customers in mitigating the risk of errors like ANRs jeopardizing a company’s position in the app store.
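The watchdog pattern behind ANR-style detection is simple to sketch: the main thread records a heartbeat each time it finishes processing an event, and a separate thread flags the app if the heartbeat goes stale past a threshold. Here is a simplified, language-agnostic illustration in Python (this is not Android's actual implementation, and the threshold is shortened from five seconds so the demo runs quickly):

```python
import threading
import time

class Watchdog:
    """Flags a stall when the 'main' thread stops posting heartbeats."""

    def __init__(self, threshold_seconds):
        self.threshold = threshold_seconds
        self.last_heartbeat = time.monotonic()
        self.anr_detected = False
        self._lock = threading.Lock()

    def heartbeat(self):
        # Called from the main thread each time it finishes an event.
        with self._lock:
            self.last_heartbeat = time.monotonic()

    def check(self):
        # Called from a watchdog thread; True means an ANR-style stall.
        with self._lock:
            stalled = time.monotonic() - self.last_heartbeat > self.threshold
        if stalled:
            self.anr_detected = True
        return stalled

# Simulate a responsive event loop, then a blocking call on the main thread.
dog = Watchdog(threshold_seconds=0.5)
for _ in range(3):
    dog.heartbeat()          # event processed promptly
    assert not dog.check()   # heartbeat is fresh, no stall
    time.sleep(0.1)

time.sleep(0.6)              # main thread blocks past the threshold
print(dog.check())           # → True: the watchdog would report an ANR
```

On Android the equivalent check is performed by the system server rather than the app itself, but the principle (a stale main-thread heartbeat crossing a time threshold) is the same.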
But for all the headaches the app store’s demands place on software engineering teams, it may all be worthwhile, because users want the kind of fast and non-buggy apps that these obstacles are intended to safeguard. Moreover, because users who enjoy an app’s experience are far more likely to remain engaged, meeting user demands for performant apps also benefits app vendors. Last year I spoke with Embrace.io’s Andrew Tunall, VP and head of product, and Virna Sekuj, product marketing manager, about their 2023 State of Mobile Experience Engineering Report: a document featuring a number of interesting insights on the subject of user frustration with subpar performance. As Sekuj explains:
we’ve found that in the mobile space, especially, engineers tend to deal with a lot of frustrations around resolving app performance issues and optimizing their end users’ experiences, especially because mobile is such a unique development environment with a lot of different variables
The report breaks down these frustrations into specific categories and pain points. It is significant, if not surprising, that the experience of users is different from that of developers, as these groups have substantially different perspectives and goals. If the app store approval process is largely to blame for mobile development’s brokenness, as I so often hear, then it is noteworthy that according to Embrace’s chart above “Complaints/bad reviews” number among developers’ most significant concerns when it comes to “customers/end users having a poor experience on your app.”
As PWAs could eliminate the need for the app store’s vetting process, it is worth considering what would be lost by sacrificing this platform. Significantly, PWAs are not absent from these marketplaces. Google hosts PWAs in the Play Store directly (Chrome Labs offers a CLI tool, Bubblewrap, to assist with this), and Apple allows PWAs as long as they use a native wrapper (Cordova and Capacitor offer these wrappers). Although browser-based apps eliminate the need for an app store entirely, users may still prefer the performance guarantees these marketplaces provide.
Smartphone and tablet users today are conditioned to interact with apps, both native and browser-based, by way of quick links. My colleague Rachel Stephens explains that her father used to call apps “his squares,” and was tremendously confused about what was “a square” and what was “a website.” PWAs buck decades of ingrained behavior concerning how users access and interact with apps on their devices. If native-like experiences can be supported through the browser then existing behaviors and incentive structures for building and maintaining downloadable mobile apps will need to shift in kind. Users will ultimately determine whether the Apple and Google stores merit the friction they add.
How often have you added a website/app to your home screen on your phone?
— Cassidy (@cassidoo) December 11, 2023
I have suggested why users may continue to demand an app store-provided experience, but developers may also benefit from these platforms. Done correctly, the app store acts as a potent advertising and discoverability platform. Despite its challenges, successfully navigating the app store landscape can lead to an increased user base, revenue generation, and brand recognition. Although the sheer volume of apps available on the Apple and Google Play stores makes it challenging for developers to achieve visibility—requiring effective marketing strategies, a compelling app description, engaging visuals, and positive user reviews—the marketplace’s ability to funnel potential users and buyers into a single feed can be beneficial.
Of course, the looming issue here is how the app store owners benefit, and what value they bring to the table. As PWAs gain momentum, the fact that browsers do not guarantee website safety or performance will become increasingly contentious. The app stores already cite security as an exclusive benefit their platforms offer. It is this argument that Apple and Google make when defending their app stores’ more controversial practices:
The two companies argue their app stores help unlock billions in revenue for small businesses, while ensuring that Android and iOS users benefit from security oversight that the technology giants provide.
Epic Games’s lawsuits against the Google Play Store and the Apple App Store, which antitrust activists in the US and Europe continue to watch with great interest, hinge upon the argument that the app stores maintain illegal monopolies that stymie competition. From the vantage of an advocate of PWAs, it is noteworthy that the creators of Fortnite chose to pursue a legal case against the app stores, not only because Epic looms so large in the mobile app ecosystem, but also because browser-based gaming is unlikely to take off. Even if the majority of business and shopping apps move to the browser, gaming’s data- and graphics-intensive requirements make it a better fit for native apps.
Relearning iOS development after nearly a decade since my last mobile app: so much has gotten better!
Swift is a huge step forward from ObjectiveC, and Xcode got way better- no longer takes days to download, install, configure. Nice previews and autocomplete suggestions 🛠️
— Sarah Drasner (@sarah_edo) December 21, 2023
If you speak to mobile developers like the incomparable Sarah Drasner about challenges in the domain of mobile development, they inevitably mention the ecosystem’s tools and languages. Mobile developers use niche and historically difficult programming languages. In brief, there is a well-trod stereotype that many Kotlin (Android) and Swift (iOS) developers transitioned to these languages from Objective-C. The reasons for this pedigree are obvious. Apple created Swift to support ideas central to Objective-C like “dynamic dispatch, widespread late binding, [and] extensible programming,” while in 2020 Kotlin 1.4 began offering support for Apple’s Objective-C/Swift interop capabilities. Fair enough. The problem comes from the relative dearth of mobile developers outside the Objective-C community. Kotlin and Swift developers have enough work to keep them from branching out into rival languages and UI frameworks like React Native and Flutter. This means there are very few engineers in the labor pool with deep knowledge of these frameworks, so React Native shops like FanDuel that need to hire qualified devs are often forced to hire from among the much more plentiful React engineers, and then upskill these former frontend developers in React Native.
One benefit of PWAs, with their adherence to standard W3C web practices and browser-based APIs, is that they support more programming languages. Backends can be written in nearly any language, while frontend developers can select from any of the numerous JS frameworks available (or, if they prefer, just stick with HTML, CSS, and JS). This simplicity opens up hiring to more engineers. PWAs enable developers to work on web apps that are consumed on mobile devices without specifically working in mobile development.
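To illustrate how little PWAs demand beyond standard web assets: installability hinges largely on a web app manifest, a JSON file defined by the W3C spec. A minimal sketch follows; the app name and icon paths here are hypothetical placeholders, not taken from any real project.

```json
{
  "name": "Example Storefront",
  "short_name": "Storefront",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#1a1a2e",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Paired with a service worker served over HTTPS, metadata like this is generally enough for browsers to offer an installable, home-screen experience, with no app store submission required.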
When Samsung announced its Galaxy Fold smartphone with its bendable screen in 2019, I was working as a frontend engineer. I vividly remember the memes that came out of the developer community concerning this tech: poking fun not only at doubts about the hardware itself, but also at the new challenges it would pose to web developers. Ensuring that apps are responsive on smartphone-, tablet-, and desktop-monitor-sized screens is challenging enough, and every new form factor adds complications to the established CSS breakpoint schema.
Therefore, beyond accommodating diverse devices, operating systems, and browser engines, PWAs must render correctly and responsively at every screen size. And I don’t want to hear anyone (*cough* backend engineers *cough*) claiming that Bootstrap and its heir Tailwind make web apps responsive by default. As someone who works without a second monitor, I am frequently annoyed by how poorly web apps behave when they aren’t full screen. In 2024 QA continues, sadly, to disregard tablet-width screens (between 600 and 768 pixels, or 6.25 and 8 inches). But I digress.
If PWAs become the dominant means of interacting with web apps on mobile devices, responsive web design at the fiddly smaller breakpoints required for smartphones and tablets will become doubly, triply, quadruply important. Users will not tolerate a subpar experience on browser-based apps that aren’t responsive. PWAs will only succeed if they can adequately mimic a native app’s look, feel, and performance.
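To make the breakpoint concern concrete, here is a toy sketch in plain JavaScript of the kind of schema responsive designs get tested against. The names and thresholds are hypothetical, except for the 600–768px tablet band called out above.

```javascript
// Hypothetical breakpoint schema; the tablet band matches the
// 600-768px range that QA so often skips.
const BREAKPOINTS = [
  { name: "phone", max: 599 },
  { name: "tablet", max: 768 }, // 600-768px: tablet widths
  { name: "desktop", max: Infinity },
];

// Return the first (narrowest) breakpoint whose ceiling fits the width.
function breakpointFor(widthPx) {
  return BREAKPOINTS.find((bp) => widthPx <= bp.max).name;
}

console.log(breakpointFor(700)); // → "tablet"
```

In production this schema would live in CSS media queries (e.g. `@media (min-width: 600px) and (max-width: 768px)`) rather than JavaScript; the point is simply that every new form factor adds another band to design and test against.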
In speaking with developers and leadership for this project, both on and off the record, the sentiment that mobile is broken strikes me as well founded, but changes are afoot! Devs have high hopes for PWAs. Browser-based interactive apps consumed on mobile devices promise to lower friction and increase performance. While the challenges faced by mobile development teams show no signs of abating, the rise of PWAs at the very least offers the welcome boon of choice.
Disclaimer: Embrace, Google and LaunchDarkly are RedMonk clients.
This is part of a series on publishing (in academia and/or tech).
A Victorianist friend once stated that she loved the quaint practice among medievalists (those of us who study the Middle Ages) of celebrating our colleagues through festschrifts: collections of critical or reflective writing honoring the contributions made by an individual over the course of their career. Such collections are often presented to the honoree at an important milestone (such as retirement). While “festival script” might be a closer cognate of the German, I prefer to think of festschrifts as “celebration writing”, or ways to leverage the written form to honor people whose writing has made a mark on our world.
Here at the end of 2023, a time that juxtaposes seasonal celebration with reflections on a year of global conflict, tech layoffs, and general uncertainty, I find myself contemplating this concept of celebration writing, both as a genre and a larger philosophy of writing. This pensiveness arises in part because, as I have written, 2023 has felt like a year of losses, and festschrifts as a genre–in contrast to tributes in memoriam–ask us to honor people while they are still with us. But I suspect that another part of this is driven by an impulse to reflect on the aspects of writing–an act that feels particularly in a state of flux–that make it suited to such celebratory acts. So please also consider this a celebration of celebration writing.
But first things first: some more on festschrifts.
If this is your first time hearing the word “festschrift” you are not alone. Even among academics, the term feels old school, as demonstrated by the above anecdote of a Victorianist–a scholar who studies a period that ended over a century ago–referring to the practice as “quaint” (and indeed, the OED traces the word’s usage in English back to the nineteenth century). Festschrifts can include writing from current and former colleagues and students/mentees, and said contributions usually directly address or build on the honoree’s prior work. For instance, a recent festschrift for Dr. Katherine O’Brien O’Keeffe, my undergraduate advisor (who, when we were both at Notre Dame, taught one of my favorite courses, which required us to read Beowulf in the original Old English) includes a mix of new critical essays on early medieval textuality and subjectivity as well as an overview of her expertise in this area. It is also worth noting that the volume employs an “Essays in Honor of” subtitle–a common way to denote a festschrift without using the word per se.
Prior to the wide availability of digital formats, festschrifts would often take the form of a hardcopy volume of essays, a copy of which would be physically presented to the honoree at an in-person event. Such events can be organized to coincide with a professional conference–and I recall quite a few taking place during my once annual trips to the International Congress on Medieval Studies. These days, of course, there are a variety of digital formats that can be leveraged for festschrifts, sometimes in what feels like poetic ways. When medieval art historian Dr. Rachel Dressler retired, Different Visions, the online open access journal that she founded, devoted an entire issue to celebrating her work, with the forward-looking caveat that “Though this issue is inspired by past accomplishments, it looks resolutely to the present and future of medieval art history.” As Dr. Dressler was on my dissertation committee, I had the privilege of contributing an article to the issue that reflects on how her work and mentorship helped shape my own way of looking at the world.
While festschrifts are a common format among medievalists (at least those from my generation and earlier), a quick look at the history of the term (and Wikipedia has a workable one if you don’t have OED access) reveals that the genre is not just a medievalist thing or even a humanities thing, and may be even more common in STEM fields. My research into festschrifts turned up an article with directions on how to “Prepare A Festschrift” from the British Medical Journal and a piece on the festschrift genre from life sciences magazine The Scientist. If you want examples that are more recent (and at least as science-y), check out this 2022 issue of The Journal of Chemical Education and this 2021 issue of Physical Chemistry Chemical Physics. And in case you missed it, these last two directly use the word “festschrift” in the issue titles (i.e., no flowery “essays in honor of” language here).
It is worth noting that certain aspects of the genre can be problematic. Who a festschrift honors (and for what) can sometimes reproduce patterns of recognition and exclusion that we’ve seen play out in the academy and tech industry alike. The practice of “surprising” honorees with such efforts can also lead to mixed results (and indeed not everyone is comfortable being the center of such attention). Furthermore, as editorial projects and published scholarship, festschrifts constitute a category that some institutions will (while others will not) recognize as scholarship when making hiring and promotion decisions. As such, they are often characterized as “labors of love”, and yet even then the practice raises questions of who takes on the labor involved in putting such a volume together or producing (or modifying) a contribution.
And yet, even with these concerns, I still say there is a case to be made for celebration writing, even beyond the confines of a formal academic festschrift. Looking back over my own writing from the past few years, the pieces that have given me the most joy are those that celebrate people and/or their accomplishments. The article for Rachel Dressler’s festschrift was tricky in that I had to carve out time amidst the many other demands on my schedule, but it was also an absolute joy to write. Even among my RedMonk writing, I think back most fondly on pieces I wrote to celebrate the publication of books on illustration and fencing, to enthusiastically welcome a new colleague or two, and to chronicle the people and events that helped make RedMonk what it is today. Notably, the last of these also catalogs a #20YearsOfRedMonk social media celebration that had a very festschrift-ish feel, which made me realize that the tech industry might have its own take on celebration writing.
Such writing also represents a gift of time, effort, and thought that feels all the more valuable when many writing tasks are purportedly being relegated to AI-based tools. That is not to say that such tools have no place in celebration writing (and how long until we get our first prompt engineered festschrift?). While I do think artisanally crafted celebration writing may signify differently, to both writers and honorees, than whatever genAI can currently cook up, I suspect that recognizing and celebrating the people whose work means something to us is a worthwhile effort regardless of the tools and formats used.
To that end, if you have any examples of celebration writing that you’d like to share (especially those related to tech), please let me know in the comments below.