console.log()

Optimizing JavaScript Delivery: Signals v React Compiler


JavaScript in 2025 isn’t exactly lightweight. Shipping JS code involves managing browser quirks, massive bundle sizes, hydration woes, and performance tuning that can sometimes feel like black magic to developers. The ecosystem has responded with an arsenal of tools and strategies to mitigate “The Problem of JavaScript Code Delivery.” Recently I’ve been following two rival JavaScript code delivery paradigms: compiler-driven optimization (Meta’s React Compiler) and signals/fine-grained reactivity (Google’s Angular, Vue 3, Preact, Solid, and Qwik). Which will dominate the single page application (SPA) market? Let’s discuss what we know so far, and why it matters.

 

React Compiler

To squeeze out extra performance by preventing unnecessary re-renders, React developers have, since hooks arrived in 2019, grown increasingly reliant on the useMemo and useCallback hooks, as well as on wrapping components in React.memo(). The React Compiler automatically applies those performance tricks under the hood at build time. Originally codenamed React Forget (because it helps React “forget” about re-rendering things that don’t need it), it was first previewed at React Conf 2021, and the React team open-sourced it last year. Many in the community welcomed the React Compiler because it promised to eliminate not only the confusion around “when to use (or not use) useMemo and useCallback” and “when to wrap a component in React.memo()” but also a lot of the fiddly manual optimization these developers had been doing for years. As Bob Ziroll, head of education at Scrimba, and others noted after the release, by automating memoization the React Compiler promises to improve developer experience by simplifying application optimization.
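
For a sense of what is being automated, here is a minimal sketch (the component and prop names are hypothetical) of the manual memoization React developers have been writing by hand:

    import { memo, useMemo, useCallback } from "react";

    // Wrapped in React.memo so it only re-renders when its props actually change.
    const ProductList = memo(function ProductList({ items, onSelect }) {
      return (
        <ul>
          {items.map((item) => (
            <li key={item.id} onClick={() => onSelect(item.id)}>
              {item.name}
            </li>
          ))}
        </ul>
      );
    });

    function Catalog({ products, query }) {
      // Manually memoized so the filtering isn't redone on every render.
      const visible = useMemo(
        () => products.filter((p) => p.name.includes(query)),
        [products, query]
      );

      // Manually memoized so ProductList receives a referentially stable prop.
      const handleSelect = useCallback((id) => {
        console.log("selected", id);
      }, []);

      return <ProductList items={visible} onSelect={handleSelect} />;
    }

With the compiler enabled, the same components can be written as plain functions and the equivalent memoization is generated at build time.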

Demand for performance solutions within the React ecosystem goes far toward explaining why the React Compiler is such a big deal to the community. According to Theo Browne’s grandly titled video “React Just Changed Forever” discussing the React Compiler:

React has never really thought about build tools too much… Historically React has just been the runtime. With [React] Server Components they moved to the server, but with [React Compiler] they’re moving to build … The benefit comes in on the client side because now your apps are going to behave so much faster, and the code you write to make it that fast is going to be even simpler.

With the React Compiler’s launch, the Meta development team has entered the build space, and because React is the unchallenged elephant in the JS framework space, what happens in React ripples across the frontend landscape. This is especially true for React’s metaframeworks like Vercel’s NextJS, Netlify’s Gatsby, and Remix. Moreover, by bringing an automation renaissance to React performance, the React Compiler manifests the “compiler-as-framework” trend Rich Harris is credited with kicking off in 2016 when he created the UI framework Svelte.

 

Svelte & the compiler-as-framework

From day one, Svelte has been a compiler. Developers write components primarily in HTML, CSS, and JavaScript, and Svelte compiles them into tiny standalone JavaScript modules during the build process. Beginning in the mid-2010s, the frontend community was eager for alternatives to JavaScript-heavy SPAs. As Tom Dale, Principal Staff Software Engineer at LinkedIn, predicted in 2017: “Compilers are the New Frameworks.” He elaborates:

My current “investment thesis” is that what we call web frameworks are transforming from runtime libraries into optimizing compilers. When it comes to eking performance out of hand-authored JavaScript and accompanying runtime libraries, we’ve reached the point of diminishing returns.

Harris and Dale’s bet on compilers has steadily gained traction. Instead of React being just a runtime library, it now also has a compilation step that restructures your code for optimal delivery. But this trend of frameworks turning into compilers isn’t isolated to React and Svelte. Beyond the creation of frameworks modeled on a similar philosophy to Svelte (Stencil, Glimmer), the integration of reactivity primitives like signals into SPA frameworks like Angular suggests that this idea has finally and unequivocally gone mainstream.

 

Signals

So what even are signals, and how do they compare with the React Compiler? Signals are based on the observer pattern from the Gang of Four’s influential Design Patterns (1994), applied to UI state. Developers create a signal to hold a value, and when they update it, any part of their UI (or any computation) that reads that signal will automatically update. There is no need to call setState and re-render an entire component; signals can update things more granularly. Signals formalize a pattern that libraries like MobX for React, and Vue’s reactivity system under the Options API, have used for years. In fact, the idea of using reactive primitives has been around in frontend development for quite some time, with early examples like Knockout observables and Meteor’s Tracker system appearing over ten years ago. All permit developers to update a program’s values reactively, so that whenever those values change, everything using them changes as well.
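
A minimal sketch of the pattern, using Preact’s standalone signals package as one of several available implementations:

    import { signal, computed, effect } from "@preact/signals-core";

    // A signal holds a value; reading it inside computed() or effect() subscribes automatically.
    const count = signal(0);
    const doubled = computed(() => count.value * 2);

    // The effect re-runs only when a signal it reads changes.
    effect(() => {
      console.log(`count is ${count.value}, doubled is ${doubled.value}`);
    });

    count.value = 1; // logs "count is 1, doubled is 2" without re-rendering a component tree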

With the release of Svelte 5, the framework introduced a major new feature called runes, a set of reactive primitives that use signals under the hood and are intended to give developers a clearer and more flexible way to manage state. While earlier versions of Svelte leaned heavily on JavaScript’s native syntax to create a “reactive by default” feel, this approach often confused developers and imposed strict limitations. Runes address these problems by offering a more expressive and transparent syntax, similar in style to what Vue 3 developers might recognize. Although developer sentiment around the update has been mixed, with some complaining that it makes the framework feel heavier, runes allow for highly precise updates to the DOM that can significantly boost performance and reduce unnecessary re-renders.
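
As a rough sketch (not a complete app), a Svelte 5 component using runes looks something like this:

    <script>
      // $state and $derived are runes; the Svelte compiler turns them into signals.
      let count = $state(0);
      let doubled = $derived(count * 2);
    </script>

    <button onclick={() => count++}>
      clicked {count} times ({doubled} when doubled)
    </button>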

Although the idea of signals isn’t new, mainstream frameworks adopting it as a first-class primitive certainly is. Angular, SolidJS, Svelte, Qwik, Vue 3, and Preact have all adopted this reactive programming pattern. The Angular team, which historically relied on a different mechanism (zone-based change detection that re-checked the component tree after each event), introduced signals in Angular 17. Miško Hevery, creator of Angular and Qwik, for instance, is extremely excited about this change. Not only does Qwik use signals under the hood, but Hevery has also been singing the praises of making Angular’s reactivity more fine-grained and efficient. In fact, Hevery extols the superiority of signals over the React Compiler because, in his view, the compiler only addresses part of the problem. He explains that although the React Compiler:

aims to improve the rendering performance of React applications by automatically generating the equivalent of useMemo and useCallback calls to minimize the cost of re-rendering while retaining React’s programming model … Signals don’t need memoization for the most part. Signals and memoization may be equivalent, but in reality, signals are a lot more efficient because they are not bound by the component render tree.

The motives for Hevery’s preference are straightforward: signals offer performance and simplicity. By updating only exactly what needs to be updated, you do less work. There are many variations of the signals pattern. Ryan Carniato, author of the SolidJS UI library, for instance, attributes the speed of his library to lower memory and CPU overhead from skipping virtual DOM diffing, the process where a framework compares two versions of a virtual DOM (one representing the current state, another the updated state) to identify the changes needed to update the actual DOM.
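
A minimal Solid counter (the element id and names are illustrative) shows what that fine-grained model looks like in practice:

    import { render } from "solid-js/web";
    import { createSignal, createEffect } from "solid-js";

    function Counter() {
      const [count, setCount] = createSignal(0);

      // Re-runs when count changes, not on every component render.
      createEffect(() => console.log("count is now", count()));

      // Solid's compiler turns this JSX into real DOM nodes once; afterwards,
      // only the text node that reads count() is updated when the signal changes.
      return <button onClick={() => setCount(count() + 1)}>{count()}</button>;
    }

    render(() => <Counter />, document.getElementById("app"));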

The advantages of simplicity and performance that signals promise have converted many developers to the pattern. Matt Czapliński, for instance, explains in a Stack Overflow discussion comparing the performance of “React Hooks with Optimization vs. Preact Signals”:

In real world scenario the Signals became my go-to approach for state management. They’re also easy to debug.

 

Ecma support for reactivity

Are signals and React Compiler fundamentally at odds? Not necessarily. In fact, they attack the problem at different layers. React is saying, “Keep the programming model the same, we’ll just use a compiler to make it faster.” Users of signals say, “Change the programming model to something inherently more efficient.” Lately we’re seeing glimpses of a world where these models converge.

Ecma International’s TC39 committee is currently considering the “JavaScript Signals standard proposal,” which would provide a common model for a shared reactivity core. Originally authored by Rob Eisenberg, chief software architect at Blue Spire, and Daniel Ehrenberg, software engineer at Bloomberg, it’s a slick idea and could have tremendous consequences for the entire JS community. Most importantly from my perspective, the proposal has significant support from framework authors and maintainers, including those of Angular, Bubble, Ember, FAST, MobX, Preact, Qwik, RxJS, Solid, Starbeam, Svelte, Vue, and Wiz, among others. Modeled on Promises/A+, an open standard for interoperable JavaScript promises, the proposal could, if successful, soon bring signals to JavaScript itself.
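
The proposal is still early-stage and its API may change, but its current shape (which can be experimented with today via a polyfill package such as signal-polyfill) looks roughly like this:

    // Hedged sketch based on the current draft of the TC39 Signals proposal.
    import { Signal } from "signal-polyfill";

    const counter = new Signal.State(0);
    const isEven = new Signal.Computed(() => (counter.get() & 1) === 0);

    console.log(isEven.get()); // true
    counter.set(1);
    console.log(isEven.get()); // false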

Relatedly, a future version of React might someday compile into a signals-like system under the hood. Daishi Kato, creator of Waku, is already toying with this idea: he introduced an experimental React hook called use-signals to explore how Eisenberg and Ehrenberg’s Signals proposal might be integrated into React applications. And let’s not forget, compilers optimize signal-based code too. SolidJS’s compiler turns your JSX into highly efficient, minimal DOM operations. Qwik’s compiler (aka the optimizer) provides code-level transformation: it rearranges code for lazy loading and runs as part of Rollup in order to achieve that “resumability” magic (Qwik’s alternative to hydration). Vue’s single-file components get compiled into JavaScript that sets up its reactivity. So really, the best of both worlds might be frameworks that use compilers and signals, each for what they do best.

 

React Server Components & the Network


Ryan Florence from “Mind The Gap” at Big Sky Dev Con, 2024.

One thing that interests me about the signals v. React Compiler debate is what it reveals about the “Server/Client Two-Step.” Conversations around reactivity engage broader questions of how we deal with the network boundary separating client from server. Last year the React team introduced Server Components (RSC), which blur the line between front-end and back-end code. Server-side rendering (SSR) involves a Node server generating the final HTML and sending it to the browser, while client-side rendering (CSR) shifts that work to the user’s browser. React Server Components blend both approaches by letting the server render some components and sending the browser only the JavaScript it needs to handle the rest.
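
As a rough sketch of how that split looks in practice, here is a Next.js App Router-style example (the file paths and API endpoint are illustrative):

    // app/products/page.jsx - a Server Component: it renders on the server and
    // its data-fetching code never ships to the browser.
    import AddToCart from "./AddToCart";

    export default async function ProductsPage() {
      const res = await fetch("https://example.com/api/products");
      const products = await res.json();
      return (
        <ul>
          {products.map((p) => (
            <li key={p.id}>
              {p.name} <AddToCart id={p.id} />
            </li>
          ))}
        </ul>
      );
    }

    // app/products/AddToCart.jsx - a Client Component: the "use client" directive
    // marks the boundary, so only this interactive piece is bundled for the browser.
    "use client";
    import { useState } from "react";

    export default function AddToCart({ id }) {
      const [added, setAdded] = useState(false);
      return (
        <button onClick={() => setAdded(true)}>
          {added ? "Added" : "Add to cart"}
        </button>
      );
    }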

In his “Mind The Gap” talk at Big Sky Dev Con 2024, Ryan Florence, author of React Router, a library for handling navigation in React applications, articulates the relationship between backend, network, and frontend, and how React 19 effectively dissolves the network. He explains: “The best apps you pretty much never notice the network.” Florence has long been interested in where compute occurs for React.

What does this have to do with compilers or signals? Potentially a lot. To dissolve that network boundary, frameworks need to cleverly decide what runs where and when to fetch data, all while giving the developer a simple mental model. This smells like compiler territory, because a compiler or build tool can analyze your code and split it between client and server bundles automatically. It’s not hard to imagine future compilers that optimize not just for client runtime performance but for network efficiency, automatically inlining data fetching on the server or preloading certain code on the client. We’re already seeing early signs. NextJS uses webpack (and eventually Turbopack, I presume) to code-split by route and preload bundles. Qwik’s compiler breaks up code at an extremely fine granularity so that only the code needed after an interaction is sent to the browser, which in effect treats the network as part of an app’s execution model.

As frameworks eliminate the artificial wall between client and server logic, the boundaries between what is code, what is data, and what runs where are getting ever blurrier. In the future, maybe frameworks won’t just offer a set of JS functions to include, but a build step that produces an application optimized for the specific browser it runs on. In 2025, the rise of the React Compiler and signals offers glimpses of this in action.

To wrap up: I’m following reactivity and compilers in the JS ecosystem because, faced with the challenges of code delivery on the web (diverse browsers, huge bundles, hydration costs, and so on), the industry has increasingly turned to smarter tooling. A vendor’s compiler can be a tremendous differentiator in this crowded market. Compilers occupy the spotlight because they allow us to automate optimizations and tailor code for diverse environments. A compiler in this context isn’t just the thing that makes your code run; it’s a superpower that takes your code and transforms it into something performant for browsers to execute, through bundling strategies, dead-code elimination, injecting polyfills only when needed, and so on. The end goal is to make the delivered code smaller, faster, and widely compatible, but the real trick is to do so without sacrificing developer experience.

Disclaimer: Google is a RedMonk client.