Benchmarking Preact Signals Performance versus the React alternatives

Michael

This is the first post in a two-part series on performant React state management:

  1. Benchmarking Preact Signals versus the React alternatives
  2. Declarative Setup, Imperative Update: The Key to High-Performance UI Components

Preact released a state management library called Signals which allows for performant, fine-grained re-renders, in some cases bypassing the virtual DOM entirely. Angular is discussing doing the same. React is more hesitant, trying to avoid the developer cost of explicitly modelling data flows, with the potential for compiler-based optimisations to close the gap.

I'm always interested in novel state management concepts, especially when it comes to increasing the performance and power efficiency of our products. Since Electric UI handles all state management with the hardware device, we're in a good position to provide the best of both worlds. We can provide high-performance, ergonomic abstractions to our downstream developer users by taking on the high developer costs ourselves.

In this two-part blog series, we'll first take a look at how Preact Signals compares in its various forms with React's primitives. Then we'll build our own React-based component primitive that can achieve similarly excellent performance with almost any state management library, without monkey-patching React.

Benchmarks

A Text node was constructed and updated via a variety of methods to form this benchmark.

The results are as follows:

Plot of all methods

The benchmarks were constructed from these methods:

  • A VanillaJS, regular Text node with its nodeValue property directly updated. This represents the lower bound for possible work done.

  • Preact's Signals, used in Preact, via the 'optimised' path (the signal rendered directly in the component; this and the unoptimised path are sketched after this list).

  • Preact functional component, a useState hook updated externally.

  • Preact signal, unoptimised (the signal's value accessed inside the component).

  • Preact class component, updated externally via an emitter and a setState call.

  • React functional component, a useState hook updated externally.

  • React class component, updated externally via an emitter and a setState call.

  • React using Zustand with the provided hook.

  • React using the monkey-patched Preact Signals library.
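To make the difference between the 'optimised' and 'unoptimised' signal paths concrete, here is a minimal sketch using @preact/signals; the component names and mount point are illustrative only.

```tsx
import { render } from 'preact'
import { signal } from '@preact/signals'

const count = signal(0)

// 'Optimised' path: the signal itself is rendered as a child. Preact binds
// it straight to a Text node and updates it without re-rendering the
// component or walking the virtual DOM.
function Optimised() {
  return <p>{count}</p>
}

// Unoptimised path: reading .value subscribes the component, so every
// update re-renders the whole component through the virtual DOM.
function Unoptimised() {
  return <p>{count.value}</p>
}

render(<Optimised />, document.getElementById('app')!)

// Updates can come from anywhere outside the component tree.
setInterval(() => count.value++, 1000)
```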

More detailed benchmarking methodology is attached at the end of the post.

Benchmark Discussion

Even without using the Signals primitive, Preact is impressively performant.

With the optimised Signals primitive, text updates are within a hundred nanoseconds of a VanillaJS element.nodeValue assignment, the lower bound of work required.

The Preact Signals React library does not confer similar performance, even though it monkey-patches React. I found this quite surprising; I would only be willing to take on the maintenance risk of monkey-patching something like React if it conferred significant performance improvements or a significantly better developer experience.

I don't think Preact Signals automatically triggering re-renders in React is a better developer experience. It's surprising behaviour relative to the expectations of a React component, and it doesn't come with a performance increase. A usePreactSignal hook would be a less surprising API in React.
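As a sketch of what such a hook might look like (a hypothetical shape, not an existing API), it could subscribe to a signal through React's own useSyncExternalStore, so re-renders follow React's normal scheduling:

```tsx
import { useSyncExternalStore } from 'react'
import type { Signal } from '@preact/signals-core'

// Hypothetical hook: the component opts in explicitly, and updates go
// through React's scheduler rather than a monkey-patched render path.
function usePreactSignal<T>(sig: Signal<T>): T {
  return useSyncExternalStore(
    (onStoreChange) => sig.subscribe(onStoreChange),
    () => sig.value,
    () => sig.value
  )
}

// Usage: only this component re-renders when `count` changes.
// function Counter({ count }: { count: Signal<number> }) {
//   return <p>{usePreactSignal(count)}</p>
// }
```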

The human factors

One of the core features of React is a consistent top-down data flow. Components are a pure function of their props and state, with props set by their parents. This gives the ability to reason locally: finding where state comes from is usually only a short search upwards in the tree. While context may cause things to update deeper in the tree, it's still top-down, and the API encourages using it sparingly, for things like theme or locale changes. For frequently changing props, the 'prop-drilling' boilerplate can become a downside. Performance overhead comes in the form of re-renders through every component toward your destination, as well as re-renders of components that are untouched but below the state change origin in the tree. This rendering cost is reduced with memoisation, but the prop comparisons still have a cost, and there's a cognitive overhead to things like dependency arrays and the wrapping of components. Overall, however, it is a remarkably simple paradigm.
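As a small illustration of that memoisation trade-off (the component and prop names are illustrative), wrapping a component in React.memo prunes re-renders below it, but the shallow prop comparison still runs on every parent render:

```tsx
import { memo } from 'react'

// Memoised leaf: skipped when `label` is referentially equal to last time,
// but the shallow comparison itself still runs on every parent render.
const Label = memo(function Label({ label }: { label: string }) {
  return <span>{label}</span>
})

// Everything above the memoised boundary still re-renders as usual;
// memoisation only prunes the subtree below it.
function Panel({ label }: { label: string }) {
  return <Label label={label} />
}
```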

State management libraries like Redux and Zustand allow for a separation of this data flow from the component tree into a separate state tree. The cognitive overhead of prop-drilling is traded for a separate place to look for state changes. However, state is usually centralised, and the state tree is usually orders of magnitude smaller than the component tree, resulting in a net reduction in cognitive overhead. The performance overhead comes in the form of selector functions that run globally on every state change within the tree. While referential equality comparisons are 'cheap', it is unnecessary work. The performance gain is that re-renders can be fine-grained, targeting only the components that need to be updated. Often this will happen at the leaf nodes, meaning no children are unnecessarily re-rendered. When the nodes are in the 'middle' of the tree, you have similar re-render characteristics to the above case. This approach is in the middle of the local vs global reasoning scale. This is my preferred level of compromise and I've had it scale to big applications with good performance characteristics. Electric UI's general application state is modelled this way.
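A minimal sketch of that selector pattern with Zustand (the store shape and names are illustrative): every store update runs the selector, but only components whose selected slice actually changed re-render.

```tsx
import { create } from 'zustand'

// Illustrative store: a small, centralised state tree.
interface AppState {
  count: number
  label: string
  increment: () => void
}

const useAppStore = create<AppState>()((set) => ({
  count: 0,
  label: 'idle',
  increment: () => set((state) => ({ count: state.count + 1 })),
}))

// The selector runs on every store change, but this component only
// re-renders when `count` itself changes (a referential equality check).
function Counter() {
  const count = useAppStore((state) => state.count)
  return <p>{count}</p>
}
```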

Libraries like Preact Signals, Recoil and Jotai go further, requiring explicit modelling of the data flow with no shape restrictions. There's a pressure for this shape to become a complex graph instead of a simple tree. The component tree may 'pull' data down from a data flow graph constructed with these primitives. Given that the data flow is perfectly modelled in the code, re-renders can in some cases bypass the rendering library altogether, not just be fine-grained to the component level. The Text node or HTML attribute can be updated directly without invoking the Reconciler. The downside is the unwieldy nature of managing the state graph separately from the component tree. It's easy to construct an application where local reasoning is lost: state updates can happen from anywhere and flow anywhere. Electric UI's DataFlow library works this way, but the 'graphs' are separated by hardware MessageID; they're usually quite small and colocated with the components. This offers the best performance (and with charts doing 100k+ points a second, it's required), while the cognitive overhead isn't that bad. Updates are always triggered by hardware, a single source of truth, so the pressure to form a complex graph instead of a simple tree is reduced.
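A minimal sketch of that explicit data-flow modelling with Preact Signals (the signal names are illustrative): derived values are declared as nodes in a graph, and subscribers can react to changes without any component tree involved at all.

```ts
import { signal, computed, effect } from '@preact/signals-core'

// Source nodes in the data-flow graph.
const voltage = signal(3.3)
const current = signal(0.5)

// A derived node, recomputed lazily whenever a dependency changes.
const power = computed(() => voltage.value * current.value)

// A subscriber: runs immediately, then again on every relevant change,
// entirely outside any rendering library.
effect(() => {
  console.log(`power: ${power.value.toFixed(2)} W`)
})

// Writing to a source propagates through the graph.
current.value = 0.75
```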

Realistically, the vast majority of applications should be throttling their state updates to the frame rate at most. Many frameworks now batch updates to this level automatically, but it's important to do this before the updates reach your renderer. Text is unreadable when it changes more than a few times a second. However, if you're pumping data into a diagram or a chart, there are ways to set up your complex DOM elements declaratively with React, then update them imperatively from then on.
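One way to throttle before the renderer, sketched here with requestAnimationFrame (the commit callback stands in for whatever pushes state into your renderer, e.g. a setState, a Zustand set, or a signal write): incoming data overwrites a latest-value slot, and at most one update is committed per frame.

```ts
// Sketch: coalesce high-rate incoming data to at most one commit per frame.
// `commit` stands in for whatever pushes state into the renderer.
function createFrameThrottle<T>(commit: (value: T) => void) {
  let latest: T | undefined
  let scheduled = false

  return (value: T) => {
    latest = value
    if (scheduled) return
    scheduled = true
    requestAnimationFrame(() => {
      scheduled = false
      commit(latest as T)
    })
  }
}

// Usage: hardware can push at kHz rates while the UI sees ~60 updates/s.
// const pushTemperature = createFrameThrottle((t: number) => setTemperature(t))
```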

Best of both worlds

In the next post, we'll take a series of state management solutions and create a component that can produce fine-grained updates without needing to invoke the React Reconciler at all. With this technique you can gain an order-of-magnitude performance improvement, in line with the optimised native Preact Signals experience, but in React. You can use state management libraries like Zustand to reduce cognitive overhead while still getting the performance you need in performance-sensitive areas.

Benchmark Methodology

I used the worst hardware I could get my hands on.

All benchmarks were completed on an early 2013 15" MacBook Pro with a 2.4 GHz quad-core Intel Core i7 and 8GB of 1600 MHz DDR3 RAM. No synthetic CPU slowdown was used. Care was taken to avoid thermal throttling.

Benchmarks were done inside an Electron 23 instance, Chromium 110, NodeJS 18.12.1, V8 11. This was done to access high resolution (nanosecond) timing APIs that are crippled (via reduced resolution and artificial jitter) in all major browsers (for good security reasons, though it is inconvenient for benchmarking).

A shallow tree containing a single span node with text content was built, then updated in a hot loop with each benchmarked method.

The hot loop was run for 10 seconds to warm up, before measurements began.

The hot loop was run N times, then 2N times, then 3N times, and so on, collecting the total runtime for each inner loop. The slope of total runtime versus iteration count is the mean iteration time of the update function. N was roughly 500 to 10,000 in these tests, with faster methods having a larger N, and tests ran up to ~100N.
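A sketch of that slope-based measurement, assuming an update function under test and Node's high-resolution clock (process.hrtime.bigint, available inside Electron):

```ts
// Sketch of the slope-based benchmark. `update(i)` is the method under test.
function measureMeanIterationNs(
  update: (i: number) => void,
  N: number,
  steps: number
): number {
  const samples: Array<[iterations: number, totalNs: number]> = []

  for (let step = 1; step <= steps; step++) {
    const iterations = step * N
    const start = process.hrtime.bigint()
    for (let i = 0; i < iterations; i++) update(i)
    samples.push([iterations, Number(process.hrtime.bigint() - start)])
  }

  // Least-squares slope of total runtime vs iteration count,
  // i.e. the mean nanoseconds per iteration.
  const n = samples.length
  const sumX = samples.reduce((acc, [x]) => acc + x, 0)
  const sumY = samples.reduce((acc, [, y]) => acc + y, 0)
  const sumXY = samples.reduce((acc, [x, y]) => acc + x * y, 0)
  const sumXX = samples.reduce((acc, [x]) => acc + x * x, 0)
  return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX)
}
```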

React calls were wrapped in flushSync to prevent automatic batching. Preact's debounceRendering was set to a no-op for similar functionality.
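For reference, a sketch of the React side of that (the setter is an illustrative useState setter, not taken from the actual benchmark code): wrapping each update in flushSync forces a synchronous render per iteration instead of letting React batch them.

```tsx
import { flushSync } from 'react-dom'

// Illustrative hot loop: `setText` is a useState setter from the component
// under test. flushSync forces a synchronous render for every update, so the
// measurement captures the full render cost rather than a batched one.
function runHotLoop(setText: (value: string) => void, iterations: number) {
  for (let i = 0; i < iterations; i++) {
    flushSync(() => {
      setText(String(i))
    })
  }
}
```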

React apps were constructed with CRA and Preact apps with the Preact CLI; both were built for production then loaded into Electron.

Code for the benchmarks is available on GitHub.