Ryan Skinner @ryanskinner.com

What happens when you run React Server Components on a custom Rust runtime instead of Node.js? 4x faster rendering, 10k+ req/sec, sub-50ms P99 latency. Same JSX, same component patterns. Zero Rust knowledge needed. The runtime does the heavy lifting; you write the React.

aug 18, 2025, 8:01 pm • 75 8

Replies

Viktor Lázár @lazarv.dev

Excellent work on Rari! I don't want to spoil the fun and take away the achievement, but your benchmark is not a fair comparison. Rari is not using SSR. It is fetching the RSC components from the client side, while the benchmark measures load times for the initial HTML.

aug 21, 2025, 11:14 am • 0 0
Ryan Skinner @ryanskinner.com

Ah, thanks for flagging this, Viktor! You're right that we're measuring TTFB, which does appear to favor Rari's architecture. Both applications use RSC, but their delivery approaches differ significantly.

aug 21, 2025, 2:10 pm • 0 0
Ryan Skinner @ryanskinner.com

Rari's runtime delivers a smaller initial payload (~6KB vs ~61KB) with a different component streaming strategy, while Next.js provides more complete SSR HTML upfront. The 4x performance numbers accurately reflect server response times, but they don't tell the complete rendering story.

aug 21, 2025, 2:10 pm • 0 0
Ryan Skinner @ryanskinner.com

For applications prioritizing fully-rendered initial HTML without additional component fetches, Next.js's approach might be more advantageous. I'll add a note clarifying these architectural distinctions and their performance implications.

aug 21, 2025, 2:10 pm • 1 0
Axel Möllerberg @axeberg.dev

That block with the `stream_component` is just… gorgeous 😍

aug 19, 2025, 12:19 am • 1 0
Ryan Skinner @ryanskinner.com

ahhh thank you! that means a lot 🥲

aug 19, 2025, 1:15 am • 1 0
Sam Selikoff @samselikoff.com

Cool stuff! Love seeing more frameworks exploring RSCs. How does this example work – I thought it was a fundamental part of the design of RSCs that client components can't import server components?

[image]
aug 20, 2025, 5:56 pm • 2 0
Ryan Skinner @ryanskinner.com

While it appears that client components are directly importing server components, what's actually happening is a build-time transformation where these imports are converted to component wrappers.

aug 20, 2025, 6:15 pm • 0 0
Ryan Skinner @ryanskinner.com

When you write `import ServerTime from './ServerTime'` in a client component, Rari analyzes the `'use server'` directive during the build process and replaces the actual import with a `createServerComponentWrapper()` call that returns a React component proxy.

aug 20, 2025, 6:15 pm • 0 0
Ryan Skinner @ryanskinner.com

This maintains the core RSC principle that server components never execute on the client while providing a developer-friendly experience that feels like normal React composition.
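The rewrite described above can be sketched roughly like this. This is a minimal illustration under stated assumptions, not Rari's actual implementation: `rewriteImports` and `serverModules` are hypothetical names; only `createServerComponentWrapper()` comes from the thread.

```typescript
// Hypothetical sketch of a build-time import rewrite. A client module
// importing a module that carries a 'use server' directive gets its import
// replaced with a createServerComponentWrapper(id) call, so the server
// component's code never ships to the client.

// Pretend the bundler has already scanned modules for 'use server'.
const serverModules = new Set<string>(["./ServerTime"]);

function rewriteImports(source: string): string {
  return source.replace(
    /import\s+(\w+)\s+from\s+['"]([^'"]+)['"];?/g,
    (match: string, name: string, path: string) =>
      serverModules.has(path)
        ? `const ${name} = createServerComponentWrapper(${JSON.stringify(path)});`
        : match, // normal client imports are left untouched
  );
}

console.log(rewriteImports(`import ServerTime from './ServerTime';`));
// → const ServerTime = createServerComponentWrapper("./ServerTime");
```

The wrapper returned at runtime would then be a plain client component that requests the server-rendered payload instead of executing the server module locally.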

aug 20, 2025, 6:15 pm • 0 0
Sam Selikoff @samselikoff.com

I had a feeling it must be something like this – cool stuff! Curious to see how it feels, will have to give it a spin soon :)

aug 20, 2025, 6:34 pm • 0 0
Ryan Skinner @ryanskinner.com

Thanks Sam! Huge respect to you and the team, wouldn't have started tinkering with this without the groundwork laid by Next.js

aug 20, 2025, 7:58 pm • 0 0
Sam Selikoff @samselikoff.com

What happens if you conditionally render the RSC based on the client's state value from useState?

aug 20, 2025, 6:34 pm • 0 0
Ryan Skinner @ryanskinner.com

Good call, this may be a footgun that'll bite me in the end, but when your useState values change and get passed as props to a server component, they should trigger fresh server execution every time.

aug 20, 2025, 7:57 pm • 0 0
Sam Selikoff @samselikoff.com

How do they get passed as props? Do you automatically send them over from the client in the call to re-render the page? Also does this mean the RSCs re-render every time client components re-render? I honestly am not understanding how this could work haha

aug 21, 2025, 2:08 pm • 0 0
Ryan Skinner @ryanskinner.com

Yeah! Client state gets serialized and sent via fetch requests to the server. When useState values change and get passed as props to a server component, the framework detects this via a dependency on `JSON.stringify(props)`.

aug 21, 2025, 2:55 pm • 0 0
Ryan Skinner @ryanskinner.com

It then makes a network request with the new props to render the component server-side. RSCs don't re-render on every client render—only when their received props actually change.
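The prop-change detection described in these two replies can be sketched like this. Everything here is an assumed shape (`createServerComponentProxy` and the local `render` callback are hypothetical stand-ins, not Rari's API); only the `JSON.stringify(props)` dependency comes from the thread.

```typescript
// Hedged sketch: a client-side proxy for a server component that only
// re-requests a server render when the serialized props change.

type Props = Record<string, unknown>;

function createServerComponentProxy(
  render: (props: Props) => string, // stands in for the network round-trip
) {
  let lastKey: string | undefined;
  let lastHtml = "";
  let fetches = 0;

  return {
    render(props: Props): string {
      const key = JSON.stringify(props); // the dependency key, as described
      if (key !== lastKey) {
        lastKey = key;
        lastHtml = render(props); // fresh "server" execution
        fetches++;
      }
      return lastHtml; // cached payload when props are unchanged
    },
    get fetchCount() {
      return fetches;
    },
  };
}

const proxy = createServerComponentProxy((p) => `<span>count=${p.count}</span>`);
proxy.render({ count: 1 }); // triggers a server render
proxy.render({ count: 1 }); // same props: no new request
proxy.render({ count: 2 }); // changed props: re-renders
console.log(proxy.fetchCount); // → 2
```

This is why RSCs here wouldn't re-render on every client render: an unchanged serialized key short-circuits the request.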

aug 21, 2025, 2:55 pm • 0 0
Mark Malstrom @malstrom.me

I was also under this impression, but it turns out that not even the flagship implementation respects that rule

aug 20, 2025, 6:03 pm • 1 0
Laurence Rowe @laurencerowe.bsky.social

This is cool! What’s the thinking behind implementing a custom runtime rather than just adapting the RSC stream to HTTP using the built in Deno server? Seems like that would be a fairly thin layer.

aug 18, 2025, 10:18 pm • 1 0
Ryan Skinner @ryanskinner.com

That KVM isolation approach sounds really interesting! This was more of an experimentation with the deno_core crate and extensions, to see if I could swing it. The custom runtime approach ended up giving us sub-ms response times under load (~0.6ms) and ~15x concurrent throughput vs Next.js.

aug 18, 2025, 10:37 pm • 0 0
Ryan Skinner @ryanskinner.com

I initially reported it as 4x throughput, but I've removed some artificial limits and now it's hitting 15x

aug 18, 2025, 10:38 pm • 1 1
Laurence Rowe @laurencerowe.bsky.social

The built-in Deno.serve is mostly Rust and very well optimised (~10 microseconds/req). Might be interesting to try and break out how much of your savings come from your Rust bits vs the different architectural approach to Next.js.

aug 18, 2025, 10:47 pm • 2 0
Ryan Skinner @ryanskinner.com

Would be fascinating to A/B test: same architectural approach but with thin Deno.serve layer vs our custom runtime. Will keep this front of mind to try when I have a chance!

aug 18, 2025, 10:53 pm • 1 0
Laurence Rowe @laurencerowe.bsky.social

I initially started with building directly on deno_runtime (provides more than deno_core) but abandoned it for the kvmserver approach because I wanted a general request->response pattern and doing that efficiently really requires access to the deno internals. That limitation doesn’t apply to you.

aug 18, 2025, 10:53 pm • 0 0
Laurence Rowe @laurencerowe.bsky.social

Somewhat orthogonally, I've been trying to get fast request isolation working in JS and have spent the past couple of months working with the TinyKVM author on building KVMServer, which runs my React server rendering benchmark with Deno in under a ms. (x86_64 only for now.) github.com/libriscv/kvm...

aug 18, 2025, 10:25 pm • 1 0
mb21 @mb21.bsky.social

Cool demo for Rust! But surely in real-world projects, the performance bottleneck is in the user’s browser, not on the server where you can always rent more VMs?

aug 19, 2025, 5:24 am • 0 0
Dan @d13z.bsky.social

Great work! There is something that is not clear to me. How do you do the math of req/sec and average response? 10,000 req/sec would mean that the average response was 0.1ms. At 4.23ms average response it would be hard to do more than 236 req/sec. So are you doing clustering as well?

aug 22, 2025, 9:44 am • 1 0
Ryan Skinner @ryanskinner.com

No, we're not using clustering. The high req/sec is achieved through connection parallelism in the load testing tool, which uses 50 concurrent connections. This simulates multiple users accessing the server simultaneously, not server-side clustering.

aug 22, 2025, 1:04 pm • 0 0
Dan @d13z.bsky.social

Ok but that is what doesn't add up in my mind. I'm not questioning how you initiate the requests, but how you are counting the responses. If rendering a page takes 5ms, and a second has 1000ms, then the maximum req/sec you can do is 200. The question is how you achieve 10,000 req/sec?

aug 22, 2025, 2:35 pm • 0 0
Ryan Skinner @ryanskinner.com

We're counting total completed requests across all connections. Your calculation of 200 req/sec would be correct for a single connection processing requests one after another. However, our load test uses 50 concurrent connections, with each handling requests simultaneously.

aug 22, 2025, 2:46 pm • 0 0
Ryan Skinner @ryanskinner.com

With 4.23ms average latency, 50 parallel connections can theoretically handle about 11,800 req/sec (1000ms ÷ 4.23ms × 50), which aligns with our measured result of 10,586.4 req/sec.
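The arithmetic in this reply is a simple parallel-throughput ceiling and can be checked directly:

```typescript
// Sanity check of the throughput math above: with 50 connections each
// completing a request every 4.23ms on average, the theoretical ceiling
// is (1000ms / 4.23ms) * 50 requests per second.
const connections = 50;
const avgLatencyMs = 4.23;
const theoreticalRps = (1000 / avgLatencyMs) * connections;
console.log(Math.round(theoreticalRps)); // → 11820
```

The measured 10,586.4 req/sec sits below this ceiling, which is what you'd expect once per-request variance and connection overhead are included.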

aug 22, 2025, 2:46 pm • 0 0
Ryan Skinner @ryanskinner.com

If you'd like to dig a bit deeper, the benchmarks can be found here:

aug 22, 2025, 2:52 pm • 0 0
Dan @d13z.bsky.social

To achieve 10,000 req/sec, the rendering time should be 0.1ms. Nonetheless, the results are impressive.

aug 22, 2025, 2:37 pm • 0 0
Weeknredneck @weeknredneck.bsky.social

Boy, I'm old.

aug 18, 2025, 11:20 pm • 0 0
Tom Sherman @tom.sherman.is

Nice stuff. Are you using AsyncContext (AsyncLocalStorage) at all? I'm pretty sure that's where a lot of the slowness with Next.js comes from; they're using it heavily, and it causes every async fn invocation to take a perf hit.

aug 18, 2025, 9:10 pm • 2 0
Eric Lee @sdust.dev

Can you explain more about this? Interested to hear why that’s a bottleneck

aug 19, 2025, 2:53 am • 0 0
Ryan Skinner @ryanskinner.com

Thanks man! We don't use it at all; we pass context explicitly on each component render instead of relying on async propagation, so there's no per-async-call overhead.
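A minimal sketch of what explicit context passing can look like, as opposed to AsyncLocalStorage. The types and function names here are illustrative assumptions, not Rari's actual API:

```typescript
// Hedged sketch: the request context is a plain argument threaded through
// every render call. No async_hooks machinery is involved, so async
// function invocations pay no context-propagation cost.

interface RenderContext {
  requestId: string;
  cookies: Map<string, string>;
}

async function renderComponent(
  ctx: RenderContext,
  name: string,
): Promise<string> {
  // ctx is just a parameter; any component can read it directly
  return `<div data-req="${ctx.requestId}">${name}</div>`;
}

async function renderPage(ctx: RenderContext): Promise<string> {
  // the same ctx flows to every child render explicitly
  const parts = await Promise.all([
    renderComponent(ctx, "Header"),
    renderComponent(ctx, "Body"),
  ]);
  return parts.join("");
}

renderPage({ requestId: "r1", cookies: new Map() }).then((html) =>
  console.log(html),
);
// → <div data-req="r1">Header</div><div data-req="r1">Body</div>
```

The trade-off is ergonomics: every function signature carries the context, in exchange for avoiding the per-async-call bookkeeping that implicit propagation requires.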

aug 18, 2025, 9:34 pm • 1 0
Tom Sherman @tom.sherman.is

Not only for their dynamic IO features like `cookies()` etc., but they also have the framework code pretty well instrumented with OTel traces.

aug 18, 2025, 9:11 pm • 1 0