Performance Benchmark: Axum vs. Elysia
In the ever-evolving world of high-performance computing, the quest for efficiency is relentless. With the growing trend of containerized applications, how different frameworks perform in different environments has become a crucial point of discussion. Today, I’m examining two such frameworks: Axum, a modern web framework for Rust, and Elysia, an ergonomic framework for Bun. This performance analysis pits the two against each other on two prominent hardware platforms: the Intel i9-12900K and the Apple M1 in the MacBook Air.
The Frameworks
- Axum: an ergonomic and modular web framework for Rust, built with Tokio, Tower, and Hyper (Axum’s GitHub).
- Elysia: an “Ergonomic Framework for Humans” built for Bun (Elysia’s GitHub).
Test Environment
a. Hardware:
- Intel i9-12900K running Windows 11 with WSL2
- MacBook Air with an Apple M1 chip running macOS 14
b. Containers Tested:
- All containers use either the Debian 11 slim image or a distroless Debian 11 image
c. Metrics Covered:
- Average Latency (ms)
- Requests/sec
- Transfer/sec (MB)
Axum code
use axum::{extract::Path, routing::post, Json, Router};
use std::net::SocketAddr;

#[tokio::main]
async fn main() {
    // Define the route
    let app = Router::new().route("/bmi/:username", post(handle_request));

    // Set up the server (binds on all interfaces, port 3000)
    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));
    println!("axum is running at http://{}", addr);
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

async fn handle_request(
    Path(username): Path<String>,
    Json(data): Json<BmiData>,
) -> Json<BmiResponse> {
    let bmi = calculate_bmi(data.weight, data.tall);
    Json(BmiResponse { username, bmi })
}

fn calculate_bmi(weight: f64, tall: f64) -> f64 {
    // tall is in cm, so divide by 100 to convert to meters
    weight / (tall / 100.0).powi(2)
}

#[derive(serde::Deserialize)]
struct BmiData {
    weight: f64, // weight in kg
    tall: f64,   // height in cm
}

#[derive(serde::Serialize)]
struct BmiResponse {
    username: String,
    bmi: f64,
}
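As a quick sanity check of the formula (the values here are arbitrary examples, not taken from the benchmark), a 70 kg user who is 175 cm tall should come out to a BMI of roughly 22.86:

```rust
// Standalone check of the BMI formula used by both services.
fn calculate_bmi(weight: f64, tall: f64) -> f64 {
    // tall is in cm, so divide by 100 to convert to meters
    weight / (tall / 100.0).powi(2)
}

fn main() {
    let bmi = calculate_bmi(70.0, 175.0);
    println!("{:.2}", bmi); // 70 / 1.75^2 ≈ 22.86
}
```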
Elysia code
import { Elysia, t } from "elysia";

function calculateBMI(weight: number, tall: number): number {
  // tall is in cm, so divide by 100 to convert to meters
  return weight / Math.pow(tall / 100.0, 2);
}

const app = new Elysia()
  .model({
    BmiData: t.Object({
      weight: t.Number(),
      tall: t.Number(),
    }),
    BmiResponse: t.Object({
      username: t.String(),
      bmi: t.Number(),
    }),
  })
  .post(
    "/bmi/:username",
    ({ body, params }) => {
      const bmi = calculateBMI(body.weight, body.tall);
      return {
        username: params.username,
        bmi,
      };
    },
    {
      body: "BmiData",
      response: "BmiResponse",
    }
  )
  .listen(3000);

console.log(
  `Elysia is running at http://${app.server?.hostname}:${app.server?.port}`
);
You can run this benchmark on your own machine by cloning the git repo:
Results on Intel i9-12900K
Average Latency

- Axum API Container: 10.67ms
- Axum API Distroless Container: 10.77ms
- Elysia API Container: 3.14ms
- Elysia API Distroless Container: 3.50ms
Requests per Second

- Axum API Container: 208,873.24
- Axum API Distroless Container: 210,408.64
- Elysia API Container: 131,408.48
- Elysia API Distroless Container: 126,425.61
Transfer per Second

- Axum API Container: 30.28MB
- Axum API Distroless Container: 30.50MB
- Elysia API Container: 19.05MB
- Elysia API Distroless Container: 18.33MB
Results on MacBook Air M1
Average Latency

- Axum API Container: 9.28ms
- Axum API Distroless Container: 17.57ms
- Elysia API Container: 14.47ms
- Elysia API Distroless Container: 10.66ms
Requests per Second

- Axum API Container: 48,243.11
- Axum API Distroless Container: 47,268.10
- Elysia API Container: 27,753.24
- Elysia API Distroless Container: 37,817.45
Transfer per Second

- Axum API Container: 6.99MB
- Axum API Distroless Container: 6.85MB
- Elysia API Container: 4.02MB
- Elysia API Distroless Container: 5.48MB
Conclusion
- Average Latency: On the i9-12900K, Elysia demonstrated a significant advantage, with average latencies around 3–3.5 ms versus roughly 10.7 ms for Axum, indicating quicker individual responses on this platform. On the MacBook Air M1 the picture was mixed: Axum’s standard container posted the lowest latency (9.28 ms) but its distroless build the highest (17.57 ms), with Elysia’s two builds falling in between.
- Requests/sec: In contrast to the latency results, Axum sustained a higher requests-per-second rate on the i9-12900K, indicating it can handle a larger volume of concurrent requests than Elysia. This trend persisted on the MacBook Air M1, though with reduced throughput for both frameworks.
- Transfer/sec: Data transfer rates favored Axum on the i9-12900K. On the MacBook Air M1 the gap between the two varied more across container builds, yet the overall trend remained consistent with the i9-12900K results.
Taking these observations into account, it’s evident that while Elysia offers faster response times, Axum may be better suited for scenarios requiring higher throughput and data transfer. Developers should consider the specific needs of their applications and the target hardware when choosing between these two frameworks.
Debunking Myths: Bun’s Single-Threaded Nature
There’s a common belief among some technologists that Bun operates exclusively on a single thread. This belief can lead to underestimations of the framework’s performance capabilities. However, it’s always vital to rely on empirical evidence before forming such conclusions.
From my recent tests on the Intel i9-12900K, it’s evident that the notion of Bun being single-threaded is not entirely accurate. The performance graphs, especially the CPU utilization graphs, clearly show activity beyond what a single thread can deliver. This is indicative of concurrent processing, which challenges the widely held belief about Bun’s single-threaded nature.

Moreover, similar tests conducted on the MacBook Air M1 further confirm these findings. The results from the M1, known for its multi-core efficiency, further reinforce the idea that Bun is more than capable of handling multi-threaded operations efficiently.

For developers and tech enthusiasts, it’s paramount to understand the true capabilities of the frameworks and tools they work with. Misunderstandings can hinder the optimal use of these tools or even deter individuals from using them. As demonstrated, while Bun might be lightweight, it showcases the ability to leverage multi-threading effectively.
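If you want to check this kind of claim on your own machine rather than relying on graphs, one option on Linux is to read the kernel’s per-process thread count from /proc. This is a minimal sketch, not part of the benchmark: the thread_count helper is illustrative, it is Linux-only, and to inspect a Bun server you would pass its PID (for example from pgrep bun) instead of "self", which is used here only so the sketch runs standalone.

```rust
use std::fs;
use std::thread;
use std::time::Duration;

// Read the "Threads:" field from /proc/<pid>/status (Linux-specific).
// Pass a real PID to inspect another process; "self" inspects this one.
fn thread_count(pid: &str) -> Option<usize> {
    let status = fs::read_to_string(format!("/proc/{}/status", pid)).ok()?;
    status
        .lines()
        .find(|line| line.starts_with("Threads:"))?
        .split_whitespace()
        .nth(1)?
        .parse()
        .ok()
}

fn main() {
    // Spawn two short-lived workers so this process is visibly multi-threaded.
    let workers: Vec<_> = (0..2)
        .map(|_| thread::spawn(|| thread::sleep(Duration::from_millis(200))))
        .collect();
    if let Some(n) = thread_count("self") {
        println!("thread count: {}", n); // expect > 1 while the workers sleep
    }
    for w in workers {
        w.join().unwrap();
    }
}
```

Reading the count while the server is under load is what matters: an idle runtime may park its worker threads, but the kernel’s count still reflects every thread the process has created.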