Bun 1.2 | Bun Blog

Bun is a complete toolkit for building and testing full-stack JavaScript and TypeScript applications. If you’re new to Bun, you can learn more from the Bun 1.0 blog post.

Bun 1.2 is a huge update, and we’re excited to share it with you.

Here’s the tl;dr of what changed in Bun 1.2:

  • There’s a major update on Bun’s progress towards Node.js compatibility
  • Bun now has a built-in S3 object storage API: Bun.s3
  • Bun now has a built-in Postgres client: Bun.sql (with MySQL coming soon)
  • bun install now uses a text-based lockfile: bun.lock

We also made Express 3x faster in Bun.

Bun is designed as a drop-in replacement for Node.js.

In Bun 1.2, we started running the Node.js test suite for every change we make to Bun. Since then, we’ve fixed thousands of bugs, and many core Node.js modules now pass over 90% of their tests with Bun.

Here’s how we did it.

How do you measure compatibility?

In Bun 1.2, we changed how we test and improve Bun’s compatibility with Node.js. Previously, we prioritized and fixed Node.js bugs as they were reported, usually from GitHub issues where someone tried to use an npm package that didn’t work in Bun.

While this fixed actual bugs real users ran into, it was too much of a “whack-a-mole” approach. It discouraged the large refactors necessary for us to have a shot at 100% Node.js compatibility.

That’s when we thought: what if we just run the Node.js test suite?

A screenshot of the Node.js test suite
There are so many tests in the Node.js repository that the files can’t all be listed on GitHub.

Running Node.js tests in Bun

Node.js has thousands of test files in its repository, with most of them in the test/parallel directory. While it might seem simple enough to “just run” their tests, it’s more involved than you might think.

Internal APIs

For example, many tests rely on the internal implementation details of Node.js. In the following test, getnameinfo is stubbed to always error, to test the error handling of dns.lookupService().

test/parallel/test-dns-lookupService.js

const { internalBinding } = require("internal/test/binding");
const cares = internalBinding("cares_wrap");
const { UV_ENOENT } = internalBinding("uv");

cares.getnameinfo = () => UV_ENOENT;

To run this test in Bun, we had to replace the internal bindings with our own stubs.

test/parallel/test-dns-lookupService.js

Bun.dns.lookupService = (addr, port) => {
  const error = new Error(`getnameinfo ENOENT ${addr}`);
  error.code = "ENOENT";
  error.syscall = "getnameinfo";
  throw error;
};

Error messages

There are also Node.js tests that check the exact string of error messages. And while Node.js rarely changes error messages, it doesn’t guarantee they won’t change between releases.

const common = require("../common");
const assert = require("assert");
const cp = require("child_process");

assert.throws(
  () => {
    cp.spawnSync(process.execPath, [__filename, "child"], { argv0: [] });
  },
  {
    code: "ERR_INVALID_ARG_TYPE",
    name: "TypeError",
    message:
      'The "options.argv0" property must be of type string.' +
      common.invalidArgTypeHelper([]),
  },
);

To work around this, we had to change the assertion logic in some tests to check the name and code, instead of the message. This is also the standard practice for checking error types in Node.js.

{
  code: "ERR_INVALID_ARG_TYPE",
  name: "TypeError",
},

While we do try to match the error messages of Node.js as much as possible, there are times where we want to provide a more helpful error message, as long as the name and code are the same.

Progress so far

We’ve ported thousands of files from the Node.js test suite to Bun. That means for every commit we make to Bun, we run the Node.js test suite to ensure compatibility.

A screenshot of Bun’s CI where we run the Node.js test suite for every commit.

Every day, we are adding more and more passing Node.js tests to Bun, and we’re excited to share more progress on Node.js compatibility very soon.

In addition to fixing existing Node.js APIs, we’ve also added support for the following Node.js modules.

node:http2 server

You can now use node:http2 to create HTTP/2 servers. HTTP/2 is also necessary for gRPC servers, which are also now supported in Bun. Previously, there was only support for the HTTP/2 client.

import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

const server = createSecureServer({
  key: readFileSync("key.pem"),
  cert: readFileSync("cert.pem"),
});

server.on("stream", (stream, headers) => {
  stream.respond({
    ":status": 200,
    "content-type": "text/html; charset=utf-8",
  });
  stream.end("");
});

server.listen(3000);

In Bun 1.2, the HTTP/2 server is 2x faster than in Node.js. When we add new APIs to Bun, we spend a lot of time tuning performance to ensure they not only work, but are also faster.

Benchmark of a “hello world” node:http2 server running in Bun 1.2 and Node.js 22.13.

node:dgram

You can now bind and connect to UDP sockets using node:dgram. UDP is a low-level unreliable messaging protocol, often used by telemetry providers and game engines.

import { createSocket } from "node:dgram";

const server = createSocket("udp4");
const client = createSocket("udp4");

server.on("listening", () => {
  const { port, address } = server.address();
  for (let i = 0; i < 10; i++) {
    client.send(`data ${i}`, port, address);
  }
  server.unref();
});

server.on("message", (data, { address, port }) => {
  console.log(`Received: data=${data} source=${address}:${port}`);
  client.unref();
});

server.bind();

This allows packages like DataDog’s dd-trace and @clickhouse/client to work in Bun 1.2.

node:cluster

You can use node:cluster to spawn multiple instances of Bun. This is often used to enable higher throughput by running tasks across multiple CPU cores.

Here’s an example of how you can create a multi-threaded HTTP server using cluster:

  • The primary worker spawns n child workers (usually equal to the number of CPU cores)
  • Each child worker listens on the same port (using reusePort)
  • Incoming HTTP requests are load balanced across the child workers

import cluster from "node:cluster";
import { createServer } from "node:http";
import { cpus } from "node:os";

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Start N workers for the number of CPUs
  for (let i = 0; i < cpus().length; i++) {
    cluster.fork();
  }

  cluster.on("exit", (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} exited`);
  });
} else {
  // Incoming requests are handled by the pool of workers
  // instead of the primary worker.
  createServer((req, res) => {
    res.writeHead(200);
    res.end(`Hello from worker ${process.pid}`);
  }).listen(3000);

  console.log(`Worker ${process.pid} started`);
}

Note that reusePort is only effective on Linux. On Windows and macOS, the operating system does not load balance HTTP connections as one would expect.

node:zlib

In Bun 1.2, we rewrote the entire node:zlib module from JavaScript to native code. This not only fixed a bunch of bugs, but also made it 2x faster than in Bun 1.1.

Benchmark of inflateSync using node:zlib in Bun and Node.js.

We also added support for Brotli in node:zlib, which was missing in Bun 1.1.

import { brotliCompressSync, brotliDecompressSync } from "node:zlib";

const compressed = brotliCompressSync("Hello, world!");
compressed.toString("hex"); // "0b068048656c6c6f2c20776f726c642103"

const decompressed = brotliDecompressSync(compressed);
decompressed.toString("utf8"); // "Hello, world!"

C++ addons using V8 APIs

If you want to use C++ addons alongside your JavaScript code, the easiest way is to use N-API.

However, before N-API existed, some packages used the internal V8 C++ APIs in Node.js. What makes this complicated is that Node.js and Bun use different JavaScript engines: Node.js uses V8 (used by Chrome), and Bun uses JavaScriptCore (used by Safari).

Previously, npm packages like cpu-features, which rely on these V8 APIs, would not work in Bun.

require("cpu-features")();
dyld[94465]: missing symbol called
fish: Job 1, 'bun index.ts' terminated by signal SIGABRT (Abort)

To fix this, we undertook the unprecedented engineering effort of implementing V8’s public C++ API in JavaScriptCore, so these packages can “just work” in Bun. It’s so complicated and nerdy to explain, we wrote a 3-part blog series on how we supported the V8 APIs… without using V8.

In Bun 1.2, packages like cpu-features can be imported and just work.

$ bun index.ts
{
  arch: "aarch64",
  flags: {
    fp: true,
    asimd: true,
    // ...
  },
}

The V8 C++ APIs are very complicated to support, so most packages will still have missing features. We’re continuing to improve support, so packages like node-canvas@v2 and node-sqlite3 can work in the future.

node:v8

In addition to the V8 C++ APIs, we’ve also added support for heap snapshots using node:v8.

import { writeHeapSnapshot } from "node:v8";

// Writes a heap snapshot to the current working directory in the form:
// `Heap-{date}-{pid}.heapsnapshot`
writeHeapSnapshot();

In Bun 1.2, you can use getHeapSnapshot and writeHeapSnapshot to read and write V8 heap snapshots. This allows you to use Chrome DevTools to inspect the heap of Bun.

You can view a heap snapshot of Bun using Chrome DevTools.

Express is 3x faster

While compatibility is important for fixing bugs, it also helps us fix performance issues in Bun.

In Bun 1.2, the popular express framework can serve HTTP requests up to 3x faster than in Node.js. This was made possible by improving compatibility with node:http, and optimizing Bun’s HTTP server.

Bun aims to be a cloud-first JavaScript runtime. That means supporting all the tools and services you need to run a production application in the cloud.

Modern applications store files in object storage instead of the local POSIX file system. When end-users upload a file attachment to a website, it isn’t stored on the server’s local disk; it’s stored in an S3 bucket. Decoupling storage from compute prevents an entire class of reliability issues: low disk space, high p95 response times from busy I/O, and security issues with shared file storage.

S3 is the de facto standard for object storage in the cloud. The S3 APIs are implemented by a variety of cloud services, including Amazon S3, Google Cloud Storage, Cloudflare R2, and dozens more.

That’s why Bun 1.2 adds built-in support for S3. You can read, write, and delete files from an S3 bucket using APIs that are compatible with Web standards like Blob.

Reading files from S3

You can use the new Bun.s3 API to access the default S3Client. The client provides a file() method that returns a lazy reference to an S3 file, with the same API as Bun’s File.

import { s3 } from "bun";

const file = s3.file("folder/my-file.txt");
// file instanceof Blob

const content = await file.text();
// or:
//   file.json()
//   file.arrayBuffer()
//   file.stream()

5x faster than Node.js

Bun’s S3 client is written in native code instead of JavaScript. When you compare it to packages like @aws-sdk/client-s3 with Node.js, it’s 5x faster at downloading files from an S3 bucket.

Left: Bun 1.2 with Bun.s3. Right: Node.js with @aws-sdk/client-s3.

Writing files to S3

You can use the write() method to upload a file to S3. It’s that simple:

import { s3 } from "bun";

const file = s3.file("folder/my-file.txt");

await file.write("hello s3!");
// or:
//   file.write(new Uint8Array([1, 2, 3]));
//   file.write(new Blob(["hello s3!"]));
//   file.write(new Response("hello s3!"));

For larger files, you can use the writer() method to obtain a file writer that does a multi-part upload, so you don’t have to worry about the details.

import { s3 } from "bun";

const file = s3.file("folder/my-file.txt");
const writer = file.writer();

for (let i = 0; i < 1000; i++) {
  writer.write(String(i).repeat(1024));
}

await writer.end();

Presigned URLs

When your production service needs to let users upload files to your server, it’s often more reliable for the user to upload directly to S3 instead of your server acting as an intermediary.

To make this work, you use the presign() method to generate a presigned URL for a file. This generates a URL with a signature that allows a user to securely upload that specific file to S3, without exposing your credentials or granting them unnecessary access to your bucket.

import { s3 } from "bun";

const url = s3.presign("folder/my-file.txt", {
  expiresIn: 3600, // 1 hour
  acl: "public-read",
});

Using Bun.serve()

Since Bun’s S3 APIs extend the File API, you can use Bun.serve() to serve S3 files over HTTP.

import { serve, s3 } from "bun";

serve({
  port: 3000,
  async fetch(request) {
    const { url } = request;
    const { pathname } = new URL(url);
    // ...
    if (pathname === "/favicon.ico") {
      const file = s3.file("assets/favicon.ico");
      return new Response(file);
    }
    // ...
  },
});

When you use new Response(s3.file(...)), instead of downloading the S3 file to your server and sending it back to the user, Bun redirects the user to the presigned URL for the S3 file.

Response (0 KB) {
  status: 302,
  headers: Headers {
    "location": "https://s3.amazonaws.com/my-bucket/assets/favicon.ico?...",
  },
  redirected: true,
}

This saves you memory, time, and the bandwidth cost of downloading the file to your server.

Using Bun.file()

If you want to access S3 files using the same code as the local file-system, you can reference them using the s3:// URL protocol. It’s the same concept as using file:// to reference local files.

import { file } from "bun";

async function createFile(url, content) {
  const fileObject = file(url);
  if (await fileObject.exists()) {
    return;
  }
  await fileObject.write(content);
}

await createFile("s3://folder/my-file.txt", "hello s3!");
await createFile("file://folder/my-file.txt", "hello posix!");

Using fetch()

You can even use fetch() to read, write, and delete files from S3.

// Upload to S3
await fetch("s3://folder/my-file.txt", {
  method: "PUT",
  body: "hello s3!",
});

// Download from S3
const response = await fetch("s3://folder/my-file.txt");
const content = await response.text(); // "hello s3!"

// Delete from S3
await fetch("s3://folder/my-file.txt", {
  method: "DELETE",
});

Using S3Client

When you import Bun.s3, it returns a default client that is configured using well-known environment variables, such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

import { s3, S3Client } from "bun";
// s3 instanceof S3Client

You can also create your own S3Client, then set it as the default.

import { S3Client } from "bun";

const client = new S3Client({
  accessKeyId: "my-access-key-id",
  secretAccessKey: "my-secret-access-key",
  region: "auto",
  endpoint: "https://.r2.cloudflarestorage.com",
  bucket: "my-bucket",
});

// Sets the default client to be your custom client
Bun.s3 = client;

Just like object storage, another datastore that production applications often need is a SQL database.

Since the beginning, Bun has had a built-in SQLite client. SQLite is great for smaller applications and quick scripts, where you don’t want to worry about the hassle of setting up a production database.

In Bun 1.2, we’re expanding Bun’s support for SQL databases by introducing Bun.sql, a built-in SQL client with Postgres support. MySQL support is coming soon, with a pull request already open.

Using Bun.sql

You can use Bun.sql to run SQL queries using tagged-template literals. This allows you to pass JavaScript values as parameters to your SQL queries.

Most importantly, it escapes strings and uses prepared statements for you to prevent SQL injection.

import { sql } from "bun";

const users = [
  { name: "Alice", age: 25 },
  { name: "Bob", age: 65 },
];

await sql`
  INSERT INTO users (name, age)
  VALUES ${sql(users)}
`;

Reading rows is just as easy. Results are returned as an array of objects, with the column name as the key.

import { sql } from "bun";

const seniorAge = 65;
const seniorUsers = await sql`
  SELECT name, age FROM users
  WHERE age >= ${seniorAge}
`;

console.log(seniorUsers); // [{ name: "Bob", age: 65 }]

50% faster than other clients

Bun.sql is written in native code with optimizations like:

  • Automatic prepared statements
  • Query pipelining
  • Binary wire protocol support
  • Connection pooling
  • Structure caching

Optimizations stack like buffs in World of Warcraft.

The result is that Bun.sql is up to 50% faster at reading rows than using the most popular Postgres clients with Node.js.

Migrate from postgres.js to Bun.sql

The Bun.sql APIs are inspired by the popular postgres.js package. This makes it easy to migrate your existing code to using Bun’s built-in SQL client.

  - import postgres from "postgres";
  + import { postgres } from "bun";

const sql = postgres({
  host: "localhost",
  port: 5432,
  database: "mydb",
  user: "...",
  password: "...",
});

const users = await sql`SELECT name, age FROM users LIMIT 1`;
console.log(users); // [{ name: "Alice", age: 25 }]

Bun is an npm-compatible package manager that makes it easy to install and update your node modules. You can use bun install to install dependencies, even if you’re using Node.js as a runtime.

Replace npm install with bun install

$ npm install
$ bun install

In Bun 1.2, we’ve made the biggest change yet to the package manager.

Problems with bun.lockb

Since the beginning, Bun has used a binary lockfile: bun.lockb.

Unlike other package managers that use text-based lockfiles, like JSON or YAML, a binary lockfile allowed us to make bun install almost 30x faster than npm.

However, we found that there were a lot of paper cuts when using a binary lockfile. First, you couldn’t view the contents of the lockfile on GitHub and other platforms. This sucked.

What happens if you receive a pull request from an external contributor that changes the bun.lockb file? Do you trust it? Probably not.

That’s also assuming there isn’t a merge conflict! For a binary lockfile, conflicts are almost impossible to resolve, aside from deleting the lockfile and running bun install again.

This also made it hard for tools to read the lockfile. For example, dependency management tools like Dependabot would need an API to parse the lockfile, and we didn’t offer one.

Bun will continue to support bun.lockb for a long time. However, for all these reasons, we’ve decided to switch to a text-based lockfile as the default in Bun 1.2.

Introducing bun.lock

In Bun 1.2, we’re introducing a new, text-based lockfile: bun.lock.

You can migrate to the new lockfile by using the --save-text-lockfile flag.

bun install --save-text-lockfile

bun.lock is a JSONC file, which is JSON with added support for comments and trailing commas.

bun.lock

// bun.lock
{
  "lockfileVersion": 0,
  "packages": [
    ["express@4.21.2", /* ... */, "sha512-..."],
    ["body-parser@1.20.3", /* ... */],
    /* ... and more */
  ],
  "workspaces": { /* ... */ },
}

This makes it much easier to view diffs in pull requests, and trailing commas make it much less likely to cause merge conflicts.

For new projects without a lockfile, Bun will generate a new bun.lock file.

For existing projects with a bun.lockb file, Bun will keep using the binary lockfile without migrating it to the new format. Commands like bun add and bun update will continue to work and will update your bun.lockb file.

bun install gets 30% faster

You might think that after migrating to a text-based lockfile, bun install would be slower. Wrong!

Most software projects get slower as more features are added; Bun is not one of those projects. We spent a lot of time tuning and optimizing Bun, so we could make bun install even faster.

That’s why in Bun 1.2, bun install is 30% faster than in Bun 1.1.

JSONC support in package.json

Have you ever added something to your package.json and forgot why months later? Or wanted to explain to your teammates why a dependency needs a specific version? Or have you ever had a merge conflict in a package.json file due to a comma?

Often these problems are due to the fact that package.json is a JSON file, and that means you can’t use comments or trailing commas in it.

package.json

{
  "dependencies": {
    // this would cause a syntax error
    "express": "4.21.2"
  }
}

This is a bad experience. Modern tools like TypeScript allow comments and trailing commas in their configuration files, tsconfig.json for example, and it’s great. We also asked the community for their thoughts, and the consensus was that the status quo needed to change.

In Bun 1.2, you can use comments and trailing commas in your package.json. It just works.

package.json

{
  "name": "app",
  "dependencies": {
    // We need 0.30.8 because of a bug in 0.30.9
    "drizzle-orm": "0.30.8", /* <- trailing comma */
  },
}

Since there are many tools that read package.json files, we’ve added support to require() or import() these files with comments and trailing commas. You don’t need to change your code.

const pkg = require("./package.json");
const {
  default: { name },
} = await import("./package.json");

Since this isn’t widely supported in the JavaScript ecosystem, we’d advise you to use this feature “at your own risk.” However, we think this is the right direction to go: to make things easier for you.

.npmrc support

In Bun 1.2, we added support for reading npm’s config file: .npmrc.

You can use .npmrc to configure your npm registry and configure scoped packages. This is often necessary for corporate environments, where you might need to authenticate to a private registry.

.npmrc

@my-company:registry=https://packages.my-company.com
@my-org:registry=https://packages.my-company.com/my-org
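For authenticating to a private registry, npm’s standard registry-scoped token syntax also applies (the host shown matches the hypothetical registry above; NPM_TOKEN is an environment variable you would define yourself):

```
//packages.my-company.com/:_authToken=${NPM_TOKEN}
```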

Bun will look for an .npmrc file in your project’s root directory, and in your home directory.

bun run --filter

You can now use bun run --filter to run a script in multiple workspaces at the same time.

This runs the dev script concurrently in every workspace that matches the glob pattern. The output of each script is interleaved, so you can see each workspace’s output as it runs.

You can also pass multiple filters to --filter, and you can just use bun instead of bun run.

bun --filter 'api/*' --filter 'frontend/*' dev

bun outdated

You can now view which dependencies are out-of-date using bun outdated.

It will show a list of your package.json dependencies, and which versions are out-of-date. The “update” column shows the next semver-matching version, and the “latest” column shows the latest version.

If you notice there’s a specific dependency you want to update, you can use bun update.

bun update @typescript-eslint/parser # Updates to "7.18.0"
bun update @typescript-eslint/parser --latest # Updates to "8.2.0"

You can also filter which dependencies you want to check for updates. Just make sure to quote patterns, so your shell doesn’t expand them as glob patterns!

bun outdated "is-*" # check is-even, is-odd, etc.
bun outdated "@discordjs/*" # check @discordjs/voice, @discordjs/rest, etc.
bun outdated jquery --filter="foo" # check jquery in the `foo` workspace

bun publish

You can now publish npm packages using bun publish.

It’s a drop-in replacement for npm publish, and supports many of the same features like:

  • Reading .npmrc files for authentication.
  • Packing tarballs, accounting for .gitignore and .npmignore files in multiple directories.
  • OTP / Two-factor authentication.
  • Handling edge cases with package.json fields like bin, files, etc.
  • Handling missing README files carefully.

We’ve also added support for commands that are useful for publishing, like:

  • bun pm whoami, which prints your npm username.
  • bun pm pack, which creates an npm package tarball for publishing or installing locally.

bun patch

Sometimes, your dependencies have bugs or missing features. While you could fork the package, make your changes, and publish it — that’s a lot of work. What if you don’t want to maintain a fork?

In Bun 1.2, we’ve added support for patching dependencies. Here’s how it works:

  1. Run bun patch to patch a package.
  2. Edit the files in the node_modules/ directory.
  3. Run bun patch --commit to save your changes. That’s it!

Bun generates a .patch file with your changes in the patches/ directory, which is automatically applied on bun install. You can then commit the patch file to your repository, and share it with your team.

For example, you could create a patch to replace a dependency with your own code.

./patches/is-even@1.0.0.patch

diff --git a/index.js b/index.js
index 832d92223a9ec491364ee10dcbe3ad495446ab80..2a61f0dd2f476a4a30631c570e6c8d2d148d419a 100644
--- a/index.js
+++ b/index.js
@@ -1,14 +1 @@
- 'use strict';
-
- var isOdd = require('is-odd');
-
- module.exports = function isEven(i) {
-   return !isOdd(i);
- };
+ module.exports = (i) => (i % 2 === 0)

When you run bun patch, Bun clones a fresh copy of the package into the node_modules directory. This allows you to safely make edits to files in the package’s directory without impacting shared file caches.

Easier to use

We’ve also made a bunch of small improvements to make bun install easier to use.

CA certificates

You can now configure CA certificates for bun install. This is useful when you need to install packages from your company’s private registry, or when you want to use a self-signed certificate.

bunfig.toml

[install]
# The CA certificate as a string
ca = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"

# A path to a CA certificate file. The file can contain multiple certificates.
cafile = "path/to/cafile"

If you don’t want to change your bunfig.toml file, you can also use the --ca and --cafile flags.

bun install --cafile=/path/to/cafile

If you are using an existing .npmrc file, you can also configure CA certificates there.

.npmrc

cafile=/path/to/cafile
ca="..."

bundleDependencies support

You can now use bundleDependencies in your package.json.

package.json

{
  "bundleDependencies": ["is-even"]
}

These are dependencies that you expect to already exist in your node_modules folder, and are not installed like other dependencies.

bun add respects package.json indentation

We fixed a bug where bun add would not respect the spacing and indentation in your package.json. Bun will now preserve the indentation of your package.json, no matter how wacky it is.

package.json

// an intentionally wacky package.json
{
  "dependencies": {
              "is-even": "1.0.0",
              "is-odd": "1.0.0"
  }
}

--omit=dev|optional|peer support

Bun now supports the --omit flag with bun install, which allows you to omit dev, optional, or peer dependencies.

bun install --omit=dev # omit dev dependencies
bun install --omit=optional # omit optional dependencies
bun install --omit=peer # omit peer dependencies
bun install --omit=dev --omit=optional # omit dev and optional dependencies

Bun has a built-in test runner that makes it easy to write and run tests in JavaScript, TypeScript, and JSX. It supports many of the same APIs as Jest and Vitest, which includes the expect()-style APIs.

In Bun 1.2, we’ve made a lot of improvements to bun test.

JUnit support

To use bun test with CI/CD tools like Jenkins, CircleCI, and GitLab CI, you can use the --reporter option to output test results to a JUnit XML file.

bun test --reporter=junit --reporter-outfile=junit.xml

junit.xml

<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="bun test" tests="1" assertions="1" failures="1" time="0.001">
  <testsuite name="index.test.ts" tests="1" assertions="1" failures="1" time="0.001">
    <!-- ... -->
  </testsuite>
</testsuites>

You can also enable JUnit reporting by adding the following to your bunfig.toml file.

bunfig.toml

[test.reporter]
junit = "junit.xml"

LCOV support

You can use bun test --coverage to generate a text-based coverage report of your tests.

In Bun 1.2, we added support for LCOV coverage reporting. LCOV is a standard format for code coverage reports, and is used by many tools like Codecov.

bun test --coverage --coverage-reporter=lcov

By default, this outputs an lcov.info coverage report file in the coverage directory. You can change the coverage directory with --coverage-dir.

If you want to always enable coverage reporting, you can add the following to your bunfig.toml file.

bunfig.toml

[test]
coverage = true
coverageReporter = ["lcov"]  # default ["text"]
coverageDir = "./path/to/folder"  # default "./coverage"

Inline snapshots

You can now use inline snapshots using expect().toMatchInlineSnapshot().

Unlike toMatchSnapshot(), which stores the snapshot in a separate file, toMatchInlineSnapshot() stores snapshots directly in the test file. This makes it easier to see, and even change, your snapshots.

First, write a test that uses toMatchInlineSnapshot().

snapshot.test.ts

import { expect, test } from "bun:test";

test("toMatchInlineSnapshot()", () => {
  expect(new Date()).toMatchInlineSnapshot();
});

Next, update the snapshot with bun test -u, which is short for --update-snapshots.

Then, voilà! Bun has updated the test file with your snapshot.

snapshot.test.ts

import { expect, test } from "bun:test";

test("toMatchInlineSnapshot()", () => {
  expect(new Date()).toMatchInlineSnapshot(`2025-01-18T02:35:53.332Z`);
});

Similar matchers, like toMatchSnapshot() and toThrowErrorMatchingInlineSnapshot(), are also supported.

test.only()

You can use test.only() to run a single test, excluding all other tests. This is useful when you’re debugging a specific test, and don’t want to run the entire test suite.

import { test } from "bun:test";

test.only("test a", () => {
  /* Only run this test  */
});

test("test b", () => {
  /* Don't run this test */
});

Previously, for this to work in Bun, you had to use the --only flag.

This was annoying: you’d usually forget to pass it, and test runners like Jest don’t need it! In Bun 1.2, we’ve made this “just work”, without the need for flags.

New expect() matchers

In Bun 1.2, we added a bunch of matchers to the expect() API. These are the same matchers that are implemented by Jest, Vitest, or the jest-extended library.

You can use toContainValue() and derivatives to check if an object contains a value.

const object = new Set(["bun", "node", "npm"]);

expect(object).toContainValue("bun");
expect(object).toContainValues(["bun", "node"]);
expect(object).toContainAllValues(["bun", "node", "npm"]);
expect(object).not.toContainAnyValues(["done"]);

Or, use toContainKey() and derivatives to check if an object contains a key.

const object = new Map([
  ["bun", "1.2.0"],
  ["node", "22.13.0"],
  ["npm", "9.1.2"],
]);

expect(object).toContainKey("bun");
expect(object).toContainKeys(["bun", "node"]);
expect(object).toContainAllKeys(["bun", "node", "npm"]);
expect(object).not.toContainAnyKeys(["done"]);

You can also use toHaveReturned() and derivatives to check if a mocked function has returned a value.

import { jest, test, expect } from "bun:test";

test("toHaveReturned()", () => {
  const mock = jest.fn(() => "foo");
  mock();
  expect(mock).toHaveReturned();
  mock();
  expect(mock).toHaveReturnedTimes(2);
});

Custom error messages

We’ve also added support for custom error messages using expect().

You can now pass a string as the second argument to expect(), which will be used as the error message. This is useful when you want to document what the assertion is checking.

example.test.ts

import { test, expect } from 'bun:test';

test("custom error message", () => {
  expect(0.1 + 0.2, "Floating point has precision error").toBe(0.3);
});
1 | import { test, expect } from 'bun:test';
2 |
3 | test("custom error message", () => {
4 |   expect(0.1 + 0.2, "Floating point has precision error").toBe(0.3);
                                                              ^
error: expect(received).toBe(expected)
error: Floating point has precision error

Expected: 0.3
Received: 0.30000000000000004

jest.setTimeout()

You can now use Jest’s setTimeout() API to change the default timeout for tests in the current scope or module, instead of setting the timeout for each test.

jest.setTimeout(60 * 1000); // 1 minute

test("do something that takes a long time", async () => {
  await Bun.sleep(Infinity);
});

You can also import setDefaultTimeout() from Bun’s test APIs, which does the same thing. We chose a different name to avoid confusion with the global setTimeout() function.

import { setDefaultTimeout } from "bun:test";

setDefaultTimeout(60 * 1000); // 1 minute

Bun is a JavaScript and TypeScript bundler, transpiler, and minifier that can be used to bundle code for the browser, Node.js, and other platforms.

HTML imports

In Bun 1.2, we’ve added support for HTML imports. This allows you to replace your entire frontend toolchain with a single import statement.

To get started, pass an HTML import to the static option in Bun.serve:

import homepage from "./index.html";

Bun.serve({
  static: {
    "/": homepage,
  },

  async fetch(req) {
    // ... api requests
  },
});

When you make a request to /, Bun automatically bundles the JavaScript, TypeScript, and CSS files referenced by the HTML file, and serves the result.
