Seven cool JavaScript libraries you should know about
Small, focused libraries I found useful. Each has a clear job and a payoff you'll feel in the first use.
Neciu Dan
Hi there, it's Dan. I'm a technical co-founder of an ed-tech startup, host of Señors at Scale (a podcast for senior engineers), organizer of the ReactJS Barcelona meetup, an international speaker, and a Staff Software Engineer. I'm here to share insights on combining technology and education to solve real problems.
I write about startup challenges, tech innovations, and frontend development.
Subscribe to join me on this journey of transforming education through technology. Want to discuss tech, frontend, or startup life? Let's connect.
Every few months, I discover an indispensable library—often via Reddit, conferences, or the ReactJS Barcelona meetup.
To clarify: I am not associated with any of these projects in any way.
Here’s a scan of seven libraries and their main purposes so you can quickly find what you need:
Knip – Find and remove dead code and unused dependencies in JavaScript and TypeScript projects, immediately freeing up space and reducing bundle size after your first run.
Nuqs – Manage and sync URL state in React apps with a single hook, ensuring users can share or refresh pages without losing their app state.
ts-pattern – Add exhaustive, type-safe pattern matching and type narrowing to TypeScript, helping catch missing cases before they become bugs.
Orval – Generate fully typed API clients and hooks from your OpenAPI specification, minimizing manual coding and keeping your code and schema in sync.
Zod – Build reusable, type-safe validation schemas with seamless runtime checks, catching invalid data early and simplifying complex validations.
Biome – Replace ESLint and Prettier with a faster, unified linter and formatter, instantly speeding up your development feedback loop.
Ofetch – Simplify HTTP requests and error handling with a lightweight fetch wrapper, reducing boilerplate and making network code easy to follow from the start.
Now that we’ve reviewed the libraries at a glance, let’s dive deeper into what each does, where they shine, and their trade-offs.
Knip
You don’t know how much dead code is in your repo. I promise.
Run this once on a project you’ve worked on for more than a year:
npx knip
It finds unused files, exports, dependencies, and devDependencies in package.json, quickly revealing what you can safely remove. The first time I ran it on a startup codebase, it flagged 71 files. One common gotcha is with importing everything from a barrel file.
A barrel file is an index.ts that re-exports everything from a folder, so you can write import { Button, Card, Modal } from '@/components' instead of three separate import paths.
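On disk, that barrel is just a re-export file:
// src/components/index.ts
export * from './Button';
export * from './Card';
export * from './Modal';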
You import one thing from @/components but the bundler thinks you might import everything from @/components, so it keeps everything alive.
Tree shaking can strip those unused re-exports from the production bundle, but the dev server doesn't tree-shake, so the barrel still adds extra lookups during hard reloads.
The Knip config is one file:
{
"$schema": "https://unpkg.com/knip@latest/schema.json",
"entry": ["src/main.ts", "src/pages/**/*.tsx"],
"project": ["src/**/*.{ts,tsx}"]
}
You tell it where your app's real entry points are, and anything not reachable from those entry points is probably dead code.
The flag I like most is --production. It scopes the analysis to your production code only, ignoring tests and storybook stories so you can get a list of code used only by tests.
If the only thing using a piece of code is its test file, you can probably delete both.
Run it in CI with --reporter compact and fail the build when something new is flagged, so dead code doesn't reach your production build:
# .github/workflows/lint.yml
- run: npx knip --reporter compact
After a few weeks, the codebase stops accumulating dead weight by default.
Knip works best for single-package projects and may encounter quirks in complex monorepos or non-standard file structures. Occasionally, it reports false positives when files are loaded dynamically or via custom loaders that are not visible in static imports.
Review the output before mass-deleting, especially in larger or legacy codebases.
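Those false positives can usually be quieted in the same config file. A sketch, assuming Knip's ignore and ignoreDependencies options (the entries here are placeholders):
{
  "$schema": "https://unpkg.com/knip@latest/schema.json",
  "entry": ["src/main.ts", "src/pages/**/*.tsx"],
  "project": ["src/**/*.{ts,tsx}"],
  "ignore": ["src/generated/**"],
  "ignoreDependencies": ["dynamically-loaded-plugin"]
}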
Nuqs
With Knip covered, let’s switch focus to another common headache: managing URL state in React apps.
Most React apps solve this four times: once with useState, once with URLSearchParams, once with a custom hook and once by giving up and dumping filter state into a global store like Zustand or Redux.
Nuqs collapses all four into one hook that reads and writes the URL.
With this, you get instant state syncing between React and the URL, which fixes common bugs and makes sharing and refreshing seamless.
import { useQueryState, parseAsInteger, parseAsString } from 'nuqs';
function ProductList() {
const [page, setPage] = useQueryState('page', parseAsInteger.withDefault(1));
const [search, setSearch] = useQueryState('q', parseAsString.withDefault(''));
return (
<>
<input value={search} onChange={e => setSearch(e.target.value)} />
<button onClick={() => setPage(page + 1)}>Next</button>
</>
);
}
That’s the entire thing. The URL becomes ?q=shoes&page=2, refreshing the page keeps the state, sharing the link works, the back button works, and you didn’t write a single line of synchronization code.
Nuqs ships parsers for numbers, booleans, dates, JSON, and arrays.
It also has parsers for string enums, meaning a string value constrained to a fixed list like 'asc' | 'desc'.
It supports throttling, batching, server-side rendering, and shallow updates (changing the URL without re-running data fetchers or causing a route transition).
It works with Next.js (app and pages routers), React SPA, Remix, React Router, and TanStack Router.
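To make the enum parser concrete, here is the 'asc' | 'desc' case as a sketch, assuming nuqs's parseAsStringLiteral helper:
import { useQueryState, parseAsStringLiteral } from 'nuqs';

const sortOrders = ['asc', 'desc'] as const;

function SortToggle() {
  // sort is typed as 'asc' | 'desc'; unrecognized URL values fall back to the default
  const [sort, setSort] = useQueryState(
    'sort',
    parseAsStringLiteral(sortOrders).withDefault('asc')
  );
  return <button onClick={() => setSort(sort === 'asc' ? 'desc' : 'asc')}>{sort}</button>;
}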
Quick tip: If you’re using Nuqs with Next.js server-side rendering (SSR), watch out for places where the URL query might not be available during the initial server render.
For consistent hydration between server and client, use guards to avoid accessing query state during SSR or provide sensible fallbacks. In multi-router setups, double-check you’re using the right router context so state sync doesn’t break.
These small details save a lot of debugging time later.
What Nuqs gets right is treating the URL as the source of truth. Most hand-rolled solutions treat React state as the source of truth and try to sync the URL on the side, which is where the bugs come from.
Use it for filters, sort orders, pagination, modal state, tab state, anything that should survive a refresh and be shareable. Don’t use it for things that don’t belong in the URL, like form drafts. You don’t want every keystroke writing to the address bar, and you don’t want users sharing a half-filled form by accident.
ts-pattern
Having tackled URL state, let's turn to type narrowing. Switch statements narrow, but they don't enforce exhaustiveness on their own. If/else chains don't narrow at all unless you're disciplined about discriminated unions, and even then, the branches read like a tax form.
A quick vocabulary stop. Narrowing is when TypeScript figures out which specific type a variable is at a given point in your code. If you have string | undefined and you check if (x !== undefined), TypeScript narrows x to string inside the if block. The narrower the type, the more help you get from autocomplete and error checking.
A discriminated union is a type made of variants where one field tells you which variant you’re in. The Result type below is a discriminated union: every variant has a status field, and the value of status decides what other fields exist.
ts-pattern gives you exhaustive pattern matching that the type checker actually understands, making it impossible to miss a case and dramatically reducing bugs when your union types grow.
import { match, P } from 'ts-pattern';
type Result =
| { status: 'loading' }
| { status: 'error'; error: Error }
| { status: 'success'; data: User };
function render(result: Result) {
return match(result)
.with({ status: 'loading' }, () => 'Loading...')
.with({ status: 'error' }, ({ error }) => `Error: ${error.message}`)
.with({ status: 'success' }, ({ data }) => `Hello, ${data.name}`)
.exhaustive();
}
Inside each .with branch, the value is limited to the matching variant, so destructuring { error } or { data } works without a type guard. TypeScript already knows which branch you’re in.
.exhaustive() is what earns the install. If you add a new variant to the Result union and forget to handle it, TypeScript fails the build.
The same safety in a switch statement requires writing default: const _exhaustive: never = result at the bottom of every switch. (The trick is that assigning to type never only compiles when the value’s type has been narrowed to never, which only happens after every variant has been handled. So if you add a new variant, the never line breaks.) Nobody actually writes this. ts-pattern gives you the same safety from a method call you can’t forget.
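For reference, the manual version of that trick, using the same Result type as above:
function renderWithSwitch(result: Result): string {
  switch (result.status) {
    case 'loading':
      return 'Loading...';
    case 'error':
      return `Error: ${result.error.message}`;
    case 'success':
      return `Hello, ${result.data.name}`;
    default: {
      // Compiles only while every variant is handled above; add a new
      // variant to Result and this assignment becomes a type error.
      const _exhaustive: never = result;
      return _exhaustive;
    }
  }
}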
The P namespace handles patterns. P.string, P.number, P.array(P.string), P.union(...), P.when(predicate). You can match on shapes, not just on a discriminator field.
Say action is some event payload, like { type: 'click', payload: { x: 100, y: 200 } }. You can match against the nested shape directly:
match(action)
.with({ type: 'click', payload: { x: P.number, y: P.number } }, ({ payload }) => {
// payload.x and payload.y are narrowed to a number
})
.otherwise(() => null);
I reach for it most often when handling reducer actions, parsing API responses with multiple shapes, and rendering UI states. Anywhere you’d write a switch and forget to handle a case six months later.
Small library, and you stop having to remember the never-trick.
Orval
With pattern matching explored, now let’s talk about connecting your frontend to your backend with confidence. An OpenAPI schema is a JSON or YAML file that describes every endpoint your backend exposes, including the request and response shapes for each. (It used to be called Swagger.) If your backend team is shipping one, you can do something with it.
Most teams don’t. They write the API client by hand, type the responses by hand, and watch the spec drift apart from the code over the next year.
The spec says getUser returns { name, email }; six months later, the backend added phoneNumber and nobody updated the frontend types. That’s schema drift.
Orval reads the OpenAPI spec and generates a typed client, ensuring your frontend always matches your API. With React Query (or SWR, or Vue Query) hooks, Zod schemas for runtime validation, and MSW mocks for tests, you instantly eliminate the risk of schema drift and manual syncing errors.
// orval.config.ts
export default {
api: {
input: './openapi.json',
output: {
mode: 'tags-split', // one file per OpenAPI tag (a tag groups related endpoints, like "users" or "orders")
target: './src/api',
client: 'react-query',
schemas: './src/api/schemas',
mock: true
},
},
};
You run npx orval and get hooks like useGetUser(id) and useCreateUser() already wired to React Query, with response types inferred from the schema, optional Zod validation, and mock servers ready to plug into Storybook.
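Consuming a generated hook looks something like this. The hook name and import path here are hypothetical, since Orval derives them from the operation IDs and tags in your spec:
import { useGetUser } from './api/users/users'; // hypothetical generated path

function Profile({ id }: { id: string }) {
  const { data, isLoading, error } = useGetUser(id);
  if (isLoading) return <p>Loading...</p>;
  if (error) return <p>Something went wrong</p>;
  return <p>{data?.name}</p>; // typed from the OpenAPI schema
}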
The whole flow becomes:
- Backend updates the OpenAPI spec
- Frontend runs orval
- TypeScript breaks at every call site that needs an update
The schema drift bug is no longer possible. You can’t ship a frontend that calls a renamed endpoint or expects a removed field, because the build fails before the PR opens.
I’ve written about my own codegen pipeline elsewhere on this blog. The short version: my backend types flow through a chain of generators (Prisma to Zod to OpenAPI to Orval) and end up as React Query hooks that the frontend imports.
The whole CRUD layer is generated. The hand-written code is just the business logic.
If you have an OpenAPI spec, you have a generated client. Your backend can be Java, Go, Python, or whatever the team agreed on three years ago. Orval just needs the JSON.
Zod
With the API layer generated, let's address data validation, a challenge every team faces. Validation is the thing every team writes badly until they install Zod, which provides reliable schemas, quick type inference, and instant runtime checks to prevent invalid data from entering your app.
import { z } from 'zod';
const UserSchema = z.object({
name: z.string().min(1),
email: z.email(),
age: z.number().int().min(13).optional(),
});
type User = z.infer<typeof UserSchema>;
const result = UserSchema.safeParse(input);
if (!result.success) {
console.log(result.error.format());
} else {
result.data; // typed as User
}
The schema is the type. You don’t write the interface and the validator separately and pray they stay in sync; you write the schema once and infer the type.
(In Zod 4, string format helpers like email, url, and uuid are top-level functions; z.email() rather than the chained z.string().email(). The chained form still works, but is deprecated.)
Zod is the standard now. React Hook Form includes a Zod resolver (a bridge between your validation schema and the form library), so the form gets per-field validation for free. tRPC uses Zod. Astro uses Zod for content collections. Server Actions in Next.js validate with Zod.
The ecosystem treats Zod schemas as a portable description of “what shape is this thing.”
Two patterns I use constantly.
Parse at the edge. Every API response, every form input, every localStorage read goes through Schema.parse(input) the moment it enters the application. Inside the app, you trust the types because the parse already happened. No more “this could theoretically be undefined” guards scattered through business logic.
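A minimal sketch of the pattern, with an illustrative SettingsSchema for a localStorage read:
import { z } from 'zod';

const SettingsSchema = z.object({
  theme: z.enum(['light', 'dark']),
  fontSize: z.number().int().min(10).max(32),
});

type Settings = z.infer<typeof SettingsSchema>;

function loadSettings(): Settings {
  try {
    const raw = localStorage.getItem('settings');
    const result = SettingsSchema.safeParse(JSON.parse(raw ?? 'null'));
    if (result.success) return result.data; // trusted from here on
  } catch {
    // corrupted JSON falls through to the default below
  }
  return { theme: 'light', fontSize: 16 };
}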
Reuse schemas as type definitions. The same UserSchema validates the form, types the API response, and describes the database row. One source of truth for what a user looks like. When you need a variant, say the create form doesn’t include id, you can derive it: UserSchema.omit({ id: true }).
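A sketch of that derivation, extending the UserSchema from above with an id so there is something to omit:
const StoredUserSchema = UserSchema.extend({ id: z.uuid() }); // database row
const CreateUserSchema = StoredUserSchema.omit({ id: true }); // create form, no id yet
const PatchUserSchema = StoredUserSchema.partial();           // PATCH payload, every field optional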
Zod isn’t perfect. The bundle size is larger than you’d expect, and a smaller, modern alternative exists in Valibot.
Here are the main differences to keep in mind:
Zod has a much larger ecosystem, with integrations in tools like React Hook Form, tRPC, and Astro, and it’s become the default choice for most TypeScript projects.
Valibot, on the other hand, is built with modern JavaScript, focused on minimalism and performance, and often comes in at 3-5x smaller in bundle size thanks to aggressive tree-shaking, but its ecosystem and plugin support aren’t yet as deep as Zod’s.
Biome
ESLint plus Prettier: 14 config files and a 90-second lint step. Biome is one config file and a sub-second lint step.
npm install --save-dev --save-exact @biomejs/biome
npx biome init
That’s the install. The init command writes a biome.json file with sensible defaults. You run biome check to lint and format together; you run biome check --write to auto-fix everything safely.
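For reference, a minimal biome.json looks something like this (a sketch; pin the $schema URL to the version you actually installed):
{
  "$schema": "https://biomejs.dev/schemas/2.0.0/schema.json",
  "formatter": { "enabled": true, "indentStyle": "space" },
  "linter": { "enabled": true, "rules": { "recommended": true } }
}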
It’s written in Rust. The speed difference is the main reason to install it: Biome benchmarks at 10 to 100 times faster than ESLint+Prettier on real codebases, so format-on-save feels instant, and CI lint steps that took minutes finish in seconds.
Biome 2.x closed most of the historical gaps. It ships useExhaustiveDependencies, their port of react-hooks/exhaustive-deps.
It has type-aware linting, meaning rules that understand TypeScript types without calling the TypeScript compiler each time, which keeps the lint step fast.
It groups rules into domains (react, next, solid, test) that auto-enable based on what’s in your package.json: if you have react as a dependency, the React rules turn on automatically.
It also has a plugin system based on GritQL, a small query language for matching code patterns. You write a pattern for what bad code looks like, and Biome flags it whenever it matches. That covers the case that used to keep teams on ESLint: custom workplace rules.
The remaining trade-off is plugin ecosystem coverage. ESLint has thousands of community plugins for niche frameworks; Biome has its built-in rules and the GritQL plugin layer for custom ones. If you depend on a specific community plugin that hasn’t been ported, you’ll either run both linters in parallel or write the rule yourself in GritQL.
If you are considering migrating from ESLint and Prettier to Biome, here are a few tips for a smooth transition:
- Start by running Biome in check mode alongside your current tools, so you can compare linting and formatting output without removing ESLint or Prettier right away.
- Review your existing ESLint plugins and custom rules. For any plugins not yet available in Biome, check if similar built-in rules exist or see if you can replicate the behavior with a simple GritQL rule.
- Porting custom rules is often straightforward with GritQL. Start with your most important or frequently triggered rules, and write basic patterns to enforce them.
- For complex edge cases, consider keeping ESLint temporarily for just those rules, running both lint steps in CI, and removing ESLint entirely once you are satisfied with coverage.
- Remember to update any CI or pre-commit hooks to use Biome’s commands instead of the older tools, and share the new workflow with your team.
This checklist helps minimize friction so you get the speed and tooling benefits of Biome without losing the code quality checks you rely on.
For new projects, Biome is now the default choice. Biome also formats, so Prettier comes out, and Biome goes in. The formatting is identical for all practical purposes, and you’ve consolidated two tools into one.
Ofetch
Here’s the version of fetch everyone writes by hand:
const res = await fetch('/api/users', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ name: 'John' }),
});
if (!res.ok) {
throw new Error(`Request failed: ${res.status}`);
}
const data = await res.json();
Six lines for one POST. Multiply by every API call in your app, and you end up writing a wrapper utility to deduplicate the boilerplate. Then somebody adds retry logic in PR #847. Then the error handling drifts slightly in three places.
Ofetch collapses it.
import { ofetch } from 'ofetch';
const data = await ofetch('/api/users', {
method: 'POST',
body: { name: 'John' },
});
It auto-stringifies the body, sets the JSON headers, parses the response, and throws on any non-2xx response (so 4xx and 5xx errors become exceptions instead of values you have to remember to check). The thrown error is a FetchError with the parsed error body on error.data:
import { ofetch, FetchError } from 'ofetch';
try {
const data = await ofetch('/api/users', { method: 'POST', body: payload });
} catch (err) {
if (err instanceof FetchError) {
console.log(err.status, err.data); // server's error response, already parsed
}
}
The features you’d otherwise build yourself:
Auto retry. You pass retry: 3 and it retries failed requests on configurable status codes. POST/PUT/PATCH default to zero retries (no surprise side effects); GET defaults to one.
Timeouts. timeout: 3000 aborts after 3 seconds. Native fetch requires you to wire up an AbortController (the browser API for canceling fetch requests) by hand: create the controller, pass its signal to fetch, set a setTimeout that calls controller.abort(), and clear the timeout if the request finishes first. ofetch does all of that for you; a sketch of the manual version follows the config example below.
Base URL. You create an instance with ofetch.create({ baseURL: '/api', headers: { Authorization: ... } }) and every call inherits the config.
Interceptors. onRequest, onResponse, onResponseError. The shape Axios users expect, without the Axios. (One quirk: ofetch normalizes options.headers to a Headers instance inside interceptors, so you call .set() on it, not headers.Authorization = ....)
const api = ofetch.create({
baseURL: '/api',
retry: 2,
timeout: 10000,
onRequest({ options }) {
options.headers.set('Authorization', `Bearer ${getToken()}`);
},
onResponseError({ response }) {
if (response.status === 401) redirectToLogin();
},
});
const user = await api<User>('/users/me');
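For contrast, here is roughly the manual wiring that timeout: 3000 replaces, a sketch using nothing but native fetch and AbortController:
async function fetchWithTimeout(url: string, ms: number) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return await res.json();
  } finally {
    clearTimeout(timer); // runs whether the request finished or aborted
  }
}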
Works in browsers, Node, Workers, and Bun. Around 5kb gzipped.
The pairing with the rest of the list is the bonus. Orval generates the call sites and the Zod schemas from your OpenAPI spec; ofetch handles the HTTP transport with retries and timeouts; Zod validates anything that doesn’t come through Orval’s pipe. The hand-written code is just the business logic.
To see how these tools work together in practice, imagine fetching user data in a React app:
- The backend publishes an updated OpenAPI spec describing the user endpoints.
- You run Orval, which generates an API client with a useGetUser hook and corresponding Zod schemas.
- In your app, call useGetUser(id), which internally uses ofetch to perform the HTTP request, applying its retries and timeout policies.
- The response from the server is automatically validated against the Zod schema that Orval generated.
- If the response is invalid, the Zod check fails, and you catch the error early.
For edge cases—such as when you need to call an endpoint that isn’t in the OpenAPI spec—you can use ofetch directly and hand-validate with your own Zod schemas:
import { ofetch } from 'ofetch';
import { z } from 'zod';

const CustomDataSchema = z.object({ foo: z.string(), bar: z.number() });

const data = await ofetch('/api/custom');
const validated = CustomDataSchema.parse(data);
This is what you get: type safety and runtime validation for every request, minimal manual code, and a setup where backend spec changes result in instant, type-correct updates on the frontend.
This tight integration saves hours of debugging and makes keeping your client in sync with your API nearly automatic.