

What's actually new in JavaScript (and what's coming next)

ES2025 is out and ES2026 is close. Here are the new JavaScript features we can use today, what’s coming next, and how to get our AI friends to use them.

Neciu Dan

Hi there, it’s Dan, technical co-founder of an ed-tech startup, host of Señors at Scale (a podcast for senior engineers), organizer of the ReactJS Barcelona meetup, international speaker, and Staff Software Engineer. I’m here to share insights on combining technology and education to solve real problems.

I write about startup challenges, tech innovations, and frontend development. Subscribe to join me on this journey of transforming education through technology. Want to discuss tech, frontend, or startup life? Let’s connect.


ES2025 shipped in June, ES2026 is mostly locked in, and some of what’s landing is going to change how I write JavaScript day to day.

Not everything. But the using keyword, Temporal, iterator helpers, and the new Set methods are real improvements to the language.

Before diving into these new features, let me provide some context I wish I’d had when starting out.

Who decides what goes in JavaScript

Every browser ships its own JavaScript engine. V8 in Chrome, JavaScriptCore in Safari, SpiderMonkey in Firefox.

Each one is a separate codebase written by a different team. So why does Array.prototype.map behave the same way in all of them?

Why does async/await work identically whether you’re debugging in Chrome or Safari?

Because they’re all implementing the same specification: ECMAScript.

JavaScript, the language, is governed by the ECMAScript specification, maintained by the TC39 committee. The committee sits within Ecma International, the same standards body that publishes C#’s spec (ECMA-334) and the JSON data interchange format (ECMA-404).

TC39 includes delegates from all the major browser vendors (Google, Apple, Mozilla, Microsoft), as well as companies like Bloomberg, Igalia, and Intel, and individual invited experts.

They meet roughly every two months and decide by consensus, which in practice means nobody objects strongly enough to veto.

Every proposal goes through a process.

Think of it like a funnel: anyone can drop an idea in at the top, and only a small percentage makes it out the bottom into the actual language.

  • Stage 0 (strawperson): “someone had an idea.” No commitment from the committee.
  • Stage 1: the committee agrees the problem is worth solving.
  • Stage 2: there’s a rough design written in spec language.
  • Stage 2.7 (added in 2024): the design is approved in principle, tests are being written. Sits between 2 and 3.
  • Stage 3: the design is complete, browsers can start implementing.
  • Stage 4: two independent implementations exist, the shared test suite (Test262, which every major browser runs against) passes, the feature is ready to ship.

Once a proposal reaches Stage 4, it’s merged into the living ECMAScript spec immediately, and it appears in the next yearly snapshot. The committee produces a candidate draft on February 1, branches the spec in March, and submits it to the Ecma General Assembly for ratification in July.

That’s why things that sound new in June were actually already shipping in your browser months ago. By the time a feature hits the official spec, you can probably use it in production.

ES2025: what actually made it in

The 129th Ecma General Assembly approved ECMAScript 2025 on June 25, 2025. It’s the 16th edition. What follows is what landed, roughly in order of how much I care about it.

Iterator helpers

Status: Shipped in ES2025. Available in Chrome 122+, Node 22+, Firefox 131+, Safari 18.4+.

This is the most exciting ES2025 addition for me.

An iterator is an object that produces values one at a time, on demand. It has a single method, .next(), which returns the next value each time you call it.

The reason iterators exist at all is that not everything you want to loop over is an array.

  • Map stores keys and values in a hash table.
  • Set stores unique items in an internal structure.
  • NodeList is a live view into the DOM.
  • A generator hasn’t computed its values yet and might never compute all of them.

None of these are flat arrays in memory, but you still want to write for (const x of thing) and have it just work.

Iterators are the uniform protocol that makes that possible. Any object can say “here’s how to walk my values one at a time” by implementing .next(), and the rest of the language (for…of, spread, destructuring) knows how to consume it.

That’s why you can spread a Set into an array, destructure a Map’s entries, and loop over DOM query results, even though none of them are arrays.

Every time you write:

for (const item of someArray) { ... }
for (const [key, value] of someMap) { ... }
for (const node of document.querySelectorAll('.card')) { ... }

const copy = [...someSet];
const merged = [...arr1, ...arr2];

…JavaScript is quietly creating an iterator for you and pulling values from it. The for...of loop and the spread operator both work on anything that’s “iterable,” and behind the scenes, they’re all just calling .next() in a loop.

The other reason iterators matter: they’re lazy.

An array holds all its values in memory right now, while an iterator computes the next value only when you ask for it.

That distinction doesn’t matter for small collections, but for a huge dataset (a million-row CSV, a paginated API stream, an infinite sequence), it can be the difference between an app that works and one that freezes or runs out of memory.

You can also build your own iterator with a generator function (declared with function*). A generator pauses at every yield and resumes the next time you ask for a value:

function* naturalNumbers() {
  let n = 1;
  while (true) yield n++;
}

Calling naturalNumbers() gives you an iterator that produces 1, 2, 3, ... forever, one value at a time.

A normal function with that while (true) loop would hang your browser, because normal functions run eagerly; a generator doesn’t, because it only runs when you pull from it.

So iterators are everywhere in the language, and the laziness is the whole point. The problem is how little you can do with one once you have it.

Arrays have .map(), .filter(), .reduce(), .flatMap(), the whole toolkit. Iterators have .next(). That’s it.

The moment you want to transform an iterator, your only option is to convert it to an array first:

const visibleCards = Array.from(document.querySelectorAll('.card'))
  .filter(el => !el.classList.contains('hidden'))
  .map(el => el.dataset.id);

This works, but it has two costs.

First, you allocated a whole intermediate array just so you could call array methods on it. For a hundred DOM nodes, that’s nothing. For a hundred thousand rows out of a CSV parser, you’re materializing the whole file in memory before you filter a single row.

Second, this stops working entirely the moment the iterator is infinite or streaming. Array.from tries to exhaust the iterator before returning. If you give it naturalNumbers(), the tab locks up forever.

So for anything streaming or infinite, you were forced to skip the array methods and write the loop by hand.

Before (get the first ten even squares from an infinite sequence):

const firstTenEvenSquares = [];
for (const n of naturalNumbers()) {
  if (n % 2 === 0) {
    firstTenEvenSquares.push(n * n);
    if (firstTenEvenSquares.length === 10) break;
  }
}

ES2025 moves those methods onto the iterator itself.

After:

const firstTenEvenSquares = naturalNumbers()
  .filter(n => n % 2 === 0)
  .map(n => n * n)
  .take(10)
  .toArray();

The reason this works on an infinite iterator is that iterator helpers are lazy. .filter() doesn’t pull every value from naturalNumbers(); it returns a new iterator that pulls one value at a time as you ask for it. .take(10) stops asking after ten, which means everything upstream stops producing. Nothing ever tries to fully enumerate naturalNumbers(), so the infinity never becomes a problem.

This is the full set of methods on Iterator.prototype: .map(), .filter(), .take(), .drop(), .flatMap(), .reduce(), .forEach(), .some(), .every(), .find(), and .toArray().

For iterables that aren’t already iterators (like a NodeList or a custom iterable class), there’s a new global Iterator class with a static method Iterator.from(x) that wraps them. The DOM case becomes:

const visibleCards = Iterator.from(document.querySelectorAll('.card'))
  .filter(el => !el.classList.contains('hidden'))
  .map(el => el.dataset.id)
  .toArray();

Where this pays off hardest is streaming data. Log files, CSV rows, anything you read a chunk at a time.

// Process a huge log file, keep the first 100 errors, and stop reading after.
const errors = logFileLines()
  .filter(line => line.includes('ERROR'))
  .take(100)
  .toArray();

One caveat you need to know: only the sync helpers shipped in ES2025. The async versions (.map, .filter, .take on async iterables, plus Iterator.prototype.toAsync() to convert a sync iterator into an async one) are a separate proposal still at Stage 2.

So for anything async (streaming fetch, LLM token streams, async generators), you’re still writing for await...of loops for now.

Set methods

Status: Shipped in ES2025. Available in every major browser and Node 22+.

Sets now provide common set operations found in other languages.

Before (intersection, the DIY way):

const frontEnd = new Set(['HTML', 'CSS', 'JavaScript', 'React']);
const backEnd = new Set(['Node.js', 'JavaScript', 'SQL', 'React']);

// Manual intersection
const shared = new Set();
for (const tech of frontEnd) {
  if (backEnd.has(tech)) shared.add(tech);
}
// Or reach for lodash: _.intersection([...frontEnd], [...backEnd])

After:

frontEnd.union(backEnd);
// Set(6) { 'HTML', 'CSS', 'JavaScript', 'React', 'Node.js', 'SQL' }

frontEnd.intersection(backEnd);
// Set(2) { 'JavaScript', 'React' }

frontEnd.difference(backEnd);
// Set(2) { 'HTML', 'CSS' }

frontEnd.symmetricDifference(backEnd);
// Set(4) { 'HTML', 'CSS', 'Node.js', 'SQL' }

frontEnd.isSubsetOf(backEnd);     // false
frontEnd.isSupersetOf(backEnd);   // false
frontEnd.isDisjointFrom(backEnd); // false

Two notes on the semantics. The methods are non-mutating; they return a new Set rather than modifying the receiver.

Also, the argument doesn’t have to be an actual Set. It just has to be “set-like,” meaning it has a numeric size property, a .has() method, and a .keys() method that returns an iterator.

A Map qualifies; so does a custom LRUCache class; so does anything you’ve built with those three properties. The receiver (the this) must be a real Set, but the argument is more flexible.

This is why the proposal took years to land; the committee went back and forth on exactly which protocol to require.

JSON modules

Status: Shipped in ES2025. Available in Chrome 123+, Node 22+, Firefox 133+, Safari 17.4+.

JSON files can now be imported as modules using a native syntax, the same way you import JavaScript.

Before:

// Option A: rely on your bundler's magic import
import config from './config.json';
// Works in Webpack, Vite, Rollup, but is non-standard.
// Breaks if you try to run this file in a plain browser or Node without a bundler.

// Option B: fetch at runtime
const config = await fetch('./config.json').then(r => r.json());

After:

import config from './config.json' with { type: 'json' };

// Or dynamically
const translations = await import('./translations.json', {
  with: { type: 'json' }
});

The with { type: 'json' } part is required, and it’s called an import attribute.

The attribute tells the module loader, “this is a JSON module, refuse to load it if the server responds with a different MIME type.”

Without the with attribute, a compromised CDN could serve something pretending to be JSON but containing executable code.

Promise.try

Status: Shipped in ES2025. Available in Chrome 128+, Node 22+, Firefox 134+, Safari 18.2+.

Fun Fact: this one’s been in the Bluebird library for over a decade. ES2025 is where it finally becomes standard.

Say you’re calling a function that might be sync, might be async, and might throw before it decides. You want all three outcomes to flow through the same error-handling path.

Before:

// Does thirdParty.doThing() throw? Return a value? Return a promise? Who knows.
try {
  const result = thirdParty.doThing();
  // If it returned a promise, we need to handle it
  Promise.resolve(result)
    .then(r => processResult(r))
    .catch(err => handleAnyFailure(err));
} catch (err) {
  // Sync throws skip the promise chain entirely, so we need this too
  handleAnyFailure(err);
}

Two error handlers, and you have to remember both. The common workaround was Promise.resolve().then(() => thirdParty.doThing()), which routes everything through the promise chain, but it introduces an extra “tick” of delay (the function runs on the next microtask, not right now).

After:

Promise.try(() => thirdParty.doThing())
  .then(result => processResult(result))
  .catch(err => handleAnyFailure(err));

Sync throws, async rejections, and plain return values all flow through the same .then/.catch.

And unlike the Promise.resolve().then(...) workaround, Promise.try runs its callback synchronously when possible; it only switches to async if the callback itself returns a promise.

If you’ve never cared about “microtask ticks” before, don’t start now; just know that Promise.try is the cleanest way to take an unknown-shape function and get a predictable promise out of it.

RegExp.escape

Status: Shipped in ES2025. Available in Chrome 136+, Node 24+, Firefox 134+, Safari 18.2+.

Another Fun Fact: this was first proposed 15 years ago.

If you build a regex from user-controlled input, special regex characters (., *, +, (, [, ?, and friends) get interpreted instead of matched literally. So a user searching for "file.txt" would also match "fileAtxt" and "file!txt" because . means “any character.”

Before (the copy-paste-from-Stack-Overflow escape function):

function escapeRegex(str) {
  return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
const userInput = 'file.txt';
const pattern = new RegExp(escapeRegex(userInput));

Every codebase had its own version of this, and most had subtle bugs (missing metacharacters, mishandled edge cases).

After:

const userInput = 'file.txt';
const pattern = new RegExp(RegExp.escape(userInput));
// Safely matches the literal string "file.txt"

One subtle detail: RegExp.escape("foo.bar") doesn’t return "foo\\.bar" as you might expect.

It returns "\\x66oo\\.bar"; a leading ASCII letter or digit is always hex-escaped.

That’s deliberate; it prevents the escaped string from being interpreted as part of a larger regex construct if you embed it in the middle of another pattern.

You don’t need to worry about the exact output; just know the function is paranoid about edge cases, so you don’t have to be.

Float16Array

Status: Shipped in ES2025. Available in Chrome 135+, Node 24+, Firefox 133+, Safari 18.2+.

A new typed array for 16-bit floating-point numbers. Half the memory of Float32Array.

If you’re writing TensorFlow.js, shaders for WebGPU, or working with HDF5/NetCDF data formats, this is useful; those ecosystems all standardized on float16 for storage and GPU transfer.

For most web code, you’ll never touch it. (I don’t)

Also in ES2025

Status: Both shipped in ES2025. Available in every major browser and Node.

  • Intl.DurationFormat: language-aware formatting of Temporal.Duration values (“2 hours, 15 minutes” in whatever locale you need). Pairs directly with Temporal when that lands.
  • Intl.Locale info accessors: weekInfo, hourCycles, getCalendars, and friends for pulling locale metadata like “what day does the week start on here” without having to ship your own lookup table.

If you ship i18n, these matter more than everything else in this list combined.

ES2026: what’s landing next

ES2026 isn’t final yet, but several proposals have already reached Stage 4, meaning they’ve been merged into the living spec. The spec branches for the annual snapshot in March 2026, so the final list is very close to being set.

The using keyword

Status: Landing in ES2026. Already shipping in Chrome 134+, Node 24+, Deno 2.0+. Firefox implementation in flight. TypeScript 5.2+ understands the syntax.

If iterator helpers are the ES2025 feature I’m most excited about, using is the ES2026 one. If you’ve written Python, you know what’s coming: it’s the with keyword, finally in JavaScript.

If you open a resource that needs cleanup (a file handle, a database connection), you have to remember to close it. Forget the cleanup, and you leak memory, file descriptors, or database connections until your process dies.

Before:

// Node.js: database transaction that must commit or rollback
async function transferMoney(from, to, amount) {
  const tx = await db.beginTransaction();
  try {
    await tx.debit(from, amount);
    await tx.credit(to, amount);
    await tx.commit();
  } catch (err) {
    await tx.rollback();
    throw err;
  } finally {
    await tx.release(); // must always happen
  }
}

In a long function, the setup and cleanup end up far apart, and it’s easy to forget one. You acquire the resource at the top of the function, scroll down to the finally block hoping the cleanup is there, and scroll back up to continue reading.

After:

async function transferMoney(from, to, amount) {
  await using tx = await db.beginTransaction();
  // tx.release() happens automatically when the scope exits,
  // whether by return, throw, or normal completion.
  await tx.debit(from, amount);
  await tx.credit(to, amount);
  await tx.commit();
}

The cleanup moves to the declaration. When the function returns (or throws), the transaction gets released. No finally block to forget.

How it works under the hood: the resource needs to implement a [Symbol.dispose]() method for sync cleanup, or [Symbol.asyncDispose]() for async cleanup (used with await using).

Symbols are a primitive type JavaScript uses to create “special” property keys that won’t clash with regular string property names. These two are new, well-known symbols added specifically for using. Library authors add these methods; you just use using and it works.

One thing to clear up: using is a language feature, not a Node-only thing. It works in browsers too, anywhere you have a resource that needs cleanup. AbortController and locks from the Web Locks API are the obvious examples.

Does this help React? Not directly. React’s cleanup model (the return function from useEffect) already solves the same problem for component lifecycle.

But anywhere else in your stack (server handlers, build scripts, CLI tools), using will be how cleanup looks.

Be aware that multiple using declarations in the same scope dispose in reverse order, like a LIFO stack. Open A, then B, then C, and they close C, B, A.

This matches how you’d manually nest try/finally blocks.

Temporal

Status: Landing in ES2026. Reached Stage 4 in March 2026. Firefox has shipped it; Chrome lands in V8 soon; Safari is roughly half done. Two production-ready polyfills available today: temporal-polyfill and @js-temporal/polyfill.

The long-promised replacement for Date is finally real.

If you’ve ever done date math in JavaScript, you know Date has problems.

Mutable instances, broken timezone handling, month numbering that starts at zero while day numbering starts at one, and parsing that’s undefined behavior across engines. Budibase developer Sam Rose built a quiz at jsdate.wtf that exploits Date’s inconsistencies; the answers differ between Firefox and Chrome.

Take a problem I hit last year: I’m in London, I have a meeting with a colleague in Sydney next Thursday at 9 AM their time, and I need to know what that lands as on my calendar.

Before (a truly bad afternoon):

// Step 1: What's "next Thursday"?
const today = new Date();
const daysUntilThursday = (4 - today.getDay() + 7) % 7 || 7;
const nextThursday = new Date(today);
nextThursday.setDate(today.getDate() + daysUntilThursday);

// Step 2: Set it to 9 AM Sydney time
// ...but JavaScript's Date has no idea what Sydney is, so you reach for a library
// (moment-timezone or date-fns-tz or luxon) or do manual offset math with
// `toLocaleString` hacks.

Most real codebases just install Moment or date-fns at this point and move on.

After with Temporal:

// Parse the meeting directly with its timezone annotation
const meeting = Temporal.ZonedDateTime.from(
  '2026-04-23T09:00[Australia/Sydney]'
);

// Convert to London time
const inLondon = meeting.withTimeZone('Europe/London');
inLondon.toString();
// "2026-04-23T00:00:00+01:00[Europe/London]"

Temporal understands ISO 8601 strings directly, including the [Australia/Sydney] timezone annotation.

Temporal has three main types covering the three ways we actually use dates: PlainDate (just a date, no time), PlainTime (just a time, no date), and ZonedDateTime (a specific moment in a specific zone).

You never have to guess whether a value is UTC or local; the type tells you.

There are three more for the edge cases: PlainDateTime for dates with a time but no zone, Instant for an absolute moment, and PlainYearMonth/PlainMonthDay for partial dates like birthdays.

Date arithmetic goes through .since(), .until(), .add(), and .subtract():

const birthday = Temporal.PlainDate.from('1993-10-26');
const today = Temporal.Now.plainDateISO();
const age = today.since(birthday, { largestUnit: 'years' });
age.toString(); // "P32Y5M24D"
age.years;      // 32

The bundle-savings pitch is real but depends on what you use today.

Swapping Moment.js for Temporal saves you around 40KB gzipped because Moment doesn’t tree-shake. Against a modern date-fns setup with tree-shaking, you might only save a few KB.

The bigger win is platform-level: browsers ship Temporal once, and every page benefits without paying the bundle cost.

Import defer

Status: Landing in ES2026. At Stage 3, very likely Stage 4 in time for the final ES2026 snapshot. TypeScript 5.9 supports the syntax; Babel, Webpack, and Esbuild do too. V8 and JavaScriptCore implementations in flight.

Another performance lever. When you import a module, it’s “evaluated” immediately, meaning its top-level code runs, even if you never end up calling anything from it.

If heavy.js has a console.log('loading heavy') at the top, that runs at import time, before your app has even started rendering.

For deep module graphs, that’s a lot of wasted startup time. You eagerly pay for every dependency, whether you use them or not.

import defer lets you import a module’s namespace without evaluating the module until you actually read a property off it.

Before (everything evaluates on import):

// Evaluates heavy.js immediately, even if rarelyCalled() is never called.
import * as heavyModule from './heavy.js';

function rarelyCalled() {
  return heavyModule.doExpensiveThing();
}

After:

import defer * as heavyModule from './heavy.js';

// heavy.js has been loaded (the file is fetched, parsed) but not executed.
// Any top-level code in heavy.js hasn't run yet.

function rarelyCalled() {
  // The moment we read heavyModule.doExpensiveThing,
  // heavy.js and its dependencies execute.
  return heavyModule.doExpensiveThing();
}

Two important restrictions.

First, you can only use the namespace form (import defer * as x).

Named imports (import defer { foo } from ...) and default imports are not allowed, because the namespace object is the proxy that triggers evaluation.

If you always write import { foo } from './thing', using import defer means switching to import defer * as thing and then writing thing.foo at the call site.

Second, modules that use top-level await can’t be deferred; if await is involved, you’re back to dynamic import().

Don’t confuse this with dynamic import(), which returns a promise and forces every caller to be async. import defer keeps everything synchronous.

The namespace is a proxy; touching any property synchronously triggers the module’s evaluation.

TC39 co-chair Rob Palmer, who works on the Bloomberg terminal, described the motivation as enabling free addition of imports to large applications without worrying about the cold-start cost of a module you might never use.

Math.sumPrecise

Status: Landing in ES2026. At Stage 4. Chrome 137+ and Firefox have it; Safari and Node are rolling it out.

JavaScript can’t add 0.1 + 0.2 correctly. Everyone knows this.

What’s worse: summing a long array of floats with .reduce((a, b) => a + b) accumulates error with every step.

Before:

// Realistic case: summing many small floats (like cents in a cart total)
const cents = Array(10000).fill(0.1);
cents.reduce((a, b) => a + b);  // 1000.0000000001588 (drift of ~1.6e-10)

// Catastrophic cancellation case
const values = [1e20, 1, -1e20];
values.reduce((a, b) => a + b); // 0 (the 1 got lost mid-sum)

After:

Math.sumPrecise(cents);   // 1000
Math.sumPrecise(values);  // 1

Math.sumPrecise uses Shewchuk’s algorithm, which tracks intermediate errors and corrects for them.

The first case is the one most people actually run into: thousands of small floats where the drift shows up in the 12th decimal place. The second is the textbook case where 1e20 + 1 === 1e20 in float64, so the 1 is silently discarded when you reach the next addition.

Uint8Array base64 and hex

Status: Landing in ES2026. At Stage 4. Shipping in every major browser already.

I never understood why this wasn’t in the language.

If you want to turn bytes into a base64 string, the built-in btoa only works on strings (not byte arrays), chokes on non-Latin1 characters, and has no hex equivalent.

Before:

// base64 from Uint8Array, the DIY way
function toBase64(bytes) {
  let binary = '';
  for (const byte of bytes) binary += String.fromCharCode(byte);
  return btoa(binary);
}

// hex from Uint8Array
function toHex(bytes) {
  return [...bytes].map(b => b.toString(16).padStart(2, '0')).join('');
}

Every codebase had these as one-off utility functions. Or you pulled in a dependency.

After:

const bytes = new Uint8Array([72, 101, 108, 108, 111]);

bytes.toBase64();     // "SGVsbG8="
bytes.toHex();        // "48656c6c6f"

Uint8Array.fromBase64("SGVsbG8=");
Uint8Array.fromHex("48656c6c6f");

Every project that touches crypto, file uploads, or WebCrypto has some version of these utilities buried in a helpers file. Now they’re in the language.

Error.isError

Status: Landing in ES2026. At Stage 4. Available in Chrome 135+, Firefox 134+, Safari 18.4+, Node 24+.

The instanceof Error check is unreliable across realms.

A realm is an isolated JavaScript execution context; each iframe, Web Worker, Service Worker, and Node vm module has its own realm, with its own copy of built-ins like Error, Array, and Object.

An error created in one realm isn’t instanceof Error in another, because the two realms have different Error constructors that happen to share a name.

Before:

// Library code trying to classify a caught value
function handleError(maybeError) {
  if (maybeError instanceof Error) {
    // Works for same-realm errors
    logger.error(maybeError.message);
  } else {
    // Oops: an error from a Worker or iframe lands here even though it IS an Error
    logger.error('Unknown value thrown:', maybeError);
  }
}

Library authors have been writing duck-typed fallback checks (typeof x.message === 'string' && typeof x.stack === 'string') for years.

After:

function handleError(maybeError) {
  if (Error.isError(maybeError)) {
    logger.error(maybeError.message);
  } else {
    logger.error('Unknown value thrown:', maybeError);
  }
}

Error.isError(new Error('oops'));                  // true
Error.isError({ message: 'looks like an error' }); // false (not a real Error)
Error.isError(errorFromWorker);                    // true (the realm thing)

If you’ve written library code that catches errors and tries to decide whether to log them, rethrow them, or wrap them, you’ve hit this.

Iterator.concat

Status: Landing in ES2026. At Stage 4. Shipping in Chrome and Node; other engines rolling out.

Chains iterators into one. Useful when you have multiple generators or iterables you want to consume as a single stream.

Before:

function* first() { yield 1; yield 2; }
function* second() { yield 3; yield 4; }

function* chained() {
  yield* first();
  yield* second();
}

for (const n of chained()) console.log(n); // 1, 2, 3, 4

After:

const all = Iterator.concat(first(), second());
for (const n of all) console.log(n); // 1, 2, 3, 4

Array has had .concat() forever. Now iterators have it too, without the generator wrapper.

Map.getOrInsert (Upsert)

Status: Landing in ES2026. Reached Stage 4 at the January 2026 TC39 meeting. Chrome and Node implementations in progress.

Every time I write this pattern, I think “there should be a method for this.”

Before:

// Counting word occurrences
const counts = new Map();
for (const word of words) {
  if (!counts.has(word)) counts.set(word, 0);
  counts.set(word, counts.get(word) + 1);
}

// Caching expensive lookups
function getUser(id) {
  if (!cache.has(id)) {
    cache.set(id, expensiveDatabaseLookup(id));
  }
  return cache.get(id);
}

After:

const counts = new Map();
for (const word of words) {
  counts.set(word, counts.getOrInsert(word, 0) + 1);
}

// With a factory function for expensive defaults
function getUser(id) {
  return cache.getOrInsertComputed(id, () => expensiveDatabaseLookup(id));
}

The methods are on both Map and WeakMap.

The proposal went through a few names (emplace, upsert) before settling on getOrInsert and getOrInsertComputed.

Array.fromAsync

Status: Landing in ES2026. At Stage 4. Already shipping in every major browser and Node.

The async sibling of Array.from. Collects an async iterable into an array.

Before:

async function* fetchPages() {
  let url = '/api/items?page=1';
  while (url) {
    const res = await fetch(url);
    const data = await res.json();
    yield* data.items;
    url = data.nextPage;
  }
}

// Manual loop to collect
const allItems = [];
for await (const item of fetchPages()) {
  allItems.push(item);
}

After:

const allItems = await Array.fromAsync(fetchPages());

JSON.parse with source text

Status: Landing in ES2026. At Stage 4. Shipping in Chrome, Node, and Firefox.

JSON.parse loses information for big numbers because it converts everything to a JavaScript number (float64).

Parse 999999999999999999 and you get 1000000000000000000; parse an actual quintillion (1000000000000000000) and you get the same value, so the two are indistinguishable after parsing.

Before:

// Precision loss, no way to recover
const big = JSON.parse('{"id": 999999999999999999}');
big.id; // 1000000000000000000 (!!)

If you wanted precise numeric handling, you had to install a library like json-bigint that replaced JSON.parse entirely.

After:

The reviver function now receives a context argument with the raw source text for each value, so you can read the original characters and decide how to convert them yourself:

const parsed = JSON.parse(text, (key, value, context) => {
  if (typeof value === 'number' && !Number.isSafeInteger(value)) {
    return BigInt(context.source); // exact string as it appeared in JSON
  }
  return value;
});

If you’ve ever installed json-bigint or written your own JSON.parse wrapper for precise numeric handling, this is what replaces it.

What didn’t make it

Some of the most-requested features are still not in.

Decorators are still Stage 3 and have been since 2022. They’re used everywhere through TypeScript and Babel transpilers, but the native spec keeps running into edge cases around class field ordering and metadata. You can use decorators today in TypeScript 5+, but they’re not language-native yet.

Records and Tuples (deeply immutable primitive-like data structures) stalled out and were effectively withdrawn; a replacement proposal called Composites is working through committee but is much smaller in scope.

Pipeline operator (|>) has been stuck at Stage 2 for years. The long-running debate between Hack-style pipes (with % as a topic placeholder) and F#-style pipes (plain function application) keeps the proposal on ice.

Pattern matching is at Stage 1 and unlikely to land before ES2027 at the earliest.

Async iterator helpers (.map, .filter, .take, .toArray on async iterables, plus Iterator.prototype.toAsync() to convert a sync iterator to async) are at Stage 2. They’re the same shape as the sync helpers that shipped in ES2025, just awaitable. Until they land, any async source (streaming fetch, LLM token stream, async generator) still needs for await...of. This is the one I’m watching most closely — it’s the piece that makes the LLM streaming example from earlier actually work today.
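Until then, the workaround is the manual loop. A sketch of filtering and limiting an async stream today, with a toy async generator standing in for a real token stream:

```javascript
// Stand-in for a real async source (streaming fetch, LLM tokens, ...).
async function* tokenStream() {
  for (const t of ['Hello', '', 'world', '', '!']) yield t;
}

// What .filter(...).take(n).toArray() will express once async iterator
// helpers land; today it's a manual for await...of loop.
async function firstNonEmpty(source, n) {
  const out = [];
  for await (const token of source) {
    if (token === '') continue;
    out.push(token);
    if (out.length === n) break;
  }
  return out;
}

firstNonEmpty(tokenStream(), 2).then(console.log); // ['Hello', 'world']
```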

Iterator.range (a lazy numeric range iterator, so you could write Iterator.range(1, 100) instead of manually building a generator) is also at Stage 2 and has been there for a while. People keep asking; don’t hold your breath.
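In the meantime the hand-rolled generator is short enough. A sketch of what you build today in place of Iterator.range:

```javascript
// Hand-rolled lazy numeric range, pending Iterator.range.
function* range(start, end, step = 1) {
  for (let i = start; i < end; i += step) yield i;
}

console.log([...range(1, 6)]);     // [1, 2, 3, 4, 5]
console.log([...range(0, 10, 3)]); // [0, 3, 6, 9]
```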

AsyncContext (propagating context across async boundaries, similar to Node’s AsyncLocalStorage) is at Stage 2 but has huge momentum from tracing and observability tool vendors. Keep an eye on it.

For AI

If you’re using an AI coding assistant (Claude Code, Copilot, Cursor, pick one), you should know the models are trained on years of JavaScript code written before any of this shipped.

So you ask for a function that sums floats, and you get .reduce((a, b) => a + b). Anything involving dates uses new Date() and a lodash dependency because Temporal wasn’t in the training set. NodeLists get spread into arrays; cleanup becomes try/finally.

None of this is exactly wrong, but it’s the 2022 answer to a 2026 problem.

I noticed it in my own Claude Code sessions over the last few weeks. I’d ask for a utility, get back working code, and catch myself thinking “this would be two lines with getOrInsert” or “this is the old Moment pattern, Temporal makes this trivial.” The model’s training cutoff was before ES2025 shipped, so it writes what it learned, and what it learned is three to five years out of date.

If you use Claude Code

I’ve packaged an “ES2025/ES2026 preferences” skill you can install in two commands.

It’s part of the react-tips-skill plugin, which gives Claude a lookup table of “if the code does X the old way, suggest Y the new way.”

Add the marketplace and install the plugin:

/plugin marketplace add Cst2989/react-tips-skill
/plugin install react-tips@neciudan.dev

Once installed, the modern-js skill activates automatically whenever Claude is writing or reviewing JavaScript. You can also invoke it directly with /react-tips:modern-js.

The skill forces Claude to check its output against a list of modern alternatives before finalizing code. So when you ask it to “count word occurrences in an array,” instead of the usual map.has(word) ? map.set(word, map.get(word) + 1) : map.set(word, 1) dance, it reaches for map.getOrInsert(word, 0) + 1.

For other AI tools

If you’re not using Claude Code, you can still use the same approach. The core of the skill is a markdown file that encodes the lookup table as instructions. A condensed version you can drop into .cursorrules, Copilot instructions, or any system prompt:

# Modern JavaScript preferences (ES2025/ES2026)

When writing JavaScript, prefer the following newer APIs over their
older equivalents. Check every function you write against this list
before finalizing.

## Iterators and collections

- Iterating a large/infinite sequence?
  → Use Iterator.prototype methods (.map, .filter, .take, .drop,
    .toArray) instead of converting to an array first.
- Wrapping a NodeList, Set, or Map to use array methods?
  → Iterator.from(x).map(...) instead of [...x].map(...) or
    Array.from(x).map(...).
- Set intersection, union, difference?
  → a.intersection(b), a.union(b), a.difference(b).
  → Never write a manual loop or reach for lodash.
- Concatenating iterators?
  → Iterator.concat(a, b) instead of a nested yield* generator.
- Counting occurrences in a Map, or caching expensive lookups?
  → map.getOrInsert(key, default) or
    map.getOrInsertComputed(key, () => compute()).
  → Never write: if (!map.has(k)) map.set(k, v).

## Dates and times

- Any date/time operation more complex than Date.now()?
  → Use Temporal (Temporal.PlainDate, Temporal.ZonedDateTime, etc.).
  → Never reach for moment.js, date-fns, or luxon for new code.
- Parsing a date with timezone?
  → Temporal.ZonedDateTime.from('2026-06-15T09:00[America/New_York]').
- Computing age or duration?
  → someDate.since(otherDate, { largestUnit: 'years' }).

## Promises and async

- Calling a function that might be sync or async and might throw?
  → Promise.try(() => fn()) instead of new Promise(r => r(fn()))
    or Promise.resolve().then(fn).
- Collecting an async iterable into an array?
  → await Array.fromAsync(asyncIter) instead of for-await-push loop.

## Resource cleanup

- Opening a resource that needs cleanup (transaction, file handle,
  lock, subscription)?
  → using handle = openResource(); (for sync cleanup)
  → await using handle = await openResource(); (for async)
  → The resource must implement [Symbol.dispose] or
    [Symbol.asyncDispose].
  → Never write try/finally for cleanup when using works.

## Errors

- Checking if a caught value is an Error?
  → Error.isError(x) instead of x instanceof Error.
  → instanceof is unreliable across realms (Workers, iframes, vm).

## Numbers

- Summing an array of floats?
  → Math.sumPrecise(values) instead of values.reduce((a, b) => a + b).
  → Especially for financial values or long arrays.
- Encoding/decoding bytes?
  → bytes.toBase64(), bytes.toHex(), Uint8Array.fromBase64(str).
  → Never use btoa/atob for byte arrays; they only work on strings.

## Regular expressions

- Building a regex from user-controlled input?
  → new RegExp(RegExp.escape(input)) instead of a custom escape fn.

## Modules

- Importing JSON?
  → import data from './data.json' with { type: 'json' }.
  → Never use fetch for bundle-time JSON.
- Importing a large module that's rarely used in the current path?
  → import defer * as heavy from './heavy.js'.
  → Works only with namespace imports, not named or default.

## Rules

- NEVER suggest moment.js for new code. Suggest Temporal.
- NEVER write instanceof Error in library code. Use Error.isError.
- NEVER write try/finally for cleanup when using works.
- NEVER write a manual for-await-of loop just to collect into an
  array; use Array.fromAsync.
- ALWAYS check if the user's runtime supports these features before
  suggesting them; if they don't, suggest a polyfill.

You can test whether it’s working by asking your AI to “write a function that counts word occurrences in an array” or “compute someone’s age from their birthday.” Without the skill, you’ll almost certainly get the old patterns. With it, you should get Map.getOrInsert and Temporal.PlainDate.
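For reference, here's the word-count test case both ways. The getOrInsert line is the ES2026 upsert proposal and won't run in engines that haven't shipped it yet, so it appears as a comment alongside the portable version:

```javascript
function countWords(words) {
  const counts = new Map();
  for (const w of words) {
    // The pre-ES2026 pattern models default to:
    counts.set(w, (counts.get(w) ?? 0) + 1);
    // With Map.prototype.getOrInsert (upsert proposal), once shipped:
    // counts.set(w, counts.getOrInsert(w, 0) + 1);
  }
  return counts;
}

console.log(countWords(['to', 'be', 'or', 'not', 'to', 'be']));
// Map(4) { 'to' => 2, 'be' => 2, 'or' => 1, 'not' => 1 }
```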

The skill doesn’t force the AI to use these APIs when the runtime doesn’t support them; it just makes them the first option the model considers, instead of the last.

