Debouncing in React Router v7

While hanging out in the Remix Discord, I happened to see Bryan Ross (@rossipedia on Discord and @rossipedia.com on bsky) share the technique I'm going to cover below. All credit for this concept should go to him. Clever guy, that one!

Prefer video? Check out the video version of this on YouTube.

Sometimes a user interaction can trigger multiple requests in very fast succession when you only care about the latest one. For example, you might have a table of records and want to provide an input for the user to filter the data, but the filtering needs to happen on the backend. Every keystroke would initiate a new request to fetch the filtered data.

export async function loader({request}: Route.LoaderArgs) {
    const url = new URL(request.url)
    const q = url.searchParams.get('q')
    return {
        contacts: await fetchContacts(q)
    }
}

function Contacts({loaderData}: Route.ComponentProps) {
    const submit = useSubmit()
    return (
        <div>
            <Form onChange={e => submit(e.currentTarget)}>
                <input name="q" />
            </Form>
            <ContactsTable data={loaderData.contacts} />
        </div>
    )
}

You would see something like this as the user typed "Dallas".

GET ...?q=D
GET ...?q=Da
GET ...?q=Dal
GET ...?q=Dall
GET ...?q=Dalla
GET ...?q=Dallas

React Router will cancel all but the latest in-flight request so you don't have to manage any race conditions yourself, which is awesome. Buuuut, this is kind of only half the story. Those cancelled requests are only cancelled from the frontend's perspective. They were still sent and will still trigger your loader (or action, depending on the form method), which is a waste of your backend resources and could be unnecessarily stressing your system.

It would be much better to introduce a slight delay, wait for the user to stop typing, and then initiate the request to fetch new data. This delaying before doing something is known as "debouncing".
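Outside of React, a classic debounce helper looks something like this (a minimal sketch to illustrate the concept, not something the rest of the post depends on):

```typescript
// Minimal debounce sketch: delays calling `fn` until `ms` milliseconds
// have passed without another call.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  ms: number,
) {
  let timeoutId: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timeoutId); // cancel the previously scheduled call
    timeoutId = setTimeout(() => fn(...args), ms);
  };
}
```

Each call cancels the previously scheduled one, so only the final value in a burst of keystrokes triggers any work.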

useEffect?

You might be tempted to maintain, and debounce, a state value and reach for useEffect to initiate requests.

function Contacts({loaderData}: Route.ComponentProps) {
  const [filter, setFilter] = useState<string>();
  const debouncedFilter = useDebounce(filter, 500);
  const submit = useSubmit()

  useEffect(() => {
    if (typeof debouncedFilter === 'string') {
      submit({q: debouncedFilter})
    }
  }, [submit, debouncedFilter])

  return (
    <div>
      <Form>
        <input name="q" onChange={(e) => setFilter(e.target.value)} />
      </Form>
      <ContactsTable data={loaderData.contacts} />
    </div>
  );
}

We've introduced four hooks, one of which is the dreaded useEffect for data fetching, a React cardinal sin. I fear I've just sent a shiver up David Khourshid's spine by merely mentioning this foul possibility. There must be a better way!

clientLoader to the rescue

With React Router, we can take advantage of the fact that it builds on the web platform with AbortSignal and Request objects. We can do that in the clientLoader and clientAction functions.

If defined, these will be called in the browser instead of your loader and action functions. They can act as a gate where you determine whether the request should make it through to their server counterparts.

Let's introduce a clientLoader in our code.

// this runs on the server
export async function loader({request}: Route.LoaderArgs) {
    const url = new URL(request.url)
    const q = url.searchParams.get('q')
    return {
        contacts: await fetchContacts(q)
    }
}

// this runs in the browser
export async function clientLoader({
  request,
  serverLoader,
}: Route.ClientLoaderArgs) {

  // intercept the request here and conditionally call serverLoader()

  return await serverLoader(); // <-- this initiates the request to your loader()
}

function Contacts({loaderData}: Route.ComponentProps) {
    const submit = useSubmit()
    return (
        <div>
            <Form onChange={e => submit(e.currentTarget)}>
                <input name="q" />
            </Form>
            <ContactsTable data={loaderData.contacts} />
        </div>
    )
}

Now we just need some logic in the clientLoader to determine whether serverLoader should get called. Since React Router uses AbortSignals, we can write a new function that uses them. It's going to return a Promise that resolves after a specified amount of time unless we detect that the request was cancelled (the signal was aborted).

// ...

function requestNotCancelled(request: Request, ms: number) {
  const { signal } = request
  return new Promise((resolve, reject) => {
    // If the signal is aborted by the time it reaches this, reject
    if (signal.aborted) {
      reject(signal.reason);
      return;
    }

    // Schedule the resolve function to be called in
    // the future a certain number of milliseconds 
    const timeoutId = setTimeout(resolve, ms);

    // Listen for the abort event. If it fires, reject
    signal.addEventListener(
      "abort",
      () => {
        clearTimeout(timeoutId);
        reject(signal.reason);
      },
      { once: true },
    );
  });
}

export async function clientLoader({
  request,
  serverLoader,
}: Route.ClientLoaderArgs) {
  await requestNotCancelled(request, 400);
  // If we make it past here, the promise from requestNotCancelled
  // wasn't rejected. It's been 400 ms and the request has
  // not been aborted. Proceed to initiate a request
  // to the loader.
  return await serverLoader();
}

// ...

We're now debouncing the requests to the loader based on the request signal and we don't have to muddy up our UI component with a bunch of event listeners and hooks. 🎉

Bryan's implementation had a different name and a slightly different API.

export async function clientLoader({
  request,
  serverLoader,
}: Route.ClientLoaderArgs) {
  await abortableTimeout(400, request.signal);
  return await serverLoader();
}
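He hasn't shared the internals here, but given that signature, an implementation would presumably look a lot like requestNotCancelled with the arguments rearranged (my sketch, not his actual code):

```typescript
// Sketch of an abortableTimeout: resolves after `ms` milliseconds,
// rejects if the signal aborts first.
function abortableTimeout(ms: number, signal: AbortSignal) {
  return new Promise<void>((resolve, reject) => {
    // Already aborted? Reject right away.
    if (signal.aborted) {
      reject(signal.reason);
      return;
    }
    const timeoutId = setTimeout(resolve, ms);
    signal.addEventListener(
      "abort",
      () => {
        clearTimeout(timeoutId);
        reject(signal.reason);
      },
      { once: true },
    );
  });
}
```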

You get the idea though so you can run with the concept however you like.


If you like this type of stuff, you can drop your email in the box below and I'll shoot you an email whenever I drop more posts and videos. ✌️

Sending logs to the browser from actions and loaders

One of the inherent benefits of Single Page Apps (SPAs) I took for granted was the logging that comes out of the box via the Network tab in the browser's dev tools. Every api request happens in the browser and you can easily inspect those requests, which is super handy.

Once you move to something like Remix/React Router 7, you become painfully aware that you've lost access to that, since most of your api requests now happen on the server in actions and loaders. This is a Major Bummer 🫡 when coming from the SPA world. Instead, you have to manually log those requests in your loaders and actions and inspect them wherever your logs go during development: commonly your terminal or a file.

This kinda stinks for a couple of reasons:

  • Even though I can console.log in my backend code and UI code in the same file, I have to view them in two different places.
  • The backend logs are strings, which makes it really hard to inspect any data being logged.

These things make me sad. Let's change that.

Prefer video? Check out the video version of this on YouTube.

The strategy

When I'm developing my app, I want both the backend and any frontend logs to be in the same place. A place I can monitor as I poke around in my app. Naturally the browser's console is the ideal destination.

We need a way to send logs from the backend to the browser in realtime. Enter Server-Sent Events, a close cousin of WebSockets. While WebSockets support sending and receiving messages in both directions, Server-Sent Events only support sending messages from your backend to the browser, which is exactly what we need.

Backend setup

We need a few pieces in place. First, we'll create a singleton store so services we create will persist across hot module reloads and we're always guaranteed to get the same instances everywhere we ask for them.

// app/service/singleton.server.ts

// yoinked from https://github.com/jenseng/abuse-the-platform/blob/main/app/utils/singleton.ts

export function singleton<T>(name: string, valueFactory: () => T): T {
  const yolo = global as any;
  yolo.__singletons ??= {};
  yolo.__singletons[name] ??= valueFactory();
  return yolo.__singletons[name];
}
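To illustrate the behavior: calling singleton() twice with the same name returns the exact same instance, and the second factory never runs (the helper is re-declared here so the snippet is self-contained).

```typescript
// Same helper as above, re-declared for a self-contained demo.
function singleton<T>(name: string, valueFactory: () => T): T {
  const yolo = global as any;
  yolo.__singletons ??= {};
  yolo.__singletons[name] ??= valueFactory();
  return yolo.__singletons[name];
}

let factoryRuns = 0;
const a = singleton("demo", () => ({ id: ++factoryRuns }));
const b = singleton("demo", () => ({ id: ++factoryRuns }));
// a and b are the same object, and the second factory was never invoked
```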

Next, we'll create an instance of a Node EventEmitter. We'll attach an event listener to this and use it in our SSE (server-sent events) connection to send messages to the browser.

// app/server/emitter.server.ts
import { EventEmitter } from "node:events";
import { singleton } from "~/service/singleton.server";

export let emitter = singleton("emitter", () => new EventEmitter());

And now we'll use the popular winston logger package, which lets you direct logs to all sorts of places: console, file, http endpoint, ...event emitter? We'll need to implement a custom "transport" for that.

// app/service/logger.server.ts
import winston, { Logger } from "winston";
import Transport from "winston-transport";
import { singleton } from "~/service/singleton.server";
import { emitter } from "~/service/emitter.server";

class EmitterTransport extends Transport {
  log(info: any, callback: () => void) {
    setImmediate(() => {
      this.emit("logged", info);
    });

    let eventName = "log";

    emitter.emit(eventName, JSON.stringify(info));

    callback();
  }
}

export let logger: Logger = singleton("logger", () => {
  const instance = winston.createLogger({
    level: process.env.NODE_ENV !== "production" ? "debug" : "info",
    transports: [new winston.transports.Console()]
  });

  if (process.env.NODE_ENV !== "production") {
    instance.add(new EmitterTransport());
  }
  return instance;
});

With this setup, any logged messages will go to the terminal and be emitted as JSON-stringified messages during development. Notice the "log" event name as the first argument to emitter.emit(). This event name is arbitrary but it will need to match the event name in our SSE connection.

Side note: winston is completely optional. You could just emitter.emit() directly in your actions and loaders. Leveraging a logging library just makes a lot of sense here because that's exactly what we're doing, logging!
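For example, skipping winston entirely, a loader could emit directly. A locally created emitter stands in for the shared one here just to show the shape of the call:

```typescript
import { EventEmitter } from "node:events";

// Stand-in for the shared emitter from emitter.server.ts
const emitter = new EventEmitter();

const received: string[] = [];
emitter.on("log", (data: string) => received.push(data));

// What a loader could do directly, without a logging library
emitter.emit(
  "log",
  JSON.stringify({ level: "info", message: "fetch contact", ms: 42 }),
);
```

You lose the log levels and extra transports winston gives you, but the plumbing is identical.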

We'll implement the connection now by leveraging Sergio Xalambrí's awesome remix-utils package which makes creating an SSE connection a breeze. We're going to do this in a /logs route.

// app/routes/logs.ts
import { eventStream } from "remix-utils/sse/server";
import { emitter } from "~/service/emitter.server";
import type { Route } from "./+types/logs";

export async function loader({ request }: Route.LoaderArgs) {
  if (process.env.NODE_ENV !== "development") {
    throw new Error("SSE logging is only for development, silly!");
  }
  let eventName = "log"; // <- this must match emitter.emit(eventName, ...)

  return eventStream(
    request.signal,
    (send) => {
      let handle = (data: string) => {
        send({ event: eventName, data });
      };

      emitter.on(eventName, handle);

      return () => {
        emitter.off(eventName, handle);
      };
    },
    {
      // Tip: You need this if using Nginx as a reverse proxy
      headers: { "X-Accel-Buffering": "no" },
    },
  );
}

(You don't even have to install the complete remix-utils package if you don't want to. You could copy/paste the single file into your repo and use it that way.)

When the browser connects to this via the EventSource object, it keeps the connection alive so any log event will be sent over any open connections.

The last step in the backend is to actually make use of the logger which could look something like this.

// app/routes/contact.tsx
import { hrtime } from 'node:process'
import { logger } from '~/service/logger.server'
import type { Route } from './+types/route'

export async function loader({params}: Route.LoaderArgs) {
  let url = `https://address-book.com/api/contacts/${params.id}` // <- not real
  let [res, ms] = await time(async () => {
      const response = await fetch(url)
      return {
        status: response.status,
        data: await response.json()
      }
  })
  logger.info('fetch contact', {
    method: 'GET',
    url,
    status: res.status,
    response: res.data,
    ms
  })
  return res.data
}

// simple utility for tracking how long a function takes
async function time<T>(cb: () => Promise<T>) {
  let start = hrtime.bigint();
  let value = await cb();
  let end = hrtime.bigint();
  let ms = Number(end - start) / 1_000_000;
  return [value, ms] as const;
}

Frontend setup

Now let's turn our attention to the frontend. It's finally time to establish our SSE connection in our root route.

// app/root.tsx

// ...

export async function loader() {
  return {
    sseLogging: process.env.NODE_ENV === "development",
  };
}

export default function App({ loaderData }: Route.ComponentProps) {
  let { sseLogging } = loaderData;
  useEffect(() => {
    // only establish a connection if sseLogging is turned on
    if (!sseLogging) return;
    const source = new EventSource("/logs");
    const handler = (e: MessageEvent) => {
      try {
        // attempt to parse the incoming message as json
        console.log(JSON.parse(e.data));
      } catch (err) {
        // otherwise log it as is
        console.log(e.data);
      }
    };
    let eventName = "log" // <- must match the event name we use in the backend
    source.addEventListener(eventName, handler);
    return () => {
      source.removeEventListener(eventName, handler);
      source.close();
    };
  }, [sseLogging]);

  return <Outlet />;
}

// ...

We're establishing the connection in a useEffect, listening for logs, attempting to parse them as json and log that, and otherwise logging the message string as is.

Why not useEventSource???

"Whoa, whoa, whoa... you're using eventStream from remix-utils and that package comes with useEventSource. Why the heck aren't you using that???"

Great question! I tried. useEventSource stores the latest message in a useState and it's up to you to "listen" to it in a useEffect if you're not just rendering the latest message. Here's what it would look like.

let message = useEventSource("/logs", { event: "log" });

useEffect(() => {
  if (!message) return
  try {
    console.log(JSON.parse(message));
  } catch (err) {
    console.log(message);
  }
}, [message])

The problem I ran into is that because the message is being stored in a useState, the messages are bound to React's rendering and update lifecycle. If I logged multiple messages back to back in a loader, some of them would get swallowed because (I believe) React is batching the state updates to be efficient for rendering, which isn't what we're doing here. That's just not going to work for this use-case.

Success!

In any case, we can now do a victory dance! 🕺 If we have the app loaded and navigate to a route that has a loader where we're doing some logging we should see those in the browser console. 😎


If you completely reload the browser tab, you'll see the log message show up but immediately get wiped: the loader for the next request runs and sends the SSE message over the current connection, but then Chrome clears your console. You can somewhat get around this by clicking the ⚙️ icon in the console window and selecting Preserve log to retain logs between tab refreshes.



I say "somewhat" because any objects logged will show up as "Object" in the console and it looks like you can expand it but that memory is gone. It's like a ghost object just there to tease you. 👻

Going try-hard in Chromium-based browsers

This is already incredibly useful, but we can do better if we're using a Chromium-based browser. The console has the ability for you to add custom object formatters. If you click the ⚙️ icon in the dev tools window (the other cog icon in the screenshot above), you can toggle this feature on.

A custom formatter is an instance of a class that has up to three methods: header(obj), hasBody(obj), and body(obj). Each object can have a "header" and then optionally the ability to be expanded. Think about when you log an object with several properties. The console doesn't automatically expand the object. It shows you the "header", a truncated representation of the object, and then you have the ability to expand its "body" to see all of the properties.

Let's write our own formatter for api requests/responses we make in our actions and loaders. We need a way to indicate that our custom formatter should apply to these logs, so let's introduce a __type property that's set to "api". This is a completely arbitrary way for us to identify these log objects in our formatter.

  logger.info('create todo', {
    __type: 'api',
    method: 'POST',
    url,
    status: res.status,
    payload,
    response: res.data,
    ms
  })

And now let's look for that when implementing a custom formatter.

// app/dev/ApiObjectFormatter.js

class ApiObjectFormatter {
  header(obj) {
    if (obj.__type !== "api") {
      return null;
    }
    const method = obj.method.toUpperCase();
    const methodColor = {
      POST: "blue",
      PUT: "orange",
      PATCH: "salmon",
      GET: "green",
      DELETE: "red",
    }[method];
    const status = obj.status;
    const isOkay = status >= 200 && status < 300;
    const color = isOkay ? "green" : "red";
    return [
      "div",
      {},
      [
        "span",
        {
          style: `color: ${methodColor}; font-weight: bold;`,
        },
        `[${method}]`,
      ],
      ["span", {}, ` ${obj.url}`],
      ["span", { style: `color: ${color};` }, ` [${obj.status}]`],
      ["span", { style: `color: slategrey;` }, ` (${obj.ms} ms)`],
    ];
  }

  hasBody(obj) {
    if (obj.__type !== "api") {
      return null;
    }
    return Boolean(obj.response || obj.payload || obj.message);
  }

  body(obj) {
    const responseRef = ["object", { object: obj.response }];
    const requestRef = ["object", { object: obj.payload }];

    return [
      "div",
      { style: "padding-left: 20px; padding-top: 5px;" },
      obj.message
        ? [
            "div",
            {},
            ["span", { style: "font-weight: bold;" }, "Message: "],
            ["span", {}, obj.message],
          ]
        : null,
      obj.payload
        ? [
            "div",
            {},
            ["span", { style: "font-weight: bold;" }, "Payload: "],
            ["span", {}, requestRef],
          ]
        : null,
      obj.response
        ? [
            "div",
            {},
            ["span", { style: "font-weight: bold;" }, "Response: "],
            ["span", {}, responseRef],
          ]
        : null,
    ].filter(Boolean);
  }
}

window.devtoolsFormatters = window.devtoolsFormatters || [];
window.devtoolsFormatters.push(new ApiObjectFormatter());

Kinda funky, eh? It reminds me of what React looks like without JSX. You define a hierarchy of elements. Each element is represented by an array where the first item is the html tag, the second is an object of attributes (like style), and the rest are the children (which can be more elements or object references).

Next, we need a way to actually have this run in the browser. Let's revisit our root route.


// app/root.tsx
import ApiObjectFormatter from "~/dev/ApiObjectFormatter.js?raw";

export async function loader() {
  return {
    sseLogging: process.env.NODE_ENV === "development",
    customObjectFormatters:
      process.env.NODE_ENV === "development" ? [ApiObjectFormatter] : [],
  };
}

export default function App({ loaderData }: Route.ComponentProps) {
  const { sseLogging, customObjectFormatters } = loaderData;
  // ...
  return (
    <>
      {customObjectFormatters.length > 0 ? (
        <script
          dangerouslySetInnerHTML={{
            __html: customObjectFormatters.join("\n"),
          }}
        />
      ) : null}
      <Outlet />
    </>
  );
}

You might have noticed that we wrote the ApiObjectFormatter in JavaScript, not TypeScript. This is so we can use Vite's ?raw suffix on the import and get the file contents as a string. Once we have it as a string, we can conditionally send it to the browser based on any condition we come up with in our loader. We won't ever be bloating our client bundles with this console formatting code. We also don't have to go through the hassle of putting this into a Chrome extension to distribute to our team. It can be completely custom to your app's needs, live within the repo, and just work out of the box (if they've got custom formatters turned on). 🔥

So what's it look like? Check it out.

Pretty dang slick. As we click around in our app we can see what's happening behind the scenes right where it's most convenient. We have the most important bits in the header and can expand to see the body which includes the request payload and json response.

You might have also noticed that customObjectFormatters is an array. You can go nuts here adding custom formatters that:

  • format messages differently for the various log levels (info, warn, error, etc.)
  • format database queries (results and timing)
  • format domain events being fired. Ex. {event: 'user-registered'}

The sky's the limit.

console.log() stigma

The console has some pretty handy utilities that I think a lot of people aren't aware of, and I think that's partly due to the stigma around even making use of console.log() on the frontend. These days everyone has eslint rules that complain about console logging to stop you from accidentally shipping those calls and appearing unprofessional. I get that. But I also think there's a lot of missed opportunity for cool stuff to be explored, especially if you're triggering logs from your backend in a controlled manner during development.

What about production?

What we did here is meant for development. Theoretically, you could use this idea in production but you would definitely need to make some changes:

  • Enable the /logs endpoint for production and only allow specific users (admins)
  • Subscribe to an eventName that is tied to the current user rather than the super generic log event name. Ex. log.user.123
  • Return that eventName from the root loader to be used in the EventSource listener
  • Include the user id in all of your logs
  • Update the EmitterTransport to use the user-specific eventName (log.user.123) based on the user data that comes with the logs

There's a lot that could go wrong here, so I'm not recommending this but I did want to address the idea before anyone puts this in prod without thinking too much about it.

Repo

Here's the repo where you can see all of this wired up. dadamssg/address-book#sse-logging


Alright, hope you found this useful and got your wheels turning. If you like this type of stuff, you can drop your email in the box below and I'll send you a message whenever I write more posts and publish more videos. ✌️

Pasting into multiple fields at once

I recently came across a situation where I wanted to extract YouTube video data into a form to feed to an LLM. I wanted to collect the video title, description, and transcript. I wrote a simple Chrome extension to pull this information from the current video in the tab and copy it as a json object to the clipboard.

let title = await getVideoTitle();
let description = await getVideoDescription();
let transcript = await getVideoTranscript();

let data = JSON.stringify({title, description, transcript})

await navigator.clipboard.writeText(data);

Each of these async functions uses CSS selectors and document.querySelector to get a handle on the appropriate elements, click them if needed, and then essentially grab the text via element.textContent. The end result is that I have a json object copied to my clipboard.
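For a flavor of what one of those functions might look like, here's a hedged sketch of getVideoTitle. The selector string is a made-up placeholder (YouTube's real DOM differs and changes often), and the query root is parameterized so the pattern is clear even outside a browser:

```typescript
// Minimal shape of what document.querySelector exposes for this purpose
type QueryRoot = {
  querySelector(selector: string): { textContent: string | null } | null;
};

// Hypothetical selector; YouTube's actual markup will differ.
const TITLE_SELECTOR = "h1.video-title";

async function getVideoTitle(root: QueryRoot): Promise<string> {
  const el = root.querySelector(TITLE_SELECTOR);
  // Grab the text and tidy up the whitespace
  return el?.textContent?.trim() ?? "";
}
```

In the real extension you'd pass document as the root and may need to click elements (like "show more") before the text is in the DOM.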

I have a simple form in my Remix app to submit this data. It looks something like this:

<Form method="post">
    <Label>Title</Label>
    <Input name="title" />

    <Label>Description</Label>
    <Input name="description" />

    <Label>Transcript</Label>
    <Input name="transcript" />

    <Button>Submit</Button>
</Form>

Now let's get fancy.

We're going to add an onPaste to fill out this form all at once from our json data.


function YouTubeForm() {
  let getFormInput = (name: string): HTMLInputElement => {
    return document.querySelector(`[name="${name}"]`)!;
  };
  let handlePaste: ClipboardEventHandler<HTMLInputElement> = (e) => {
    let pasted = e.clipboardData.getData("text");
    try {
      let data = JSON.parse(pasted);
      getFormInput("title").value = data.title;
      getFormInput("description").value = data.description;
      getFormInput("transcript").value = data.transcript;
      // prevents the json from being pasted into the input
      e.preventDefault();
    } catch (e) {
      // ignore and paste normally
    }
  };
  return (
    <Form method="post">
      <Label>Title</Label>
      <Input name="title" onPaste={handlePaste} />

      <Label>Description</Label>
      <Input name="description" onPaste={handlePaste} />

      <Label>Transcript</Label>
      <Input name="transcript" onPaste={handlePaste} />

      <Button>Submit</Button>
    </Form>
  );
}

Boom! Now we have a pretty slick and easy way to fill out our form in one go that seems a little magical to those who aren't aware of what's happening behind the scenes. You can check out a simple codesandbox demo of this idea here.

This combo of using a Chrome extension to copy the data and then this paste event to fill out all the fields at once has saved me a ton of time. If you don't want the hassle of creating a Chrome extension, you can go the simpler route and create a Chrome Snippet. It's more cumbersome to use, since you have to pop open the dev tools, locate your snippet, and then click to run it, but it'll get the job done.

Hope you enjoyed this and got your wheels turning on how and where you can implement this concept to benefit you and/or your app's users.