React Router error reporting from scratch

When using framework mode, React Router allows you to export a handleError function from your server entry point, entry.server.tsx. If you don't see this file, you can run npx react-router reveal to have one created (a default is provided behind the scenes).

If you look at the docs you can see how to implement a handleError function.

// entry.server.tsx
import { type HandleErrorFunction } from "react-router";

// ...

export const handleError: HandleErrorFunction = (
  error,
  { request }
) => {
  // React Router may abort some interrupted requests, don't log those
  if (!request.signal.aborted) {
    myReportError(error);

    // make sure to still log the error so you can see it
    console.error(error);
  }
};

The docs end there, though, leaving it up to you to figure out what the myReportError function should look like.

Sentry is a popular application monitoring platform that could be used here, but maybe you don't want to rely on a third-party service for this, or maybe your company has strict limitations that don't allow for it. There are several reasons why Sentry (or another platform) can't be used, but heck, we should be able to figure something out, right? Error reporting is kind of important. I want to know about any errors getting thrown in my apps.

Disclaimer

What we're going to build is obviously not as fully featured as what various monitoring platforms provide. I'm also intentionally prioritizing straightforwardness over optimization for clarity's sake. My goal is to equip you with a concept to further iterate on and adapt to your specific needs. This also strictly covers errors that occur on the server, not in the browser. That'll have to be another post.

Alright, let's go.

Setup

We're going to be using the completed React Router tutorial like I have in my previous posts. You can grab that here if you'd like to follow along.

Wreaking havoc

Let's start with intentionally introducing some code that will throw an error:

// app/routes/contact.tsx

export async function loader({ params }: Route.LoaderArgs) {
  const contact = await getContact(params.contactId);
  if (!contact) {
    throw new Response("Not Found", { status: 404 });
  }
  params.read(); // <- there is no `read` method on params!
  return { contact };
}

If we start the app in dev mode (npm run dev) and go to this route, we'll see this:

This is pretty nice of React Router to provide for us out of the box. We can see the code that's causing the error as well as the file, line number, and column on that line: app/routes/contact.tsx:11:10. This same error is logged in the console too.

There's a problem though. This is development. What does this look like after the app has been built for production? Let's try that with npm run build && npm start and visit the page. Now we get a less helpful error page in the browser, but that's a good thing. We don't want to expose that stuff to our users.

However, now the error logged in the console is a lot less useful.

We can still see the params.read is not a function error message but now the error is originating from build/server/index.js:519:10. If we look at the built code we can see where it's coming from on line 519.

The server code has all been bundled up into this single file though. We don't have a good way to know where exactly this is in our source code. That's a pretty critical piece of information we just don't have.

Source maps to the rescue

This problem is exactly what source maps are for. These generated files map built code back to source code. You have to explicitly tell Vite to generate them in your vite.config.ts file.

import { reactRouter } from "@react-router/dev/vite";
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    sourcemap: true,
  },
  plugins: [reactRouter()],
});

Now when we build (npm run build), we get a warning message.

  ⚠️  Source maps are enabled in production
     This makes your server code publicly
     visible in the browser. This is highly
     discouraged! If you insist, ensure that
     you are using environment variables for
     secrets and not hard-coding them in
     your source code.

This is important, but we'll come back to it in a bit. Aside from the warning message, we also get <filename>.js.map files generated for every built JS file. These are the source maps.

These files consist of JSON. If you paste the contents of build/server/index.js.map into a browser console, you can more easily see the structure and what exactly is inside.

It contains function names, source files, the actual source code in sourcesContent, and the wizardry that correlates the built code to the source code in mappings.
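To make that structure concrete, here's a trimmed, hand-written sketch of the JSON inside a .js.map file. The field names come from the source map format itself; the values are illustrative, not from a real build.

```typescript
// A trimmed sketch of the source map format (version 3).
// All values below are illustrative, not from a real build.
type SourceMapV3 = {
  version: number;           // always 3 for this format
  file?: string;             // the generated file this map describes
  names: string[];           // identifiers referenced by the mappings
  sources: string[];         // paths to the original source files
  sourcesContent?: string[]; // the full text of each source file
  mappings: string;          // base64 VLQ-encoded position mappings
};

const example: SourceMapV3 = {
  version: 3,
  file: "index.js",
  names: ["loader", "getContact"],
  sources: ["../../../app/routes/contact.tsx"],
  sourcesContent: ["export async function loader() { /* ... */ }"],
  mappings: "AAAA,SAASA", // the "wizardry" part
};
```

Note that sources and sourcesContent are parallel arrays, which is what will let us pull the original code back out later.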

When we turned source maps on, we enabled them for both the server code and the client code. That means your client source code would be fully exposed to the browser. If you'd prefer that not happen, we can take care of it by modifying our package.json.

  "scripts": {
    "build": "cross-env NODE_ENV=production react-router build && npm run rm-client-source-maps",
    "dev": "react-router dev",
    "start": "cross-env NODE_ENV=production react-router-serve ./build/server/index.js",
    "typecheck": "react-router typegen && tsc",
    "rm-client-source-maps": "find build/client -type f -name '*.map' -delete"
  },

We've modified the build script to also run a new rm-client-source-maps script that will do exactly as the name implies. Now our client source code won't be exposed for all to see.

source-map-support

The default React Router npm start script uses the @react-router/serve package, which uses express behind the scenes. You can see exactly what it's doing here. Notice that the first thing it does is set up source map support. If you've ejected from the default and are using your own express implementation, be sure to add the same setup by installing the source-map-support package and bringing that snippet over.

With this in place, your stack traces will look more like they do during development, where the files and lines reference your source code. πŸŽ‰

Implementing handleError

Now, let's take a stab at writing a handleError implementation.

import nodemailer from "nodemailer";
import type { SendMailOptions } from "nodemailer";

// ...

export const handleError: HandleErrorFunction = (error, args) => {
  const { request, context, params } = args;
  // React Router may abort some interrupted requests, don't log those
  if (request.signal.aborted) return;

  let message = "Unknown Error.";
  let stack: Array<string> = [];

  if (error instanceof Error) {
    const lines = error.stack?.split("\n") ?? [];
    message = lines[0]; // Error message line
    stack = lines.slice(1); // Actual stack lines
  }

  // build up an object with all of the relevant details
  const payload = {
    error: {
      message,
      stack,
    },
    request: {
      method: request.method,
      url: request.url,
    },
    params,
    context: {
      // values from context, maybe?
    },
    env: {
      // values from process.env, maybe?
    }
  };

  sendMail({
    from: process.env.MAILER_SEND_ERRORS_FROM,
    to: process.env.MAILER_SEND_ERRORS_TO,
    subject: 'MyApp - ERROR',
    html: `<pre><code>${JSON.stringify(payload, null, 2)}</code></pre>`,
  }).catch(console.error);

  // make sure to still log the error so you can see it
  console.error(error);
};

function sendMail(options: SendMailOptions) {
  const transporter = nodemailer.createTransport({
    host: process.env.MAILER_HOST ?? "",
    port: Number(process.env.MAILER_PORT),
    secure: process.env.MAILER_SECURE === "true",
    ignoreTLS: process.env.MAILER_IGNORE_TLS === "true",
    auth: {
      user: process.env.MAILER_USER ?? "",
      pass: process.env.MAILER_PASSWORD ?? "",
    },
  });

  return transporter.sendMail(options);
}

This assumes you have all those process.env.[field] values defined and configured correctly, but we're just sending an email with JSON containing all the important bits about what happened. You could totally pretty up the HTML instead of sending raw JSON, but you get the idea. Instead of sending an email, you could post to Slack, Telegram, an API endpoint, etc. Whatever works for your situation.
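As a sketch of that last idea, here's one way to ship the same payload to a generic webhook instead of email. The URL and the ERROR_WEBHOOK_URL environment variable are placeholders I've made up for illustration, not anything the tutorial defines.

```typescript
// Hypothetical alternative to email: POST the payload to a webhook.
// ERROR_WEBHOOK_URL is a made-up env var for this sketch.
type ErrorPayload = {
  error: { message: string; stack: string[] };
  request: { method: string; url: string };
};

// Build the outgoing report as a standard Request object
function buildErrorReport(payload: ErrorPayload, webhookUrl: string): Request {
  return new Request(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// inside handleError, instead of sendMail:
// fetch(buildErrorReport(payload, process.env.ERROR_WEBHOOK_URL!)).catch(console.error);
```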

This is a huge step up from not having any sort of error reporting.

Can we do better?

The main problem with this is the fact that we only have the source code file and line number. You'd need to have the same code that was built in front of you to look at the surrounding lines to really get the context around this error. It'd be pretty nice if we could get that in the message we're sending out...

Custom server

To do what we're after, we need to stop using the default server so we can opt out of the source map setup. This is definitely counterintuitive, but the issue is that with the current setup, we only get stack entries with the source code file names and line numbers. If we deployed our app with the source code, we could easily read the source lines around those stack entries. However, you typically don't ship your source code to prod, only the built assets.

Think of it this way... If the following line is what we have access to in the handleError function

at loader$1 (file:///Users/davidadams/code/address-book/app/routes/contact.tsx:11:10)

but we don't have the corresponding source code to inspect where this is running, well then that's as much information as we can get.

What we need is the originally unhelpful version.

at loader$1 (file:///Users/davidadams/code/address-book/build/server/index.js:541:10)

We can manually combine this information with the source map (which will get deployed where this is running) to get not only the source code line the error originated from, but also the lines around it.
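The parsing half of that is just a regex over each V8-style stack line. As a standalone sketch (the same pattern shows up in the full implementation later):

```typescript
// Pull the file path, line, and column out of a V8 stack line like:
//   at loader$1 (file:///app/build/server/index.js:541:10)
function parseStackLine(line: string) {
  const match = line.match(/at .+ \((.+):(\d+):(\d+)\)/);
  if (!match) return null; // lines without a parenthesized location are skipped
  const [, filePath, lineNum, colNum] = match;
  return { filePath, line: Number(lineNum), column: Number(colNum) };
}
```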

Using the node-custom-server example

Luckily, React Router has an example that we can use to implement a custom server (which is still express). Find that here. There are a number of changes we need to make.

  • npm install compression morgan express @types/express
  • Copy the server.js to the root of our project.
  • Copy the server/app.ts file into our project.
  • Update our vite.config.ts to reference the server/app.ts file
import { reactRouter } from "@react-router/dev/vite";
import { defineConfig } from "vite";

export default defineConfig(({ isSsrBuild }) => ({
  build: {
    sourcemap: true,
    rollupOptions: isSsrBuild
      ? {
          input: "./server/app.ts",
        }
      : undefined,
  },
  plugins: [reactRouter()],
}));
  • Finally, update our package.json scripts.
  "scripts": {
    "build": "cross-env NODE_ENV=production react-router build && npm run rm-client-source-maps",
    "dev": "cross-env NODE_ENV=development node server.js",
    "start": "node server.js",
    "typecheck": "react-router typegen && tsc",
    "rm-client-source-maps": "find build/client -type f -name '*.map' -delete"
  },

With any luck, you'll be able to npm run dev, npm run build, and npm start like you normally would, and it will all work the same.

Now we're generating source maps without source-map-support rewriting our stack traces to reference our source code. We have our blank canvas.

Manually using source maps

To make use of source maps ourselves, we need to install the source-map package.

npm install source-map

We can use the SourceMapConsumer class to get back to where we were:

// ...
import fs from "node:fs";
import { SourceMapConsumer } from "source-map";

export const handleError: HandleErrorFunction = async (error, args) => {
  const { request, context, params } = args;
  // React Router may abort some interrupted requests, don't log those
  if (request.signal.aborted) return;

  let message = "Unknown Error.";
  const processedStack: Array<string> = [];

  if (error instanceof Error) {
    const lines = error.stack?.split("\n") ?? [];
    message = lines[0]; // Error message line
    const stack = lines.slice(1); // Actual stack lines

    for (const line of stack) {
      // parse the file path, line number, and column number from the line
      const match = line.match(/at .+ \((.+):(\d+):(\d+)\)/);
      if (!match) {
        processedStack.push(line);
        continue;
      }
      const [_, filePath, lineNum, colNum] = match;

      // only process our own built files, not vendor files
      if (!filePath.includes("/build/server/")) {
        processedStack.push(line);
        continue;
      }

      // get the source map contents
      const sourceMapFile = `${filePath}.map`.replace("file:", "");
      const sourceMap = fs.readFileSync(sourceMapFile).toString();

      // use the source map
      await SourceMapConsumer.with(sourceMap, null, async (consumer) => {
        // get the source code position
        const position = consumer.originalPositionFor({
          line: parseInt(lineNum),
          column: parseInt(colNum),
        });

        // if the source cannot be found, add the original line
        if (!position.source) {
          processedStack.push(line);
          return;
        }

        // we found the source. Use it!
        processedStack.push(
          `    at ${position.source}:${position.line}:${position.column}`,
        );
      });
    }
  }

  const payload = {
    error: {
      message,
      stack: processedStack,
    },
    // ...
  };

  // ...

};

The above code builds up processedStack: the stack lines, with references to our built code translated to their source code counterparts. If we JSON stringify the payload and log it, this is what we get:

{
  "error": {
    "message": "TypeError: params.read is not a function",
    "stack": [
      "    at ../../../app/routes/contact.tsx:11:9",
      "    at async callRouteHandler (/Users/davidadams/code/address-book/node_modules/react-router/dist/development/index.js:8628:16)",
      // ...
    ]
  },
  // ...
}

Notice the first item in the stack array. It contains a reference to our source code. Perfect.

Now we can focus on getting the source at that line and a number of lines around it so we can immediately get a better sense of where this code is and what it's doing.

The goal

In our scenario, only a single stack line points to our source code. However, several lines could reference your source code if an error is thrown deep within multiple function calls. For example, say we had an error here instead of directly in our loader:

// app/data.ts
export async function getContact(id: string) {
  id = id.uppercase(); // <- it's .toUpperCase(), not .uppercase()!
  return fakeContacts.get(id);
}

Our payload would look like this:

{
  "error": {
    "message": "TypeError: id.uppercase is not a function",
    "stack": [
      "    at ../../../app/data.ts:82:10",
      "    at ../../../app/routes/contact.tsx:7:24",
      // ...
    ]
  },
  // ...
}

There's a trail of lines into our source code. It would be super helpful if we could get the source code around those lines. Let's come up with a TypeScript type to design what we want.

type AppStackLine = {
  filename: string;      // ../../../app/data.ts
  lineNumber: number;    // 82
  columnNumber: number;  // 10
  sourceCodeContext: Array<{
    lineNumber: number;
    code: string;
  }>;
};

For every entry in the stack that references our source code, we want the details of where the stack trace line is in the source code, as well as the surrounding source code lines themselves. Each line will be represented by an object with the source code line number and the code itself.

We can write a function that, when given a source map position, will provide us with an AppStackLine containing the source code around that position. It takes a linesAround property to make it easy to adjust how much source code we want to see.

// entry.server.ts

// ...

function getAppStackLine({
  consumer,
  position,
  linesAround,
}: {
  consumer: SourceMapConsumer;
  position: NullableMappedPosition;
  linesAround: number;
}) {
  if (!position.source || !position.line || !position.column) return null;

  // get the source code
  const sourceCode = consumer.sourceContentFor(position.source);
  if (!sourceCode) return null;

  // create an AppStackLine
  const appStackLine: AppStackLine = {
    filename: position.source,
    lineNumber: position.line,
    columnNumber: position.column,
    sourceCodeContext: [],
  };

  // split the source code file into an array of lines
  const lines = sourceCode.split("\n");

  // get the index of the line that showed up in the stack trace
  const targetLineIndex = position.line - 1;

  // calculate the range of lines to include
  const startLine = Math.max(0, targetLineIndex - linesAround);
  const endLine = Math.min(lines.length - 1, targetLineIndex + linesAround);

  // get the source code lines
  for (let i = startLine; i <= endLine; i++) {
    appStackLine.sourceCodeContext.push({
      lineNumber: i + 1,
      code: lines[i],
    });
  }

  return appStackLine;
}

Let's incorporate that into our handleError function.

// entry.server.ts
// ...
import { type NullableMappedPosition, SourceMapConsumer } from "source-map";

export const handleError: HandleErrorFunction = async (error, args) => {
  const { request, context, params } = args;
  // React Router may abort some interrupted requests, don't log those
  if (request.signal.aborted) return;

  let message = "Unknown Error.";
  const processedStack: Array<string> = [];
  const appStackLines: Array<AppStackLine> = [];

  if (error instanceof Error) {
    const lines = error.stack?.split("\n") ?? [];
    message = lines[0]; // Error message line
    const stack = lines.slice(1); // Actual stack lines

    for (const line of stack) {
      // parse the file path, line number, and column number from the line
      const match = line.match(/at .+ \((.+):(\d+):(\d+)\)/);
      if (!match) {
        processedStack.push(line);
        continue;
      }
      const [_, filePath, lineNum, colNum] = match;

      // only process our own built files, not vendor files
      if (!filePath.includes("/build/server/")) {
        processedStack.push(line);
        continue;
      }

      // get the source map contents
      const sourceMapFile = `${filePath}.map`.replace("file:", "");
      const sourceMap = fs.readFileSync(sourceMapFile).toString();

      // use the source map
      await SourceMapConsumer.with(sourceMap, null, async (consumer) => {
        // get the source code position
        const position = consumer.originalPositionFor({
          line: parseInt(lineNum),
          column: parseInt(colNum),
        });

        // if the source cannot be found, add the original line
        if (!position.source) {
          processedStack.push(line);
          return;
        }

        processedStack.push(
          `    at ${position.source}:${position.line}:${position.column}`,
        );

        const appStackLine = getAppStackLine({
          consumer,
          position,
          linesAround: 4,
        });
        if (appStackLine) {
          appStackLines.push(appStackLine);
        }
      });
    }
  }

  // ...

};

Now if we log the JSON-stringified appStackLines, we can see the relevant source code.

[
    {
        "filename": "../../../app/data.ts",
        "lineNumber": 82,
        "columnNumber": 10,
        "sourceCodeContext": [
            {
                "lineNumber": 78,
                "code": "  return contact;"
            },
            {
                "lineNumber": 79,
                "code": "}"
            },
            {
                "lineNumber": 80,
                "code": ""
            },
            {
                "lineNumber": 81,
                "code": "export async function getContact(id: string) {"
            },
            {
                "lineNumber": 82,
                "code": "  id = id.uppercase();"
            },
            {
                "lineNumber": 83,
                "code": "  return fakeContacts.get(id);"
            },
            {
                "lineNumber": 84,
                "code": "}"
            },
            {
                "lineNumber": 85,
                "code": ""
            },
            {
                "lineNumber": 86,
                "code": "export async function updateContact(id: string, updates: ContactMutation) {"
            }
        ]
    },
    {
        "filename": "../../../app/routes/contact.tsx",
        "lineNumber": 7,
        "columnNumber": 24,
        "sourceCodeContext": [
            {
                "lineNumber": 3,
                "code": ""
            },
            {
                "lineNumber": 4,
                "code": "import { type ContactRecord, getContact, updateContact } from \"../data\";"
            },
            {
                "lineNumber": 5,
                "code": ""
            },
            {
                "lineNumber": 6,
                "code": "export async function loader({ params }: Route.LoaderArgs) {"
            },
            {
                "lineNumber": 7,
                "code": "  const contact = await getContact(params.contactId);"
            },
            {
                "lineNumber": 8,
                "code": "  if (!contact) {"
            },
            {
                "lineNumber": 9,
                "code": "    throw new Response(\"Not Found\", { status: 404 });"
            },
            {
                "lineNumber": 10,
                "code": "  }"
            },
            {
                "lineNumber": 11,
                "code": "  return { contact };"
            }
        ]
    }
]

All that's left to do is put it into an email (or Slack, Telegram, etc.). Here's a very crude HTML email we can throw together with the payload and appStackLines we've built up.

// entry.server.tsx

// ...

  // start building up html snippets. the first being the error message
  const snippets = [`<pre><code>${payload.error.message}</code></pre>`];

  const commentColor = "#ababab";

  // for every app stack line...
  for (const appStackLine of appStackLines) {

    // create a code block  
    let snippet = `<pre><code>`;

    // that starts with the file:lineNumber:columnNumber
    snippet += `<span style="color: ${commentColor};">// ${appStackLine.filename}:${appStackLine.lineNumber}:${appStackLine.columnNumber}</span>\n`;

    // then, for every source code line...
    for (const line of appStackLine.sourceCodeContext) {
      // determine if we should highlight it based on the line
      // number matching the stack trace line number
      const color =
        line.lineNumber === appStackLine.lineNumber ? "red" : "black";
      // append the source code line itself
      const lineNumberSpan = `<span style="color: ${commentColor};">${line.lineNumber}</span>`;
      snippet += `<span style="color: ${color};">${lineNumberSpan}  ${line.code}</span>\n`;
    }
    snippet += `</code></pre>`;

    snippets.push(snippet);
  }
  // include the payload object we put together earlier
  snippets.push(`<pre><code>${JSON.stringify(payload, null, 2)}</code></pre>`);

  sendMail({
    from: process.env.MAILER_SEND_ERRORS_FROM,
    to: process.env.MAILER_SEND_ERRORS_TO,
    subject: "App - ERROR",
    html: snippets.join("<hr />"),
  }).catch(console.error);

// ...

Bonus: set up a dummy SMTP server

If you're not afraid of Docker, run this to spin up a local dummy SMTP server that will catch every outgoing email instead of actually sending them to their recipients.

docker run --rm -it -p 5000:80 -p 2525:25 rnwood/smtp4dev

Run npm install dotenv and then add the following line to your server/app.ts file.

// server/app.ts
// ...
import "dotenv/config";

// ...

Then create a .env file in the root of the project with the following contents. Don't forget to add this to your .gitignore file.

MAILER_SEND_ERRORS_FROM="[email protected]"
MAILER_SEND_ERRORS_TO="[email protected]"
MAILER_HOST="localhost"
MAILER_PORT=2525
MAILER_USER="user"
MAILER_PASSWORD="password"
MAILER_SECURE="false"
MAILER_IGNORE_TLS="true"

This will spin up an inbox at http://localhost:5000 where you can see all caught emails. Super handy for local development when emails are involved.

Now if you npm run build and npm start the "production" app and go to the route that triggers the error, you'll get sent this email.

We've got our actual code and the lines that are in the stack trace in red. We're now instantly able to tell where errors are coming from. πŸŽ‰

Disclaimer again

This code is not optimized. For every line in our stack trace that comes from our source code, we're rereading the source map file and reinitializing a new source map consumer. I'll leave optimization up to you and Claude. πŸ€–
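If you do want to optimize, an easy first step is caching the raw source map contents so repeated stack lines don't reread the same file. A minimal sketch (caching just the file reads; you could go further and cache the consumers too):

```typescript
import fs from "node:fs";

// Cache raw source map contents by file path so repeated stack lines
// pointing at the same built file only hit the disk once.
const sourceMapCache = new Map<string, string>();

function readSourceMap(path: string): string {
  let contents = sourceMapCache.get(path);
  if (contents === undefined) {
    contents = fs.readFileSync(path, "utf8");
    sourceMapCache.set(path, contents);
  }
  return contents;
}
```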

I can't use fs, halp!

Some platforms, like Cloudflare Workers, don't provide fs support, so you wouldn't be able to read the contents of the source map file.

What you can do instead is make uploading the source maps somewhere your app can retrieve them part of your deployment process. For example, you could upload them to an R2/S3 bucket and retrieve them from there instead of from the filesystem.
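Sketching that idea, the fs.readFileSync call from earlier could be swapped for an HTTP fetch. The bucket URL below is a placeholder for wherever your deploy uploads the maps:

```typescript
// Hypothetical: retrieve source maps from a bucket instead of the filesystem.
// The base URL is a placeholder for wherever your deploy uploads them.
function sourceMapUrl(baseUrl: string, builtFileName: string): string {
  return new URL(`${builtFileName}.map`, baseUrl).toString();
}

async function fetchSourceMap(baseUrl: string, builtFileName: string): Promise<string> {
  const res = await fetch(sourceMapUrl(baseUrl, builtFileName));
  if (!res.ok) throw new Error(`Source map not found: ${builtFileName}`);
  return res.text();
}
```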

Standalone error reporting app

You could also go as far as standing up a dedicated error reporting app (like Sentry) where, instead of doing all this processing directly in handleError, you POST the error message and stack trace to a separate app where all the logic we've implemented here would live. This would be ideal if you're running a lot of apps and either prefer not to implement the same thing over and over again or don't want your apps to be burdened with handling this.

Fix bugs faster

I hope you've learned something and you see that what we've covered can be applied elsewhere and isn't too React Router specific. Go forth, create an ErrorBoundary that says you've been notified about the error and actually mean it now. πŸ‘


You can find the full source code here. If you want to run it, just be sure to copy the example.env to .env in the root of the project.


Let me know if you've learned something on X at @dadamssg. Would love to know if this was helpful.

Also be sure to sign up below to be notified of any new stuff I publish. ✌️

Debouncing in React Router v7

While hanging out in the Remix discord, I happened to see Bryan Ross (@rossipedia on discord and @rossipedia.com on bsky) share the technique I'm going to cover below. All credit for this concept should go to him. Clever guy, that one!

Prefer video? Check out the video version of this on YouTube.

Sometimes you have a user interaction that can trigger multiple requests in a very fast sequence, and you only care about the latest one. For example, you might have a table of records and want to provide an input for the user to filter the data, but that filtering needs to occur in the backend. Every keystroke would initiate new requests to fetch the filtered data.

export async function loader({request}: Route.LoaderArgs) {
    const url = new URL(request.url)
    const q = url.searchParams.get('q')
    return {
        contacts: await fetchContacts(q)
    }
}

function Contacts({loaderData}: Route.ComponentProps) {
    const submit = useSubmit()
    return (
        <div>
            <Form onChange={e => submit(e.currentTarget)}>
                <input name="q" />
            </Form>
            <ContactsTable data={loaderData.contacts} />
        </div>
    )
}

You would see something like this as the user typed "Dallas".

GET ...?q=D
GET ...?q=Da
GET ...?q=Dal
GET ...?q=Dall
GET ...?q=Dalla
GET ...?q=Dallas

React Router will cancel all but the latest in-flight requests so you don't have to manage any race conditions yourself, which is awesome. Buuuut, this is kind of only half the story. Those cancelled requests are only cancelled from the frontend's perspective. They were still sent and will still trigger your loader (or action, depending on the form method), which is a waste of your backend resources and could unnecessarily stress your system.

It would be much better to introduce a slight delay so we can wait for the user to stop typing and then initiate the request to fetch new data. This delaying before doing something is known as "debouncing".
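For reference, a classic standalone debounce helper looks something like this; it's here just to pin down the term, since the clientLoader approach this post lands on won't actually need it:

```typescript
// A classic debounce: delays calling fn until ms milliseconds have
// passed without another call. Each new call resets the timer.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  ms: number,
): (...args: Args) => void {
  let timeoutId: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => fn(...args), ms);
  };
}
```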

useEffect?

You might be tempted to maintain, and debounce, a state value and reach for useEffect to initiate requests.

function Contacts({loaderData}: Route.ComponentProps) {
  const [filter, setFilter] = useState<string>();
  const debouncedFilter = useDebounce(filter, 500);
  const submit = useSubmit()

  useEffect(() => {
    if (typeof debouncedFilter === 'string') {
      submit({q: debouncedFilter})
    }
  }, [submit, debouncedFilter])

  return (
    <div>
      <Form>
        <input name="q" onChange={(e) => setFilter(e.target.value)} />
      </Form>
      <ContactsTable data={loaderData} />
    </div>
  );
}

We've introduced four hooks, one of them being the dreaded useEffect for data fetching, which is a React cardinal sin. I fear I've just sent a shiver up David Khourshid's spine by merely mentioning this foul possibility. There must be a better way!

clientLoader to the rescue

With React Router, we can take advantage of the fact that it uses the web platform with AbortSignals and Request objects. We can do that in the clientLoader and clientAction functions.

If defined, these will be called in the browser instead of your loader and action functions. They can act as a gate where you determine whether the request should make it through to their server counterparts.

Let's introduce a clientLoader in our code.

// this runs on the server
export async function loader({request}: Route.LoaderArgs) {
    const url = new URL(request.url)
    const q = url.searchParams.get('q')
    return {
        contacts: await fetchContacts(q)
    }
}

// this runs in the browser
export async function clientLoader({
  request,
  serverLoader,
}: Route.ClientLoaderArgs) {

  // intercept the request here and conditionally call serverLoader()

  return await serverLoader(); // <-- this initiates the request to your loader()
}

function Contacts({loaderData}: Route.ComponentProps) {
    const submit = useSubmit()
    return (
        <div>
            <Form onChange={e => submit(e.currentTarget)}>
                <input name="q" />
            </Form>
            <ContactsTable data={loaderData.contacts} />
        </div>
    )
}

Now we just need some logic in the clientLoader to determine whether serverLoader should get called. Since React Router uses AbortSignals, we can write a new function that uses those. It's going to return a Promise that resolves after a specified amount of time unless we detect that the request was cancelled (the signal was aborted).

// ...

function requestNotCancelled(request: Request, ms: number) {
  const { signal } = request
  return new Promise((resolve, reject) => {
    // If the signal is aborted by the time it reaches this, reject
    if (signal.aborted) {
      reject(signal.reason);
      return;
    }

    // Schedule the resolve function to be called after
    // the specified number of milliseconds
    const timeoutId = setTimeout(resolve, ms);

    // Listen for the abort event. If it fires, reject
    signal.addEventListener(
      "abort",
      () => {
        clearTimeout(timeoutId);
        reject(signal.reason);
      },
      { once: true },
    );
  });
}

export async function clientLoader({
  request,
  serverLoader,
}: Route.ClientLoaderArgs) {
  await requestNotCancelled(request, 400);
  // If we make it past here, the promise from requestNotCancelled
  // wasn't rejected. It's been 400 ms and the request has
  // not been aborted. Proceed to initiate a request
  // to the loader.
  return await serverLoader();
}

// ...

We're now debouncing the requests to the loader based on the request signal and we don't have to muddy up our UI component with a bunch of event listeners and hooks. πŸŽ‰

Bryan's implementation had a different name and a slightly different API.

export async function clientLoader({
  request,
  serverLoader,
}: Route.ClientLoaderArgs) {
  await abortableTimeout(400, request.signal);
  return await serverLoader();
}

You get the idea though so you can run with the concept however you like.
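For reference, a minimal abortableTimeout could be implemented much like requestNotCancelled above, just taking the signal directly. This is my guess at the shape based on the usage shown, not Bryan's actual code:

```typescript
// Hypothetical sketch of abortableTimeout -- same idea as
// requestNotCancelled, but it accepts the AbortSignal directly.
function abortableTimeout(ms: number, signal: AbortSignal): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    // Bail out immediately if the signal is already aborted
    if (signal.aborted) {
      reject(signal.reason);
      return;
    }

    const timeoutId = setTimeout(resolve, ms);

    // If the request gets aborted, cancel the timer and reject
    signal.addEventListener(
      "abort",
      () => {
        clearTimeout(timeoutId);
        reject(signal.reason);
      },
      { once: true },
    );
  });
}
```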


If you like this type of stuff, you can drop your email in the box below and I'll shoot you an email whenever I drop more posts and videos. ✌️

Sending logs to the browser from actions and loaders

One of the inherent benefits of Single Page Apps (SPAs) that I took for granted was the logging that comes out of the box via the Network tab in the dev tools. Every API request happens in the browser and you can easily inspect each one, which is super handy.

Once you move to something like Remix/React Router 7, you become painfully aware that you've lost access to that, since most of your API requests now happen on the server in actions and loaders. This is a Major Bummer 🫠 when coming from the SPA world. Instead, you have to manually log those requests in your loaders and actions and then inspect them wherever your logs go during development: commonly your terminal or a file.

This kinda stinks for a couple of reasons:

  • Even though I can console.log in my backend code and UI code in the same file, I have to look at the output in two different places.
  • The backend logs are strings, which makes it really hard to inspect any data being logged.

These things make me sad. Let's change that.

Prefer video? Check out the video version of this on YouTube.

The strategy

When I'm developing my app, I want both the backend and any frontend logs to be in the same place. A place I can monitor as I poke around in my app. Naturally the browser's console is the ideal destination.

We need a way to send logs from the backend to the browser in realtime. Enter Server-Sent Events (SSE), a close cousin of WebSockets. While WebSockets support both sending and receiving messages, server-sent events only support pushing messages from your backend to the browser, which is exactly what we need.
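If you haven't seen SSE before, the wire format is refreshingly simple: each message is plain text with optional `event:` and `data:` fields, terminated by a blank line. A tiny helper to build one (purely illustrative, we'll let a library handle this later) might look like:

```typescript
// Illustrative helper: formats a single SSE message the way it
// travels over the wire -- an event name, a data line, and a
// blank line terminating the message.
function formatSseMessage(event: string, data: string): string {
  return `event: ${event}\ndata: ${data}\n\n`;
}

// formatSseMessage("log", '{"level":"info"}')
// produces: 'event: log\ndata: {"level":"info"}\n\n'
```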

Backend setup

We need a few pieces in place. First, we'll create a singleton store so the services we create persist across hot module reloads and we're always guaranteed to get the same instances everywhere we ask for them.

// app/service/singleton.server.ts

// yoinked from https://github.com/jenseng/abuse-the-platform/blob/main/app/utils/singleton.ts

export function singleton<T>(name: string, valueFactory: () => T): T {
  const yolo = global as any;
  yolo.__singletons ??= {};
  yolo.__singletons[name] ??= valueFactory();
  return yolo.__singletons[name];
}
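To see why this matters: calling singleton twice with the same name gives back the exact same instance, because the factory only runs the first time. Here's a self-contained demo (the "demo-store" name is just for illustration):

```typescript
// Copy of the singleton helper plus a quick demonstration.
function singleton<T>(name: string, valueFactory: () => T): T {
  const yolo = global as any;
  yolo.__singletons ??= {};
  yolo.__singletons[name] ??= valueFactory();
  return yolo.__singletons[name];
}

// The factory only runs once per name -- both calls below
// return the exact same object.
const a = singleton("demo-store", () => ({ hits: 0 }));
const b = singleton("demo-store", () => ({ hits: 0 }));
a.hits++;
// b.hits is now 1 because a and b are the same instance
```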

Next, we'll create an instance of a Node EventEmitter. We'll attach an event listener to this and use it in our SSE (server-sent events) connection to send messages to the browser.

// app/service/emitter.server.ts
import { EventEmitter } from "node:events";
import { singleton } from "~/service/singleton.server";

export let emitter = singleton("emitter", () => new EventEmitter());

And now we'll use the popular winston logger package, which lets you direct logs to all sorts of places: console, file, HTTP endpoint, ...event emitter? We'll need to implement a custom "transport" for that.

// app/service/logger.server.ts
import winston, { Logger } from "winston";
import Transport from "winston-transport";
import { singleton } from "~/service/singleton.server";
import { emitter } from "~/service/emitter.server";

class EmitterTransport extends Transport {
  log(info: any, callback: () => void) {
    setImmediate(() => {
      this.emit("logged", info);
    });

    let eventName = "log";

    emitter.emit(eventName, JSON.stringify(info));

    callback();
  }
}

export let logger: Logger = singleton("logger", () => {
  const instance = winston.createLogger({
    level: process.env.NODE_ENV !== "production" ? "debug" : "info",
    transports: [new winston.transports.Console()]
  });

  if (process.env.NODE_ENV !== "production") {
    instance.add(new EmitterTransport());
  }
  return instance;
});

With this setup, any logged messages will go to the terminal and be emitted as JSON stringified messages during development. Notice the "log" event name as the first argument to emitter.emit(). This event name is arbitrary but it will need to match up to the event name in our SSE connection.

Side note: winston is completely optional. You could just call emitter.emit() directly in your actions and loaders. Leveraging a logging library just makes a lot of sense here because that's exactly what we're doing: logging!
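To make that side note concrete, here's a self-contained sketch of emitting directly, no winston involved. An inline EventEmitter stands in for the shared one from emitter.server.ts, and the subscriber plays the role of our SSE route:

```typescript
import { EventEmitter } from "node:events";

// Stand-in for the shared emitter from emitter.server.ts
const emitter = new EventEmitter();

let received: string | undefined;
// A subscriber (like our SSE route's loader) gets the raw string
emitter.on("log", (data: string) => {
  received = data;
});

// Skipping winston: emit a JSON string straight from a loader/action
emitter.emit(
  "log",
  JSON.stringify({ level: "info", message: "contacts loaded" }),
);
```

EventEmitter delivers synchronously, so by the time emit() returns, every listener has already seen the message.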

We'll implement the connection now by leveraging Sergio Xalambrí's awesome remix-utils package, which makes creating an SSE connection a breeze. We're going to do this in a /logs route.

// app/routes/logs.ts
import { eventStream } from "remix-utils/sse/server";
import { emitter } from "~/service/emitter.server";
import type { Route } from "./+types/logs";

export async function loader({ request }: Route.LoaderArgs) {
  if (process.env.NODE_ENV !== "development") {
    throw new Error("SSE logging is only for development, silly!");
  }
  let eventName = "log"; // <- this must match emitter.emit(eventName, ...)

  return eventStream(
    request.signal,
    (send) => {
      let handle = (data: string) => {
        send({ event: eventName, data });
      };

      emitter.on(eventName, handle);

      return () => {
        emitter.off(eventName, handle);
      };
    },
    {
      // Tip: You need this if using Nginx as a reverse proxy
      headers: { "X-Accel-Buffering": "no" },
    },
  );
}

(You don't even have to install the entire remix-utils package if you don't want to. You could copy/paste the single file into your repo and use it that way.)

When the browser connects to this via the EventSource object, it keeps the connection alive so any log event will be sent over any open connections.

The last step in the backend is to actually make use of the logger which could look something like this.

// app/routes/contact.tsx
import { hrtime } from 'node:process'
import { logger } from '~/service/logger.server'
import type { Route } from './+types/route'

export async function loader({ params }: Route.LoaderArgs) {
  let url = `https://address-book.com/api/contacts/${params.id}` // <- not real
  let [res, ms] = await time(async () => {
    const response = await fetch(url)
    return {
      status: response.status,
      data: await response.json()
    }
  })
  logger.info('fetch contact', {
    method: 'GET',
    url,
    status: res.status,
    response: res.data,
    ms
  })
  return res.data
}

// simple utility for tracking how long a function takes
async function time<T>(cb: () => T | Promise<T>) {
  let start = hrtime.bigint();
  let value = await cb();
  let end = hrtime.bigint();
  let ms = Number(end - start) / 1_000_000;
  return [value, ms] as const;
}
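If you want to sanity check time() on its own, here's a self-contained copy with a usage example (the artificial delay is just a stand-in for real work like a fetch):

```typescript
import { hrtime } from "node:process";

// Self-contained copy of the time() utility for a quick demo,
// with the callback typed to allow sync or async work.
async function time<T>(cb: () => T | Promise<T>) {
  const start = hrtime.bigint();
  const value = await cb();
  const end = hrtime.bigint();
  const ms = Number(end - start) / 1_000_000;
  return [value, ms] as const;
}

// Example: time an artificial ~20ms delay
async function demo() {
  const [value, ms] = await time(async () => {
    await new Promise((resolve) => setTimeout(resolve, 20));
    return "done";
  });
  return { value, ms }; // value is "done", ms is roughly 20
}
```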

Frontend setup

Now let's turn our attention to the frontend. It's finally time to establish our SSE connection in our root route.

// app/root.tsx

// ...

export async function loader() {
  return {
    sseLogging: process.env.NODE_ENV === "development",
  };
}

export default function App({ loaderData }: Route.ComponentProps) {
  let { sseLogging } = loaderData;
  useEffect(() => {
    // only establish a connection if sseLogging is turned on
    if (!sseLogging) return;
    const source = new EventSource("/logs");
    const handler = (e: MessageEvent) => {
      try {
        // attempt to parse the incoming message as json
        console.log(JSON.parse(e.data));
      } catch (err) {
        // otherwise log it as is
        console.log(e.data);
      }
    };
    let eventName = "log" // <- must match the event name we use in the backend
    source.addEventListener(eventName, handler);
    return () => {
      source.removeEventListener(eventName, handler);
      source.close();
    };
  }, [sseLogging]);

  return <Outlet />;
}

// ...

We're establishing the connection in a useEffect, listening for logs, and attempting to parse each one as JSON before logging it; otherwise we log the message string as is.

Why not useEventSource???

"Whoa, whoa, whoa... you're using eventStream from remix-utils and that package comes with useEventSource. Why the heck aren't you using that???"

Great question! I tried. useEventSource stores the latest message in a useState and it's up to you to "listen" to it in a useEffect if you're not just rendering the latest message. Here's what it would look like.

let message = useEventSource("/logs", { event: "log" });

useEffect(() => {
  if (!message) return
  try {
    console.log(JSON.parse(message));
  } catch (err) {
    console.log(message);
  }
}, [message])

The problem I ran into is that because the message is stored in useState, messages are bound to React's rendering and update lifecycle. If I logged multiple messages back to back in a loader, some of them would get swallowed because (I believe) React batches the state updates to be efficient for rendering. That's just not going to work for this use-case.

Success!

In any case, we can now do a victory dance! 🕺 If we have the app loaded and navigate to a route that has a loader where we're doing some logging, we should see those logs in the browser console. 😎


If you completely reload the browser tab, you'll see the log message show up but immediately get wiped: the loader for the next request runs and sends the SSE message over the current connection, but then Chrome clears your console. You can somewhat get around this by clicking the ⚙️ icon in the console window and selecting Preserve log to retain logs between tab refreshes.



I say "somewhat" because any objects logged will show up as "Object" in the console, and while it looks like you can expand them, that memory is gone. It's like a ghost object just there to tease you. 👻

Going try-hard in Chromium-based browsers

This is already incredibly useful, but we can do better if we're using a Chromium-based browser. The console allows you to add custom object formatters. If you click the ⚙️ icon in the dev tools window (the other cog icon in the screenshot above), you can toggle this feature on.

A custom formatter is an instance of a class that has up to three methods: header(obj), hasBody(obj), and body(obj). Each object can have a "header" and then optionally have the ability to be expanded. Think about when you log an object with several properties. The console doesn't automatically expand the object. It shows you the "header", a truncated representation of the object, and then you have the ability to expand its "body" to see all of the properties.

Let's write our own formatter for the API requests/responses we make in our actions and loaders. We need a way to mark which logs our custom formatter should apply to, so let's introduce a __type property set to "api". This is a completely arbitrary way for us to identify these log objects in our formatter.

  logger.info('create todo', {
    __type: 'api',
    method: 'POST',
    url,
    status: res.status,
    payload,
    response: res.data,
    ms
  })

And now let's look for that when implementing a custom formatter.

// app/dev/ApiObjectFormatter.js

class ApiObjectFormatter {
  header(obj) {
    if (obj.__type !== "api") {
      return null;
    }
    const method = obj.method.toUpperCase();
    const methodColor = {
      POST: "blue",
      PUT: "orange",
      PATCH: "salmon",
      GET: "green",
      DELETE: "red",
    }[method];
    const status = obj.status;
    const isOkay = status >= 200 && status < 300;
    const color = isOkay ? "green" : "red";
    return [
      "div",
      {},
      [
        "span",
        {
          style: `color: ${methodColor}; font-weight: bold;`,
        },
        `[${obj.method.toUpperCase()}]`,
      ],
      ["span", {}, ` ${obj.url}`],
      ["span", { style: `color: ${color};` }, ` [${obj.status}]`],
      ["span", { style: `color: slategrey;` }, ` (${obj.ms} ms)`],
    ];
  }

  hasBody(obj) {
    if (obj.__type !== "api") {
      return null;
    }
    return obj.response || obj.payload || obj.message;
  }

  body(obj) {
    const responseRef = ["object", { object: obj.response }];
    const requestRef = ["object", { object: obj.payload }];

    return [
      "div",
      { style: "padding-left: 20px; padding-top: 5px;" },
      obj.message
        ? [
            "div",
            {},
            ["span", { style: "font-weight: bold;" }, "Message: "],
            ["span", {}, obj.message],
          ]
        : null,
      obj.payload
        ? [
            "div",
            {},
            ["span", { style: "font-weight: bold;" }, "Payload: "],
            ["span", {}, requestRef],
          ]
        : null,
      obj.response
        ? [
            "div",
            {},
            ["span", { style: "font-weight: bold;" }, "Response: "],
            ["span", {}, responseRef],
          ]
        : null,
    ].filter(Boolean);
  }
}

window.devtoolsFormatters = window.devtoolsFormatters || [];
window.devtoolsFormatters.push(new ApiObjectFormatter());

Kinda funky, eh? It reminds me of what React looks like without JSX. You define a hierarchy of elements, where each element is represented by an array: the first item is the HTML tag, the second is an object of attributes (like style), and the rest are the children (which can be more elements or object references).
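To make the array format click, here's a toy renderer showing roughly how DevTools interprets those arrays. DevTools does this for us, so this is purely illustrative:

```typescript
// JsonML-style element: [tagName, attributes, ...children]
type JsonML = [string, Record<string, string>, ...(JsonML | string)[]];

// Toy renderer, just to show how the arrays map to HTML.
// (DevTools handles the real rendering; this is illustrative.)
function renderJsonML(el: JsonML | string): string {
  if (typeof el === "string") return el;
  const [tag, attrs, ...children] = el;
  const style = attrs.style ? ` style="${attrs.style}"` : "";
  return `<${tag}${style}>${children.map(renderJsonML).join("")}</${tag}>`;
}

const header: JsonML = [
  "span",
  { style: "color: green; font-weight: bold;" },
  "[GET]",
];
// renderJsonML(header) ->
// '<span style="color: green; font-weight: bold;">[GET]</span>'
```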

Next, we need a way to actually have this run in the browser. Let's revisit our root route.


// app/root.tsx
import ApiObjectFormatter from "~/dev/ApiObjectFormatter.js?raw";

export async function loader() {
  return {
    sseLogging: process.env.NODE_ENV === "development",
    customObjectFormatters:
      process.env.NODE_ENV === "development" ? [ApiObjectFormatter] : [],
  };
}

export default function App({ loaderData }: Route.ComponentProps) {
  const { sseLogging, customObjectFormatters } = loaderData;
  // ...
  return (
    <>
      {customObjectFormatters.length > 0 ? (
        <script
          dangerouslySetInnerHTML={{
            __html: customObjectFormatters.join("\n"),
          }}
        />
      ) : null}
      <Outlet />
    </>
  );
}

You might have noticed that we wrote the ApiObjectFormatter in JavaScript, not TypeScript. This is so we can use Vite's ?raw suffix on the import and get the file as a string. Once we have it as a string, we can conditionally send it to the browser based on whatever we decide in our loader. We'll never bloat our client bundles with this console formatting code. We also don't have to go through the hassle of packaging it into a Chrome extension to distribute to our team. It can be completely custom to your app's needs, live within the repo, and just work out of the box (if they've got custom formatters turned on). 🔥

So what's it look like? Check it out.

Pretty dang slick. As we click around in our app we can see what's happening behind the scenes right where it's most convenient. We have the most important bits in the header and can expand to see the body which includes the request payload and json response.

You might have also noticed that customObjectFormatters is an array. You can go nuts here adding custom formatters that:

  • format messages differently for the various log levels (info, warn, error, etc)
  • format database queries (results and timing)
  • format domain events being fired. Ex. {event: 'user-registered'}

The sky's the limit.

console.log() stigma

The console has some pretty handy utilities that I think a lot of people aren't aware of, and I think that's partly due to the stigma around even making use of console.log() on the frontend. These days everyone has ESLint rules that complain about console logging to stop you from accidentally shipping those calls. I get that. But I also think there's a lot of missed opportunity for cool stuff to be explored, especially if you're triggering logs from your backend in a controlled manner during development.

What about production?

What we did here is meant for development. Theoretically, you could use this idea in production but you would definitely need to make some changes:

  • Enable the /logs endpoint in production and only allow specific users (admins)
  • Subscribe to an eventName that is tied to the current user rather than a super generic log event name. Ex. log.user.123
  • Return that eventName from the root loader to be used in the EventSource listener
  • Include the user id in all of your logs
  • Update the EmitterTransport to use the user-specific eventName (log.user.123) based on the user data that comes with the logs

There's a lot that could go wrong here, so I'm not recommending this but I did want to address the idea before anyone puts this in prod without thinking too much about it.
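To make the per-user event idea above concrete, the derivation could be as small as this (the userLogEvent name is mine, purely illustrative):

```typescript
// Hypothetical: derive a per-user SSE event name so each admin
// only receives their own logs, in the "log.user.123" style.
function userLogEvent(userId: string | number): string {
  return `log.user.${userId}`;
}

// The SSE route would subscribe with this name, the transport
// would emit with it, and the root loader would send it down
// for the EventSource listener to use.
// userLogEvent(123) -> "log.user.123"
```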

Repo

Here's the repo where you can see all of this wired up. dadamssg/address-book#sse-logging


Alright, hope you found this useful and got your wheels turning. If you like this type of stuff, you can drop your email in the box below and I'll send you a message whenever I write more posts and publish more videos. ✌️