Docker app hosting

Like a ton of other people, I was recently inspired by listening to Lex Fridman interview Pieter Levels. Hearing him speak about the current state of development made me want to simplify.

I was trying out fly.io for an app I was working on, but it was yet another platform to learn, it will probably fall out of fashion, and it was kinda pricey for what I wanted to use it for. I have already messed around with:

  • heroku
  • netlify
  • vercel
  • others

And those are great for high availability and horizontal scaling, but... I don't have those problems, and those platforms can get expensive quickly compared to running your apps on your own VPS. Granted, I'm familiar with servers and docker. If you're not, then by all means use those services. They're great for that. However, since I have that experience and use a wide range of technologies for different apps, docker on my VPS makes a lot of sense to me. It also helps that it's relatively easy to find Dockerfiles for various frameworks.

Why Docker?

I prefer not to mess around with server packages and updates if I don't have to. Doing that stuff in production scares me. Using docker means I can easily run apps that need varying versions of php, node, python, etc., and I don't have to worry about getting all those pieces in place directly on my server. If I just used php and no framework like Pieter then I probably wouldn't bother with docker.

Let's go

I just had to come up with a github actions workflow to deploy to my server. Here's what I came up with. This does require some server setup (below).

The workflow deletes all but the last three images stored on github and on my server so I'm not paying for those on github and not chewing through disk space on my server.
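A minimal sketch of what that workflow can look like (assuming ghcr.io for image storage, the appleboy/ssh-action, and that the server's github user has already run docker login ghcr.io; yourappsname is a placeholder):

# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4

      # build the image and push it to github's container registry
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}

      # keep only the last three images on github
      - uses: actions/delete-package-versions@v5
        with:
          package-name: yourappsname
          package-type: container
          min-versions-to-keep: 3

      # ssh in, pull the new image, restart the container, prune old images
      - uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USERNAME }}
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: |
            docker pull ghcr.io/${{ github.repository }}:${{ github.sha }}
            docker stop yourappsname || true
            docker rm yourappsname || true
            docker run -d --name yourappsname \
              --restart unless-stopped \
              --env-file /home/github/yourappsname.env \
              -p 3000:3000 \
              ghcr.io/${{ github.repository }}:${{ github.sha }}
            # keep only the last three images on the server
            docker images ghcr.io/${{ github.repository }} -q | tail -n +4 | xargs -r docker rmi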

Server setup

First, create a github user account with docker permissions on your server:

sudo useradd -m github
sudo usermod -aG docker github

And create an ssh key pair for the github user:

# change to github user
sudo su - github

# generate ssh key pair
ssh-keygen -t rsa -b 4096 -C "[email protected]"

Next, copy the contents of /home/github/.ssh/id_rsa.pub into /home/github/.ssh/authorized_keys and fix the permissions if necessary:

chmod 600 /home/github/.ssh/authorized_keys

Next, create the required repository secrets here:

https://github.com/<your-username>/<repository-name>/settings/secrets/actions

The following secrets should be created:

SERVER_HOST=<your server's hostname or ip>
SERVER_SSH_KEY=<the contents of the private key: /home/github/.ssh/id_rsa>
SERVER_USERNAME=github

The deploy script also assumes your container needs env vars set and that those are defined in a /home/github/yourappsname.env file. It also assumes the container uses port 3000 and forwards it to the same port on the host.

You can then add nginx in front of it if you've configured a domain's DNS to point to your server. Create an A record with the domain as the name and your server's ip as the value. Ex: yourappsname.com.

# /etc/nginx/conf.d/yourappsname.conf
server {
    listen 80;
    server_name yourappsname.com www.yourappsname.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Reload nginx:

nginx -s reload

And then use certbot to add an ssl certificate:

certbot --nginx -d yourappsname.com -d www.yourappsname.com

Result

Now I have a pretty good template to follow to deploy whatever apps I decide to build and not be charged a crazy amount by hosting providers. The only hosting I'm using is a fixed-price Hetzner VPS that I can resize if/when I need to.

Next steps

Postgres has been my database of choice but I think I want to simplify to just using sqlite and see how far that gets me. A lot of the things I want to build are hobby projects with potential to make money. I don't want to pay for fancy db hosting for these apps. There are some database providers that have free plans but I haven't come across any that offer free backups and that's something I do want. I plan to use docker volumes to store the sqlite database on my VPS host, start with cron-based backups, and move to litestream if I ever feel the need to go down that path.
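For reference, a cron-based backup can be as small as a single crontab entry (a sketch; the volume path, schedule, and retention are placeholders):

# /etc/cron.d/yourappsname-backup
# nightly 3am backup via sqlite's online .backup command, keeping the last 7
0 3 * * * github sqlite3 /var/lib/docker/volumes/yourappsname-data/_data/app.db ".backup '/home/github/backups/app-$(date +\%F).db'" && ls -1t /home/github/backups/app-*.db | tail -n +8 | xargs -r rm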

React router's data utilities are awkward in SPAs

I've been having some FOMO with Remix and Next.js lately. The company I work for was contracted to develop the frontend UI for an internal application of a large public business. I lead that development and have been in this position on this same app for years now. They wanted a single page app that they could host and serve statically. They have people on their side developing the apis that the frontend consumes. We started the app using react router v3 and have upgraded it to v6 now. We've also gone through an evolution of using redux/redux-thunk and now react-query for our data fetching.

I've recently been given the opportunity to start the UI development of a new application and although the requirements are the same in that it needs to be a SPA, I was excited to use react router's new data utilities: loaders, actions, and fetchers...and then I realized just how awkward these things are in a single page application.

In this post I'm going to describe what I find awkward and then describe the approach I've decided to use on this new application. I hope this helps other devs and comes across as constructive criticism and useful feedback for the Remix team. I have a ton of respect for those folks. I love react router and am very grateful for the work that's been put into it.

Route loaders

I dig route loaders. The idea that the router can fetch each active route's data in parallel to avoid waterfalls makes a lot of sense to me. You fetch the minimal amount of data for each route here. If certain data isn't immediately integral to a particular route, then you can go ahead and render it and fetch the other data after the route component is rendered. Beautiful.

Dominik (aka TkDodo) has an excellent post on how to incorporate react query and react router. I highly recommend giving that a read. queryClient.ensureQueryData() FTW!

Fetchers

This is where things start to get weird. Fetchers are used to make requests to apis without triggering navigations. For example, fetching options for autocomplete/combobox inputs would be an appropriate use case for a fetcher. It's similar to react-query's useQuery and useMutation hooks.

What I have found awkward is that you must go through the router to use these. React router wants to be able to invalidate these requests after successful action submission. That doesn't sound so bad but let's look at what's required to make a fetcher work.

Let's pretend we're implementing a combobox where a list of cities populate as options as the user types. At a minimum this is what our router could look like:
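(A sketch; NewUserPage, Combobox, and the /api/cities/search endpoint are stand-ins.)

import { createBrowserRouter, useFetcher } from 'react-router-dom'

const router = createBrowserRouter([
  { path: '/', element: <NewUserPage /> },
  {
    // "virtual" route that only exists so the fetcher has something to call
    path: '/cities',
    loader: async ({ request }) => {
      const contains = new URL(request.url).searchParams.get('contains') ?? ''
      // the actual api expects json, so we translate the search param here
      const res = await fetch('/api/cities/search', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ contains }),
      })
      return res.json()
    },
  },
])

// the combobox loads its options through the router, not from the api directly
function CityCombobox() {
  const fetcher = useFetcher()
  return (
    <Combobox
      options={fetcher.data ?? []}
      onInputChange={(value) =>
        fetcher.load(`/cities?contains=${encodeURIComponent(value)}`)
      }
    />
  )
}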

So what's awkward with this?

First off, we're defining a sort of "virtual" route. It doesn't really exist. You can't go to /cities in the browser and get that data because this is a single page app. This works as expected with Remix. You can go to /cities and get that data back. With react router you get an error about the route not returning an element. This makes sense. React router has no way to actually return a Response to the browser with the Content-Type: application/json header set.

Second, we can't just make requests directly from our route element to the endpoint that we need. We have to go through the fetcher api, which means using fetcher.load() or fetcher.submit(), which means we are beholden to either search params or FormData. I'd guess the vast majority of the apis people are consuming in react router apps use json or graphql. We have the mental overhead of conceptualizing two requests: one to the router and one to the api endpoint we actually care about. In the example above, we use a contains search param because that's the simplest way to pass the data. However, the api endpoint we're hitting expects json. We have to juggle this request through react router using different content types to make it work.

Third, if you're using react router's actions you must implement the /cities route's shouldRevalidate if you don't want the router to retrigger this after a successful action:
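(Something along these lines:)

{
  path: '/cities',
  loader: citiesLoader,
  // don't refetch city options after every successful action
  shouldRevalidate: () => false,
}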

Fourth, if you have some data you want to fetch that's not crucial at initial route render, the recommendation is to trigger it with useEffect:
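(Roughly, with /cities as the stand-in fetcher route again:)

const fetcher = useFetcher()

// kick off the non-critical fetch once the component has mounted
useEffect(() => {
  if (fetcher.state === 'idle' && !fetcher.data) {
    fetcher.load('/cities?contains=')
  }
}, [fetcher])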

This is less than ideal because it triggers the fetch twice when run in strict mode. When you're hacking on your app, you'll see two network requests due to this. Is it really a problem? The fetcher will appropriately handle the responses and give you the response from the second request, but I think this makes it harder to find legitimate bugs where your code could be errantly making duplicate requests. That said, it is a recommended approach seen in the reactjs docs.

You also have to reach for useEffect when fetcher.data lands to do things like focus an input or set some state based on that data.

There are just too many odd nuances and indirections with fetchers for me to want to use them.

defer

We might be encouraged to use the defer utility in the loader. The concept is pretty slick. It really lets you pull the lever up and down on the data you need right away vs a bit later. However, it comes with some rough edges as well.

If you're using typescript and were previously casting the loader data like this:
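(Something like this, where loader is the route's loader function:)

const data = useLoaderData() as Awaited<ReturnType<typeof loader>>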

That's now broken. defer returns an instance of DeferredData but in your component, what gets returned is an object that contains promises. You end up having to add some more indirection:
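(A sketch; City and CriticalData are stand-in types:)

// ReturnType<typeof loader> is now DeferredData, which doesn't carry the shape,
// so the type gets written out by hand
type CitiesLoaderData = {
  criticalData: CriticalData
  cities: Promise<City[]> // the deferred piece
}

const { criticalData, cities } = useLoaderData() as CitiesLoaderData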

Additionally, to use this promise you'll need to use the Await render prop component which, like fetchers, doesn't come with any onSuccess or onError callbacks to make use of.

The issues I've discussed here with defer and Await aren't unique to react router SPAs, but I thought them worth mentioning.

Actions

With react router and Remix, we're told to "use the platform". This translates to using search params and FormData and leaning into uncontrolled form inputs so you don't have to manage form state and it supports progressively enhanced forms...if you're using Remix. Not so much with react router since there is no server component to handle form submissions.

Are progressively enhanced forms good? Absolutely... in certain contexts. When I'm using just react router to build a private internal app for ~50 users all located in a single major city, progressively enhanced forms are just not relevant. If you're building apps for millions of users all over the world on all sorts of devices, that's a different story. I have never worked on one of those applications. I fully understand that using just react router eliminates progressively enhanced forms, and I don't want to be beholden to their implications in the apps I'm building. Thanks, but I'll opt out of the hard mode of adhering to progressive enhancement's restrictions without actually getting progressively enhanced forms.

So what's my beef with its "implications"? Just like using a fetcher to load data, you are forced to use "the platform"'s apis: URLSearchParams and FormData. This means you're encouraged again to go through an awkward layer, but this time it's html and FormData. I realize this sounds kinda ludicrous. Html is awkward? Yes, using it as the source of truth is. Hear me out.

If we're going to use the recommended approach, a form component could look something like this:
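(A sketch; the fields are illustrative and newUserAction would be wired up as the route's action.)

import { Form } from 'react-router-dom'

function NewUserForm() {
  return (
    <Form method="post">
      <label>
        Name
        <input type="text" name="name" />
      </label>
      <label>
        Is Admin
        <input type="checkbox" name="isAdmin" />
      </label>
      <label>
        Favorite Number
        <input type="number" name="favoriteNumbr" />
      </label>
      <button type="submit">Submit</button>
    </Form>
  )
}

async function newUserAction({ request }) {
  const formData = await request.formData()
  // everything comes out of FormData as a string (or null)
  const payload = {
    name: formData.get('name'),
    isAdmin: formData.get('isAdmin') === 'on',
    // (note: this silently misses the misspelled favoriteNumbr input)
    favoriteNumber: Number(formData.get('favoriteNumber')),
  }
  // ...send payload as json to the real api
}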

Even though my form elements are mimicking different data types (string, boolean, number), FormData doesn't care. Everything you formData.get() out is a string. We're responsible for coalescing that back into the appropriate types for our api endpoint.

Also notice that since we're modeling our data in markup we have no type safety there. Hopefully you, or your editor, will catch typos such as the fact that the Favorite Number input's name is misspelled.

This gets even more awkward if you're modeling objects and arrays.

The "easiest" way I've found to do this is to use a special input naming syntax, employees[1][name], and then use a package like qs to parse it out from request.text(), not request.formData().

The sweet spot for me

I find using react-query's useQuery and useMutation hooks where I'm "supposed" to reach for a fetcher much more straightforward and productive.

Replacing fetcher.load with useMutation

Here is a rewrite of the combobox example using useMutation. It might seem odd to use useMutation instead of useQuery here because we're fetching data and not actually "mutating" anything, but useMutation is a utility that we can call on demand, just like fetcher.load().
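(Same stand-in Combobox and /api/cities/search endpoint as before.)

import { useMutation } from '@tanstack/react-query'

function CityCombobox() {
  const citySearch = useMutation({
    mutationFn: async (contains) => {
      const res = await fetch('/api/cities/search', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ contains }),
      })
      return res.json()
    },
  })

  return (
    <Combobox
      options={citySearch.data ?? []}
      onInputChange={(value) => citySearch.mutate(value)}
    />
  )
}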

I think this is an improvement.

  • The router doesn't need to know about this request.
  • I don't have to bounce around my codebase from the router back to my component to make this work.
  • There's no virtual route that I have to route this through.
  • If I need to do something when the data lands, I can use the onSuccess callback from the query/mutation. There are no useEffects involved. My requests don't get duplicated in strict mode.

Replacing fetcher.submit and actions with useMutation

The primary reason json apis became so ubiquitous is that they're such an improvement over URLSearchParams and FormData for modeling data. We are not limited to Record<string, string>. I much prefer modeling my payloads with javascript/typescript, where it's far easier to debug and catch errors, instead of it being hidden in the html markup. When is the last time you wished an api supported application/x-www-form-urlencoded, btw?

Let's rewrite some code to use useMutation and model the data using typescript.
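(A sketch; /api/users and the fields are placeholders, and I'm using plain useState here instead of the form hook mentioned in the side note below.)

import { useState } from 'react'
import { useMutation } from '@tanstack/react-query'

type NewUser = {
  name: string
  isAdmin: boolean
  favoriteNumber: number
}

function NewUserForm() {
  // the source of truth is typed javascript state, not the markup
  const [values, setValues] = useState<NewUser>({
    name: '',
    isAdmin: false,
    favoriteNumber: 0,
  })

  const createUser = useMutation({
    mutationFn: (payload: NewUser) =>
      fetch('/api/users', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
      }).then((res) => res.json()),
  })

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault()
        createUser.mutate(values)
      }}
    >
      <label>
        Name
        <input
          type="text"
          value={values.name}
          onChange={(e) => setValues({ ...values, name: e.target.value })}
        />
      </label>
      <label>
        Is Admin
        <input
          type="checkbox"
          checked={values.isAdmin}
          onChange={(e) => setValues({ ...values, isAdmin: e.target.checked })}
        />
      </label>
      <label>
        Favorite Number
        <input
          type="number"
          value={values.favoriteNumber}
          onChange={(e) =>
            setValues({ ...values, favoriteNumber: e.target.valueAsNumber || 0 })
          }
        />
      </label>
      <button type="submit">Submit</button>
    </form>
  )
}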

Whoah, that's way more code! There are definitely some tradeoffs here. We are switching from uncontrolled to controlled inputs, which adds more lines of code. However, if your form needs to hide/display certain inputs or make some required based on the values of other inputs, you're going to need to track those values in state anyway. Then you end up with some controlled and some uncontrolled inputs, and you're still stuck with FormData to get the data out of the form. I like consistency, and I have these requirements for the forms I work on incredibly often; 95% of the time I end up having to make the form behave differently based on its values. If most of your forms consist of only a handful of string values and the data you're modeling is flat, then I can understand the attraction of uncontrolled inputs. For me and the types of apps I work on, it's just easier to start with fully controlled inputs to better model nested data structures that contain lots of inputs of various types.

Even though we've added more lines of code, we get type safety around our form state. We have a much better handle on that state and the validity of the payload that will get sent to the api in our component code. We are modeling the data with various data types in javascript and mapping the values into the form. Previously, we were modeling the data in markup. The source of truth was the html. That gets much more awkward when you're modeling complex nested data in your forms.

  • Side note: I wrote a pretty minimal form state hook that I need to publish as a package. It's similar to useFormik but far simpler in terms of code and features. It has served me well though. Feel free to steal.

This leads to way easier debugging. Have you ever tried to console.log(new FormData(document.getElementById('my-form')))? Have fun with that one. I would much rather use console.log(form.currentValues) and see my source of truth in the form of a plain old javascript object.

Disclosure: Leaders in the react space disagree with this view so take what I'm advocating with a grain of salt but do note that most of these individuals are heavily invested in supporting progressively enhanced forms and server side react. Not every app needs those things. Am I glad they exist? Definitely.

Additionally, our action/mutation code isn't tied to the router. React router takes a nuclear approach to data invalidation by default. After a successful action, it triggers all the current route loaders and fetchers. The reason: that's how the web works without javascript. You submit a form, the browser navigates to the form's action, the server processes the request and then has to fetch every piece of data for the page to produce the response it returns. I think one of the reasons SPAs got so popular is that we know better which data actually needs to be revalidated/refetched after these submissions.

React router actions and react-query mutations take completely opposite approaches to this. With react router, everything is invalidated by default; you have to configure the conditions for each loader not to be revalidated. React query mutations are completely isolated by default; you have to use the onSuccess callback to do your invalidations of router loaders or queries. I find react query mutations' explicit, opt-in approach much easier to reason about. After a mutation, if I need to refresh the current route's data, I can explicitly use the revalidator. If I need to invalidate react-query queries, I can use the queryClient.
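(For example; useCreateUser, createUserRequest, and the query key are illustrative:)

import { useMutation, useQueryClient } from '@tanstack/react-query'
import { useRevalidator } from 'react-router-dom'

function useCreateUser() {
  const queryClient = useQueryClient()
  const revalidator = useRevalidator()

  return useMutation({
    mutationFn: createUserRequest, // hypothetical api call
    onSuccess: () => {
      // explicitly invalidate only what this mutation affects...
      queryClient.invalidateQueries({ queryKey: ['users'] })
      // ...and/or refresh the current route's loader data
      revalidator.revalidate()
    },
  })
}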

Also consider the fact that we could have a single action/mutation that needs to be used from various places in the app and needs to behave differently in each. It's unclear where some of this logic should go. Should it be accounted for in the action, or do we need a useEffect in the component?

Whereas with useMutation we can share a bare minimum bit of code and make it behave differently based on specific use cases without touching the core submission logic.
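(Using the hook above, each call site can layer on its own behavior:)

const createUser = useCreateUser()

// in a modal: close it when the submission succeeds
createUser.mutate(values, { onSuccess: () => setModalOpen(false) })

// on a full page: navigate away instead
createUser.mutate(values, { onSuccess: () => navigate('/users') })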

What I hoped to see from fetchers

I know there is no way to make a native html form submit json but json is part of "the platform". JSON.stringify(), JSON.parse(), and Response.json() are all part of the web platform. When I'm using react router, I don't want to pretend that I'm using form navigations. I would love to see fetchers embrace this.

It would be sweet if react router would JSON.stringify() my data and add an application/json Content-Type header to its Request.
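(Something like this; wishful thinking, not a real react router api at the time of writing:)

fetcher.submit(
  { name: 'Foobar', employees: [{ name: 'John' }] },
  { method: 'post', encType: 'application/json' }
)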

I realize I can encode it myself and make it kinda work through FormData but that's an awkward workaround that makes me end up using FormData and JSON.parse(). This would work for route actions but not for my next request...

I'd love to be able to use fetchers outside of the router. Let me call an external api directly and not force me through the router. Let me have all of the request management goodness outside of route loaders.

Adding some success/error callbacks instead of having to rely on useEffect and fetcher.state would also be handy.

The last thing I'd like to see is some other way besides useEffect to kick off a fetcher.load() on component mount. Provide some way to avoid the duplicate requests being made in development while in strict mode. I'm not sure what this would look like though.

Most of this wishlist isn't related to UI routing at all and more concerns general data fetching, so it seems out of the scope of what react router would be concerned about changing. I get that. It also seems silly to reimplement something like react query when that project already exists and can be used here.

I think tanstack router has a ton of potential and may just be exactly what I'm really looking for: a router not as tightly integrated into data fetching/mutating and not geared towards eventually upgrading to an opinionated full-stack framework. I'll be keeping my eye on that project.

Not counting on too many changes

Since Remix was acquired by Shopify, an ecommerce platform that is extremely concerned with progressively enhanced forms, I'm not holding my breath that these issues will be addressed. It doesn't seem like there would be much incentive to have the team focus on react router rough edges in SPA's when their primary interest is Remix. I'd be stoked to be wrong about that though.

Consolation

It seems to me that the vocal majority of the react crowd on twitter and other platforms work for massive companies, or that's where their experience comes from, and are required to support millions of users with their products. Personally, I feel this has resulted in undue imposter syndrome for myself and, I assume, others. I don't blame them for that, btw. They're just sharing their experience and I'm glad of it.

Only recently have I really concluded that every developer is working under vastly different requirements. Someone working on the frontend for Facebook, Figma, Notion, Twitter, etc. has very different requirements than someone like myself, who is working on an application for a handful of users that doesn't need to be progressively enhanced, support mobile, work offline, etc.

Everyone seems to be saying that any "real" react project needs a full-stack framework. Even the react docs now recommend starting new react projects with a full-on framework that has a server-side component. Sometimes the business requirements prevent you from being able to do this. React router hasn't gone away yet and I think it still has its place for certain use cases. It's silly, but I feel guilty not embracing these new data utilities; I need to remember that I have different requirements than what the maintainers of these projects aim to support.

I hope this post helps folks like me who are developing private applications that don't need to serve millions, so you don't feel guilty for not adhering to the recommended approach, or alone in thinking some of this stuff is just awkward to use in a single page app.

Tanner's thoughts

I sent Tanner Linsley a copy of this post before I published to proof-read and point out any terrible hot takes he thinks I might have and received a response which gave me some much appreciated validation.

This hits hard from the perspective of SPAs. I’ve mentioned a few times publicly before: spas are the underrepresented middle class of front end. They’re everywhere, usually silent, many behind auth walls and likely not going away any time soon. They have definitely been over deployed as a silver bullet though, so naturally the bleeding edge pendulum is swinging hard back in the direction of the server, which is much needed for web tech to scale and evolve in areas where spas have suffered over the last generation of web frameworks, but that doesn’t mean I’m ready to abandon the pattern entirely, and for me personally that also means building tools that will happily cater to both paradigms. - Tanner Linsley, via Twitter DM

Solving Internal Problems - API Documentation

This is case #2 in coming up with solutions to internal problems to become a more valuable developer.

Case #2: API Documentation

At my current job, we are contracted to develop an internal frontend application for a Fortune 500 company. The employees of this company must interact with several applications to do their jobs. The application I lead the frontend development for aims to alleviate that: one place for the employees to go to do their work. The application then communicates with those other applications behind the scenes via api requests. The backend is composed of several different expressjs applications that are all brought together via a webserver. We, the frontend team, just need to interact with a single http port to make our requests against.

Example:

  • https://<current-domain>:8080/api/foo -> gets proxied to the "foo" service
  • https://<current-domain>:8080/api/bar -> gets proxied to the "bar" service

Each of these api services is developed by a different team within our client company.

Emulation

Developing within this company's network requires VPN access, and that comes with a whole host of challenges. Instead, we've set up a mock expressjs server to develop against locally. This allows us to have no dependency on their backend development environment. All we need is api documentation. We don't need to wait for these apis to actually be developed and deployed. We can just take the api documentation they've provided and create mock endpoints and responses within our express app to develop against. This is really nice because we have complete control over everything and can develop against various responses for any given endpoint (think error responses).

Receiving api documentation

We were/are being given "api documentation" in pretty much any and all formats:

  • emails
  • .txt documents
  • Word documents
  • power points
  • slack messages

Because the backend consists of several teams maintaining their own api services, there is no "standard" way of giving us api documentation. We have attempted to get this client to provide api documentation in a specific format, but because we're just a vendor, we really have no control over them. We just take what we're given and put it in our express app. Because of this, and the fact that they do not maintain any sort of central api documentation, our mock express app has become the only central "api documentation" in existence for their api services.

The problems

At the time, our mock express app contained ~700 unique endpoints. There were a few problems I noticed:

Discarding request payloads

When adding mock POST/PUT endpoints, we were just making them respond with the expected success response, totally disregarding the request payloads.

app.post('/thing-service/thing', (req, res) => {
  return res.json({message: "Created thing successfully!"})
})

After we developed the frontend to send the expected payload based on the api docs we were sent, that request documentation was essentially gone. We'd have to search for when and where those api docs were sent if we ever needed to view them again.

The next problem was that the only "documentation" we had was our express app code, which was very hard to search just due to the size it had grown and the inability to exclude search terms from the hardcoded responses. We didn't really have a good way to search on urls. If we searched for "thing", that could show up in url patterns or, more commonly, in the hardcoded responses, making it really difficult to find anything.

Clumsy and tedious

If we wanted to test various responses from a specific endpoint, we'd have to go find it and then hardcode in a new response. This is a very manual and tedious task. Often we would end up losing the original response, which was still valid, because we'd overwrite it to test a new one.

The Solution

I wanted a way to:

  • keep various potential responses
  • keep request POST/PUT payloads
  • easily send/receive api documentation
  • easily explore all ~700 endpoints
  • keep our mock express app

Route files

That's when I took inspiration from Swagger (which I find very confusing to get started with).

My first thought was to identify the most relevant pieces of data for any given endpoint:

  • url pattern
  • http method
  • request payloads - if applicable
  • query params - if applicable
  • responses
    • success responses
    • error responses

Luckily this is all stuff that can be encapsulated in a json file, which I know our backend api counterparts are comfortable with (as opposed to something like yaml).

{
  "title": "Create thing for username",
  "path": "/thing-service/:username/thing",
  "params": {
    "username": {
      "value": "johndoe",
      "help": "The owner's username"
    }
  },
  "method": "POST",
  "payload": {
    "thing": {
      "name": "Foobar",
      "color": "Blue"
    }
  },
  "responses": [
    {
      "status": 200,
      "response": {
        "message": "Thing created successfully!"
      }
    },
    {
      "status": 400,
      "response": {
        "errors": [
          {
            "detail": "Invalid color submitted"
          }
        ]
      }
    }
  ]
}

I could define several of these route files and then write code in my express app to:

  • glob a certain directory for these route files
  • parse them into javascript objects
  • iterate over them and create functioning endpoints for each

import fs from 'fs'
import glob from 'glob'
import express from 'express'

const app = express()

app.get('/foo/bar', (req, res) => {
  return res.json({foo: "bar"})
})

// all the other ~700 routes

// glob the route files and parse them into javascript objects
const routes = glob
  .sync('api-routes/**/*.json')
  .map((file) => JSON.parse(fs.readFileSync(file, 'utf8')))

for (const route of routes) {
  // express route methods are lowercase: app.post(), app.put(), etc.
  app[route.method.toLowerCase()](route.path, (req, res) => {
    // default to the first defined response for the endpoint
    const firstResponse = route.responses[0]
    return res.status(firstResponse.status).json(firstResponse.response)
  })
}

const PORT = process.env.PORT || 8080
app.listen(PORT, function () {
  console.log('Dev Express server running at localhost:' + PORT)
})

This was great! I instantly solved several problems. I communicated to my backend counterparts that I would strongly prefer them to document their apis in these json route files. All I had to do was drop the route files in my api-routes directory and they'd instantly be available to develop against!

UI for the route files

I still wanted to solve searching and make reading these route files easier though. That's when I decided this documentation needed a user interface. The initial idea was pretty simple:

  • add an api endpoint to the mock server (sketched below) that
    • gets a list of existing endpoints on the express app instance (method and url pattern)
    • gets all routes from files
    • combines those into a single list of known routes with as much information as possible
    • responds with that list of routes
  • create a react app to consume that endpoint and render documentation for the routes
  • serve that react app on a mock server endpoint
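(A sketch of that first endpoint; /__headlamp/routes is a made-up path and app._router is express's internal router stack:)

// merge what's registered on the express app with the parsed route files
app.get('/__headlamp/routes', (req, res) => {
  const registered = app._router.stack
    .filter((layer) => layer.route)
    .map((layer) => ({
      path: layer.route.path,
      method: Object.keys(layer.route.methods)[0].toUpperCase(),
    }))

  const documented = registered.map((r) => ({
    ...r,
    ...routes.find((rf) => rf.path === r.path && rf.method === r.method),
  }))

  res.json(documented)
})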

This whole thing seemed like it would be shedding light on my existing express app, so it seemed appropriate to call it "headlamp". Link to repo here. It started to take shape. Because I had a complete list of all the routes in the frontend, it was trivial to add searching by url pattern and http method.

[screenshot]

Clicking on an endpoint would expand to show everything known about the endpoint...along with a few other added features...

[screenshot]

Some features I added:

  • ability to actually submit the request and see the response in the UI and in the browser's network tab
  • ability to toggle different responses directly from the ui
    • we can now toggle the error response on and test that in our application UI
  • ability to submit and activate ad hoc json responses
    • makes it incredibly easy to debug things if we know a certain production response is causing an issue
  • ability to search url patterns by omitting the route param names if you can't remember those
    • searching "/thing-service/:/thing" would find /thing-service/:username/thing
  • attempts to find where this endpoint is being used in the source code (see "References" section in the screenshot above)
  • shows the location of the route file if this endpoint was created from one
  • responds with headers to show
    • what route file is responsible for this request
    • a link to where you can view this request's UI documentation

[screenshot] (Disregard the fact that the link points to a different port than what's in the browser url. This is just a result of running headlamp in development mode to hack on it. It points to the same port when used normally.)

  • ability to use .js route files and create dynamic responses. See third code snippet here.
  • ability to expand the references to view the actual source code of where this endpoint is used

[screenshot]

  • ability to upload HAR files if you need to emulate a specific sequence of responses actually encountered in production

Result

This case of scratching our own itch has been immensely helpful and has made our development experience so much nicer and more productive than manually maintaining the express app.

Our client now asks us how they can run this to view their own api's documentation.

Is the headlamp code the best, most organized, test-driven code out there? Absolutely not, nor does it need to be. This was an internal tool developed to make our own lives easier. It is serving its purpose extremely well for us. I haven't needed to touch it for almost 2 years now and we've used it every day since its inception.

Looking for ways to keep your work interesting while at the same time improving your productivity? Take the time to assess your current development challenges. View them as opportunities to come up with effective solutions. I thoroughly believe this is a solid approach to proving your worth and advancing your career.

Fun fact: At the time of writing this our mock express app now has 1298 unique endpoints.

Link to headlamp