Wed Feb 05 2020

How We Localized FastComments Without Slowing It Down

Article size is 3.6 kB; about a 3 min read

To start - note that I would not recommend the techniques outlined here to most companies. FastComments is an unusual beast in that we are optimizing for milliseconds. Most web applications will be fine with adding 100ms of latency as a trade-off for localization support... but not us.

So what did we do for FastComments?

What I've seen in web apps so far: if you want to localize the client, it parses a bunch of browser/system settings and then fetches a JSON file containing a map of translation IDs to localized strings. The application is usually structured in a way that makes this network request part of the framework setting itself up, meaning you can't fetch your translations concurrently with your application state/data. The translations do get cached - but we need to optimize for the initial page load.
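
In code, that pattern looks roughly like the sketch below. The URLs and renderApp are made up for illustration - the point is the serialized requests.

    // A sketch of the typical pattern (not FastComments code): the framework
    // blocks its own bootstrap on a translations fetch before loading app data.
    (async function bootstrap() {
        const locale = navigator.language || 'en-US'; // parse browser/system settings
        // Request one: the translations file. Nothing else starts until it lands.
        const translations = await fetch('/i18n/' + locale + '.json')
            .then((response) => response.json()); // map of translation id -> localized string
        // Request two: only now does the app fetch its actual state/data.
        const state = await fetch('/api/state').then((response) => response.json());
        renderApp(state, translations); // renderApp stands in for the framework's render step
    })();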

FastComments already doesn't use a framework (other than what a native browser provides) and fetching initial state is relatively simple for us so we have a lot of freedom.

We've already gotten to a state where our /comments endpoint is no longer RESTful, in that it returns more than just comments - it returns configuration for rendering the widget, for starters. We can get away with this because fetching the comment objects is very simple and fast; if it took seconds, we'd need a separate request for initializing the widget so we could show some content quickly.
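
Roughly speaking, the response shape is something like the sketch below - every field name here is an illustrative guess, not the real API:

    // Hypothetical shape of a /comments response. All field names are guesses.
    const response = {
        comments: [
            { id: 'abc123', comment: 'First!', date: 1580892920000 }
        ],
        // The piggybacked widget configuration - no second request needed for it.
        widgetConfig: {
            colors: { primary: '#333' },
            demo: false
        }
    };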

So we already have a fast endpoint following a convention where we can tell the API to include additional information via query params. For localization we added an '?includei10n' flag; when it's passed, the server does locale detection and sends back the appropriate set of translations. This response is gzipped by Nginx just like the JS file. We only pass this flag on the initial /comments calls from the widget.
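
A minimal sketch of how a flag like that can be handled server-side, in Express-style Node. The helpers here (getCommentsAndConfig, getTranslationsForLocale) are assumptions for illustration, not the real implementation:

    const express = require('express');
    const app = express();

    app.get('/comments', async (req, res) => {
        // getCommentsAndConfig is a hypothetical helper standing in for the real work.
        const response = await getCommentsAndConfig(req.query);
        if ('includei10n' in req.query) {
            // Locale detection happens here on the server, from the Accept-Language
            // header - the widget never has to make a separate translations request.
            const locale = req.acceptsLanguages()[0] || 'en-US';
            // getTranslationsForLocale is also hypothetical: id -> localized string.
            response.translations = getTranslationsForLocale(locale);
        }
        res.json(response); // gzipped by Nginx on the way out
    });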

So that's basically it. The client script dropped a few hundred bytes in size (gzipped) and the API response grew a little bit - but we didn't add any additional requests to load the widget.
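
On the widget side, that means the bootstrap is a single request, something like this sketch (renderWidget and the exact URL are invented):

    // Sketch of the widget bootstrap: one request now carries comments, config,
    // and translations, so there's no locale parsing or second fetch on the client.
    fetch('/comments?includei10n')
        .then((response) => response.json())
        .then((response) => {
            renderWidget(response.comments, response.widgetConfig, response.translations);
        });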

So far this seems to have increased our response time by a couple of milliseconds. However, after repeated requests the JIT seems to shrink that difference even further.

Another thing we looked at was the endpoint that serves the initial iframe. Right now it's rendered with EJS, and it turns out EJS is indeed pretty fast. We tried replacing the template engine with plain string concatenation - it's hardly faster than EJS. Request response time still hovered around 20ms (including DNS resolution). Initial requests with EJS do seem to spike to around 30 milliseconds, though, so we may explore this again in the future.
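
For the curious, the comparison was roughly between these two styles, with the template shrunk down to a caricature here:

    const ejs = require('ejs');

    const config = { locale: 'en-US' }; // stand-in for the real widget config

    // EJS: compile the template once at startup, render it per request.
    const template = ejs.compile('<script>window.config = <%- config %>;</script>');
    const htmlFromEjs = template({ config: JSON.stringify(config) });

    // Plain string concatenation: skips the template engine entirely.
    const htmlFromConcat = '<script>window.config = ' + JSON.stringify(config) + ';</script>';

    console.log(htmlFromEjs === htmlFromConcat); // true - and nearly as fast either way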

Sorry if you were looking for something super fancy. Sometimes the fastest solution is also the simplest. :)