Efficient CI with Turborepo

Philipp Melab / Jul 23, 2024

Some time ago I wrote about how we use pnpm and some advanced Docker features to optimise our image build processes, which resulted in significant gains in our continuous delivery (CD) pipeline. Now it’s time to look at the first half of CI/CD and break down what Turborepo is and what it can do for you.

So many tasks

Modern web development has become complex and full of tools to orchestrate. There are compilers, linters, bundlers, type checkers and an army of other tiny gnomes to be herded that build out our websites. There is a lively discussion going on in the community about whether it should be this way or not. Some arguments are more valid than others, and certain individuals seem to have chosen to make “controversial” their new hobby, but from my perspective the case is clear: the web platform has done the impossible and evolved for 30 years with hardly any breaking changes, which would not have been possible without build tools that push the envelope and prove what is needed and could eventually become a standard. A long time ago, I was using a tool that would auto-generate images for rounded corners (to put them into table cells, of course) and I don’t miss it. You would have to pry Typescript from my cold, dead hands, but if an alternative gets built straight into Javascript, I will gladly switch. Whoever stops getting better stops being good, and even if web development is a lot better than 25 years ago, I doubt that we have seen the end of it.

Now that the question of “if” is out of the way, let’s talk about the “how” by coming up with a very contrived example that will trigger strong visceral reactions from the #nobuild crowd:

An imaginary NPM package, written in Typescript:
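Something along these lines, with the file name and contents assumed:

```typescript
// src/index.ts: a handful of lines with a typed function argument
function greet(name: string): string {
  return `Hello, ${name}!`;
}

console.log(greet('Turborepo'));
```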

Even this allows for some over-engineering! Which tasks would make sense to run on this?

  • We want to type check it with tsc, to make sure it’s correct before we run it.
  • We need to compile it, because node will choke on the argument type declaration. And we use swc instead of tsc because it’s “written in Rust”™ and fast, and that absolutely matters for four lines of code.
  • Eventually we want to run it and see the glorious result in our terminal.

The most naive approach would be to just chain a couple of commands:
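For example, a single script in package.json that does everything in sequence (the exact entry point and flags are assumptions):

```json
{
  "scripts": {
    "start": "tsc --noEmit && swc src -d dist && node dist/index.js"
  }
}
```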

So every time we execute npm run start our program will be type checked, compiled and eventually run. That’s nice, but if we do this a lot and our codebase gets bigger, it’s a huge waste of time. Our editor of choice (Neovim by the way) already runs the type checking for us while writing, so we only need it in CI. And in a production environment we don’t want to re-compile it on every run. So we split our tasks up into different scripts.
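Roughly like this, assuming the same commands as above:

```json
{
  "scripts": {
    "test": "tsc --noEmit",
    "compile": "swc src -d dist",
    "start": "node dist/index.js"
  }
}
```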

Great, now we have a test script to run in CI and a compile script that we have to remember to run every time we hit start after changing something … You see where this goes. I will forget to run compile 70% of the time and wonder why my changes have no effect. And this is just the simplest possible example. Now imagine a slightly more complex case, where we also need to use PostCSS to build a stylesheet.
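A sketch of what that could look like, with a hypothetical styles script added (the PostCSS input and output paths are assumptions):

```json
{
  "scripts": {
    "test": "tsc --noEmit",
    "compile": "swc src -d dist",
    "styles": "postcss src/styles.css -o dist/styles.css",
    "start": "node dist/index.js"
  }
}
```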

Now we need to remember to:

  • Run compile whenever a Typescript file changes
  • Run styles whenever a CSS file changes
  • Make sure both have run before start

A recipe for disaster! Isn’t there someone else who could do that?

The Simpsons: Can't someone else do it?

Turborepo

Yes! That’s what Turborepo does! It’s essentially a task runner that knows about dependencies and a task’s inputs and outputs to determine what has to happen at any time. Simply add turbo as a dev-dependency to your package and create your first turbo.json:
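A first turbo.json for the four scripts above could look roughly like this (the globs are assumptions, and recent Turborepo versions use the tasks key, while older ones call it pipeline):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "test": {
      "inputs": ["src/**/*.ts"]
    },
    "compile": {
      "inputs": ["src/**/*.ts"],
      "outputs": ["dist/**/*.js"]
    },
    "styles": {
      "inputs": ["src/**/*.css"],
      "outputs": ["dist/**/*.css"]
    },
    "start": {
      "dependsOn": ["test", "compile", "styles"]
    }
  }
}
```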

We just declared inputs, outputs and dependencies for our four tasks, so Turborepo knows what to do. When running npx turbo start it will first run test, compile and styles in parallel, and if all of them succeed, also start. Great! When running npx turbo start again, the log output will be almost the same, except for small messages stating that there was a cache hit for test, compile and styles. This becomes more apparent when running npx turbo start --output-logs=new-only, which will only output logs of the tasks that actually ran.

If we change something in a Typescript file and re-run the command, we should see that it re-runs test and compile, but styles is skipped. Why?

Each task in the configuration can have inputs, outputs and dependsOn fields. Based on these, Turborepo knows which other tasks to run and in which order.

  • inputs defines which files to monitor. If one of those changes, the task will re-run.

  • outputs defines which files are created by this task. Turborepo caches those and can just restore them if the input files are already known.

  • dependsOn defines which other tasks have to be completed before this one. This has two effects:

    • The dependencies are run before the current task.
    • If a dependency’s output changes, the current task will be re-run.

With these simple tools we have achieved our goal: a single command that always leads to a working program, but only runs what’s necessary!

Monorepos

While Turborepo already brings benefits to single packages, monorepos are where it really shines. With the same primitives, it allows us to declare dependencies between multiple packages and to test, build and run them efficiently.

Let’s imagine the following repository structure:
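Something like this, with the website directory name assumed (the cms, ui and logic paths show up again below):

```
.
├── package.json
├── turbo.json
├── apps
│   ├── website
│   └── cms
└── shared
    ├── ui
    └── logic
```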

We have two applications: a Website, implemented in the Javascript Framework of the Week, and a Content Management System in a 20-year-old PHP framework that just won’t die. Between them sit two utility packages: one containing UI components and stylesheets, the other shared business logic.

It’s a good idea to first define some common rules that all these packages conform to. We can do this by placing a turbo.json file in the project root:
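A sketch of such a root turbo.json, matching the rules described below:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "compile": {},
    "test": {
      "dependsOn": ["^compile"]
    },
    "start": {
      "dependsOn": ["^test", "test", "compile"]
    }
  }
}
```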

This declares that all packages might have compile, test and start tasks. If any of them are missing in a package, it does not matter; they will simply be ignored.

  • The compile task is self-contained and simply prepares the package for consumption. The simplest example would be the logic package, which just contains Typescript code and compiles it to runnable Javascript and type definitions using tsup.
  • The test task becomes more interesting. If the ui package imports business logic functions from logic, then it requires the type definitions to be there before being able to run its own type check. Therefore we use the ^ symbol to define that test depends on the compile task of all dependencies.
  • And finally, the start task requires the test task of all dependencies to run (which in turn runs compile of the sub-dependencies), its own test to succeed, and the package itself to be compiled so there is runnable code.

We defined the rules all packages conform to, but they might have different options on what the tasks themselves do. That’s where Turborepo’s extension feature comes into play. For our business logic package, which only contains Typescript files, that’s rather simple. We place a turbo.json file in the shared/logic directory that extends from the turbo.json at the root and adds inputs and outputs to the test and compile tasks.
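A sketch of that shared/logic/turbo.json, with the globs and output directory assumed:

```json
{
  "extends": ["//"],
  "tasks": {
    "test": {
      "inputs": ["src/**/*.ts"]
    },
    "compile": {
      "inputs": ["src/**/*.ts"],
      "outputs": ["dist/**"]
    }
  }
}
```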

The logic package does not have a start task, so it can just be ignored here.

Heading over to the ui package, things become a little more challenging. Here we have two different compile tasks: one that builds the Javascript sources from Typescript (compile:typescript) and one that prepares stylesheets with PostCSS (compile:postcss). First of all, this means we need these script definitions in the shared/ui/package.json file:
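Roughly like this (the concrete tools and paths behind each script are assumptions):

```json
{
  "name": "ui",
  "scripts": {
    "test": "tsc --noEmit",
    "compile:typescript": "tsup src/index.ts --dts",
    "compile:postcss": "postcss src/styles.css -o dist/styles.css"
  }
}
```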

Now we need to make these two tasks respond to the unified compile task, which can be done with what Turborepo calls “Transit Nodes”. These don’t have a corresponding script in package.json but just declare dependencies on other tasks to make sure they are executed:
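In shared/ui/turbo.json that could look like this; compile has no script of its own and only fans out to the two real tasks:

```json
{
  "extends": ["//"],
  "tasks": {
    "compile": {
      "dependsOn": ["compile:typescript", "compile:postcss"]
    }
  }
}
```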

We also want to get a little more granular with our input and output definitions: test and compile:typescript should only re-run if Typescript files change, and PostCSS should only run if CSS files change.
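The same file again, now with inputs and outputs per task (the globs are assumptions):

```json
{
  "extends": ["//"],
  "tasks": {
    "compile": {
      "dependsOn": ["compile:typescript", "compile:postcss"]
    },
    "test": {
      "inputs": ["src/**/*.ts", "src/**/*.tsx"]
    },
    "compile:typescript": {
      "inputs": ["src/**/*.ts", "src/**/*.tsx"],
      "outputs": ["dist/**/*.js", "dist/**/*.d.ts"]
    },
    "compile:postcss": {
      "inputs": ["src/**/*.css"],
      "outputs": ["dist/styles.css"]
    }
  }
}
```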

The outputs definition is important too. PostCSS might minify the aggregated stylesheet, so if a comment is added to one of the source files that does not actually make it into ./dist/styles.css, Turborepo will detect that there are no actual changes in the output and won’t force dependent tasks to re-run. The benefit of this becomes apparent when we look at the cms package.

As mentioned, we are talking about a PHP framework, so Javascript files and type definitions are not of much use here. But it does include the stylesheet from the ui package to load it into the WYSIWYG editor. So ideally, we only have to run the compile:postcss task to get an operational cms package. We can override the dependencies of start in apps/cms/turbo.json to only depend on this specific task:
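Assuming the UI package is published under the name ui, that override could look like this:

```json
{
  "extends": ["//"],
  "tasks": {
    "start": {
      "dependsOn": ["ui#compile:postcss"]
    }
  }
}
```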

Assuming that the website package uses a framework like Remix or Waku that consists only of a compile and a start task, we just have to add correct inputs and outputs, which look very similar to the logic package definitions.

We now have a setup where we can just run the start task of one of the apps, and Turborepo will make sure everything is in place:
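For example (package names as assumed above):

```sh
# start the CMS; Turborepo runs ui#compile:postcss first and caches it
npx turbo start --filter=cms

# start the website; logic and ui get tested and compiled as needed
npx turbo start --filter=website
```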

Remote Caching

One more feature of Turborepo that glues everything together is Remote Caching. By default, a local cache is maintained when running commands, which can restore the results of tasks that have been run before. It is also possible to move this cache to a shared space. This means that CI runs can skip tasks as well, and caches can even be shared among coworkers. A shared cache can be hosted directly on Vercel, but the API spec is open and there are other implementations out there. We, for example, use a GitHub action that stores Turborepo caches in GitHub workflow artifacts, so subsequent workflow runs have access to the same caches and we only re-run tests for the things that actually changed.
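In a CI job, pointing Turborepo at a remote cache mostly comes down to environment variables; a sketch of such a step, with the cache URL and team name as placeholders:

```sh
# authenticate against a self-hosted remote cache, then run only what changed
TURBO_API="https://turbo-cache.example.com" \
TURBO_TEAM="my-team" \
TURBO_TOKEN="$CACHE_TOKEN" \
  npx turbo test --output-logs=new-only
```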

Conclusion

The combination of PNPM and Turborepo allows us to create a setup where we can build multiple connected applications within one repository and even deploy them to different environments, while building and testing only what’s really necessary. Apart from CI/CD pipeline time, this also improves the developer experience, since we don’t have to remember which tasks need to be run. The new watch command allows us to take this a step further by defining a single dev task that runs the correct task for each file change automatically. So we can stop thinking about whether to #build or #nobuild and focus on what matters: great software and a great user experience!