Parallelizing A ClojureScript Test Suite, Part 1: The Guts of cljs.test


This article opens a series describing how to add parallel execution to an existing ClojureScript test suite. The aim is to let readers “start where they are”: the approach should apply to most projects, rather than requiring that their infrastructure already be built a certain way. This first article explains the general approach and takes a tour of cljs.test internals. Further installments will cover implementation, improvements, and alternatives.


Test infrastructure is a touchstone for development, giving us peace of mind that we aren't accidentally introducing bugs.

But imagine this: what if our software becomes bigger and more complicated than we expected, and our test suite grows with it? Test runs take longer and longer. Is it possible to take an existing ClojureScript test suite and make it run in parallel, with a minimum of fuss?

"Parallel" and "minimum of fuss" are two concepts that don't generally go together in the JavaScript world. JavaScript is single-threaded by design, whatever rumblings are on the horizon. There are web workers, which give us native access to our other cores, but they are not allowed to touch the DOM, which makes them unsuitable for browser-style testing.

But there is one form of JavaScript parallelism you are likely all too familiar with:

I'm not "distracted," I'm parallel processing.


Modern browsers such as Chrome spawn a separate process for each tab. Thus, if we're executing our tests in a browser environment like headless Chrome, we already have easy access to parallel execution.

We can prove this to ourselves, actually:

This Node.js script opens Chromium, then opens as many tabs as we have CPU cores. It gives each page a computation that takes about ten seconds to complete:

All four processors are working at full capacity, each taking about ten seconds to complete their assigned task.


But how do we keep our tests separate from each other? For this, we have to understand how cljs.test works.

The standard cljs.test call looks something like this:

(cljs.test/run-tests 'my.project.core-test 'my.project.util-test) ;; illustrative namespaces

There are two things we need to understand about this.

The first: namespaces are the smallest unit we are guaranteed to be able to test independently. cljs.test allows each namespace to include a function called test-ns-hook. If that function is not present, the default behavior occurs: all the tests are bundled into a block and eventually run. But if test-ns-hook is defined, it is executed and the individual tests are otherwise ignored. This allows for more specialized testing flows, for when you need something more complicated than a series of function calls. It does mean there will be some namespaces that have to be tested as a whole, however.
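For instance, here is a sketch of a namespace that takes over its own test flow (the namespace and test names are made up):

```clojure
(ns my.project.special-test ;; hypothetical namespace
  (:require [cljs.test :refer-macros [deftest is]]))

(deftest arithmetic-test
  (is (= 4 (* 2 2))))

;; Because test-ns-hook is defined, cljs.test calls it instead of
;; running this namespace's tests automatically.
(defn test-ns-hook []
  ;; e.g. run a test twice, or interleave setup and teardown
  (arithmetic-test)
  (arithmetic-test))
```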

The second thing to know is that run-tests is a macro, not a function. It expands to an expression that creates a block: nothing more than a very-slightly-souped-up vector of functions. If you look through the cljs.test source, about half of it is devoted to creating blocks at different levels of the hierarchy. There's a block for running a single test, there's a block for running all the tests in a namespace, and there's the final product block, which the expansion of run-tests creates. Basically, it's blocks all the way down.

We've said that blocks are vectors of functions, rather than vectors of tests, because some of the functions return...more blocks (though the tests are certainly in there too). Blocks are executed recursively: if, in the course of running a block, any of the functions returns a block, that block gets executed as well.
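In sketch form, that recursion looks like this (our simplified version; the real cljs.test/run-block also accommodates asynchronous tests):

```clojure
(defn run-block-sketch
  "Call each function in the block in order; if a function
  returns another block, run that block too."
  [block]
  (doseq [f block]
    (let [result (f)]
      (when (sequential? result)
        (run-block-sketch result)))))
```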

Beyond that, a high proportion of the functions are administrative, safety-related, or reporting-oriented. Even the actual tests sit inside a wrapper function that catches and reports errors.

So how does a namespace correspond to the runnable block representing it? The end result of run-tests-block is of this form:

[(begin-ns-and-run-tests)  ;; illustrative name: reports :begin-test-ns, then runs the tests
 (update-counters)         ;; as in: failures, errors, successes, etc.
 ;; repeat above for each namespace...
 (report-summary)]         ;; illustrative name: prints the final totals

The first two items are generated by another macro, test-ns-block.

Ideally we'd simply take the aggregated block that run-tests-block gives us, but the functions within it report their results by updating a data structure hidden in a closure we can't reach. So the next best thing is to call test-ns-block manually and handle the reporting ourselves. Exactly how to do this is a bit tricky, and we'll cover it in our next article.

About report

cljs.test is intended to be extensible. The cljs.test/report multimethod gets called with messages of various :types, and responds differently to each one:

(defmethod report [::default :summary] [m]
  (println "\nRan" (:test m) "tests containing"
           (+ (:pass m) (:fail m) (:error m)) "assertions.")
  (println (:fail m) "failures," (:error m) "errors."))

(defmethod report [::default :begin-test-ns] [m]
  (println "\nTesting" (name (:ns m))))

Many of them it simply ignores:

;; Ignore these message types:
(defmethod report [::default :end-test-ns] [m])
(defmethod report [::default :begin-test-var] [m])
(defmethod report [::default :end-test-var] [m])
(defmethod report [::default :end-run-tests] [m])
(defmethod report [::default :end-test-all-vars] [m])
(defmethod report [::default :end-test-vars] [m])

By passing a keyword other than ::default when calling run-tests, we can cause cljs.test to call our methods, giving us more control.

(defmethod report [::parallel :end-test-ns] [m]
  (println (str "Finished testing " (:ns m) " in parallel!")))

(run-tests (assoc (empty-env) :reporter ::parallel))

We're now ready to start taking our test suite apart to be run in different tabs. In the next installment of this series, we’ll write a small queue to manage them, call our tests, receive the results, and report back through our terminal.


Danny Bell always wanted to use the Force, but he settled for Clojure instead. He's worked for multiple startups in online video, enterprise systems management, distance education, insurance, financial modeling, credit, and bespoke monitoring.