One of the more difficult parts of building software is making sure that it actually works!

When you're just starting out, a lot of time is invested in simply figuring out how to produce a result on screen. In developer circles, this pursuit is known as building the happy path. We say "happy" because it means that if everything goes perfectly according to plan and your user doesn't make any mistakes or misunderstand anything, your software will work.

Of course, that's not reality. The happy path is just one of hundreds, if not thousands, of paths a user could take through your application. And while we could sit in the browser or with a device in hand, clicking and tapping for hours, days, and weeks, that's not the best use of our time. Rather than dropping the ball entirely, this is where testing comes in.

What is testing?

Testing is the automation of a few different things:

  • Verifying certain user behaviors work as expected in your application.
  • Verifying that the inputs, outputs, and side effects of functionality in your application are what you'd expect.
  • Verifying that failures in your code are handled properly and are not destructive.

The key word here is verify. Instead of assuming that our code works, we write tests to verify that our code works. In essence, we're trying to write a bunch of "choose your own adventure" puzzles that test our code in various ways.

What are the different types of tests?

Where testing can get confusing is in all of the possible ways to do it. There are a lot of different tools, philosophies, and methodologies to choose from. To the inexperienced developer, this can be a bit of a maze. Unfortunately, it can also be discouraging enough to make some developers skip testing entirely.

From experience, there are three core types of tests that you want to consider writing in your application:

1. Unit Tests

Unit tests are tiny, as the name implies. They verify an isolated piece of functionality, like a function that you use to parse a user's name from a value in your database or convert some Markdown into HTML. Here's an example of a piece of code from Command, along with its unit test:

parseMarkdown.js

import showdown from 'showdown';

export default (markdown) => {
  const converter = new showdown.Converter({
    strikethrough: true,
  });
  return converter.makeHtml(markdown);
};

The code here takes in a string of Markdown and attempts to convert it using the Showdown Markdown converter. The unit test, then, wants to verify whether or not this function does its job (converting Markdown to HTML):

parseMarkdown.test.js

import parseMarkdown from './parseMarkdown';

describe('parseMarkdown.js', () => {
  test('it returns HTML when passed a string of Markdown', () => {
    const html = parseMarkdown('### Testing\n**Markdown** is working.');
    expect(html).toBe('<h3>Testing</h3>\n<p><strong>Markdown</strong> is working.</p>\n');
  });
});

To write this test, I'm using the Jest framework from Facebook. The test we're writing here is saying "when I call the parseMarkdown function and pass it a string of Markdown, I expect the returned value to be HTML that looks like this."

Simple as that. Quite literally: given this input, does this function do what I expect it to do?
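
Because the converter enables Showdown's strikethrough option, we could add a second case to the same test file to cover it. Here's a sketch; to avoid pinning down the exact HTML Showdown produces, it only asserts that the output contains the <del> tag:

test('it converts strikethrough syntax when enabled', () => {
  const html = parseMarkdown('This is ~~gone~~.');
  expect(html).toContain('<del>gone</del>');
});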

2. Integration Tests

Integration tests, again as the name implies, are trying to verify the integration of multiple parts of your application. Said a different way: the integration of multiple units of your application. Here's another example from Command, where the goal is to update an existing product that one of my customers manages:

updateProduct.js

/* eslint-disable consistent-return */

import Products from '../Products';

let action;

const updateProductInDatabase = ({ productId, update }) => {
  try {
    return Products.update(
      { _id: productId },
      {
        $set: {
          ...update,
        },
      },
    );
  } catch (exception) {
    throw new Error(`[updateProduct.updateProductInDatabase] ${exception.message}`);
  }
};

const checkIfOwner = ({ userId, productId }) => {
  try {
    const product = Products.findOne(productId, { fields: { userId: 1 } });
    return product && product.userId === userId;
  } catch (exception) {
    throw new Error(`[updateProduct.checkIfOwner] ${exception.message}`);
  }
};

const validateOptions = (options) => {
  try {
    if (!options) throw new Error('options object is required.');
    if (!options.userId) throw new Error('options.userId is required.');
    if (!options.productId) throw new Error('options.productId is required.');
    if (!options.update) throw new Error('options.update is required.');
  } catch (exception) {
    throw new Error(`[updateProduct.validateOptions] ${exception.message}`);
  }
};

const updateProduct = (options) => {
  try {
    validateOptions(options);
    const isOwner = checkIfOwner(options);
    if (!isOwner) throw new Error('Sorry, you need to be the owner of this product to update it.');
    updateProductInDatabase(options);
    action.resolve(options.productId);
  } catch (exception) {
    action.reject(`[updateProduct] ${exception.message}`);
  }
};

export default (options) =>
  new Promise((resolve, reject) => {
    action = { resolve, reject };
    updateProduct(options);
  });

And the corresponding integration test, also written with Jest:

updateProduct.test.js

import Products from '../Products';
import updateProduct from './updateProduct';

// Mock the Products collection so its update/findOne methods can be stubbed below.
jest.mock('../Products');

const testData = {
  userId: 'abc123',
  productId: 'product123',
  update: {
    name: 'Test Product',
    enablePublicRoadmap: true,
  },
};

describe('updateProduct.js', () => {
  beforeEach(() => {
    Products.update.mockReset();
    Products.update.mockImplementation(() => 'def123');

    Products.findOne.mockReset();
    Products.findOne.mockImplementation((_id) =>
      [{ _id: 'product123', userId: 'abc123' }].find((product) => product._id === _id),
    );
  });

  test('updates product', async () => {
    await updateProduct(testData);
    expect(Products.update).toHaveBeenCalledTimes(1);
    expect(Products.update).toHaveBeenCalledWith(
      { _id: testData.productId },
      {
        $set: {
          ...testData.update,
        },
      },
    );
  });
});

Integration tests are a little more involved. In this example, we're trying to test that two things work (or integrate) together: checking that the user owns the product they're trying to update, and performing the update in the database.

While there's more code here, the idea is the same. Inside of our updateProduct.test.js file, we have a single test() written that calls our updateProduct function, passing it some test data. We then verify our expectation that Products.update was called exactly once, and that it was called with our test data.

If this test passes, we've increased our confidence that our code is working.
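
We can push that confidence a little further by covering a failure path, too; remember, verifying that failures are handled properly is part of what testing is for. Here's a sketch of an additional test for the same file, relying on the mocks set up in beforeEach() above. Because updateProduct rejects with a plain string, we can assert against it with .rejects.toMatch():

test('rejects the update if the user does not own the product', async () => {
  await expect(
    updateProduct({ ...testData, userId: 'someoneElse123' }),
  ).rejects.toMatch('you need to be the owner of this product');
  // The ownership check failed, so the database should never have been touched.
  expect(Products.update).not.toHaveBeenCalled();
});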

It's all about confidence! You can write tests till you're blue in the face, but that's not a guarantee that your code is perfect. Because there are limitless ways for your code to be used (and a ton of variables in how it behaves based on where it's used), it's next to impossible to test every permutation. What's important is to focus on writing enough tests to give you confidence that your code is working. If that's one test, great! If it's 20, so be it.

3. End-to-End Tests

End-to-end tests have a slightly more ambiguous name. Ultimately, an end-to-end test is trying to verify that when a user performs some action in the browser, that action succeeds from the front-end all the way through to the back-end of your application. So, if I fill out a form and push the submit button in the browser, it actually does what I expect.

Though it's not as popular a term, these can also be called "browser tests," because the actual test is performed in the browser. With end-to-end tests, we're automating the act of clicking through the application (imagine cloning yourself 100 times, with each clone responsible for clicking through a certain path in your application).

Here's an example end-to-end test from Pup, the boilerplate application we maintain at Clever Beagle for building your product:

ui/pages/Login/index.e2e.js

import { login, getPageUrl } from '../../../tests/helpers/e2e';

fixture('/login').page('http://localhost:3000/login');

test('should allow users to login and see their documents', async (browser) => {
  await login({
    email: 'user+1@test.com',
    password: 'password',
    browser,
  });

  await browser.expect(getPageUrl()).contains('/documents');
});

tests/helpers/e2e.js

import { ClientFunction, Selector } from 'testcafe';

export const login = async ({ email, password, browser }) => {
  await browser.typeText('[data-test="emailAddress"]', email);
  await browser.typeText('[data-test="password"]', password);
  await browser.click('button[type=submit]');
  await Selector('[data-test="user-nav-dropdown"]')(); // NOTE: If this exists, the user was logged in.
};

export const getPageUrl = ClientFunction(() => window.location.href);

Here, we have a simple end-to-end test that's designed to verify that a user can log in successfully. This test is written using a different testing tool called TestCafe.

The idea here is that we want to verify that if we do something in the browser, it produces the desired result in the browser. In the example above, we want to verify that a user can log in via the browser. Using a helper function we wrote called login() (a function we reuse often, so we've given it its own file to avoid copy-and-pasting the code), we literally fill out the login form.

What our test is saying is "go to the /login page in the application, fill out the emailAddress input with user+1@test.com, fill out the password input with password, and then click the submit button. Once you've done that, verify that the user was redirected to the /documents page (the first page a user should see after they log in)."

If this test passes, that means our user interface is behaving as expected, and that the underlying code the user interface connects to is behaving as expected, too!

Should you be writing tests?

The big question that needs to be answered here is: should you be doing this? Well, that depends, ultimately, on your level of experience with coding and developing applications.

It's pretty easy to see just from the examples above that there are a ton of moving parts. While things have improved significantly over the years, there's still a lot of setup you have to do before you can write tests. Tools like Pup help you to mitigate this somewhat by doing that setup for you.

That said, testing is important. While it can seem frivolous to the untrained eye, tests are the best tool developers have for verifying that the code they write actually works. Does that mean that tests are mandatory? No. But it does mean that if you want to get as close as possible to a guarantee that things will work as expected, testing is worth factoring into your workflow.

A question of quality

Ultimately, the choice of whether or not to write tests is a question of quality. What you want to ask yourself is "what level of quality do I want to deliver to my users?" Quality, here, is with respect to whether or not your application does what it says it does.

For example, if I click a button to start my subscription and all I see is a spinner, that's a ding against the quality of your application. If I write a test to verify that the button click works as expected, I can guard against that negative experience.
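
To make that concrete, here's a hypothetical TestCafe sketch of that exact scenario, assuming the same testcafe imports as the login example above. The data-test selectors here are made up for illustration; they aren't from a real app:

test('clicking the subscribe button starts a subscription', async (browser) => {
  // Hypothetical selector for the subscription button.
  await browser.click('[data-test="start-subscription"]');
  // If a confirmation element appears, we know we got past the spinner.
  await browser.expect(Selector('[data-test="subscription-confirmation"]').exists).ok();
});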

A rule of thumb to follow

The best rule of thumb that I've found for whether or not you should test is your level of experience as a developer. Early on, testing can be cumbersome, overwhelming, and discouraging. When you're just starting out (think the first 1-2 years), it's best to focus on just finding your way around. Getting things to work.

After you're familiar with what it takes to ship an idea? That's the time to start learning about tests, with the expectation that you won't become an expert overnight. Learn to write a few small tests and get comfortable with some tooling, but don't stress too much about having maximum coverage.

Coverage is the term used to describe the degree to which your code is covered by tests. The more cases you write tests for, the higher your level of coverage. Generally, the correct level of coverage depends on your tolerance for not having tests, or on the expectations set by your team if you're working for a company.
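
If you're using Jest, you can see your current coverage by running it with the --coverage flag. You can also enforce a minimum via Jest's coverageThreshold configuration; here's a minimal sketch (the 80% figures are arbitrary placeholders, not a recommendation):

jest.config.js

module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    // Fail the test run if overall coverage drops below these percentages.
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};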

Tests don't have to be scary

Tests are a tool. While some developers turn them into a religion, don't let this scare you. Focus on delivering a quality product to your customers first and recognize tests as one tool for helping you to do that. Remember: just because you write tests doesn't mean that your code is bulletproof. It just means that you have a little more confidence that your code behaves as expected.

Over time, your goal should be to build up your test suite (a term used to refer to all of the tests in your application) to a level where customer feedback is less about performance and accuracy and more about "hey, it'd be neat if the app could do this!"