Traditional unit tests are ideal for exercising small pieces of logic in isolation. E2E tests, by contrast, are aimed at testing an entire page in the browser. Individual components sit in between, so they can be challenging to test in an efficient, concise way. These guidelines describe a method of component testing that incorporates many benefits of E2E tests, but with lower overhead. React Testing Library provides the framework for this type of integration test.

For additional information on unit testing at VA, see our Unit Testing guidelines. To learn about E2E tests at VA, check out End-to-end testing with Cypress.

Contents

Integrate and interact

Rendering a component together with its integrations, then testing user interaction, yields concise tests that exercise the most important code from a user’s perspective. Here’s an example from vets-website:

import { render, fireEvent } from '@testing-library/react';
import { createTestStore, renderWithStoreAndRouter } from '../../mocks/setup';
import ReceivedDoseScreenerPage from '../../../covid-19-vaccine/components/ReceivedDoseScreenerPage';

const initialState = { ... };

it('should not submit empty form', async () => {
  const store = createTestStore(initialState);
  const screen = renderWithStoreAndRouter(<ReceivedDoseScreenerPage />, { store });
  expect(await screen.findByText(/Continue/i)).to.exist;
  fireEvent.click(screen.getByText(/Continue/));

  expect(await screen.findByText('Please select an answer')).to.exist;
  expect(screen.history.push.called).to.not.be.true;
});

This small test covers a lot of ground: setting initial store state, rendering a component with its children, simulating user interaction, expecting the correct rendered result, and checking router state. React Testing Library (RTL) is the underlying framework which enables this.

Why?

  • Achieving the above coverage with narrowly-focused unit tests would require multiple tests and would mock out valuable integrations.

  • An E2E test would include integrations, but would incur the overhead of firing up an external browser, rendering an entire page, and navigating to the page before running the test.

  • Targeting user-visible elements with ARIA roles makes RTL tests less brittle, allowing refactoring of components without breaking tests.
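
To illustrate the last point, a query by ARIA role and accessible name depends only on what the user perceives, not on tags or class names. A minimal sketch (the component name `MyForm` is a placeholder, not from vets-website):

```javascript
import { render, screen, fireEvent } from '@testing-library/react';

it('shows a validation error when the user clicks Continue', () => {
  render(<MyForm />); // hypothetical component under test

  // Queries by role and accessible name survive refactors that change
  // markup or CSS classes but keep the same user-facing label.
  fireEvent.click(screen.getByRole('button', { name: /continue/i }));

  expect(screen.getByRole('alert')).to.exist;
});
```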

More information

Learn more about React Testing Library. Here are some thoughts on why RTL is an improvement over Enzyme. We have example RTL usage in our own VA documentation.

Note that while RTL is ideal for testing individual components, an E2E test that exercises many different components and depends on a wide variety of integrations may perform best in Cypress. Cypress is also preferable for tests that navigate between pages, some a11y tests, and certain Web Component tests (RTL doesn’t currently support the Shadow DOM).

Avoid shallow rendering

Shallow rendering is a method of rendering a component without its child components. Enzyme’s shallow() method is a common approach:

import { shallow } from 'enzyme';
import AppealHeader from '../../../components/appeals-v2/AppealHeader';

it('renders the heading text passed in as a prop', () => {
  const heading = 'Test heading';
  const wrapper = shallow(<AppealHeader heading={heading} />);
  expect(wrapper.find('h1').text()).to.equal(heading);
  wrapper.unmount();
});

Where possible, avoid shallow rendering. Instead, render a component along with its children, as React Testing Library’s render() function does (see above).
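
The shallow test above could be rewritten with a full render. A sketch, assuming AppealHeader renders its heading prop in a top-level `<h1>`:

```javascript
import { render } from '@testing-library/react';
import AppealHeader from '../../../components/appeals-v2/AppealHeader';

it('renders the heading text passed in as a prop', () => {
  const heading = 'Test heading';
  const screen = render(<AppealHeader heading={heading} />);

  // Child components render too, so lifecycle methods run and
  // integrations are exercised; RTL's cleanup removes the need
  // for manual unmount() bookkeeping.
  expect(screen.getByRole('heading', { level: 1 }).textContent).to.equal(
    heading,
  );
});
```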

Why?

  • Tests using shallow rendering can’t catch a variety of issues because they don’t run lifecycle methods, don’t check DOM element interaction, and don’t test component integrations.

  • Tests using shallow rendering tend to be tightly coupled to the particular implementation of the component under test. They are more likely to break when refactoring, for example when splitting or modifying child components.

More information

Read more on the dangers of shallow rendering.

Intercept requests instead of mocking fetch

Instead of mocking fetch, use msw to intercept and respond to requests. The following example configures msw to intercept POST requests to /login and respond with some JSON:

import { setupWorker, rest } from 'msw';

const worker = setupWorker(
  rest.post('/login', async (req, res, ctx) => {
    const { username } = await req.json()
    return res(
      ctx.json({
        username,
        firstName: 'John'
      })
    )
  }),
);

worker.start();

You can configure some “happy path” responses globally for use by all your tests. To test failure scenarios, override the defaults as needed per test.
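
A sketch of a per-test override, assuming a shared msw server instance (for Node-based tests, created with `setupServer()` and exported from a hypothetical `../../mocks/server` module) with “happy path” handlers registered globally:

```javascript
import { rest } from 'msw';
import { server } from '../../mocks/server'; // hypothetical shared instance

it('shows an error message when login fails', async () => {
  // server.use() prepends a handler that applies to this test only,
  // shadowing the default /login handler.
  server.use(
    rest.post('/login', (req, res, ctx) =>
      res(ctx.status(500), ctx.json({ error: 'Internal server error' })),
    ),
  );

  // ...render the component, submit the form, and assert on the error UI.
});
```

A global `afterEach(() => server.resetHandlers())` restores the defaults so the override can’t leak into other tests.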

Why?

  • Tests use actual requests/responses, including headers, giving you more confidence your requests are correct.

  • It’s easy to test error conditions like slow internet connections or endpoints that are broken.

  • Server handlers are reusable across your tests and your development environment.

More information

Learn more about msw. Check out Stop Mocking Fetch for more information on the advantages of this approach.

Clean up mocks, dates, and all global state

Avoid mocks where possible, but if they’re required, make sure to reset them after your test completes. Where applicable, a sinon sandbox enables simple cleanup of all your stubs:

describe('MyComponent', () => {
  const sandbox = sinon.createSandbox();

  beforeEach(() => {
    sandbox.stub(mapboxUtils, 'getFeaturesFromAddress');
    sandbox.stub(ssoUtils, 'checkAutoSession');
  });

  afterEach(() => {
    sandbox.restore();
  });

  it(...);
});

Similarly, if you use MockDate to set the clock, reset it after each test:

beforeEach(() => {
  MockDate.set('2018-01-01T12:14:51-04:00');
});

afterEach(() => {
  MockDate.reset();
});

If your test makes changes to global state (localStorage, sessionStorage, the window object, etc.), make sure to reset it as well.
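
A minimal sketch of resetting global state after each test (the `hasSession` key and `window.dataLayer` property are illustrative examples, not requirements):

```javascript
describe('MyComponent', () => {
  beforeEach(() => {
    // Hypothetical setup this component depends on
    localStorage.setItem('hasSession', 'true');
    window.dataLayer = [];
  });

  afterEach(() => {
    // Leave globals empty so other tests start from a known state
    localStorage.clear();
    sessionStorage.clear();
    delete window.dataLayer;
  });

  it(...);
});
```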

Why?

  • Using mocks without resetting them causes the initial state for other tests to be different than expected, which may cause the tests to fail. Because tests are run in random order, these types of failures can be very difficult to track down and fix.

  • The same is true of global state like localStorage, sessionStorage, and the window object. Other tests may expect these to have a consistent or empty initial state, and may fail if that’s not the case.

  • Where possible, avoiding mocks brings your component under test closer to your component in production, making tests more reliable. In addition, mocks can make refactoring more difficult if the mocked component or library is changed.

More information

Learn more about sinon sandboxes and how they make cleaning up mocks/stubs easier. And here are some thoughts on why mocks are best avoided where possible.

Aim for the coverage sweet spot

The ideal coverage percentage varies depending on your project. For a small, focused module containing critical shared functionality, you might aim for 90% or even 100% coverage. In general, aim for at least 75% coverage as described in our unit test guidelines.
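
If your project measures coverage with nyc, a threshold can be enforced on the command line so the run fails when coverage dips. A sketch (the glob pattern is a placeholder for your own spec files):

```shell
# Fail the test run if line coverage drops below 75% (assumes nyc + mocha)
npx nyc --check-coverage --lines 75 mocha 'src/**/*.unit.spec.jsx'
```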

Low code coverage is a bad sign because it indicates much of your code is not being run at all by tests. The good news is that you can make large gains by adding a small number of tests. Take advantage of this large return on a small investment whenever you can.

If you find yourself testing obscure implementation details just to achieve small coverage gains, step back and reconsider. Tests that are tightly coupled to implementation make refactoring difficult because the tests must be refactored as well.

Finally, don’t rely solely on coverage percentage. Instead, think critically about each test and whether you’re covering the most important cases from a user’s perspective.

Why?

  • Boosting test coverage will increase your confidence when refactoring, allowing you to keep your app well-organized and maintainable without adding bugs.

  • Even 100% coverage doesn’t ensure your tests are correct and relevant – it just means tests ran your code at least once. Writing useful tests requires a thorough understanding of your application and a thoughtful approach.

  • It’s important to write enough tests to cover what users will actually do. Beyond that point you’ll likely get diminishing returns on your investment in tests.

More information

Read about Google’s Code Coverage Best Practices and check out The Case Against 100% Code Coverage.