There are many scenarios in which Cypress end-to-end tests may fail, and it’s important to understand both why they fail and how to resolve the failures. Test failures block GitHub Actions deployment workflows in every instance, so these solutions will help your code move forward. This guide documents how to manage Cypress test failures. Cypress tests run in the continuous integration pipelines in vets-website and content-build, and in the daily accessibility scan. The continuous integration build for a pull request can be seen in GitHub when viewing the commit log for the branch or when a PR is opened.
Before you begin
To follow this guide, you should be able to run Cypress tests locally in vets-website and content-build. The vets-website and content-build repositories document how to start the applications locally and run Cypress tests.
Currently, there is no simple way to recreate accessibility scan failures locally, but this guide will be updated when one is available.
If your E2E test is not running at all and is not set to skip, proceed directly to Step 3.
Step 1: Inspect the failure artifacts
Navigate to the Summary page for the continuous integration run that contains the failing test(s).
Click on Cypress Tests Summary and open the Mochawesome Report for the run.
For the daily accessibility scan, a link to the report can be found in the #-daily-accessibility-scan Slack channel.
Find the failing test in the report and inspect the video.
Alternatively, you can download the video and screenshots directly from the Artifacts section of the Summary page.
Note: Videos are not generated for failures in the daily accessibility scan.
If you cannot determine the root cause of the failure from the Mochawesome Report or from the artifacts, the next step is to recreate the failure locally.
Step 2: Recreate the failure locally and identify the root cause
Start vets-website or content-build as described in their respective documentation.
Start Cypress in headed mode and search for the failing test.
Run the test and see if it fails.
If the test does not fail on the first try, it is likely flaky. To identify the cause of the flakiness, run the test in a loop as described in this document, which also includes tips on how to manage flakiness.
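The loop-based stress run can be sketched as follows. This is a minimal sketch: runSpec below is a hypothetical stand-in so the example is self-contained; in a real run you would instead use Cypress's Module API, calling require('cypress').run({ spec: '<path-to-spec>' }) on each iteration and inspecting the totalFailed field of the result.

```javascript
// Hypothetical stand-in for a single Cypress run; the real call is
// require('cypress').run({ spec: '<path-to-spec>' }), which resolves to a
// results object that includes a numeric totalFailed field.
async function runSpec() {
  return { totalFailed: 0 }; // stand-in result shaped like cypress.run() output
}

// Run the spec repeatedly and count how many iterations had failures.
async function stressTest(runs) {
  let failures = 0;
  for (let i = 0; i < runs; i += 1) {
    const result = await runSpec();
    if (result.totalFailed > 0) failures += 1;
  }
  return failures;
}

stressTest(10).then(failures => {
  console.log(`failing runs out of 10: ${failures}`);
});
```

A test that fails on some iterations but not others under this loop is flaky rather than deterministically broken, which changes the fix you should pursue.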
If the vets-website test does not fail in a loop, the failure may be present only in the production build.
To build the production build locally, run yarn build --buildtype=vagovprod. Alternatively, you can download the production build from the CI run artifacts (vagovprod.tar.bz2) and extract it into your local build output directory. After building, start the production build with node src/platform/testing/e2e/test-server.js --buildtype=vagovprod --port=3001.
Start Cypress and run the failing test against the production build.
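Running the spec against the served production build can also be driven programmatically with Cypress's Module API. This is a sketch under stated assumptions: the spec path is hypothetical, and the stand-in in the catch branch exists only so the sketch runs where Cypress is not installed; the baseUrl matches the port used by test-server.js above.

```javascript
// Drive one spec against the local production build via the Cypress Module API.
let cypress;
try {
  cypress = require('cypress');
} catch (err) {
  // Stand-in so this sketch is runnable even where Cypress is not installed.
  cypress = {
    run: async () => ({ totalFailed: 0 }),
  };
}

cypress.run({
  spec: 'src/applications/my-app/tests/my-test.cypress.spec.js', // hypothetical path
  config: { baseUrl: 'http://localhost:3001' }, // port served by test-server.js
}).then(results => {
  console.log(`failed tests: ${results.totalFailed}`);
});
```

If the test fails here but not against the dev build, the root cause is something production-build-specific, such as minification or a feature flag difference.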
Proceed to Step 3.
Step 3: Implement a fix
Based on what you’ve learned, select the appropriate course of action below:
Check here to see whether the test has been disallowed by the E2E Stress Test. If it has, the test is flaky and should be run through the flakiness troubleshooting process here. Once you have made the appropriate changes, open a PR to main with the test’s changes; the test will automatically be stress tested again and, if it passes, removed from the disallow list. You can read more about this process here.
Try to recreate the failure on a review instance or another deployed environment, such as dev or staging. Determine the impact and root cause of the issue, and prioritize fixing it accordingly.
Note: Failures in the daily accessibility scan can be inspected in deployed environments with the axe DevTools browser extension.
Skip the test (for example, with it.skip()) so that it does not block CI, and prioritize fixing it. Typically, a race condition is present that can be addressed by adding assertions that verify elements are present (or absent) before interacting with them.
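In a Cypress spec, the usual fix is a retrying assertion ahead of the interaction, e.g. cy.get('[data-testid="continue"]').should('be.visible').click(). The minimal sketch below (all names hypothetical, not Cypress itself) shows why the guard removes the race: it polls until the element exists before acting, instead of acting immediately after page load.

```javascript
// Poll until a condition holds before interacting, or fail after a timeout.
// This mirrors what Cypress's retrying .should() assertions do internally.
async function waitFor(predicate, timeoutMs = 500, intervalMs = 25) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (predicate()) return;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('element never appeared before timeout');
}

// Simulated page: the button "renders" 100 ms after load, so interacting
// immediately would race against the render and fail intermittently.
let buttonRendered = false;
setTimeout(() => { buttonRendered = true; }, 100);

waitFor(() => buttonRendered).then(() => {
  console.log('clicked after element appeared');
});
```

Without the guard, the test passes or fails depending on whether the render wins the race on a given run, which is exactly the intermittent behavior that shows up as flakiness in CI.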
Check whether the issue is present in the staging environment.
Occasionally, an issue with git or GitHub itself could be the culprit. As an experiment, push an empty commit (git commit --allow-empty -m "retrigger CI") to the branch to see if it resolves the failure.
Fixes may require coordination among multiple teams. If the cause still cannot be determined, create a support request with the QA Standards Team, including a link to the failing build and any relevant information.
Ensure that your branch is up to date with main by merging or rebasing main into your branch.