At its simplest, a test in Rust is a function that’s annotated with the test attribute.

pub fn add(left: usize, right: usize) -> usize {
    left + right
}
 
#[cfg(test)]
mod tests {
    use super::*;
 
    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}

The assert! macro, provided by the standard library, is useful when you want to ensure that some condition in a test evaluates to true. If the value is false, the assert! macro calls panic! to cause the test to fail.
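
The next test assumes a Rectangle type with a can_hold method; its definition isn't shown in this excerpt, but a minimal sketch that makes the test compile could look like this:

struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    // Returns true if self is large enough in both dimensions to
    // fully contain other.
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width > other.width && self.height > other.height
    }
}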

#[cfg(test)]
mod tests {
    use super::*;
 
    #[test]
    fn larger_can_hold_smaller() {
        let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };
 
        assert!(larger.can_hold(&smaller));
    }
}

The assert_eq! and assert_ne! macros test for equality and inequality, respectively, between the result of the code under test and the value you expect it to produce. They’ll also print the two values if the assertion fails, which makes it easier to see why the test failed; conversely, the assert! macro only indicates that it got a false value for an == expression, without printing the values that led to it.

pub fn add_two(a: i32) -> i32 {
    a + 2
}
 
#[cfg(test)]
mod tests {
    use super::*;
 
    #[test]
    fn it_adds_two() {
        assert_eq!(4, add_two(2));
    }
}

The assert_ne! macro will pass if the two values we give it are not equal and fail if they’re equal. This macro is most useful for cases when we’re not sure what a value will be, but we know what the value definitely shouldn’t be. For example, if we’re testing a function that is guaranteed to change its input in some way, but the way in which the input is changed depends on the day of the week that we run our tests, the best thing to assert might be that the output of the function is not equal to the input.
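
As a contrived sketch using the add_two function above, we can assert that the output never equals the input without pinning down the exact result:

#[test]
fn output_is_not_input() {
    let input = 3;
    // We don't check the exact result, only that add_two changed it.
    assert_ne!(add_two(input), input);
}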

Under the surface, the assert_eq! and assert_ne! macros use the operators == and !=, respectively. When the assertions fail, these macros print their arguments using debug formatting, which means the values being compared must implement the PartialEq and Debug traits. All primitive types and most of the standard library types implement these traits. For structs and enums that you define yourself, you’ll need to implement PartialEq to assert equality of those types. You’ll also need to implement Debug to print the values when the assertion fails. Because both traits are derivable traits, as mentioned in Listing 5-12 in Chapter 5, this is usually as straightforward as adding the #[derive(PartialEq, Debug)] annotation to your struct or enum definition.
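
For example, deriving both traits on a hypothetical Point struct is enough for assert_eq! to compare the values and print them when the assertion fails:

#[derive(PartialEq, Debug)]
struct Point {
    x: i32,
    y: i32,
}

#[test]
fn points_match() {
    // Derived PartialEq allows ==; derived Debug prints both values on failure.
    assert_eq!(Point { x: 1, y: 2 }, Point { x: 1, y: 2 });
}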

Custom failure message

You can also add a custom message to be printed with the failure message as optional arguments to the assert!, assert_eq!, and assert_ne! macros.
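
The test below assumes a greeting function; judging by the failure output that follows, the version under test ignores its argument, such as this sketch:

pub fn greeting(_name: &str) -> String {
    // Hypothetical buggy version, consistent with the `Hello!` in the
    // failure output below: it never uses its argument.
    String::from("Hello!")
}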

#[test]
fn greeting_contains_name() {
    let result = greeting("Carol");
    assert!(
        result.contains("Carol"),
        "Greeting did not contain name, value was `{}`",
        result
    );
}

$ cargo test
   Compiling greeter v0.1.0 (file:///projects/greeter)
    Finished test [unoptimized + debuginfo] target(s) in 0.93s
     Running unittests src/lib.rs (target/debug/deps/greeter-170b942eb5bf5e3a)
 
running 1 test
test tests::greeting_contains_name ... FAILED
 
failures:
 
---- tests::greeting_contains_name stdout ----
thread 'tests::greeting_contains_name' panicked at src/lib.rs:12:9:
Greeting did not contain name, value was `Hello!`
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
 
 
failures:
    tests::greeting_contains_name
 
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
 
error: test failed, to rerun pass `--lib`

Checking for panics with should_panic
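
The tests below assume a Guess type whose constructor panics for out-of-range values; the real definition isn't shown in this excerpt, but a minimal sketch consistent with the expected message in the second test could look like this:

pub struct Guess {
    value: i32,
}

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            // Hypothetical messages; the second test below matches on the
            // "less than or equal to 100" text.
            panic!("Guess value must be greater than or equal to 1, got {value}.");
        } else if value > 100 {
            panic!("Guess value must be less than or equal to 100, got {value}.");
        }
        Guess { value }
    }
}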

#[test]
#[should_panic]
fn greater_than_100() {
    Guess::new(200);
}

Tests that use should_panic can be imprecise. A should_panic test would pass even if the test panics for a different reason from the one we were expecting. To make should_panic tests more precise, we can add an optional expected parameter to the should_panic attribute. The test harness will make sure that the failure message contains the provided text.

#[test]
#[should_panic(expected = "less than or equal to 100")]
fn greater_than_100() {
    Guess::new(200);
}

Using Result<T, E> in Tests

We can also write tests that return a Result<T, E> instead of panicking: the test passes when it returns Ok(()) and fails when it returns an Err.

#[test]
fn it_works() -> Result<(), String> {
    if 2 + 2 == 4 {
        Ok(())
    } else {
        Err(String::from("two plus two does not equal four"))
    }
}

You can’t use the #[should_panic] annotation on tests that use Result<T, E>. To assert that an operation returns an Err variant, don’t use the question mark operator on the Result<T, E> value. Instead, use assert!(value.is_err()).
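
Returning Result<T, E> also lets you use the question mark operator in the test body, which is a convenient way to fail the test if any step inside it returns an Err variant. A minimal sketch using the standard library's parse:

#[test]
fn parsing_works() -> Result<(), std::num::ParseIntError> {
    // `?` returns early with the Err, which the harness reports as a failure.
    let n: i32 = "10".parse()?;
    assert_eq!(n, 10);
    Ok(())
}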

Controlling how tests are run

The default behavior of the binary produced by cargo test is to run all the tests in parallel and capture output generated during test runs, preventing the output from being displayed and making it easier to read the output related to the test results. You can specify command line options to change this default behavior.

Some command line options go to cargo test, and some go to the resulting test binary. To separate these two types of arguments, you list the arguments that go to cargo test followed by the separator -- and then the ones that go to the test binary. Running cargo test --help displays the options you can use with cargo test, and running cargo test -- --help displays the options you can use after the separator.

Running Tests in Parallel or Consecutively

When you run multiple tests, by default they run in parallel using threads, meaning they finish running faster and you get feedback quicker. Because the tests are running at the same time, you must make sure your tests don’t depend on each other or on any shared state, including a shared environment, such as the current working directory or environment variables.
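
For example, two hypothetical tests that share a scratch file can interfere with each other when run in parallel:

use std::fs;

#[test]
fn writes_a() {
    // Both tests use the same path; run in parallel, one test's write can
    // land between the other's write and read, so either assertion may
    // fail nondeterministically.
    fs::write("scratch.txt", "a").unwrap();
    assert_eq!(fs::read_to_string("scratch.txt").unwrap(), "a");
}

#[test]
fn writes_b() {
    fs::write("scratch.txt", "b").unwrap();
    assert_eq!(fs::read_to_string("scratch.txt").unwrap(), "b");
}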

If you don’t want to run the tests in parallel or if you want more fine-grained control over the number of threads used, you can send the --test-threads flag and the number of threads you want to use to the test binary. Take a look at the following example:

$ cargo test -- --test-threads=1

Showing Function Output

By default, if a test passes, Rust’s test library captures anything printed to standard output. For example, if we call println! in a test and the test passes, we won’t see the println! output in the terminal; we’ll see only the line that indicates the test passed. If a test fails, we’ll see whatever was printed to standard output with the rest of the failure message.

If we want to see printed values for passing tests as well, we can tell Rust to also show the output of successful tests with --show-output.

$ cargo test -- --show-output
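
For example, with a function that prints its input (a sketch), a passing test's println! output appears only when --show-output is passed:

fn prints_and_returns_10(a: i32) -> i32 {
    println!("I got the value {a}");
    10
}

#[test]
fn this_test_will_pass() {
    // The println! output above is captured on success and shown only
    // with `cargo test -- --show-output`.
    let value = prints_and_returns_10(4);
    assert_eq!(value, 10);
}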

Running a Subset of Tests by Name

You can choose which tests to run by passing cargo test the name of the test you want to run as an argument. Only the first value given to cargo test is used as the filter, so you can’t list multiple names; however, any test whose name contains the filter string will run, which lets one argument match several tests.

pub fn add_two(a: i32) -> i32 {
    a + 2
}
 
#[cfg(test)]
mod tests {
    use super::*;
 
    #[test]
    fn add_two_and_two() {
        assert_eq!(4, add_two(2));
    }
 
    #[test]
    fn add_three_and_two() {
        assert_eq!(5, add_two(3));
    }
 
    #[test]
    fn one_hundred() {
        assert_eq!(102, add_two(100));
    }
}

Run a single test:

$ cargo test one_hundred
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.69s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)
 
running 1 test
test tests::one_hundred ... ok
 
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s

Filter to run multiple tests:

$ cargo test add
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)
 
running 2 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok
 
test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s

Ignore some tests unless specifically requested

Sometimes a few specific tests can be very time-consuming to execute, so you might want to exclude them during most runs of cargo test. Rather than listing as arguments all tests you do want to run, you can instead annotate the time-consuming tests using the ignore attribute to exclude them, as shown:

#[test]
fn it_works() {
    assert_eq!(2 + 2, 4);
}
 
#[test]
#[ignore]
fn expensive_test() {
    // code that takes an hour to run
}

If you want to run only the ignored tests, you can use:

cargo test -- --ignored

If you want to run all tests whether they’re ignored or not, you can run:

cargo test -- --include-ignored

Test organization

The Rust community thinks about tests in terms of two main categories: unit tests and integration tests. Unit tests are small and more focused, testing one module in isolation at a time, and can test private interfaces. Integration tests are entirely external to your library and use your code in the same way any other external code would, using only the public interface and potentially exercising multiple modules per test.

Unit tests

The purpose of unit tests is to test each unit of code in isolation from the rest of the code to quickly pinpoint where code is and isn’t working as expected. You’ll put unit tests in the src directory in each file with the code that they’re testing. The convention is to create a module named tests in each file to contain the test functions and to annotate the module with cfg(test).

The attribute cfg stands for configuration and tells Rust that the following item should only be included given a certain configuration option. In this case, the configuration option is test, which is provided by Rust for compiling and running tests. By using the cfg attribute, Cargo compiles our test code only if we actively run the tests with cargo test. This includes any helper functions that might be within this module, in addition to the functions annotated with #[test].

The #[cfg(test)] annotation on the tests module tells Rust to compile and run the test code only when you run cargo test, not when you run cargo build. This saves compile time when you only want to build the library and saves space in the resulting compiled artifact because the tests are not included. You’ll see that because integration tests go in a different directory, they don’t need the #[cfg(test)] annotation. However, because unit tests go in the same files as the code, you’ll use #[cfg(test)] to specify that they shouldn’t be included in the compiled result.

Testing private functions

There’s debate within the testing community about whether or not private functions should be tested directly, and other languages make it difficult or impossible to test private functions. Regardless of which testing ideology you adhere to, Rust’s privacy rules do allow you to test private functions.

pub fn add_two(a: i32) -> i32 {
    internal_adder(a, 2)
}
 
fn internal_adder(a: i32, b: i32) -> i32 {
    a + b
}
 
#[cfg(test)]
mod tests {
    use super::*;
 
    #[test]
    fn internal() {
        assert_eq!(4, internal_adder(2, 2));
    }
}

Note that the internal_adder function is not marked as pub. Tests are just Rust code, and the tests module is just another module. Items in child modules can use the items in their ancestor modules. In this test, we bring all of the tests module’s parent’s items into scope with use super::*, and then the test can call internal_adder. If you don’t think private functions should be tested, there’s nothing in Rust that will compel you to do so.

Integration tests

In Rust, integration tests are entirely external to your library. They use your library in the same way any other code would, which means they can only call functions that are part of your library’s public API. Their purpose is to test whether many parts of your library work together correctly. Units of code that work correctly on their own could have problems when integrated, so test coverage of the integrated code is important as well. To create integration tests, you first need a tests directory.

We create a tests directory at the top level of our project directory, next to src. Cargo knows to look for integration test files in this directory. We can then make as many test files as we want, and Cargo will compile each of the files as an individual crate.

adder
β”œβ”€β”€ Cargo.lock
β”œβ”€β”€ Cargo.toml
β”œβ”€β”€ src
β”‚Β Β  └── lib.rs
└── tests
    └── integration_test.rs

In tests/integration_test.rs:

use adder::add_two;
 
#[test]
fn it_adds_two() {
    assert_eq!(4, add_two(2));
}

Each file in the tests directory is a separate crate, so we need to bring our library into each test crate’s scope. For that reason we add use adder::add_two at the top of the code, which we didn’t need in the unit tests.

We don’t need to annotate any code in tests/integration_test.rs with #[cfg(test)]. Cargo treats the tests directory specially and compiles files in this directory only when we run cargo test.

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished test [unoptimized + debuginfo] target(s) in 1.31s
     Running unittests src/lib.rs (target/debug/deps/adder-1082c4b063a8fbe6)
 
running 1 test
test tests::internal ... ok
 
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
 
     Running tests/integration_test.rs (target/debug/deps/integration_test-1082c4b063a8fbe6)
 
running 1 test
test it_adds_two ... ok
 
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
 
   Doc-tests adder
 
running 0 tests
 
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

The three sections of output include the unit tests, the integration test, and the doc tests. Note that if any test in a section fails, the following sections will not be run. For example, if a unit test fails, there won’t be any output for integration and doc tests because those tests will only be run if all unit tests are passing.

Each integration test file has its own section, so if we add more files in the tests directory, there will be more integration test sections.

We can still run a particular integration test function by specifying the test function’s name as an argument to cargo test. To run all the tests in a particular integration test file, use the --test argument of cargo test followed by the name of the file:

cargo test --test integration_test

This command runs only the tests in the tests/integration_test.rs file.

Submodules in integration tests

As you add more integration tests, you might want to make more files in the tests directory to help organize them; for example, you can group the test functions by the functionality they’re testing. As mentioned earlier, each file in the tests directory is compiled as its own separate crate, which is useful for creating separate scopes to more closely imitate the way end users will be using your crate. However, this means files in the tests directory don’t share the same behavior as files in src do regarding how to separate code into modules and files.

The different behavior of tests directory files is most noticeable when you have a set of helper functions to use in multiple integration test files.

For example, if we create tests/common.rs and place a function named setup in it, we can add some code to setup that we want to call from multiple test functions in multiple test files:

pub fn setup() {
    // setup code specific to your library's tests would go here
}

To avoid having common appear in the test output, instead of creating tests/common.rs, we’ll create tests/common/mod.rs. The project directory now looks like this:

β”œβ”€β”€ Cargo.lock
β”œβ”€β”€ Cargo.toml
β”œβ”€β”€ src
β”‚Β Β  └── lib.rs
└── tests
    β”œβ”€β”€ common
    β”‚Β Β  └── mod.rs
    └── integration_test.rs

Naming the file this way tells Rust not to treat the common module as an integration test file. When we move the setup function code into tests/common/mod.rs and delete the tests/common.rs file, the section in the test output will no longer appear. Files in subdirectories of the tests directory don’t get compiled as separate crates or have sections in the test output.

After we’ve created tests/common/mod.rs, we can use it from any of the integration test files as a module.

use adder;
 
mod common;
 
#[test]
fn it_adds_two() {
    common::setup();
    assert_eq!(4, adder::add_two(2));
}

Integration tests for binary crates

If our project is a binary crate that only contains a src/main.rs file and doesn’t have a src/lib.rs file, we can’t create integration tests in the tests directory and bring functions defined in the src/main.rs file into scope with a use statement. Only library crates expose functions that other crates can use; binary crates are meant to be run on their own.

This is one of the reasons Rust projects that provide a binary have a straightforward src/main.rs file that calls logic that lives in the src/lib.rs file. Using that structure, integration tests can test the library crate with use to make the important functionality available. If the important functionality works, the small amount of code in the src/main.rs file will work as well, and that small amount of code doesn’t need to be tested.
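
A sketch of that layout for the adder project, assuming a hypothetical run function that holds the program’s logic:

// src/lib.rs
pub fn run() {
    // All of the real logic lives here, where integration tests
    // can reach it with `use adder::run`.
}

// src/main.rs
fn main() {
    // The binary stays thin: it only delegates to the library.
    adder::run();
}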