Writing and running tests in Perl 6

Testing code is an integral part of software development. Tests provide automated, repeatable verification of code behaviour and ensure your code works as expected.

In Perl 6, the Test module provides a testing framework similar to the Perl 5 Test::More module. Therefore, anyone familiar with Test::More (and related modules) should be comfortable with Perl 6's Test module.

Perl 6's official spectest suite uses Test.

The testing functions emit output conforming to the Test Anything Protocol.


Writing tests

As with any Perl project, the tests live under the t directory in the project's base directory.

A typical test file looks something like this:

    use v6.c;
    use Test;      # a Standard module included with Rakudo 
    use lib 'lib';
    plan $num-tests;
    # .... tests 
    done-testing;  # optional with 'plan' 

We ensure that we're using Perl 6, via the use v6.c pragma, then we load the Test module and specify where our libraries are. We then specify how many tests we plan to run (such that the testing framework can tell us if more or fewer tests were run than we expected) and when finished with the tests, we use done-testing to tell the framework we are done.

Thread Safety

Note that routines in the Test module are not thread-safe. This means you should not attempt to use the testing routines in multiple threads simultaneously, as the TAP output might come out of order and confuse the program interpreting it.

There are no current plans to make it thread-safe. If threaded testing is crucial to you, you may find suitable ecosystem modules to use instead of Test for your testing needs.

Running tests

Tests can be run individually by specifying the test filename on the command line:

    $ perl6 t/test-filename.t

Or via the prove command from Perl 5, where perl6 is specified as the executable that runs the tests:

    $ prove --exec perl6 -r t

To abort the test suite upon first failure, set the PERL6_TEST_DIE_ON_FAIL environment variable:

    $ PERL6_TEST_DIE_ON_FAIL=1 perl6 t/test-filename.t

The same variable can be used within the test file. Set it before loading the Test module:

BEGIN %*ENV<PERL6_TEST_DIE_ON_FAIL> = 1;
use Test;

Test plans

The plan function specifies the count of tests; it is usually written at the beginning of a test file.

plan 15;   # expect to run 15 tests 

In subtests, plan is used to specify the count of tests within the subtest.
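For instance, a minimal sketch (the test names here are my own) where the outer plan counts the whole subtest as a single test:

```perl6
use Test;
plan 1;                      # the whole subtest counts as one test here
subtest 'arithmetic', {
    plan 2;                  # this plan counts only the tests inside
    is 1 + 1, 2, 'addition';
    is 2 * 3, 6, 'multiplication';
};
```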

If a plan is used, it's not necessary to specify the end of testing with done-testing.

You can also provide a :skip-all named argument instead of a test count, to indicate that you want to skip all of the tests. Such a plan will call exit, unless used inside of a subtest.

plan :skip-all<These tests are only for Windows> unless $*DISTRO.is-win;
plan 1;
ok dir 'C:/'; # this won't get run on non-Windows 

If used in a subtest, it will instead return from that subtest's Callable. For that reason, to be able to use :skip-all inside a subtest, you must use a sub instead of a regular block:

plan 2;
subtest "Some Windows tests" => sub { # <-- note the `sub`; can't use bare block 
    plan :skip-all<We aren't on Windows> unless $*DISTRO.is-win;
    plan 1;
    ok dir 'C:/'; # this won't get run on non-Windows 
}
ok 42; # this will run everywhere and isn't affected by skip-all inside subtest 

Note that plan with :skip-all is to avoid performing any tests without marking the test run as failed (i.e. the plan is to not run anything and that's all good). Use skip-rest to skip all further tests, once the run has started (i.e. planned to run some tests, maybe even ran some, but now we're skipping all the rest of them). Use bail-out to fail the test run without running any further tests (i.e. things are so bad, there's no point in running anything else; we've failed).

The done-testing routine specifies that testing has finished. Use it when you don't have a plan with the number of tests to run: a plan is not required when using done-testing.

It's recommended that the done-testing function be removed and replaced with a plan function when all tests are finalized. Use of plan can help detect test failures otherwise not reported because tests were accidentally skipped due to bugs in the tests or bugs in the compiler. For example:

sub do-stuff {@};
use Test;
ok .is-prime for do-stuff;
done-testing;
# output: 
# 1..0 

The above example shows where done-testing fails us. do-stuff() returned nothing and tested nothing, even though it should've returned results to test. But the test suite doesn't know how many tests were meant to be run, so it passes.

Adding plan gives a true picture of the test:

sub do-stuff {@};
use Test;
plan 1;
ok .is-prime for do-stuff;
# output: 
# 1..1 
# Looks like you planned 1 test, but ran 0 

Note that leaving the done-testing in place will have no effect on the new test results, but it should be removed for clarity.

Testing return values

The Test module exports various functions that check the return value of a given expression and produce standardized test output.

In practice, the expression will often be a call to a function or method that you want to unit-test.

By Bool value

The ok function marks a test as passed if the given $value evaluates to True. The nok function marks a test as passed if the given value evaluates to False. Both functions accept an optional $description of the test.

my $response; my $query; ...;
ok  $response.success, 'HTTP response was successful';
nok $query.error,      'Query completed without error';

In principle, you could use ok for every kind of comparison test, by including the comparison in the expression passed to $value:

sub factorial($x) { ... };
ok factorial(6) == 720, 'Factorial - small integer';

However, where possible it's better to use one of the specialized comparison test functions below, because they can print more helpful diagnostics output in case the comparison fails.

By string comparison

The is function marks a test as passed if $value and $expected compare positively with the eq operator, unless $expected is a type object, in which case the === operator will be used instead; it accepts an optional $description of the test.

NOTE: the eq operator that is() uses stringifies its operands, which means is() is not a good function for testing more complex things, such as lists: is (1, (2, (3,))), [1, 2, 3] passes the test even though the operands are vastly different. For those cases, use the is-deeply routine.

my $pdf-document; sub factorial($x) { ... }; ...;
is $pdf-document.author, "Joe", 'Retrieving the author field';
is factorial(6),         720,   'Factorial - small integer';
my Int $a;
is $a, Int, 'The variable $a is an unassigned Int';

Note: if only whitespace differs between the values, is() will output the failure message differently, to show the whitespace in each value. For example, in the output below, the second test shows the literal \t in the got: line:

is "foo\tbar", "foo\tbaz";   # expected: 'foo     baz'␤#      got: 'foo   bar' 
is "foo\tbar", "foo    bar"; # expected: "foo    bar"␤#      got: "foo\tbar" 

The isnt function marks a test as passed if $value and $expected are not equal, using the same rules as is(). It accepts an optional $description of the test.

isnt pi, 3, 'The constant π is not equal to 3';
my Int $a = 23;
$a = Nil;
isnt $a, Nil, 'Nil should not survive being put in a container';

By approximate numeric comparison

The is-approx function marks a test as passed if the $value and $expected numerical values are approximately equal to each other. The subroutine can be called in numerous ways that let you test using relative tolerance ($rel-tol) or absolute tolerance ($abs-tol) of different values.

If no tolerance is set, the function will base the tolerance on the absolute value of $expected: if it's smaller than 1e-6, use absolute tolerance of 1e-5; if it's larger, use relative tolerance of 1e-6.
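A sketch of those defaults (the values and test descriptions are my own):

```perl6
use Test;
plan 2;
# $expected is 2.0, larger than 1e-6, so a relative tolerance of 1e-6
# applies; a relative difference of about 5e-8 is within it
is-approx 2.0 + 1e-7, 2.0, 'default relative tolerance';
# $expected is 1e-7, smaller than 1e-6, so an absolute tolerance of 1e-5
# applies; an absolute difference of 1e-7 is within it
is-approx 2e-7, 1e-7, 'default absolute tolerance';
```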

my Numeric ($value, $expected, $abs-tol, $rel-tol) = ...
is-approx $value, $expected;
is-approx $value, $expected, 'test description';
is-approx $value, $expected, $abs-tol;
is-approx $value, $expected, $abs-tol, 'test description';
is-approx $value, $expected, :$rel-tol;
is-approx $value, $expected, :$rel-tol, 'test description';
is-approx $value, $expected, :$abs-tol;
is-approx $value, $expected, :$abs-tol, 'test description';
is-approx $value, $expected, :$abs-tol, :$rel-tol;
is-approx $value, $expected, :$abs-tol, :$rel-tol, 'test description';

Absolute Tolerance

When an absolute tolerance is set, it's used as the actual maximum value by which the $value and $expected can differ. For example:

is-approx 3, 4, 2;     # success 
is-approx 3, 6, 2;     # fail 
is-approx 300, 302, 2; # success 
is-approx 300, 400, 2; # fail 
is-approx 300, 600, 2; # fail 

Regardless of values given, the difference between them cannot be more than 2.

Relative Tolerance

When a relative tolerance is set, the test checks the relative difference between the values. Given the same tolerance, the larger the numbers, the larger the difference they are allowed to have.

For example:

is-approx 10, 10.5, :rel-tol<0.1>; # success 
is-approx 10, 11.5, :rel-tol<0.1>; # fail 
is-approx 100, 105, :rel-tol<0.1>; # success 
is-approx 100, 115, :rel-tol<0.1>; # fail 

Both versions use 0.1 for relative tolerance, yet the first can differ by about 1 while the second can differ by about 10. The function used to calculate the difference is:

              |value - expected|
rel-diff = ──────────────────────────
           max(|value|, |expected|)

and the test will fail if rel-diff is higher than $rel-tol.

Both Absolute and Relative Tolerance Specified

    is-approx $value, $expected, :rel-tol<.5>, :abs-tol<10>;

When both absolute and relative tolerances are specified, each will be tested independently, and the is-approx test will succeed only if both pass.
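For instance, the following sketch (the numbers are my own) satisfies both tolerances at once:

```perl6
use Test;
plan 1;
# |95 - 100| = 5, within the absolute tolerance of 10, and the relative
# difference 5/100 = 0.05 is within the relative tolerance of .5
is-approx 95, 100, :abs-tol<10>, :rel-tol<.5>, 'within both tolerances';
```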

By structural comparison

The is-deeply function marks a test as passed if $value and $expected are equivalent, using the same semantics as the eqv operator. This is the best way to check for equality of (deep) data structures. The function accepts an optional $description of the test.

use v6.c;
use Test;
plan 1;
sub string-info(Str() $_) {
    Map.new: (
      length  =>  .chars,
      char-counts => Bag.new-from-pairs: (
          letters => +.comb(/<:letter>/),
          digits  => +.comb(/<:digit>/),
          other   => +.comb(/<-:letter-:digit>/),
      )
    )
}
is-deeply string-info('42 Butterflies ♥ Perl'), Map.new((
    length => 21,
    char-counts => Bag.new-from-pairs: ( :15letters, :2digits, :4other, )
)), 'string-info gives right info';

Note: for historical reasons, Seq:D arguments to is-deeply get converted to Lists. If you want to ensure strict Seq comparisons, use cmp-ok $got, 'eqv', $expected, $desc instead.

By arbitrary comparison

The cmp-ok function compares $value and $expected with the given $comparison comparator and passes the test if the comparison yields a True value. The $description of the test is optional.

The $comparison comparator can be either a Callable or a Str containing an infix operator, such as '==' or '~~', or a user-defined infix.

cmp-ok 'my spelling is apperling', '~~', /perl/, "bad speller";

A Callable $comparison lets you use custom comparisons:

sub my-comp { $^a / $^b < rand };
cmp-ok 1, &my-comp, 2, 'the dice giveth and the dice taketh away';
cmp-ok 2, -> $a, $b { $a.is-prime and $b.is-prime and $a < $b }, 7,
    'we got primes, one larger than the other!';

By object type

The isa-ok function marks a test as passed if the given object $value is, or inherits from, the given $expected-type. For convenience, types may also be specified as a string. The function accepts an optional $description of the test.

class Womble {}
class GreatUncleBulgaria is Womble {}
my $womble = GreatUncleBulgaria.new;
isa-ok $womble, Womble, "Great Uncle Bulgaria is a womble";
isa-ok $womble, 'Womble';    # equivalent 

By method name

The can-ok function marks a test as passed if the given $variable can run the given $method-name. It accepts an optional $description. For instance:

class Womble {};
my $womble = Womble.new;
# with automatically generated test description 
can-ok $womble, 'collect-rubbish';
#  => An object of type 'Womble' can do the method 'collect-rubbish' 
# with human-generated test description 
can-ok $womble, 'collect-rubbish', "Wombles can collect rubbish";
#  => Wombles can collect rubbish 

By role

The does-ok function marks a test as passed if the given $variable can do the given $role. It accepts an optional $description of the test.

# create a Womble who can invent 
role Invent {
    method brainstorm { say "Aha!" }
}
class Womble {}
class Tobermory is Womble does Invent {}
# ... and later in the tests 
use Test;
my $tobermory = Tobermory.new;
# with automatically generated test description 
does-ok $tobermory, Invent;
#  => The object does role Type 
does-ok $tobermory, Invent, "Tobermory can invent";
#  => Tobermory can invent 

By regex

like 'foo', /fo/, 'foo looks like fo';

The like function marks a test as passed if the $value, when coerced to a string, matches the $expected-regex. It accepts an optional $description of the test.

unlike 'foo', /bar/, 'foo does not look like bar';

The unlike function marks a test as passed if the $value, when coerced to a string, does not match the $expected-regex. It accepts an optional $description of the test.

Testing modules

The use-ok function marks a test as passed if the given $module loads correctly.

use-ok 'Full::Qualified::ModuleName';

Testing exceptions

The dies-ok function marks a test as passed if the given $code throws an exception.

The function accepts an optional $description of the test.

sub saruman(Bool :$ents-destroy-isengard) {
    die "Killed by Wormtongue" if $ents-destroy-isengard;
}
dies-ok { saruman(ents-destroy-isengard => True) }, "Saruman dies";

The lives-ok function marks a test as passed if the given $code does not throw an exception.

The function accepts an optional $description of the test.

sub frodo(Bool :$destroys-ring) {
    die "Oops, that wasn't supposed to happen" unless $destroys-ring;
}
lives-ok { frodo(destroys-ring => True) }, "Frodo survives";

The eval-dies-ok function marks a test as passed if the given $string throws an exception when EVALed as code.

The function accepts an optional $description of the test.

eval-dies-ok q[my $joffrey = "nasty";
               die "bye bye Ned" if $joffrey ~~ /nasty/],
    "Ned Stark dies";

The eval-lives-ok function marks a test as passed if the given $string does not throw an exception when EVALed as code.

The function accepts an optional $description of the test.

eval-lives-ok q[my $daenerys-burns = False;
                die "Oops, Khaleesi now ashes" if $daenerys-burns],
    "Dany is blood of the dragon";

The throws-like function marks a test as passed if the given $code throws the specific exception $expected-exception. The code $code may be specified as something Callable or as a string to be EVALed. The exception may be specified as a type object or as a string containing its type name.

If an exception was thrown, it will also try to match the matcher hash, where the key is the name of the method to be called on the exception, and the value is the value it should have to pass. For example:

sub frodo(Bool :$destroys-ring) { fail "Oops. Frodo dies" unless $destroys-ring };
throws-like { frodo }, Exception, message => /dies/;

The function accepts an optional $description of the test.

Please note that you can only use the string form (for EVAL) if you are not referencing any symbols in the surrounding scope. If you are, you should encapsulate your string with a block and an EVAL instead. For instance:

throws-like { EVAL q[ fac("foo") ] }, X::TypeCheck::Argument;

Grouping tests

The result of a group of subtests is only ok if all subtests are ok.

The subtest function executes the given block, usually consisting of more than one test and possibly including a plan or done-testing, and counts as one test in plan, todo, or skip counts. It will pass only if all tests in the block pass. The function accepts an optional $description of the test.

class Womble {}
class GreatUncleBulgaria is Womble {
    has $.location = "Wimbledon Common";
    has $.spectacles = True;
}
subtest {
    my $womble = GreatUncleBulgaria.new;
    isa-ok $womble,            Womble,             "Correct type";
    is     $womble.location,   "Wimbledon Common", "Correct location";
    ok     $womble.spectacles,                     "Correct eyewear";
}, "Check Great Uncle Bulgaria";

You can also place the description as the first positional argument, or use a Pair with description as the key and subtest's code as the value. This can be useful for subtests with large bodies.

subtest 'A bunch of tests', {
    plan 42;
    # ... 
}
subtest 'Another bunch of tests' => {
    plan 72;
    # ... 
}

Skipping tests

Sometimes tests just aren't ready to be run, for instance a feature might not yet be implemented, in which case tests can be marked as todo. Or it could be the case that a given feature only works on a particular platform - in which case one would skip the test on other platforms.

The todo function marks $count tests as TODO, giving a $reason why. By default only one test will be marked TODO.

sub my-custom-pi { 3 };
todo 'not yet precise enough';          # Mark the test as TODO. 
is my-custom-pi(), pi, 'my-custom-pi';  # Run the test, but don't report 
                                        # failure in test harness. 

The result from the test code above will be something like:

    not ok 1 - my-custom-pi # TODO not yet precise enough 
    # Failed test 'my-custom-pi' 
    # at test-todo.t line 7 
    # expected: '3.14159265358979' 
    #      got: '3' 

Note that if you todo a subtest, all of the failing tests inside of it will be automatically marked TODO as well and will not count towards your original TODO count.
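A small sketch of that behaviour (the names are my own): the failing tests inside the todo'd subtest are reported as TODO, and the subtest itself counts as the single TODO test:

```perl6
use Test;
plan 1;
todo 'feature still in development';
subtest 'future feature', {
    plan 2;
    ok False, 'this failure is automatically marked TODO';
    ok False, 'and so is this one';
};
```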

The skip function skips $count tests, giving a $reason why. By default only one test will be skipped. Use such functionality when a test (or tests) would die if run.

sub num-forward-slashes($arg) { ... };
if $*KERNEL ~~ 'linux' {
    is num-forward-slashes("/a/b"),             2;
    is num-forward-slashes("/a//b".IO.cleanup), 2;
}
else {
    skip "Can't use forward slashes on Windows", 2;
}

Note that if you mark a test as skipped, you must also prevent that test from running.

The skip-rest function skips the remaining tests. If the remainder of the tests in the test file would all fail due to some condition, use this function to skip them, providing an optional $reason why.

my $location; sub womble { ... }; ...;
unless $location ~~ "Wimbledon Common" {
    skip-rest "We can't womble, the remaining tests will fail";
    exit;
}
# tests requiring functional wombling 
ok womble();
# ... 

Note that skip-rest requires a plan to be set; otherwise the skip-rest call will throw an error. Also note that skip-rest does not exit the test run; do that manually, or use conditionals to avoid running any further tests.

See also plan :skip-all('...') to avoid running any tests at all and bail-out to abort the test run and mark it as failed.

If you already know the tests will fail, you can bail out of the test run using bail-out():

    my $has-db-connection;
    $has-db-connection  or bail-out 'Must have database connection for testing';

The function aborts the current test run, signaling failure to the harness. Takes an optional reason for bailing out. The subroutine will call exit(), so if you need to do a clean-up, do it before calling bail-out().

If you want to abort the test run without marking it as failed, see skip-rest or plan :skip-all('...').

Manual control

If the convenience functionality documented above does not suit your needs, you can use the following functions to manually direct the test harness output.

The pass function marks a test as passed. flunk marks a test as not passed. Both functions accept an optional test $description.

pass "Actually, this test has passed";
flunk "But this one hasn't passed";

Since these subroutines do not provide indication of what value was received and what was expected, they should be used sparingly, such as when evaluating a complex test condition.

The diag function displays diagnostic information in a TAP-compatible manner on the standard error stream. It is usually used when a particular test has failed, to provide information that the test itself did not provide. It can also be used to provide visual markers on how the testing of a test file is progressing (which can be important when doing stress testing).

diag "Yay!  The tests got to here!";