role Iterable { }

Iterable serves as an API for objects that can be iterated with for and related iteration constructs, like assignment to a Positional variable.

Iterable objects nested in other Iterable objects (but not within scalar containers) flatten in certain contexts, for example when passed to a slurpy parameter (*@a), or on explicit calls to flat.
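
For instance, a nested list flattens when bound to a flat slurpy parameter, or when flat is called explicitly (a minimal sketch; the sub name is illustrative):

sub elems-of(*@a) { @a.elems }
say elems-of((1, 2), 3);      # OUTPUT: «3␤» (the inner (1, 2) flattens into the slurpy)
say ((1, 2), 3).flat.elems;   # OUTPUT: «3␤» (same result with an explicit flat)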

Its most important aspect is a method stub for iterator.

class DNA does Iterable {
    has $.chain;
    method new ($chain where { $chain ~~ /^^ <[ACGT]>+ $$ / } ) {
        self.bless(:$chain);
    }
 
    method iterator(DNA:D:) {
        $!chain.comb.rotor(3).iterator;
    }
}
 
my $a := DNA.new('GAATCC');
.say for $a;    # OUTPUT: «(G A A)␤(T C C)␤» 

This example mixes in the Iterable role to offer a new way of iterating over what is essentially a string (constrained by where to just the four DNA letters). In the last statement, for actually hooks into the iterator method, printing the letters in groups of 3.

Methods

method iterator

method iterator(--> Iterator:D)

Method stub that ensures all classes doing the Iterable role have a method iterator.

It is supposed to return an Iterator.

say (1..10).iterator;
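
The returned Iterator can also be driven by hand: pull-one produces the next value, and returns the IterationEnd sentinel once the values are exhausted. A small sketch:

my $it = (1..3).iterator;
say $it.pull-one;                     # OUTPUT: «1␤»
say $it.pull-one;                     # OUTPUT: «2␤»
say $it.pull-one;                     # OUTPUT: «3␤»
say $it.pull-one =:= IterationEnd;    # OUTPUT: «True␤»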

method flat

method flat(Iterable:D: --> Iterable)

Returns another Iterable that flattens out all iterables that the first one returns.

For example

say (<a b>, 'c').elems;         # OUTPUT: «2␤» 
say (<a b>, 'c').flat.elems;    # OUTPUT: «3␤»

because <a b> is a List and thus iterable, so (<a b>, 'c').flat returns ('a', 'b', 'c'), which has three elems.

Note that the flattening is recursive, so ((("a", "b"), "c"), "d").flat returns ("a", "b", "c", "d"), but it does not flatten itemized sublists:

say ($('a', 'b'), 'c').flat;    # OUTPUT: «($("a", "b"), "c")␤»

You can use the hyper method call operator >>. to call the .List method on all the inner itemized sublists and thus de-containerize them, so that flat can flatten them:

say ($('a', 'b'), 'c')>>.List.flat.elems;    # OUTPUT: «3␤»

method lazy

method lazy(--> Iterable)

Returns a lazy iterable wrapping the invocant.

say (1 ... 1000).is-lazy;      # OUTPUT: «False␤» 
say (1 ... 1000).lazy.is-lazy;    # OUTPUT: «True␤»
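
Laziness matters when you only need part of a potentially large sequence: values are produced on demand rather than all at once. A small sketch:

my \squares = (1..1000).lazy.map(* ** 2);
say squares.is-lazy;    # OUTPUT: «True␤»
say squares[^3];        # OUTPUT: «(1 4 9)␤»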

method hyper

method hyper(Int(Cool) :$batch = 64, Int(Cool) :$degree = Kernel.cpu-cores - 1)

Returns another Iterable that is potentially iterated in parallel, with a given batch size and degree of parallelism.

The order of elements is preserved.

say ([1..100].hyper.map({ $_ + 1 }).list);

Use hyper in situations where it is OK to do the processing of items in parallel, and the output order should be kept relative to the input order. See race for situations where items are processed in parallel and the output order does not matter.

Options degree and batch

The degree option (short for "degree of parallelism") configures how many parallel workers should be started. To start 4 workers (e.g. to use at most 4 cores), pass :degree(4) to the hyper or race method. Note that in some cases, choosing a degree higher than the available CPU cores can make sense, for example for I/O-bound work or latency-heavy tasks like web crawling. For CPU-bound work, however, it makes no sense to pick a number higher than the CPU core count.

The batch size option configures the number of items sent to a given parallel worker at once. It allows for making a throughput/latency trade-off. If, for example, an operation is long-running per item, and you need the first results as soon as possible, set it to 1. That means every parallel worker gets 1 item to process at a time, and reports the result as soon as possible. In consequence, the overhead for inter-thread communication is maximized. In the other extreme, if you have 1000 items to process and 10 workers, and you give every worker a batch of 100 items, you will incur minimal overhead for dispatching the items, but you will only get the first results when 100 items are processed by the fastest worker (or, for hyper, when the worker getting the first batch returns.) Also, if not all items take the same amount of time to process, you might run into the situation where some workers are already done and sit around without being able to help with the remaining work. In situations where not all items take the same time to process, and you don't want too much inter-thread communication overhead, picking a number somewhere in the middle makes sense. Your aim might be to keep all workers about evenly busy to make best use of the resources available.
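
A minimal sketch of passing both options explicitly; the batch and degree values here are illustrative, not recommendations:

my @squares = (1..10_000).hyper(:batch(256), :degree(4)).map(* ** 2);
say @squares.head(5);    # OUTPUT: «(1 4 9 16 25)␤»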

You can also check out this blog post on the semantics of hyper and race.

The default for :degree is the number of available CPU cores minus 1 as of the 2020.02 release of the Rakudo compiler.

As of release 2022.07 of the Rakudo compiler, it is also possible to specify an undefined value to indicate that the default should be used.

method race

method race(Int(Cool) :$batch = 64, Int(Cool) :$degree = 4 --> Iterable)

Returns another Iterable that is potentially iterated in parallel, with a given batch size and degree of parallelism (number of parallel workers).

Unlike hyper, race does not preserve the order of elements (mnemonic: in a race, you never know who will arrive first).

say ([1..100].race.map({ $_ + 1 }).list);

Use race in situations where it is OK to do the processing of items in parallel, and the output order does not matter. See hyper for situations where you want items processed in parallel and the output order should be kept relative to the input order.
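
Because the output order is not guaranteed, sort (or otherwise normalize) the results if you need to compare them against an expected list; a small sketch:

my @result = (1..100).race(:batch(10)).map(* + 1);
say @result.elems;           # OUTPUT: «100␤»
say @result.sort.head(3);    # OUTPUT: «(2 3 4)␤»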

Blog post on the semantics of hyper and race

See hyper for an explanation of :$batch and :$degree.

Typegraph

Type relations for Iterable
[Type graph: Iterable is done by Map, Seq, HyperSeq, RaceSeq, Range, List, and IO::Path::Parts; Hash, PseudoStash, and Stash inherit through Map, and Array and Slip inherit through List.]
