Group of named regexes that form a formal grammar
Grammar is a powerful tool used to destructure text and often to return data structures that have been created by interpreting that text.
For example, Perl 6 is parsed and executed using a Perl 6-style grammar.
An example that's more practical to the common Perl 6 user is the JSON::Tiny module, which can deserialize any valid JSON file; the deserializing code is written in less than 100 lines of simple, extensible code.
If you didn't like grammar in school, don't let that scare you off grammars. Grammars allow you to group regexes, just as classes allow you to group methods of regular code.
In this case, we have to specify that the regex is lexically scoped using the `my` keyword, because named regexes are normally used within grammars:

```raku
my regex number { \d+ [ \. \d+ ]? }
```

Being named gives us the advantage of being able to easily reuse the regex elsewhere:

```raku
say so "32.51" ~~ &number;                          # OUTPUT: «True␤»
say so "15 + 4.5" ~~ /<number>\s* '+' \s*<number>/; # OUTPUT: «True␤»
```
`regex` isn't the only declarator for named regexes. In fact, it's the least common. Most of the time, the `token` or `rule` declarators are used. These are both ratcheting, which means that the match engine won't back up and try again if it fails to match something. This will usually do what you want, but isn't appropriate for all cases:
```raku
my regex works-but-slow { .+ q }
my token fails-but-fast { .+ q }
my $s = 'Tokens won\'t backtrack, which makes them fail quicker!';
say so $s ~~ &works-but-slow; # OUTPUT: «True␤»
say so $s ~~ &fails-but-fast; # OUTPUT: «False␤», the entire string gets taken by the .+
```
Note that non-backtracking works on terms; that is, as the example below shows, once you have matched something, you will never backtrack past it. But when you fail to match, if there is another candidate introduced by an alternation (`|` or `||`), you will try to match again:
```raku
my token tok-a { .* d };
my token tok-b { .* d | bd };
say so "bd" ~~ &tok-a; # OUTPUT: «False␤»
say so "bd" ~~ &tok-b; # OUTPUT: «True␤»
```
The only difference between the `token` and `rule` declarators is that the `rule` declarator causes `:sigspace` to go into effect for the regex:
```raku
my token non-space-t { 'once' 'upon' 'a' 'time' }
my rule  space-r     { 'once' 'upon' 'a' 'time' }
say so 'onceuponatime'    ~~ &non-space-t; # OUTPUT: «True␤»
say so 'once upon a time' ~~ &non-space-t; # OUTPUT: «False␤»
say so 'onceuponatime'    ~~ &space-r;     # OUTPUT: «False␤»
say so 'once upon a time' ~~ &space-r;     # OUTPUT: «True␤»
```
`Grammar` is the superclass that classes automatically get when they are declared with the `grammar` keyword instead of `class`. Grammars should only be used to parse text; if you wish to extract complex data, you can either add actions within the grammar or use an actions object in conjunction with the grammar.
For instance, if you have a lot of alternations, it may become difficult to produce readable code or to subclass your grammar. In the actions class below, the ternary in `method TOP` is less than ideal, and it becomes even worse the more operations we add:
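The code block the text refers to appears to have been lost in this copy. A sketch consistent with the surrounding description (a grammar with explicit `add`/`sub` alternations, and an actions class whose `TOP` method uses a ternary) might look like this:

```raku
grammar Calculator {
    token TOP { [ <add> | <sub> ] }
    rule  add { <num> '+' <num> }
    rule  sub { <num> '-' <num> }
    token num { \d+ }
}

class Calculations {
    # The ternary here must grow with every new operation we support
    method TOP ($/) { make $<add> ?? $<add>.made !! $<sub>.made; }
    method add ($/) { make [+] $<num>; }
    method sub ($/) { make [-] $<num>; }
}
```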
```raku
say Calculator.parse('2 + 3', actions => Calculations).made;
# OUTPUT: «5␤»
```
To make things better, we can use proto regexes that look like `:sym<...>` adverbs on tokens:
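Again, the code block seems to be missing here. The rewritten grammar and actions class, matching the explanation that follows, would presumably look something like:

```raku
grammar Calculator {
    token TOP { <calc-op> }

    # A prototype groups the alternations under one name
    proto rule calc-op          {*}
          rule calc-op:sym<add> { <num> '+' <num> }
          rule calc-op:sym<sub> { <num> '-' <num> }

    token num { \d+ }
}

class Calculations {
    method TOP              ($/) { make $<calc-op>.made; }
    method calc-op:sym<add> ($/) { make [+] $<num>; }
    method calc-op:sym<sub> ($/) { make [-] $<num>; }
}
```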
```raku
say Calculator.parse('2 + 3', actions => Calculations).made;
# OUTPUT: «5␤»
```
In the grammar, the alternation has now been replaced with `<calc-op>`, which is essentially the name of a group of values we'll create. We do so by defining a rule prototype with `proto rule calc-op`. Each of our previous alternations has been replaced by a new `rule calc-op` definition, and the name of the alternation is attached with the `:sym<>` adverb.
In the actions class, we got rid of the ternary operator and simply take the `.made` value from the `$<calc-op>` match object. And the actions for individual alternations now follow the same naming pattern as in the grammar: `method calc-op:sym<add>` and `method calc-op:sym<sub>`.
The real beauty of this method can be seen when you subclass that grammar and actions class. Let's say we want to add a multiplication feature to the calculator:
```raku
grammar BetterCalculator is Calculator {
    rule calc-op:sym<mult> { <num> '*' <num> }
}

class BetterCalculations is Calculations {
    method calc-op:sym<mult> ($/) { make [*] $<num> }
}

say BetterCalculator.parse('2 * 3', actions => BetterCalculations).made;
# OUTPUT: «6␤»
```
All we had to add were an additional rule and action to the `calc-op` group, and everything worked, all thanks to proto regexes.
The `TOP` token is the default first token attempted when parsing with a grammar. Note that if you're parsing with the `.parse` method, `token TOP` is automatically anchored to the start and end of the string. If you don't want to parse the whole string, look up `.subparse`. Using `rule TOP` or `regex TOP` is also acceptable.
A different token can be chosen to be matched first using the `:rule` named argument to `.parse`, `.subparse`, or `.parsefile`. These are all `Grammar` methods.

When `rule` instead of `token` is used, any whitespace after an atom is turned into a non-capturing call to `ws`, written as `<.ws>`, where `.` means non-capturing. That is to say:

```raku
rule entry { <key> '=' <value> }
```

Is the same as:

```raku
token entry { <key> <.ws> '=' <.ws> <value> <.ws> }
```
The default `ws` matches one or more whitespace characters (`\s`) or a word boundary (`<|w>`):
```raku
# First <.ws> matches word boundary at the start of the line
# and second <.ws> matches the whitespace between 'b' and 'c'
say 'ab c' ~~ /<.ws> ab <.ws> c /; # OUTPUT: «｢ab c｣␤»

# Failed match: there is neither any whitespace nor a word
# boundary between 'a' and 'b'
say 'ab' ~~ /a <.ws> b/;           # OUTPUT: «Nil␤»

# Successful match: there is a word boundary between ')' and 'b'
say ')b' ~~ /')' <.ws> b/;         # OUTPUT: «｢)b｣␤»
```
You can also redefine the default `ws` token:

```raku
grammar Foo {
    rule TOP { \d \d }
}.parse: "4 \n\n 5"; # Succeeds

grammar Bar {
    rule TOP { \d \d }
    token ws { \h* }   # horizontal whitespace only, so \n is not skipped
}.parse: "4 \n\n 5"; # Fails
```
The `<sym>` token can be used inside proto regexes to match the string value of the `:sym` adverb for that particular regex:

```raku
grammar Foo {
    token TOP { <letter>+ }
    proto token letter {*}
    token letter:sym<P> { <sym> }
    token letter:sym<e> { <sym> }
    token letter:sym<r> { <sym> }
    token letter:sym<l> { <sym> }
    token letter:sym<*> {   .   }
}.parse("I ♥ Perl", actions => class {
    method TOP($/) { make $<letter>.grep(*.<sym>).join }
}).made.say; # OUTPUT: «Perl␤»
```
This comes in handy when you're already differentiating the proto regexes with the strings you're going to match, as using the `<sym>` token prevents repetition of those strings.
The `<?>` is the always-succeed assertion. When used as a grammar token, it can be used to trigger an Action class method. In the following grammar we look for Arabic digits and define a `succ` token with the always-succeed assertion.
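The grammar itself is missing from this copy of the text; a version consistent with the description (and with the Devanagari output shown below) could be:

```raku
grammar Digifier {
    rule TOP {
        [ <.succ> <digit>+ ]+
    }
    # <?> always succeeds, so succ matches (and fires its action)
    # before every run of digits
    token succ  { <?> }
    token digit { <[0..9]> }
}
```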
In the action class, we use calls to the `succ` method to do setup (in this case, we prepare a new element in `@!numbers`). In the `digit` method, we convert an Arabic digit into a Devanagari digit and add it to the last element of `@!numbers`. Thanks to `succ`, the last element will always be the number for the currently parsed digits.
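The action class also appears to have been dropped. A sketch matching the description might be (the exact handling of the trailing empty element is my own guess, since the final quantifier attempt fires `succ` one extra time):

```raku
class Devanagari {
    has @!numbers;
    method digit ($/) { @!numbers[*-1] ~= <० १ २ ३ ४ ५ ६ ७ ८ ९>[$/] }
    method succ  ($)  { @!numbers.push: '' }
    # Keep only the non-empty elements; the last push by succ
    # never receives any digits
    method TOP   ($/) { make @!numbers.grep(*.so) }
}
```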
```raku
say Digifier.parse('255 435 777', actions => Devanagari.new).made;
# OUTPUT: «(२५५ ४३५ ७७७)␤»
```
It's fine to use methods instead of rules or tokens in a grammar, as long as they return a Cursor:
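No grammar survives at this point in the text. One that would produce the results shown below, dispatching on a `:$full-unicode` argument passed through `args` (the token names here are illustrative), might be:

```raku
grammar DigitMatcher {
    # A method used in place of a token: it must return a Cursor,
    # which it does by delegating to one of the real tokens
    method TOP (:$full-unicode) {
        $full-unicode ?? self.num-full !! self.num-basic;
    }
    token num-full  { \d+ }        # any Unicode digits
    token num-basic { <[0..9]>+ }  # ASCII digits only
}
```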
The grammar above will attempt different matches depending on the arguments provided by parse methods:
```raku
say +DigitMatcher.subparse: '12७१७९०९', args => \(:full-unicode);
# OUTPUT: «12717909␤»
say +DigitMatcher.subparse: '12७१७९०९', args => \(:!full-unicode);
# OUTPUT: «12␤»
```
Variables can be defined in tokens by prefixing the lines of code defining them with `:`. Arbitrary code can be embedded anywhere in a token by surrounding it with curly braces. This is useful for keeping state between tokens, which can be used to alter how the grammar will parse text. Using dynamic variables (variables with the `*` twigil, such as `$*` or `%*` names) in tokens cascades down through all tokens defined thereafter within the one where it's defined, avoiding having to pass them from token to token as arguments.
One use for dynamic variables is guards for matches. This example uses guards to explain which regex classes parse whitespace literally:
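The example grammar is missing from this copy. A minimal sketch, with names and structure of my own choosing (assuming a dynamic hash set by embedded code blocks and checked by `<?{ ... }>` guard assertions), that behaves as described below:

```raku
grammar GrammarAdvice {
    rule TOP {
        # :my scopes the dynamic variable to this parse
        :my %*RULES;
        'use' <type> 'for' <significance> 'whitespace by default'
    }
    token type {
          [ 'rules'   { %*RULES<significant> = True  } ]
        | [ 'tokens'  { %*RULES<significant> = False } ]
        | [ 'regexes' { %*RULES<significant> = False } ]
    }
    token significance {
        # Guard assertions: only the branch agreeing with the
        # recorded state can match
          <?{  %*RULES<significant> }> 'significant'
        | <?{ !%*RULES<significant> }> 'insignificant'
    }
}
```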
Here, text such as "use rules for significant whitespace by default" will only match if the state set when "rules", "tokens", or "regexes" is mentioned agrees with the correct guard:
```raku
say GrammarAdvice.subparse("use rules for significant whitespace by default");
# OUTPUT: «use rules for significant whitespace by default»
say GrammarAdvice.subparse("use tokens for insignificant whitespace by default");
# OUTPUT: «use tokens for insignificant whitespace by default»
say GrammarAdvice.subparse("use regexes for insignificant whitespace by default");
# OUTPUT: «use regexes for insignificant whitespace by default»
say GrammarAdvice.subparse("use regexes for significant whitespace by default");
# OUTPUT: #<failed match>
```
A successful grammar match gives you a parse tree of Match objects, and the deeper that match tree gets, and the more branches the grammar has, the harder it becomes to navigate the match tree to get the information you are actually interested in.
To avoid the need for diving deep into a match tree, you can supply an actions object. After each successful parse of a named rule in your grammar, the grammar engine tries to call a method of the same name as the grammar rule, giving it the newly created Match object as a positional argument. If no such method exists, it is skipped.
Here is a contrived example of a grammar and actions in action:
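The grammar and actions class for this example seem to have been lost; a pair consistent with the output below would be:

```raku
grammar TestGrammar {
    token TOP { \d+ }
}

class TestActions {
    # $/ numifies to 40 here, so the made value is 42
    method TOP($/) {
        $/.make(2 + $/);
    }
}
```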
```raku
my $match = TestGrammar.parse('40', actions => TestActions.new);
say $match;      # OUTPUT: «｢40｣␤»
say $match.made; # OUTPUT: «42␤»
```
An instance of
TestActions is passed as named argument
actions to the parse call, and when token
TOP has matched successfully, it automatically calls method
TOP, passing the match object as an argument.
To make it clear that the argument is a match object, the example uses `$/` as a parameter name to the action method, though that's just a handy convention, nothing intrinsic; `$match` would have worked too. (Though using `$/` does give the advantage of providing `$<capture>` as a shortcut for `$/<capture>`.)
A slightly more involved example follows:
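The grammar and actions class have again been lost from this copy. A pair consistent with the driver code and the discussion that follows (a `rule pair` with aliased `identifier` captures, a redefined `ws`, and a `TOP` that collects the made pairs) might be:

```raku
grammar KeyValuePairs {
    token TOP {
        [<pair> \n+]*
    }

    # Horizontal whitespace only, so the newlines in TOP survive
    token ws {
        \h*
    }

    rule pair {
        <key=.identifier> '=' <value=.identifier>
    }
    token identifier {
        \w+
    }
}

class KeyValuePairsActions {
    method identifier($/) { make ~$/ }
    method pair      ($/) {
        # Submatch actions ran first, so .made is already set
        make $<key>.made => $<value>.made
    }
    method TOP ($/) {
        make $<pair>».made
    }
}
```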
```raku
my $actions = KeyValuePairsActions;
my $res = KeyValuePairs.parse(q:to/EOI/, :$actions).made;
    second=b
    hits=42
    perl=6
    EOI

for @$res -> $p {
    say "Key: $p.key()\tValue: $p.value()";
}
```
This produces the following output:
```
Key: second     Value: b
Key: hits       Value: 42
Key: perl       Value: 6
```
Rule `pair`, which parses a pair separated by an equals sign, aliases the two calls to token `identifier` to separate capture names to make them available more easily and intuitively. The corresponding action method constructs a `Pair` object, and uses the `.made` property of the sub match objects. So it (like the action method `TOP`) exploits the fact that action methods for submatches are called before those of the calling/outer regex; that is, action methods are called in post-order.
The action method `TOP` simply collects all the objects that were `.made` by the multiple matches of the `pair` rule, and returns them in a list.
Also note that
KeyValuePairsActions was passed as a type object to method
parse, which was possible because none of the action methods use attributes (which would only be available in an instance).
In other cases, action methods might want to keep state in attributes. Then of course you must pass an instance to method parse.
The token `ws` is special: when `:sigspace` is enabled (and it is when we are using `rule`), it replaces certain whitespace sequences. This is why the spaces around the equals sign in `rule pair` work just fine, and why the whitespace before the closing `}` does not gobble up the newlines looked for in `token TOP`.