alpaca
Members list
Packages
Type members
Experimental classlikes
Represents a specific type of token definition that denotes an ignored token during the lexing process.
Ignored tokens are tokens that are matched and consumed by the lexer but excluded from the final token stream. Typical examples include whitespace and comments: text that must be recognized during lexing but carries no syntactic meaning for the structured representation of the input.
This opaque type is parameterized by a context type Ctx, which must be a subtype of LexerCtx. The LexerCtx trait serves as a base for maintaining global lexing state, such as the current position, the last matched token, and the remaining input.
The ValidName and Nothing type parameters are placeholder constraints inherited from Token, but IgnoredToken does not provide its own additional constraints or behavior beyond being excluded from normal processing.
The use of an opaque type ensures safe and restricted use within the scope of the lexer, as this type cannot be directly manipulated outside the context of its definition.
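For example, a lexer definition can mark whitespace as ignored so that it never reaches the parser (a sketch reusing only the DSL constructs shown on this page):
val calcLexer = lexer {
  case "\\d+" => Token["number"] // kept in the token stream
  case "\\s+" => Token.Ignored // matched during lexing, then dropped
}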
Attributes
- Experimental
- true
- Source
- lexer.scala
- Supertypes
Base trait for lexer global context.
The global context maintains state during lexing, including the current position in the input, the last matched token, and the remaining text to process. Users can extend this trait to add custom state tracking.
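For example, a custom context might track the number of newlines seen so far (a hypothetical sketch; the LineTrackingCtx name and its field are illustrative assumptions, not part of the documented API):
class LineTrackingCtx extends LexerCtx:
  var lineCount: Int = 0 // custom state, updated as tokens are matched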
Attributes
- Companion
- object
- Experimental
- true
- Source
- lexer.scala
- Supertypes
-
class Object, trait Matchable, class Any
- Known subtypes
- Self type
-
Product
Attributes
- Companion
- trait
- Experimental
- true
- Source
- lexer.scala
- Supertypes
-
class Object, trait Matchable, class Any
- Self type
-
LexerCtx.type
Attributes
- Companion
- trait
- Experimental
- true
- Source
- parser.scala
- Supertypes
-
class Object, trait Matchable, class Any
- Self type
-
ParserCtx.type
Base trait for parser global context.
Unlike the lexer, the parser's global context is typically empty by default, but can be extended to track custom state during parsing such as symbol tables, type information, or other semantic data.
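For example, a parser context might carry a symbol table for semantic analysis (a hypothetical sketch; SymbolCtx and its field are illustrative assumptions, not part of the documented API):
class SymbolCtx extends ParserCtx:
  val symbols = scala.collection.mutable.Map.empty[String, Int] // name -> value bindings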
Attributes
- Companion
- object
- Experimental
- true
- Source
- parser.scala
- Supertypes
-
class Object, trait Matchable, class Any
- Known subtypes
-
class Empty
Attributes
- Experimental
- true
- Source
- parser.scala
- Supertypes
-
class Object, trait Matchable, class Any
- Self type
-
Production.type
Represents a grammar rule in the parser.
A rule defines how a non-terminal symbol can be parsed by specifying one or more productions. Each production maps a pattern of symbols to a result value.
Rules are created using the rule function and can be used in pattern matching within parser productions.
Type parameters
- R
-
the type of value produced when this rule is matched
Attributes
- Experimental
- true
- Source
- parser.scala
- Supertypes
-
class Object, trait Matchable, class Any
Defines an opaque type Token that represents a token used in a lexer.
This type has three type parameters:
- Name: the type of the token's name, restricted to a subtype of ValidName.
- Ctx: the type of the lexer context, restricted to a subtype of LexerCtx.
- Value: the type of the token's value.
The exact implementation details of the underlying type are abstracted away by using Any. Opaque types provide type safety without exposing the underlying representation.
Attributes
- Companion
- object
- Experimental
- true
- Source
- lexer.scala
- Supertypes
-
class Object, trait Matchable, class Any
- Known subtypes
-
trait IgnoredToken[Ctx]
Factory methods for creating token definitions in the lexer DSL.
Attributes
- Companion
- trait
- Experimental
- true
- Source
- lexer.scala
- Supertypes
-
class Object, trait Matchable, class Any
- Self type
-
Token.type
Types
Type representing conflict resolution rules for the parser.
Conflict resolutions are used to resolve shift/reduce and reduce/reduce conflicts in the parsing table by specifying precedence relationships between productions and tokens.
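For example, the usual arithmetic precedence can be encoded by ranking productions (a sketch combining the before extension documented under Extensions with the resolutions member shown in the examples elsewhere on this page):
override val resolutions = Set(
  Production(expr, "+", expr) before Production(expr, "*", expr),
)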
Attributes
- Source
- parser.scala
Type alias for lexer rule definitions.
A lexer definition is a partial function that maps string patterns (as regex literals) to token definitions.
Type parameters
- Ctx
-
the global context type
Attributes
- Source
- lexer.scala
Attributes
- Source
- parser.scala
Defines a single production in a grammar rule.
A production definition is a partial function that matches a specific pattern of symbols (as a tuple of terminals and non-terminals, or a single lexeme) and produces a result value of type R. Productions are the building blocks of grammar rules, specifying how input sequences are recognized and transformed.
Production definitions are typically passed to the rule function to define the possible ways a non-terminal can be parsed.
See the documentation for rule for more details.
Type parameters
- R
-
the result type produced by this production
Attributes
- Source
- parser.scala
Value members
Experimental methods
Creates a lexer from a DSL-based definition.
This is the main entry point for defining a lexer. It uses a macro to compile the lexer definition into efficient tokenization code.
Example:
val myLexer = lexer {
  case "\\d+" => Token["number"]
  case "[a-zA-Z]+" => Token["identifier"]
  case "\\s+" => Token.Ignored
}
Type parameters
- Ctx
-
the global context type, defaults to DefaultGlobalCtx
Value parameters
- betweenStages
-
implicit BetweenStages for context updates
- empty
-
implicit Empty instance to create the initial context
- errorHandling
-
implicit ErrorHandling for custom error recovery
- rules
-
the lexer rules as a partial function
Attributes
- Returns
-
a Tokenization instance that can tokenize input strings
- Experimental
- true
- Source
- lexer.scala
Creates a grammar rule from one or more productions.
This is the main way to define grammar rules in the parser DSL. Each production is a partial function that matches a pattern of symbols (terminals and non-terminals) and produces a result value.
This is compile-time only and should only be used inside parser class definitions.
Example:
val expr: Rule[Int] = rule(
  { case (number(a), Lexer.+(_), number(b)) => a.toInt + b.toInt },
  { case (number(n)) => n.toInt }
)
Type parameters
- R
-
the result type produced by this rule
Value parameters
- productions
-
one or more productions that define this rule
Attributes
- Returns
-
a Rule instance
- Experimental
- true
- Source
- parser.scala
Givens
Experimental givens
Propagates the lexer context through the DSL so that token constructors can access it implicitly.
Attributes
- Experimental
- true
- Source
- lexer.scala
Extensions
Experimental extensions
Specifies that this production/token should have higher precedence than others.
This is compile-time only and should only be used inside parser rule definitions.
Example:
Production(expr, "*", expr) after Production(expr, "+", expr)
Value parameters
- second
-
the productions/tokens that should have lower precedence
Attributes
- Returns
-
a conflict resolution rule
- Experimental
- true
- Source
- parser.scala
Specifies that this production/token should have lower precedence than others.
This is compile-time only and should only be used inside parser rule definitions.
Example:
Production(expr, "+", expr) before Production(expr, "*", expr)
Value parameters
- second
-
the productions/tokens that should have higher precedence
Attributes
- Returns
-
a conflict resolution rule
- Experimental
- true
- Source
- parser.scala
Defines a named production for use in grammar rules and conflict resolution.
This extension method allows you to assign a name to a specific production within a rule. Named productions can be referenced in conflict resolution rules using the Production selector, enabling fine-grained control over precedence and associativity.
Usage:
val add: Rule[Int] = rule(
  "sum" { case (number(a), Lexer.+(_), number(b)) => a.toInt + b.toInt },
  { case (number(n)) => n.toInt }
)
// In conflict resolution:
override val resolutions = Set(
  production.sum.after(Lexer.+),
)
Type parameters
- R
-
the result type produced by this production
Value parameters
- production
-
the production to name
Attributes
- Returns
-
the original production, annotated with the given name
- Experimental
- true
- Source
- parser.scala
Parses a list of lexemes using the defined grammar.
This is a convenience method that infers the result type from the root rule.
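Putting the pieces together, a typical call site tokenizes the input and then parses the resulting lexemes (a hypothetical end-to-end sketch; myLexer, myParser, and the tokenize method name are assumptions for illustration, not documented API):
val lexemes = myLexer.tokenize("1 + 2")
val (ctx, result) = myParser.parse(lexemes)
if result != null then println(result)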
Value parameters
- lexems
-
the list of lexemes to parse
Attributes
- Returns
-
a tuple of (context, result), where result may be null on parse failure
- Experimental
- true
- Source
- parser.scala
