Create ridiculously fast Lexers
After months without any new release, 0.14 is finally out!

This release bumps the `regex-syntax` dependency. This bump actually improves performance, but only slightly (see below). The documentation about token disambiguation reflects those changes too.

Performance changes from #320:
```
group                                        before                                    changes
-----                                        ------                                    -------
count_ok/identifiers                         1.04   869.2±6.09ns    854.7 MB/sec       1.00   832.9±14.13ns   891.9 MB/sec
count_ok/keywords_operators_and_punctators   1.04   2.6±0.02µs      784.4 MB/sec       1.00   2.5±0.08µs      811.9 MB/sec
count_ok/strings                             1.03   597.6±5.73ns    1389.9 MB/sec      1.00   582.6±6.94ns    1425.8 MB/sec
iterate/identifiers                          1.05   883.7±23.22ns   840.7 MB/sec       1.00   838.2±12.23ns   886.3 MB/sec
iterate/keywords_operators_and_punctators    1.01   2.6±0.03µs      768.0 MB/sec       1.00   2.6±0.03µs      778.2 MB/sec
iterate/strings                              1.02   595.7±7.48ns    1394.5 MB/sec      1.00   583.6±4.39ns    1423.3 MB/sec
```
The detailed list of patches can be found below, many thanks to all contributors!
As mentioned in 0.13, the author of Logos, @maciejhirsz, is reducing his time on GitHub. A few months ago, I was granted collaborator rights, so I can help to maintain this project by reviewing and merging PRs.
As a result, please tag me, @jeertmans, whenever you need help or anything else (e.g., if I have not given any sign of life for a few days).
From now on, I will be able to publish new versions to crates.io, which I hope to do more frequently than in the past months (or years) of this project.
However, I am not an expert on every part of this project, and I welcome any help from the community to make it grow! I have tried to set up a nice working environment, with many tests and guides, to make life easier for first-time contributors!
- `Debug` error type requirement by @shilangyu in https://github.com/maciejhirsz/logos/pull/298
- `Source` test for GAT `Slice` by @kmicklas in https://github.com/maciejhirsz/logos/pull/334
Full Changelog: https://github.com/maciejhirsz/logos/compare/v0.13...v0.14
This is a long overdue release that includes a bunch of community provided PRs over the last months:
- `Lexer` now produces a `Result<Token, Token::Error>`, which removes the need for the `#[error]` variant. More on that below. (#273)
- Added `#[logos(crate = path::to::logos)]`. (#268)
- Support for `{n,m}` regex ranges. (#278)
- `SpannedIter` now derefs into `Lexer`, so all `Lexer` methods are available on it, based on work started by @simvux. (#283, #231)
- New `#[logos(skip ...)]` attribute to help declare whitespace now that the `#[error]` variant is gone. (#284)
- Updated the `syn` dependency to 2.0. (#289)

In 0.12 interacting with Logos would look like:
```rust
use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
enum Token {
    #[token("Hello")]
    Hello,

    #[token("Logos")]
    Logos,

    // This variant will be emitted if an error is encountered
    #[error]
    // By convention we also add an attribute to skip whitespace here
    #[regex(r"[ \t\n\f]+", logos::skip)]
    Error,
}

fn main() {
    let mut lex = Token::lexer("Hello Logos");

    // `Lexer` is an iterator of `Token`:
    assert_eq!(lex.next(), Some(Token::Hello));
    assert_eq!(lex.next(), Some(Token::Logos));
    assert_eq!(lex.next(), None);
}
```
In 0.13 the `#[error]` variant is gone; this is the updated code:
```rust
use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
#[logos(skip r"[ \t\n\f]+")] // new way to annotate whitespace
enum Token {
    #[token("Hello")]
    Hello,

    #[token("Logos")]
    Logos,
}

fn main() {
    let mut lex = Token::lexer("Hello Logos");

    // `Lexer` is an iterator of `Result<Token, Token::Error>`:
    assert_eq!(lex.next(), Some(Ok(Token::Hello)));
    assert_eq!(lex.next(), Some(Ok(Token::Logos)));
    assert_eq!(lex.next(), None);
}
```
By default the associated `Token::Error` type is just `()` and holds no data. Originally I put the `#[error]` variant on the token because having one flat enum to `match` on seemed like a performance win. Upon some scrutiny, however, there is no performance cost to matching on `Result<Token, ()>` vs a flat `Token`, due to Rust's ability to optimize enums: if your `Token` is a simple enumeration with only unit variants (it holds no data), and the number of variants doesn't exceed 254, then `Result<Token, ()>` and `Option<Result<Token, ()>>` will be represented by a single byte at runtime, and matching the pattern `Ok(Token::Hello)` in 0.13 should compile to the same code as plain `Token::Hello` would in 0.12. See the full discussion in #104.
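You can verify the single-byte claim with `std::mem::size_of`; here is a minimal check using a stand-in `Token` enum (hand-written, not derive-generated, but laid out the same way as a unit-only token enum):

```rust
use std::mem::size_of;

// A unit-only token enum, like the one in the examples above.
#[derive(Debug, PartialEq)]
enum Token {
    Hello,
    Logos,
}

fn main() {
    // `Token` fits in one byte with plenty of unused discriminant
    // values left over, so rustc's niche optimization packs both the
    // `Err(())` case and the `None` case into those spare values:
    assert_eq!(size_of::<Token>(), 1);
    assert_eq!(size_of::<Result<Token, ()>>(), 1);
    assert_eq!(size_of::<Option<Result<Token, ()>>>(), 1);
    println!("all three are a single byte");
}
```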
If you've been using Logos for a while now and you've been fighting with some compile-time bugs, you might have been expecting a bunch of fixes here. Alas, as much as it pains me, there are none. There are two main reasons for this, one being me taking a really long coding break last year well into this year:
This is including public and private repos, I simply wrote no code at all in any capacity, not personal projects, not professional, nothing at all. I really needed that time.
While I'm back and active now, a lot of my time currently is spent on Kobold.
The elephant in the room is that the derive macro is simply unfixable in the current state and requires a complete rewrite. Last time this happened was 3 years ago in 0.10. That rewrite improved things considerably and allowed me to fix numerous outstanding bugs at the time. It took me about two weeks of pretty intense work, and I reckon this time it would be no smaller task.
The main issue is that I've chosen to implement different states of the state machine as separate functions. This made it impossible for these functions to share common variables on the stack, but being really clever I got it to work for the existing test suite. A fatal mistake. Douglas Crockford once said (I'm paraphrasing from memory):
> Debugging is harder than writing code. If you write code as cleverly as you possibly can, you are by definition not smart enough to debug it.
The good news is I know how to fix things in a way that avoids nearly all pitfalls of the current implementation. Instead of functions, all states of the state machine should be just unit variants of a single enum, with the code for each state living in the corresponding `match` expression arm, stuck in a loop.
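A hypothetical sketch of that shape (the names `State` and `lex_ident` are mine for illustration, not actual Logos codegen): states as unit variants, transitions as `match` arms, everything in one loop sharing a single stack frame:

```rust
// Each state of the machine is a unit variant of one enum.
#[derive(Clone, Copy)]
enum State {
    Start,
    Ident,
    Done,
}

/// Scan one identifier-like token, returning its length in bytes.
fn lex_ident(input: &[u8]) -> usize {
    let mut state = State::Start;
    let mut pos = 0;
    loop {
        match state {
            State::Start => match input.get(pos) {
                Some(b) if b.is_ascii_alphabetic() => {
                    pos += 1;
                    state = State::Ident; // a "jump" is just reassignment
                }
                _ => state = State::Done,
            },
            State::Ident => match input.get(pos) {
                Some(b) if b.is_ascii_alphanumeric() => pos += 1,
                _ => state = State::Done,
            },
            // Shared locals like `pos` live in this one stack frame,
            // which the per-state-function design could not provide.
            State::Done => return pos,
        }
    }
}

fn main() {
    assert_eq!(lex_ident(b"hello123 world"), 8);
    assert_eq!(lex_ident(b"123"), 0);
    println!("ok");
}
```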
There are pros and cons to that. When it comes to performance, the lookup tables that currently use `fn` pointers would become much smaller (by a factor of 8 for most projects compiled on a 64-bit architecture), making them much less demanding on the CPU cache. On the flip side, the generated code would lose automatic inlining of nested function calls; while that can be done manually as an optimization rolled into the macro itself, it might be a new source of bugs if not done right, and it will almost definitely bloat the amount of code produced (though it might be sensible to have some feature flags to disable this, particularly for targets like `wasm32`).
The rewrite should in principle also make it possible for Logos to work with ropes such as `ropey`.
`#[regex]`

Not as much of a nightmare, but something I feel has been a problem for users is relying on a subset of regex syntax, particularly limiting backtracking and look-ahead features for the sake of both simplicity and performance. While it is possible to express pretty much any common programming language syntax with that subset, often enough people find it confusing that you can't just do things like `x*?`.
What I'd like to do eventually is expand the `#[token]` attribute with more DSL-y features, using string literals as a means of escaping characters, backwards compatible with the current use of the attribute to denote a simple byte sequence. Consider a simple block comment `/* ... */`; I would like that to be declared as:
```rust
#[derive(Logos)]
enum Token {
    #[token("/*" ... "*/")]
    BlockComment,
    // ...
}
```
The equivalent regex rule for that today would be `#[regex(r"\/\*([^*]|\*[^/])*\*\/")]`... I think. The fact that I'm not able to write this off the top of my head while being sure it works should be testament enough to how clunky it can be. Not to mention this one example kind of breaks one of the main promises of Logos: making the generated code faster than anything you could write by hand. You do not want to see the codegen for that particular regex.
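For perspective on what "by hand" looks like here, a std-only sketch of a hand-rolled block-comment scanner (my own illustration, neither Logos output nor the codegen being criticized):

```rust
/// Return the byte length of a `/* ... */` block comment starting at
/// the beginning of `input`, or `None` if there isn't one.
fn block_comment_len(input: &[u8]) -> Option<usize> {
    if !input.starts_with(b"/*") {
        return None;
    }
    // Scan forward until the closing `*/`.
    let mut i = 2;
    while i + 1 < input.len() {
        if &input[i..i + 2] == b"*/" {
            return Some(i + 2);
        }
        i += 1;
    }
    None // unterminated comment
}

fn main() {
    assert_eq!(block_comment_len(b"/* hi */ rest"), Some(8));
    assert_eq!(block_comment_len(b"/**/"), Some(4));
    assert_eq!(block_comment_len(b"/* open"), None);
    assert_eq!(block_comment_len(b"not a comment"), None);
    println!("ok");
}
```

A linear scan like this is trivially fast; the regex formulation forces the generated state machine to encode the same idea far less directly.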
I've no idea.
In the next few weeks I'd like to get Kobold to 1.0. I also want to get Ramhorns to 1.0 sometime soon; that crate honestly should have been marked 1.0 since last year. I've been guilty of not pushing my crates to 1.0 fast enough, and by not fast enough I mean I've been on noughty versions for most of my projects for years, even though most of them really are production ready and I'm just too much of a chicken to stabilize the API.
With the changes done in 0.13, I believe the last two issues standing between Logos and 1.0 are the two mentioned above: the derive rewrite to fix bugs, and the `#[token]` syntax expansion / phasing out of `#[regex]` to make the API nicer to use. The former is something I'm best suited to do alone, although the latter could be done by the community (though realistically only after the rewrite is done).
If there is a living soul willing to undertake the big rewrite and help push this along faster, do reach out. I can at the very least help explain what the hell the current codegen is doing. I'd also be happy to give a person or two the ability to review and merge community PRs.
This is a long overdue patch release that includes a number of fixes from the community:
- Added the `FilterResult` type for cases where emitting an error or skipping would be necessary (#236 by @Jeremy-Stafford).
- `Chunk` is now implemented for all array sizes via `const` generics, which fixes issues with long regex patterns (#221 by @icewind1991).
- Added `ignore(case)` and `ignore(ascii_case)` (by @gymore-io, #198).
- Removed the `lookup!` macro.
- Updated a dependency of `logos-derive` that had a security alert on it.
- Fixed an issue with slicing on a `char` boundary (#138).
- It's now possible to declare `#[logos(subpattern name = r"regex")]` on the token enum, which is then used as a "subroutine" in regex rule definitions with the syntax `(?&name)` (by @CAD97, #131). Example:
```rust
#[derive(Logos)]
#[logos(subpattern xdigit = r"[0-9a-fA-F]")]
enum Token {
    #[regex("0[xX](?&xdigit)+")]
    LiteralHex,
    // ...
}
```
- Fixed handling of nested repetitions like `(f*oo)*`.
Previously, when two definitions could match the same input and had the same computed priority, Logos would make an arbitrary choice about which token to produce. This behavior could produce unexpected results, so it is now considered a compile error and will be reported as such.

Consider two regexes, one matching `[abc]+` while the other matches `[cde]+`. Both of those have a computed priority of 1, and both could match any sequence of `c`.

Logos will now return a compile error with hints for a solution in this case:
```
error: A definition of variant `Abc` can match the same input as another definition of variant `Cde`.

hint: Consider giving one definition a higher priority: #[regex(..., priority = 2)]

   --> tests/tests/edgecase.rs:410:17
    |
410 |         #[regex("[abc]+")]
    |                 ^^^^^^^^

error: A definition of variant `Cde` can match the same input as another definition of variant `Abc`.

hint: Consider giving one definition a higher priority: #[regex(..., priority = 2)]

   --> tests/tests/edgecase.rs:413:17
    |
413 |         #[regex("[cde]+")]
    |                 ^^^^^^^^

error: aborting due to 2 previous errors
```
Setting `priority = 2` on either token will override the computed priority, allowing Logos to properly disambiguate the tokens.
Deriving Logos on an enum with type parameters like so:
```rust
#[derive(Logos, Debug, PartialEq)]
enum Token<S, N> {
    #[regex(r"[ \n\t\f]+", logos::skip)]
    #[error]
    Error,

    #[regex("[a-z]+")]
    Ident(S),

    #[regex("[0-9]+", |lex| lex.slice().parse())]
    Number(N),
}
```
will now produce the following errors:
```
error: Generic type parameter without a concrete type

       Define a concrete type Logos can use: #[logos(type S = Type)]

   --> tests/tests/edgecase.rs:339:16
    |
339 | enum Token<S, N> {
    |            ^

error: Generic type parameter without a concrete type

       Define a concrete type Logos can use: #[logos(type N = Type)]

   --> tests/tests/edgecase.rs:339:19
    |
339 | enum Token<S, N> {
    |               ^
```
It's now possible to define concrete types for the generic type parameters:
```rust
#[derive(Logos, Debug, PartialEq)]
#[logos(
    type S = &str,
    type N = u64,
)]
enum Token<S, N> {
    // ...
}
```
This will derive the `Logos` trait for `Token<&str, u64>`. All reference types (like `&str` here) will automatically use the lifetime of the source.
Added support for the `callback = ...` syntax inside `#[regex(...)]` and `#[token(...)]` attributes. This allows the callback and priority to be placed arbitrarily within the attribute. All of these are now legal and equivalent:
```rust
#[regex("[abc]+", my_callback, priority = 10)]
#[regex("[abc]+", callback = my_callback, priority = 10)]
#[regex("[abc]+", priority = 10, callback = my_callback)]
```
Logos has a new cute logo :nerd_face:.
There are a number of breaking changes in this release, aimed at reducing the API surface and being more idiomatic, while adding some long-awaited features, like the ability to put slices of the source or arbitrary values returned from callbacks directly into a token.
Most of the changes related to the `#[derive]` macro will trigger a compile error message with an explanation to aid migration from 0.10. Those messages will be removed in the future.
- The `#[trivia]` attribute has been removed. Whitespace handling is easily added by defining `#[regex(r"[ \n\t\f]+", logos::skip)]` on any token enum variant. Putting it alongside `#[error]` is recommended.
- `Lexer` no longer has the `advance` method or a publicly visible `token` field. Instead, `Lexer` now implements the `Iterator` trait and has a `next` method that returns an `Option` of a token.
- The `#[end]` attribute was removed since it became obsolete.
- `#[regex = "..."]` and `#[token = "..."]` definitions are no longer valid syntax and will error when used in this way. Those need to be transformed to `#[regex("...")]` or `#[token("...")]` respectively.
- Callbacks are now attached directly to `#[regex]` or `#[token]`. Those can be either paths to functions defined elsewhere (`#[token("...", my_callback)]`), or inlined directly into the attribute using closure syntax (`#[token("...", |lex| { ... })]`).
- The `Extras` trait has been removed. You can still associate a custom struct to the `Lexer` using `#[logos(extras = MyExtras)]` with any type that implements `Default`, and manipulate it using callbacks.
- The `#[callback]` attribute was removed since it was just polluting the attribute namespace while offering no advantages over attaching callbacks to `#[regex]` or `#[token]`.
- `Lexer::range` has been renamed to `span`. The `span` and `slice` methods continue to return information for the most recently returned token.
- `Lexer` has a new `spanned` method that takes ownership of the `Lexer` and returns an iterator over `(Token, Span)` tuples.
- Callbacks can now return values (or `Option`s/`Result`s of values) that can be put into token enum variants. Currently only a single value in a tuple-like variant is supported (`Token::Variant(T)`).
- A `Token::Variant(&str)` will be filled with the matching `str` slice if no callback is provided.
- Matches can be skipped from a callback using the `Skip` type. It's also possible to dynamically skip matches using the `Filter` type.
- … `u64`.
`#[derive]`
The derive macro at the heart of Logos has been rewritten virtually from scratch between 0.10.0-rc2 and now. The state machine is now built from a graph that permits arbitrary jumps between nodes, instead of a tree that needs to build up permutations of every possible path leading to a token match. This has fixed a whole number of old outstanding issues (#87, #81, #80, #79, #78, #70).

The new codebase is nearly 1k LOC shorter, compiles faster, outputs smaller code, is more cache friendly, and is already proving itself to be easier to debug and optimize. This new release also gets rid of a number of hacks that were previously introduced to manage token disambiguation and loops, which were a huge source of bugs. All nodes in the new state machine are indexed, and loops are described as circular state jumps. Jumps between states are realized by tail calls in the generated code, which in most cases should have the performance profile of `goto` in C.
A case that was breaking the old Logos was a simple `\w+` regex. This is a particularly devious pattern, since `\w` expands to cover the codepoints of every Unicode alphabet, which are then further expanded into the corresponding byte sequences.

The old Logos produced a staggering 228k lines of Rust code just for that single pattern, which then usually failed to compile after rustc consumed all available memory. That's because a single `\w` produced a tree with 303 leaf nodes, which then had to be duplicated for every leaf to handle the loop, which in the end produced a monstrous tree with 91809 leaf nodes (not counting any branches leading to those).
By contrast, in the new version a single `\w` produces a graph with 278 nodes total, while `\w+` produces a graph with 279 nodes. That is to say, we went from `n * n` to `n + 1`, which is a universe of difference.
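The quoted numbers are internally consistent with that complexity claim: 91809 is exactly 303 squared, while the new graph adds a single node for the loop. A trivial sanity check:

```rust
fn main() {
    // Old codegen: every one of the 303 leaves for `\w` duplicated
    // the whole tree to close the loop, i.e. quadratic growth.
    let leaves = 303;
    assert_eq!(leaves * leaves, 91_809);

    // New codegen: `\w` is 278 nodes, and `\w+` just adds one
    // loop-back node, i.e. linear growth.
    assert_eq!(278 + 1, 279);

    println!("n*n = {}, n + 1 = {}", leaves * leaves, 278 + 1);
}
```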
This release also improves the previously flaky token disambiguation, which is now properly defined and documented. It also leaves us with an option to provide different strategies in the future.
I think the next minor/major release will do a clean-up of the API surface. Since Logos currently pollutes the attribute namespace quite a lot, using pretty generic labels like `token` or `callback`, it might be wise to wrap most if not all of them into `#[logos(...)]`. This should help make the crate more future-proof, and play nicer with other custom derives that you might want to put on your token `enum`s.
I'm very excited to have this release out, and I have a whiteboard full of ideas of what can be tweaked to improve the performance, before even touching SIMD (which is on the horizon somewhere).
- Removed `NulTermStr` support. In turn, a lot of effort has been put into increasing performance with regular `&str` sources, which more than makes up for it.
- Logos no longer requires a nul byte (`0x00`) to terminate input.
- Binary, non-UTF-8 token definitions require the source to be `&[u8]`, and will fail to compile with `&str`.
- Removed the `Split` trait.
- Removed the `Lexicon` type alias. `Logos::lexicon` has been replaced by `Logos::lex`, which then implements all logic internally.