Make RawTokenKind a trivial enum #1334
Conversation
Make RawTokenKind a trivial enum 🚥 #1333
Force-pushed from 219196b to 7090074 (Compare)
@swift-ci Please test
/// Pre-computes the keyword a lexeme might represent. This makes matching
/// a lexeme that has been converted into `PrepareForKeywordMatch` cheaper
/// when matching against multiple `TokenSpec`s that assume a keyword.
This would remove the main advantage I see in not just converting to a keyword in the lexer: for identifier cases, we don't need to do the string -> keyword conversion. But with PrepareForKeywordMatch we're now ignoring that. So why not just always convert to a keyword in the lexer once and simplify the rest?
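For illustration, the tradeoff being discussed could be sketched like this (a minimal sketch with simplified names; the actual `PrepareForKeywordMatch` in swift-syntax carries more state):

```swift
// Sketch only: simplified stand-ins for the swift-syntax types under discussion.
enum RawTokenKind {
  case identifier
  case keyword  // trivial case again: no associated Keyword value
}

enum Keyword: String {
  case get, set, `try`
}

/// Resolves the keyword a lexeme might represent exactly once, so that
/// matching against many keyword-based `TokenSpec`s does not repeat the
/// string -> Keyword lookup. Lexemes that are never matched against a
/// keyword spec skip the lookup entirely, which converting in the lexer
/// would not.
struct PrepareForKeywordMatch {
  let rawTokenKind: RawTokenKind
  let keyword: Keyword?

  init(rawTokenKind: RawTokenKind, text: String) {
    self.rawTokenKind = rawTokenKind
    self.keyword = Keyword(rawValue: text)
  }
}
```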
My impression is that the entire “how should we represent things that might be keywords in the lexer” discussion is related to this PR, but it’s fundamentally an orthogonal issue. IIUC we all agree that RawTokenKind should be a trivial enum, so I would propose that we merge this PR and continue the discussion of how keywords should be represented in a follow-up PR. WDYT?
To be clear, I'm not arguing against merging this PR. Yes, I agree we want a trivial enum here. I was just responding to your comment in the other PR about this addressing it 😅
Sources/SwiftParser/Modifiers.swift (Outdated)
@@ -201,7 +201,7 @@ extension Parser {
   } else {
     unexpectedBeforeDetail = nil
     detail = RawTokenSyntax(
-      missing: .keyword(.set),
+      missing: .keyword,
       text: "set",
Keyword.set.defaultText? (and the try/etc ones) Or alternatively have a RawTokenSyntax(missing:arena:) where missing is a Keyword that does that automatically.
Good idea. I added missingToken(Keyword) as a method on Parser.
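The helper being described might look roughly like this (a sketch; the actual signature and arena handling in swift-syntax may differ):

```swift
extension Parser {
  /// Sketch: build a missing keyword token, deriving its text from the
  /// keyword's default spelling instead of hard-coding strings like "set".
  mutating func missingToken(_ keyword: Keyword) -> RawTokenSyntax {
    RawTokenSyntax(
      missing: .keyword,
      text: keyword.defaultText,
      arena: self.arena
    )
  }
}
```

Call sites like the one in Modifiers.swift could then become `detail = self.missingToken(.set)`.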
Force-pushed from 7090074 to fdbc043 (Compare)
@swift-ci Please test
This changes `RawTokenKind` to be a trivial enum again, after I introduced an associated value for `keyword` a few months ago. To avoid repeatedly comparing strings or accessing `Keyword.defaultText` and thus regressing in performance, before matching a lexeme against a bunch of `TokenSpec`s that refer to keywords, we pre-compute the keyword it might represent by wrapping it in `PrepareForKeywordMatch`. Performance-wise, I have seen slight improvements on the order of ~2%-3% locally, but nothing that I would eyeball as being statistically significant; there doesn’t seem to be a performance regression, though. rdar://104659543
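As a self-contained illustration of the matching scheme described above (type and member names are simplified stand-ins, not the actual swift-syntax declarations):

```swift
// Trivial, payload-free token kind.
enum RawTokenKind { case identifier, keyword }

enum Keyword: String { case get, set, `try` }

/// Wraps a lexeme and resolves its possible keyword once up front,
/// before the lexeme is matched against a batch of specs.
struct PrepareForKeywordMatch {
  let kind: RawTokenKind
  let keyword: Keyword?

  init(kind: RawTokenKind, text: String) {
    self.kind = kind
    self.keyword = Keyword(rawValue: text)
  }
}

/// A spec the parser matches lexemes against.
enum TokenSpec {
  case tokenKind(RawTokenKind)
  case keyword(Keyword)

  func matches(_ lexeme: PrepareForKeywordMatch) -> Bool {
    switch self {
    case .tokenKind(let kind):
      return lexeme.kind == kind
    case .keyword(let keyword):
      // No string comparison here; the keyword was computed once
      // when the lexeme was wrapped.
      return lexeme.keyword == keyword
    }
  }
}
```

Matching one wrapped lexeme against many keyword specs then costs one string lookup total instead of one per spec.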
Force-pushed from fdbc043 to 667de33 (Compare)
@swift-ci Please test