29 Commits

SHA1 Message Date
9c85c6cf18 bump: Release v0.12.0 2025-01-21 14:37:49 -03:00
d4e189e596 chore: mark more mcfunction tests as bad 2025-01-21 14:37:10 -03:00
71fb699f96 chore: Pin ubuntu version in CI 2025-01-21 14:37:03 -03:00
5fe309f24c feat: Bumped to latest chroma release 2025-01-21 12:31:39 -03:00
62db71ae4d build: automate AUR release 2024-10-14 17:27:31 -03:00
fff6cad5ac bump: Release v0.11.1 2024-10-14 16:56:17 -03:00
44e6af8546 fix: support choosing lexers when used as a library 2024-10-14 16:45:50 -03:00
9e2585a875 bump: Release v0.11.0 2024-10-14 13:28:47 -03:00
c16b139fa3 feat: support selecting only some themes 2024-10-14 13:11:22 -03:00
e11775040c chore(build): strip static binary 2024-09-26 20:50:14 -03:00
30bc8cccba chore: build before tag 2024-09-26 20:47:58 -03:00
1638c253cb bump: Release v0.10.0 2024-09-26 20:35:51 -03:00
c374f52aee Merge pull request #9 from ralsina/conditional-lexers-and-themes 2024-09-26 20:34:24 -03:00
96fd9bdfe9 fix: Fix metadata to show crystal 2024-09-26 18:47:47 -03:00
0423811c5d feat: optional conditional baking of lexers 2024-09-26 18:47:47 -03:00
3d9d3ab5cf fix: strip binaries for release artifacts 2024-09-21 21:28:13 -03:00
92a97490f1 bump: Release v0.9.1 2024-09-21 21:08:41 -03:00
22decedf3a test: added minimal tests for svg and png formatters 2024-09-21 21:08:03 -03:00
8b34a1659d fix: Bug in high-level API for png formatter 2024-09-21 21:07:44 -03:00
3bf8172b89 fix: Terminal formatter was skipping things that it could highlight 2024-09-21 20:57:24 -03:00
4432da2893 bump: Release v0.9.0 2024-09-21 20:33:24 -03:00
6a6827f26a feat: PNG writer based on Stumpy libs 2024-09-21 20:22:30 -03:00
766f9b4708 chore: Improve changelog handling 2024-09-21 14:08:07 -03:00
9d49ff78d6 chore: detect version bump in release script 2024-09-21 14:05:14 -03:00
fb924543a0 chore: clean 2024-09-21 14:00:13 -03:00
09d4b7b02e bump: Release v0.8.0 2024-09-21 13:40:01 -03:00
08e81683ca fix: HTML formatter was setting bold wrong 2024-09-21 13:36:31 -03:00
9c70fbf389 feat: SVG formatter 2024-09-21 12:56:40 -03:00
d26393d8c9 chore: fix example code in README 2024-09-19 11:20:08 -03:00
54 changed files with 1783 additions and 675 deletions


@@ -1,9 +1,9 @@
# This configuration file was generated by `ameba --gen-config`
# on 2024-09-11 00:56:14 UTC using Ameba version 1.6.1.
# on 2024-09-21 14:59:30 UTC using Ameba version 1.6.1.
# The point is for the user to remove these configuration records
# one by one as the reported problems are removed from the code base.
# Problems found: 4
# Problems found: 3
# Run `ameba --only Documentation/DocumentationAdmonition` for details
Documentation/DocumentationAdmonition:
Description: Reports documentation admonitions
@@ -11,10 +11,24 @@ Documentation/DocumentationAdmonition:
Excluded:
- src/lexer.cr
- src/actions.cr
- spec/examples/crystal/lexer_spec.cr
Admonitions:
- TODO
- FIXME
- BUG
Enabled: true
Severity: Warning
# Problems found: 1
# Run `ameba --only Lint/SpecFilename` for details
Lint/SpecFilename:
Description: Enforces spec filenames to have `_spec` suffix
Excluded:
- spec/examples/crystal/hello.cr
IgnoredDirs:
- spec/support
- spec/fixtures
- spec/data
IgnoredFilenames:
- spec_helper
Enabled: true
Severity: Warning


@@ -9,7 +9,7 @@ permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: Download source
uses: actions/checkout@v4


@@ -7,7 +7,7 @@ permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: Download source
uses: actions/checkout@v4

.gitignore

@@ -12,3 +12,6 @@ venv/
.croupier
coverage/
run_tests
# We use the internal crystal lexer
lexers/crystal.xml


@@ -2,6 +2,90 @@
All notable changes to this project will be documented in this file.
## [0.12.0] - 2025-01-21
### 🚀 Features
- Bumped to latest chroma release
### ⚙️ Miscellaneous Tasks
- Pin ubuntu version in CI
- Mark more mcfunction tests as bad
### Build
- Automate AUR release
## [0.11.1] - 2024-10-14
### 🐛 Bug Fixes
- Support choosing lexers when used as a library
## [0.11.0] - 2024-10-14
### 🚀 Features
- Support selecting only some themes
## [0.10.0] - 2024-09-26
### 🚀 Features
- Optional conditional baking of lexers
### 🐛 Bug Fixes
- Strip binaries for release artifacts
- Fix metadata to show crystal
## [0.9.1] - 2024-09-22
### 🐛 Bug Fixes
- Terminal formatter was skipping things that it could highlight
- Bug in high-level API for png formatter
### 🧪 Testing
- Added minimal tests for svg and png formatters
## [0.9.0] - 2024-09-21
### 🚀 Features
- PNG writer based on Stumpy libs
### ⚙️ Miscellaneous Tasks
- Clean
- Detect version bump in release script
- Improve changelog handling
## [0.8.0] - 2024-09-21
### 🚀 Features
- SVG formatter
### 🐛 Bug Fixes
- HTML formatter was setting bold wrong
### 📚 Documentation
- Added instructions to add as a dependency
### 🧪 Testing
- Add basic tests for crystal and delegating lexers
- Added tests for CSS generation
### ⚙️ Miscellaneous Tasks
- Fix example code in README
## [0.7.0] - 2024-09-10
### 🚀 Features


@@ -113,3 +113,25 @@ tasks:
kcov --clean --include-path=./src ${PWD}/coverage ./bin/run_tests
outputs:
- coverage/index.html
loc:
phony: true
always_run: true
dependencies:
- src
commands: |
tokei src -e src/constants/
aur:
phony: true
always_run: true
commands: |
rm -rf aur-{{NAME}}
git clone ssh://aur@aur.archlinux.org/{{NAME}}.git aur-{{NAME}}
sed s/pkgver=.*/pkgver=$(shards version)/ -i aur-{{NAME}}/PKGBUILD
sed s/pkgrel=.*/pkgrel=1/ -i aur-{{NAME}}/PKGBUILD
cd aur-{{NAME}} && updpkgsums && makepkg --printsrcinfo > .SRCINFO
cd aur-{{NAME}} && makepkg -fsr
cd aur-{{NAME}} && git add PKGBUILD .SRCINFO
cd aur-{{NAME}} && git commit -a -m "Update to $(shards version)"
cd aur-{{NAME}} && git push


@@ -71,7 +71,7 @@ This does more or less the same thing, but more manually:
```crystal
lexer = Tartrazine.lexer("crystal")
formatter = Tartrazine::Html.new (
formatter = Tartrazine::Html.new(
theme: Tartrazine.theme("catppuccin-macchiato"),
line_numbers: true,
standalone: true,
@@ -82,6 +82,51 @@ puts formatter.format("puts \"Hello, world!\"", lexer)
The reason you may want to use the manual version is that reusing
the lexer and formatter objects improves performance.
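For example, here is a minimal sketch of that reuse pattern, using only the
API shown above (the `require` line and the file names are assumptions):

```crystal
require "tartrazine"

# Build the lexer and formatter once...
lexer = Tartrazine.lexer("crystal")
formatter = Tartrazine::Html.new(
  theme: Tartrazine.theme("catppuccin-macchiato"),
  line_numbers: true,
  standalone: true,
)

# ...then reuse both objects across many inputs instead of
# recreating them for every call (file names are hypothetical).
["foo.cr", "bar.cr"].each do |path|
  File.write("#{path}.html", formatter.format(File.read(path), lexer))
end
```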
## Choosing what Lexers you want
By default Tartrazine embeds all of its lexers in the binary, which
makes the binary large. If you are using it as a library, you may want
to include only a selection of lexers. To do that:
* Pass the `-Dnolexers` flag to the compiler
* Set the `TT_LEXERS` environment variable to a
comma-separated list of lexers you want to include.
This builds a binary with only the python, markdown, bash and yaml lexers (enough to highlight this `README.md`):
```bash
> TT_LEXERS=python,markdown,bash,yaml shards build -Dnolexers -d --error-trace
Dependencies are satisfied
Building: tartrazine
```
## Choosing what themes you want
Themes come from two places: tartrazine itself and [Sixteen](https://github.com/ralsina/sixteen).
To embed only selected themes, build your project with the `-Dnothemes` option and
set two environment variables to control which themes are included:
* `TT_THEMES` is a comma-separated list of themes to include from tartrazine (see the styles directory in the source)
* `SIXTEEN_THEMES` is a comma-separated list of themes to include from Sixteen (see the base16 directory in the sixteen source)
For example (using the tartrazine CLI as the project):
```bash
$ TT_THEMES=colorful,autumn SIXTEEN_THEMES=pasque,pico shards build -Dnothemes
Dependencies are satisfied
Building: tartrazine
$ ./bin/tartrazine --list-themes
autumn
colorful
pasque
pico
```
Be careful not to build without any themes at all; nothing will work.
## Contributing
1. Fork it (<https://github.com/ralsina/tartrazine/fork>)


@@ -7,10 +7,10 @@ docker run --rm --privileged \
# Build for AMD64
docker build . -f Dockerfile.static -t tartrazine-builder
docker run -ti --rm -v "$PWD":/app --user="$UID" tartrazine-builder /bin/sh -c "cd /app && rm -rf lib shard.lock && shards build --static --release"
docker run -ti --rm -v "$PWD":/app --user="$UID" tartrazine-builder /bin/sh -c "cd /app && rm -rf lib shard.lock && shards build --static --release && strip bin/tartrazine"
mv bin/tartrazine bin/tartrazine-static-linux-amd64
# Build for ARM64
docker build . -f Dockerfile.static --platform linux/arm64 -t tartrazine-builder
docker run -ti --rm -v "$PWD":/app --platform linux/arm64 --user="$UID" tartrazine-builder /bin/sh -c "cd /app && rm -rf lib shard.lock && shards build --static --release"
docker run -ti --rm -v "$PWD":/app --platform linux/arm64 --user="$UID" tartrazine-builder /bin/sh -c "cd /app && rm -rf lib shard.lock && shards build --static --release && strip bin/tartrazine"
mv bin/tartrazine bin/tartrazine-static-linux-arm64


@@ -67,7 +67,6 @@ commit_parsers = [
{ message = "^chore\\(deps.*\\)", skip = true },
{ message = "^chore\\(pr\\)", skip = true },
{ message = "^chore\\(pull\\)", skip = true },
{ message = "^chore\\(ignore\\)", skip = true },
{ message = "^chore|^ci", group = "<!-- 7 -->⚙️ Miscellaneous Tasks" },
{ body = ".*security", group = "<!-- 8 -->🛡️ Security" },
{ message = "^revert", group = "<!-- 9 -->◀️ Revert" },


@@ -2,14 +2,14 @@
set e
PKGNAME=$(basename "$PWD")
VERSION=$(git cliff --bumped-version |cut -dv -f2)
VERSION=$(git cliff --bumped-version --unreleased |cut -dv -f2)
sed "s/^version:.*$/version: $VERSION/g" -i shard.yml
git add shard.yml
hace lint test
git cliff --bump -o
git cliff --bump -u -p CHANGELOG.md
git commit -a -m "bump: Release v$VERSION"
hace static
git tag "v$VERSION"
git push --tags
hace static
gh release create "v$VERSION" "bin/$PKGNAME-static-linux-amd64" "bin/$PKGNAME-static-linux-arm64" --title "Release v$VERSION" --notes "$(git cliff -l -s all)"

fonts/courier-bold.pcf.gz (new binary file) and three more binary files, not shown.
lexers/atl.xml (new file)

@@ -0,0 +1,165 @@
<lexer>
<config>
<name>ATL</name>
<alias>atl</alias>
<filename>*.atl</filename>
<mime_type>text/x-atl</mime_type>
<dot_all>true</dot_all>
</config>
<rules>
<state name="root">
<rule pattern="(--.*?)(\n)">
<bygroups>
<token type="CommentSingle" />
<token type="TextWhitespace" />
</bygroups>
</rule>
<rule pattern="(and|distinct|endif|else|for|foreach|if|implies|in|let|not|or|self|super|then|thisModule|xor)\b">
<token type="Keyword" />
</rule>
<rule pattern="(OclUndefined|true|false|#\w+)\b">
<token type="KeywordConstant" />
</rule>
<rule pattern="(module|query|library|create|from|to|uses)\b">
<token type="KeywordNamespace" />
</rule>
<rule pattern="(do)(\s*)({)">
<bygroups>
<token type="KeywordNamespace" />
<token type="TextWhitespace" />
<token type="Punctuation" />
</bygroups>
</rule>
<rule pattern="(abstract|endpoint|entrypoint|lazy|unique)(\s+)">
<bygroups>
<token type="KeywordDeclaration" />
<token type="TextWhitespace" />
</bygroups>
</rule>
<rule pattern="(rule)(\s+)">
<bygroups>
<token type="KeywordNamespace" />
<token type="TextWhitespace" />
</bygroups>
</rule>
<rule pattern="(helper)(\s+)">
<bygroups>
<token type="KeywordNamespace" />
<token type="TextWhitespace" />
</bygroups>
</rule>
<rule pattern="(context)(\s+)">
<bygroups>
<token type="KeywordNamespace" />
<token type="TextWhitespace" />
</bygroups>
</rule>
<rule pattern="(def)(\s*)(:)(\s*)">
<bygroups>
<token type="KeywordNamespace" />
<token type="TextWhitespace" />
<token type="Punctuation" />
<token type="TextWhitespace" />
</bygroups>
</rule>
<rule pattern="(Bag|Boolean|Integer|OrderedSet|Real|Sequence|Set|String|Tuple)">
<token type="KeywordType" />
</rule>
<rule pattern="(\w+)(\s*)(&lt;-|&lt;:=)">
<bygroups>
<token type="NameNamespace" />
<token type="TextWhitespace" />
<token type="Punctuation" />
</bygroups>
</rule>
<rule pattern="#&quot;">
<token type="KeywordConstant" />
<push state="quotedenumliteral" />
</rule>
<rule pattern="&quot;">
<token type="NameNamespace" />
<push state="quotedname" />
</rule>
<rule pattern="[^\S\n]+">
<token type="TextWhitespace" />
</rule>
<rule pattern="&#x27;">
<token type="LiteralString" />
<push state="string" />
</rule>
<rule
pattern="[0-9]*\.[0-9]+">
<token type="LiteralNumberFloat" />
</rule>
<rule pattern="0|[1-9][0-9]*">
<token type="LiteralNumberInteger" />
</rule>
<rule pattern="[*&lt;&gt;+=/-]">
<token type="Operator" />
</rule>
<rule pattern="([{}();:.,!|]|-&gt;)">
<token type="Punctuation" />
</rule>
<rule pattern="\n">
<token type="TextWhitespace" />
</rule>
<rule pattern="\w+">
<token type="NameNamespace" />
</rule>
</state>
<state name="string">
<rule pattern="[^\\&#x27;]+">
<token type="LiteralString" />
</rule>
<rule pattern="\\\\">
<token type="LiteralString" />
</rule>
<rule pattern="\\&#x27;">
<token type="LiteralString" />
</rule>
<rule pattern="\\">
<token type="LiteralString" />
</rule>
<rule pattern="&#x27;">
<token type="LiteralString" />
<pop depth="1" />
</rule>
</state>
<state name="quotedname">
<rule pattern="[^\\&quot;]+">
<token type="NameNamespace" />
</rule>
<rule pattern="\\\\">
<token type="NameNamespace" />
</rule>
<rule pattern="\\&quot;">
<token type="NameNamespace" />
</rule>
<rule pattern="\\">
<token type="NameNamespace" />
</rule>
<rule pattern="&quot;">
<token type="NameNamespace" />
<pop depth="1" />
</rule>
</state>
<state name="quotedenumliteral">
<rule pattern="[^\\&quot;]+">
<token type="KeywordConstant" />
</rule>
<rule pattern="\\\\">
<token type="KeywordConstant" />
</rule>
<rule pattern="\\&quot;">
<token type="KeywordConstant" />
</rule>
<rule pattern="\\">
<token type="KeywordConstant" />
</rule>
<rule pattern="&quot;">
<token type="KeywordConstant" />
<pop depth="1" />
</rule>
</state>
</rules>
</lexer>

lexers/beef.xml (new file)

@@ -0,0 +1,120 @@
<lexer>
<config>
<name>Beef</name>
<alias>beef</alias>
<filename>*.bf</filename>
<mime_type>text/x-beef</mime_type>
<dot_all>true</dot_all>
<ensure_nl>true</ensure_nl>
</config>
<rules>
<state name="root">
<rule pattern="^\s*\[.*?\]">
<token type="NameAttribute"/>
</rule>
<rule pattern="[^\S\n]+">
<token type="Text"/>
</rule>
<rule pattern="\\\n">
<token type="Text"/>
</rule>
<rule pattern="///[^\n\r]*">
<token type="CommentSpecial"/>
</rule>
<rule pattern="//[^\n\r]*">
<token type="CommentSingle"/>
</rule>
<rule pattern="/[*].*?[*]/">
<token type="CommentMultiline"/>
</rule>
<rule pattern="\n">
<token type="Text"/>
</rule>
<rule pattern="[~!%^&amp;*()+=|\[\]:;,.&lt;&gt;/?-]">
<token type="Punctuation"/>
</rule>
<rule pattern="[{}]">
<token type="Punctuation"/>
</rule>
<rule pattern="@&#34;(&#34;&#34;|[^&#34;])*&#34;">
<token type="LiteralString"/>
</rule>
<rule pattern="\$@?&#34;(&#34;&#34;|[^&#34;])*&#34;">
<token type="LiteralString"/>
</rule>
<rule pattern="&#34;(\\\\|\\&#34;|[^&#34;\n])*[&#34;\n]">
<token type="LiteralString"/>
</rule>
<rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;">
<token type="LiteralStringChar"/>
</rule>
<rule pattern="0[xX][0-9a-fA-F]+[Ll]?|\d[_\d]*(\.\d*)?([eE][+-]?\d+)?[flFLdD]?">
<token type="LiteralNumber"/>
</rule>
<rule pattern="#[ \t]*(if|endif|else|elif|define|undef|line|error|warning|region|endregion|pragma|nullable)\b">
<token type="CommentPreproc"/>
</rule>
<rule pattern="\b(extern)(\s+)(alias)\b">
<bygroups>
<token type="Keyword"/>
<token type="Text"/>
<token type="Keyword"/>
</bygroups>
</rule>
<rule pattern="(as|await|base|break|by|case|catch|checked|continue|default|delegate|else|event|finally|fixed|for|repeat|goto|if|in|init|is|let|lock|new|scope|on|out|params|readonly|ref|return|sizeof|stackalloc|switch|this|throw|try|typeof|unchecked|virtual|void|while|get|set|new|yield|add|remove|value|alias|ascending|descending|from|group|into|orderby|select|thenby|where|join|equals)\b">
<token type="Keyword"/>
</rule>
<rule pattern="(global)(::)">
<bygroups>
<token type="Keyword"/>
<token type="Punctuation"/>
</bygroups>
</rule>
<rule pattern="(abstract|async|const|enum|explicit|extern|implicit|internal|operator|override|partial|extension|private|protected|public|static|sealed|unsafe|volatile)\b">
<token type="KeywordDeclaration"/>
</rule>
<rule pattern="(bool|byte|char8|char16|char32|decimal|double|float|int|int8|int16|int32|int64|long|object|sbyte|short|string|uint|uint8|uint16|uint32|uint64|uint|let|var)\b\??">
<token type="KeywordType"/>
</rule>
<rule pattern="(true|false|null)\b">
<token type="KeywordConstant"/>
</rule>
<rule pattern="(class|struct|record|interface)(\s+)">
<bygroups>
<token type="Keyword"/>
<token type="Text"/>
</bygroups>
<push state="class"/>
</rule>
<rule pattern="(namespace|using)(\s+)">
<bygroups>
<token type="Keyword"/>
<token type="Text"/>
</bygroups>
<push state="namespace"/>
</rule>
<rule pattern="@?[_a-zA-Z]\w*">
<token type="Name"/>
</rule>
</state>
<state name="class">
<rule pattern="@?[_a-zA-Z]\w*">
<token type="NameClass"/>
<pop depth="1"/>
</rule>
<rule>
<pop depth="1"/>
</rule>
</state>
<state name="namespace">
<rule pattern="(?=\()">
<token type="Text"/>
<pop depth="1"/>
</rule>
<rule pattern="(@?[_a-zA-Z]\w*|\.)+">
<token type="NameNamespace"/>
<pop depth="1"/>
</rule>
</state>
</rules>
</lexer>

lexers/csv.xml (new file)

@@ -0,0 +1,53 @@
<!--
Lexer for RFC-4180 compliant CSV subject to the following additions:
- UTF-8 encoding is accepted (the RFC requires 7-bit ASCII)
- The line terminator character can be LF or CRLF (the RFC allows CRLF only)
Link to the RFC-4180 specification: https://tools.ietf.org/html/rfc4180
Additions inspired by:
https://github.com/frictionlessdata/datapackage/issues/204#issuecomment-193242077
Future improvements:
- Identify non-quoted numbers as LiteralNumber
- Identify y as an error in "x"y. Currently it's identified as another string
literal.
-->
<lexer>
<config>
<name>CSV</name>
<alias>csv</alias>
<filename>*.csv</filename>
<mime_type>text/csv</mime_type>
</config>
<rules>
<state name="root">
<rule pattern="\r?\n">
<token type="Punctuation" />
</rule>
<rule pattern=",">
<token type="Punctuation" />
</rule>
<rule pattern="&quot;">
<token type="LiteralStringDouble" />
<push state="escaped" />
</rule>
<rule pattern="[^\r\n,]+">
<token type="LiteralString" />
</rule>
</state>
<state name="escaped">
<rule pattern="&quot;&quot;">
<token type="LiteralStringEscape"/>
</rule>
<rule pattern="&quot;">
<token type="LiteralStringDouble" />
<pop depth="1"/>
</rule>
<rule pattern="[^&quot;]+">
<token type="LiteralStringDouble" />
</rule>
</state>
</rules>
</lexer>


@@ -3,7 +3,6 @@
<name>Groff</name>
<alias>groff</alias>
<alias>nroff</alias>
<alias>roff</alias>
<alias>man</alias>
<filename>*.[1-9]</filename>
<filename>*.1p</filename>


@@ -95,19 +95,22 @@
<rule pattern="[:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]+">
<token type="Operator"/>
</rule>
<rule pattern="\d+[eE][+-]?\d+">
<rule pattern="\d+_*[eE][+-]?\d+">
<token type="LiteralNumberFloat"/>
</rule>
<rule pattern="\d+\.\d+([eE][+-]?\d+)?">
<rule pattern="\d+(_+[\d]+)*\.\d+(_+[\d]+)*([eE][+-]?\d+)?">
<token type="LiteralNumberFloat"/>
</rule>
<rule pattern="0[oO][0-7]+">
<rule pattern="0[oO](_*[0-7])+">
<token type="LiteralNumberOct"/>
</rule>
<rule pattern="0[xX][\da-fA-F]+">
<rule pattern="0[xX](_*[\da-fA-F])+">
<token type="LiteralNumberHex"/>
</rule>
<rule pattern="\d+">
<rule pattern="0[bB](_*[01])+">
<token type="LiteralNumberBin"/>
</rule>
<rule pattern="\d+(_*[\d])*">
<token type="LiteralNumberInteger"/>
</rule>
<rule pattern="&#39;">


@@ -3,6 +3,7 @@
<name>JSON</name>
<alias>json</alias>
<filename>*.json</filename>
<filename>*.jsonc</filename>
<filename>*.avsc</filename>
<mime_type>application/json</mime_type>
<dot_all>true</dot_all>

lexers/jsonnet.xml (new file)

@@ -0,0 +1,137 @@
<lexer>
<config>
<name>Jsonnet</name>
<alias>jsonnet</alias>
<filename>*.jsonnet</filename>
<filename>*.libsonnet</filename>
</config>
<rules>
<state name="_comments">
<rule pattern="(//|#).*\n"><token type="CommentSingle"/></rule>
<rule pattern="/\*\*([^/]|/(?!\*))*\*/"><token type="LiteralStringDoc"/></rule>
<rule pattern="/\*([^/]|/(?!\*))*\*/"><token type="Comment"/></rule>
</state>
<state name="root">
<rule><include state="_comments"/></rule>
<rule pattern="@&#x27;.*&#x27;"><token type="LiteralString"/></rule>
<rule pattern="@&quot;.*&quot;"><token type="LiteralString"/></rule>
<rule pattern="&#x27;"><token type="LiteralString"/><push state="singlestring"/></rule>
<rule pattern="&quot;"><token type="LiteralString"/><push state="doublestring"/></rule>
<rule pattern="\|\|\|(.|\n)*\|\|\|"><token type="LiteralString"/></rule>
<rule pattern="[+-]?[0-9]+(.[0-9])?"><token type="LiteralNumberFloat"/></rule>
<rule pattern="[!$~+\-&amp;|^=&lt;&gt;*/%]"><token type="Operator"/></rule>
<rule pattern="\{"><token type="Punctuation"/><push state="object"/></rule>
<rule pattern="\["><token type="Punctuation"/><push state="array"/></rule>
<rule pattern="local\b"><token type="Keyword"/><push state="local_name"/></rule>
<rule pattern="assert\b"><token type="Keyword"/><push state="assert"/></rule>
<rule pattern="(assert|else|error|false|for|if|import|importstr|in|null|tailstrict|then|self|super|true)\b"><token type="Keyword"/></rule>
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
<rule pattern="function(?=\()"><token type="Keyword"/><push state="function_params"/></rule>
<rule pattern="std\.[^\W\d]\w*(?=\()"><token type="NameBuiltin"/><push state="function_args"/></rule>
<rule pattern="[^\W\d]\w*(?=\()"><token type="NameFunction"/><push state="function_args"/></rule>
<rule pattern="[^\W\d]\w*"><token type="NameVariable"/></rule>
<rule pattern="[\.()]"><token type="Punctuation"/></rule>
</state>
<state name="singlestring">
<rule pattern="[^&#x27;\\]"><token type="LiteralString"/></rule>
<rule pattern="\\."><token type="LiteralStringEscape"/></rule>
<rule pattern="&#x27;"><token type="LiteralString"/><pop depth="1"/></rule>
</state>
<state name="doublestring">
<rule pattern="[^&quot;\\]"><token type="LiteralString"/></rule>
<rule pattern="\\."><token type="LiteralStringEscape"/></rule>
<rule pattern="&quot;"><token type="LiteralString"/><pop depth="1"/></rule>
</state>
<state name="array">
<rule pattern=","><token type="Punctuation"/></rule>
<rule pattern="\]"><token type="Punctuation"/><pop depth="1"/></rule>
<rule><include state="root"/></rule>
</state>
<state name="local_name">
<rule pattern="[^\W\d]\w*(?=\()"><token type="NameFunction"/><push state="function_params"/></rule>
<rule pattern="[^\W\d]\w*"><token type="NameVariable"/></rule>
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
<rule pattern="(?==)"><token type="TextWhitespace"/><push state="#pop" state="local_value"/></rule>
</state>
<state name="local_value">
<rule pattern="="><token type="Operator"/></rule>
<rule pattern=";"><token type="Punctuation"/><pop depth="1"/></rule>
<rule><include state="root"/></rule>
</state>
<state name="assert">
<rule pattern=":"><token type="Punctuation"/></rule>
<rule pattern=";"><token type="Punctuation"/><pop depth="1"/></rule>
<rule><include state="root"/></rule>
</state>
<state name="function_params">
<rule pattern="[^\W\d]\w*"><token type="NameVariable"/></rule>
<rule pattern="\("><token type="Punctuation"/></rule>
<rule pattern="\)"><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern=","><token type="Punctuation"/></rule>
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
<rule pattern="="><token type="Operator"/><push state="function_param_default"/></rule>
</state>
<state name="function_args">
<rule pattern="\("><token type="Punctuation"/></rule>
<rule pattern="\)"><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern=","><token type="Punctuation"/></rule>
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
<rule><include state="root"/></rule>
</state>
<state name="object">
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
<rule pattern="local\b"><token type="Keyword"/><push state="object_local_name"/></rule>
<rule pattern="assert\b"><token type="Keyword"/><push state="object_assert"/></rule>
<rule pattern="\["><token type="Operator"/><push state="field_name_expr"/></rule>
<rule pattern="(?=[^\W\d]\w*)"><token type="Text"/><push state="field_name"/></rule>
<rule pattern="\}"><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern="&quot;"><token type="NameVariable"/><push state="double_field_name"/></rule>
<rule pattern="&#x27;"><token type="NameVariable"/><push state="single_field_name"/></rule>
<rule><include state="_comments"/></rule>
</state>
<state name="field_name">
<rule pattern="[^\W\d]\w*(?=\()"><token type="NameFunction"/><push state="field_separator" state="function_params"/></rule>
<rule pattern="[^\W\d]\w*"><token type="NameVariable"/><push state="field_separator"/></rule>
</state>
<state name="double_field_name">
<rule pattern="([^&quot;\\]|\\.)*&quot;"><token type="NameVariable"/><push state="field_separator"/></rule>
</state>
<state name="single_field_name">
<rule pattern="([^&#x27;\\]|\\.)*&#x27;"><token type="NameVariable"/><push state="field_separator"/></rule>
</state>
<state name="field_name_expr">
<rule pattern="\]"><token type="Operator"/><push state="field_separator"/></rule>
<rule><include state="root"/></rule>
</state>
<state name="function_param_default">
<rule pattern="(?=[,\)])"><token type="TextWhitespace"/><pop depth="1"/></rule>
<rule><include state="root"/></rule>
</state>
<state name="field_separator">
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
<rule pattern="\+?::?:?"><token type="Punctuation"/><push state="#pop" state="#pop" state="field_value"/></rule>
<rule><include state="_comments"/></rule>
</state>
<state name="field_value">
<rule pattern=","><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern="\}"><token type="Punctuation"/><pop depth="2"/></rule>
<rule><include state="root"/></rule>
</state>
<state name="object_assert">
<rule pattern=":"><token type="Punctuation"/></rule>
<rule pattern=","><token type="Punctuation"/><pop depth="1"/></rule>
<rule><include state="root"/></rule>
</state>
<state name="object_local_name">
<rule pattern="[^\W\d]\w*"><token type="NameVariable"/><push state="#pop" state="object_local_value"/></rule>
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
</state>
<state name="object_local_value">
<rule pattern="="><token type="Operator"/></rule>
<rule pattern=","><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern="\}"><token type="Punctuation"/><pop depth="2"/></rule>
<rule><include state="root"/></rule>
</state>
</rules>
</lexer>


@@ -45,7 +45,7 @@
</emitters>
</usingbygroup>
</rule>
<rule pattern="(ACCESS|ADD|ADDRESSES|AGGREGATE|ALIGNED|ALL|ALTER|ANALYSIS|AND|ANY|ARITY|ARN|ARRANGEMENT|ARRAY|AS|ASC|ASSERT|ASSUME|AT|AUCTION|AUTHORITY|AVAILABILITY|AVRO|AWS|BATCH|BEGIN|BETWEEN|BIGINT|BILLED|BODY|BOOLEAN|BOTH|BPCHAR|BROKEN|BROKER|BROKERS|BY|BYTES|CARDINALITY|CASCADE|CASE|CAST|CERTIFICATE|CHAIN|CHAINS|CHAR|CHARACTER|CHARACTERISTICS|CHECK|CLIENT|CLOSE|CLUSTER|CLUSTERS|COALESCE|COLLATE|COLUMN|COLUMNS|COMMENT|COMMIT|COMMITTED|COMPACTION|COMPATIBILITY|COMPRESSION|COMPUTE|COMPUTECTL|CONFIG|CONFLUENT|CONNECTION|CONNECTIONS|CONSTRAINT|COPY|COUNT|COUNTER|CREATE|CREATECLUSTER|CREATEDB|CREATEROLE|CREATION|CROSS|CSV|CURRENT|CURSOR|DATABASE|DATABASES|DATUMS|DAY|DAYS|DEALLOCATE|DEBEZIUM|DEBUG|DEBUGGING|DEC|DECIMAL|DECLARE|DECODING|DECORRELATED|DEFAULT|DEFAULTS|DELETE|DELIMITED|DELIMITER|DELTA|DESC|DETAILS|DISCARD|DISK|DISTINCT|DOC|DOT|DOUBLE|DROP|EAGER|ELEMENT|ELSE|ENABLE|END|ENDPOINT|ENFORCED|ENVELOPE|ERROR|ERRORS|ESCAPE|ESTIMATE|EVERY|EXCEPT|EXECUTE|EXISTS|EXPECTED|EXPLAIN|EXPOSE|EXPRESSIONS|EXTERNAL|EXTRACT|FACTOR|FALSE|FAST|FEATURES|FETCH|FIELDS|FILE|FILTER|FIRST|FIXPOINT|FLOAT|FOLLOWING|FOR|FOREIGN|FORMAT|FORWARD|FROM|FULL|FULLNAME|FUNCTION|GENERATOR|GRANT|GREATEST|GROUP|GROUPS|HAVING|HEADER|HEADERS|HISTORY|HOLD|HOST|HOUR|HOURS|HUMANIZED|ID|IDENTIFIERS|IDS|IF|IGNORE|ILIKE|IMPLEMENTATIONS|IMPORTED|IN|INCLUDE|INDEX|INDEXES|INFO|INHERIT|INLINE|INNER|INPUT|INSERT|INSIGHTS|INSPECT|INT|INTEGER|INTERNAL|INTERSECT|INTERVAL|INTO|INTROSPECTION|IS|ISNULL|ISOLATION|JOIN|JOINS|JSON|KAFKA|KEY|KEYS|LAST|LATERAL|LATEST|LEADING|LEAST|LEFT|LEGACY|LETREC|LEVEL|LIKE|LIMIT|LINEAR|LIST|LOAD|LOCAL|LOCALLY|LOG|LOGICAL|LOGIN|LOWERING|MANAGED|MANUAL|MAP|MARKETING|MATERIALIZE|MATERIALIZED|MAX|MECHANISMS|MEMBERSHIP|MESSAGE|METADATA|MINUTE|MINUTES|MODE|MONTH|MONTHS|MUTUALLY|MYSQL|NAME|NAMES|NATURAL|NEGATIVE|NEW|NEXT|NO|NOCREATECLUSTER|NOCREATEDB|NOCREATEROLE|NODE|NOINHERIT|NOLOGIN|NON|NONE|NOSUPERUSER|NOT|NOTICE|NOTICES|NULL|NULLIF|NULLS|OBJECTS|OF|OFFSET|ON|ONLY|OPERATOR|OPTIMIZED|OPTIMIZER|OPTIONS|OR|ORDER|ORDINALITY|OUTER|OVER|OWNED|OWNER|PARTITION|PARTITIONS|PASSWORD|PATH|PHYSICAL|PLAN|PLANS|PORT|POSITION|POSTGRES|PRECEDING|PRECISION|PREFIX|PREPARE|PRIMARY|PRIVATELINK|PRIVILEGES|PROGRESS|PROTOBUF|PROTOCOL|PUBLICATION|PUSHDOWN|QUERY|QUOTE|RAISE|RANGE|RATE|RAW|READ|REAL|REASSIGN|RECURSION|RECURSIVE|REDACTED|REFERENCE|REFERENCES|REFRESH|REGEX|REGION|REGISTRY|REHYDRATION|RENAME|REOPTIMIZE|REPEATABLE|REPLACE|REPLAN|REPLICA|REPLICAS|REPLICATION|RESET|RESPECT|RESTRICT|RETAIN|RETURN|RETURNING|REVOKE|RIGHT|ROLE|ROLES|ROLLBACK|ROTATE|ROUNDS|ROW|ROWS|SASL|SCALE|SCHEDULE|SCHEMA|SCHEMAS|SECOND|SECONDS|SECRET|SECRETS|SECURITY|SEED|SELECT|SEQUENCES|SERIALIZABLE|SERVICE|SESSION|SET|SHARD|SHOW|SINK|SINKS|SIZE|SMALLINT|SNAPSHOT|SOME|SOURCE|SOURCES|SSH|SSL|START|STDIN|STDOUT|STORAGE|STORAGECTL|STRATEGY|STRICT|STRING|STRONG|SUBSCRIBE|SUBSOURCE|SUBSOURCES|SUBSTRING|SUBTREE|SUPERUSER|SWAP|SYNTAX|SYSTEM|TABLE|TABLES|TAIL|TEMP|TEMPORARY|TEXT|THEN|TICK|TIES|TIME|TIMELINE|TIMEOUT|TIMESTAMP|TIMESTAMPTZ|TIMING|TO|TOKEN|TOPIC|TPCH|TRACE|TRAILING|TRANSACTION|TRANSACTIONAL|TRIM|TRUE|TUNNEL|TYPE|TYPES|UNBOUNDED|UNCOMMITTED|UNION|UNIQUE|UNKNOWN|UP|UPDATE|UPSERT|URL|USAGE|USER|USERNAME|USERS|USING|VALIDATE|VALUE|VALUES|VARCHAR|VARIADIC|VARYING|VERSION|VIEW|VIEWS|WARNING|WEBHOOK|WHEN|WHERE|WINDOW|WIRE|WITH|WITHIN|WITHOUT|WORK|WORKERS|WRITE|YEAR|YEARS|ZONE|ZONES)\b">
<rule pattern="(ACCESS|ADD|ADDRESSES|AGGREGATE|ALIGNED|ALL|ALTER|ANALYSIS|AND|ANY|ARITY|ARN|ARRANGEMENT|ARRAY|AS|ASC|ASSERT|ASSUME|AT|AUCTION|AUTHORITY|AVAILABILITY|AVRO|AWS|BATCH|BEGIN|BETWEEN|BIGINT|BILLED|BODY|BOOLEAN|BOTH|BPCHAR|BROKEN|BROKER|BROKERS|BY|BYTES|CARDINALITY|CASCADE|CASE|CAST|CERTIFICATE|CHAIN|CHAINS|CHAR|CHARACTER|CHARACTERISTICS|CHECK|CLASS|CLIENT|CLOCK|CLOSE|CLUSTER|CLUSTERS|COALESCE|COLLATE|COLUMN|COLUMNS|COMMENT|COMMIT|COMMITTED|COMPACTION|COMPATIBILITY|COMPRESSION|COMPUTE|COMPUTECTL|CONFIG|CONFLUENT|CONNECTION|CONNECTIONS|CONSTRAINT|CONTINUAL|COPY|COUNT|COUNTER|CREATE|CREATECLUSTER|CREATEDB|CREATEROLE|CREATION|CROSS|CSV|CURRENT|CURSOR|DATABASE|DATABASES|DATUMS|DAY|DAYS|DEALLOCATE|DEBEZIUM|DEBUG|DEBUGGING|DEC|DECIMAL|DECLARE|DECODING|DECORRELATED|DEFAULT|DEFAULTS|DELETE|DELIMITED|DELIMITER|DELTA|DESC|DETAILS|DISCARD|DISK|DISTINCT|DOC|DOT|DOUBLE|DROP|EAGER|ELEMENT|ELSE|ENABLE|END|ENDPOINT|ENFORCED|ENVELOPE|ERROR|ERRORS|ESCAPE|ESTIMATE|EVERY|EXCEPT|EXCLUDE|EXECUTE|EXISTS|EXPECTED|EXPLAIN|EXPOSE|EXPRESSIONS|EXTERNAL|EXTRACT|FACTOR|FALSE|FAST|FEATURES|FETCH|FIELDS|FILE|FILTER|FIRST|FIXPOINT|FLOAT|FOLLOWING|FOR|FOREIGN|FORMAT|FORWARD|FROM|FULL|FULLNAME|FUNCTION|FUSION|GENERATOR|GRANT|GREATEST|GROUP|GROUPS|HAVING|HEADER|HEADERS|HISTORY|HOLD|HOST|HOUR|HOURS|HUMANIZED|HYDRATION|ID|IDENTIFIERS|IDS|IF|IGNORE|ILIKE|IMPLEMENTATIONS|IMPORTED|IN|INCLUDE|INDEX|INDEXES|INFO|INHERIT|INLINE|INNER|INPUT|INSERT|INSIGHTS|INSPECT|INT|INTEGER|INTERNAL|INTERSECT|INTERVAL|INTO|INTROSPECTION|IS|ISNULL|ISOLATION|JOIN|JOINS|JSON|KAFKA|KEY|KEYS|LAST|LATERAL|LATEST|LEADING|LEAST|LEFT|LEGACY|LETREC|LEVEL|LIKE|LIMIT|LINEAR|LIST|LOAD|LOCAL|LOCALLY|LOG|LOGICAL|LOGIN|LOWERING|MANAGED|MANUAL|MAP|MARKETING|MATERIALIZE|MATERIALIZED|MAX|MECHANISMS|MEMBERSHIP|MESSAGE|METADATA|MINUTE|MINUTES|MODE|MONTH|MONTHS|MUTUALLY|MYSQL|NAME|NAMES|NATURAL|NEGATIVE|NEW|NEXT|NO|NOCREATECLUSTER|NOCREATEDB|NOCREATEROLE|NODE|NOINHERIT|NOLOGIN|NON|NONE|NOSUPERUSER|NOT|NOTICE|NOTICES|NULL|NULLIF|NULLS|OBJECTS|OF|OFFSET|ON|ONLY|OPERATOR|OPTIMIZED|OPTIMIZER|OPTIONS|OR|ORDER|ORDINALITY|OUTER|OVER|OWNED|OWNER|PARTITION|PARTITIONS|PASSWORD|PATH|PHYSICAL|PLAN|PLANS|PORT|POSITION|POSTGRES|PRECEDING|PRECISION|PREFIX|PREPARE|PRIMARY|PRIVATELINK|PRIVILEGES|PROGRESS|PROTOBUF|PROTOCOL|PUBLIC|PUBLICATION|PUSHDOWN|QUERY|QUOTE|RAISE|RANGE|RATE|RAW|READ|READY|REAL|REASSIGN|RECURSION|RECURSIVE|REDACTED|REDUCE|REFERENCE|REFERENCES|REFRESH|REGEX|REGION|REGISTRY|RENAME|REOPTIMIZE|REPEATABLE|REPLACE|REPLAN|REPLICA|REPLICAS|REPLICATION|RESET|RESPECT|RESTRICT|RETAIN|RETURN|RETURNING|REVOKE|RIGHT|ROLE|ROLES|ROLLBACK|ROTATE|ROUNDS|ROW|ROWS|SASL|SCALE|SCHEDULE|SCHEMA|SCHEMAS|SECOND|SECONDS|SECRET|SECRETS|SECURITY|SEED|SELECT|SEQUENCES|SERIALIZABLE|SERVICE|SESSION|SET|SHARD|SHOW|SINK|SINKS|SIZE|SMALLINT|SNAPSHOT|SOME|SOURCE|SOURCES|SSH|SSL|START|STDIN|STDOUT|STORAGE|STORAGECTL|STRATEGY|STRICT|STRING|STRONG|SUBSCRIBE|SUBSOURCE|SUBSOURCES|SUBSTRING|SUBTREE|SUPERUSER|SWAP|SYNTAX|SYSTEM|TABLE|TABLES|TAIL|TASK|TEMP|TEMPORARY|TEXT|THEN|TICK|TIES|TIME|TIMELINE|TIMEOUT|TIMESTAMP|TIMESTAMPTZ|TIMING|TO|TOKEN|TOPIC|TPCH|TRACE|TRAILING|TRANSACTION|TRANSACTIONAL|TRIM|TRUE|TUNNEL|TYPE|TYPES|UNBOUNDED|UNCOMMITTED|UNION|UNIQUE|UNKNOWN|UNNEST|UNTIL|UP|UPDATE|UPSERT|URL|USAGE|USER|USERNAME|USERS|USING|VALIDATE|VALUE|VALUES|VARCHAR|VARIADIC|VARYING|VERSION|VIEW|VIEWS|WAIT|WARNING|WEBHOOK|WHEN|WHERE|WINDOW|WIRE|WITH|WITHIN|WITHOUT|WORK|WORKERS|WORKLOAD|WRITE|YEAR|YEARS|YUGABYTE|ZONE|ZONES)\b">
<token type="Keyword" />
</rule>
<rule pattern="[+*/&lt;&gt;=~!@#%^&amp;|`?-]+">


@@ -1,182 +1,137 @@
<lexer>
<config>
<name>mcfunction</name>
<name>MCFunction</name>
<alias>mcfunction</alias>
<alias>mcf</alias>
<filename>*.mcfunction</filename>
<dot_all>true</dot_all>
<not_multiline>true</not_multiline>
<mime_type>text/mcfunction</mime_type>
</config>
<rules>
<state name="nbtobjectvalue">
<rule pattern="(&#34;(\\\\|\\&#34;|[^&#34;])*&#34;|[a-zA-Z0-9_]+)">
<token type="NameTag"/>
<push state="nbtobjectattribute"/>
</rule>
<rule pattern="\}">
<token type="Punctuation"/>
<pop depth="1"/>
</rule>
</state>
<state name="nbtarrayvalue">
<rule>
<include state="nbtvalue"/>
</rule>
<rule pattern=",">
<token type="Punctuation"/>
</rule>
<rule pattern="\]">
<token type="Punctuation"/>
<pop depth="1"/>
</rule>
</state>
<state name="nbtvalue">
<rule>
<include state="simplevalue"/>
</rule>
<rule pattern="\{">
<token type="Punctuation"/>
<push state="nbtobjectvalue"/>
</rule>
<rule pattern="\[">
<token type="Punctuation"/>
<push state="nbtarrayvalue"/>
</rule>
</state>
<state name="argumentvalue">
<rule>
<include state="simplevalue"/>
</rule>
<rule pattern=",">
<token type="Punctuation"/>
<pop depth="1"/>
</rule>
<rule pattern="[}\]]">
<token type="Punctuation"/>
<pop depth="2"/>
</rule>
</state>
<state name="argumentlist">
<rule pattern="(nbt)(={)">
<bygroups>
<token type="NameAttribute"/>
<token type="Punctuation"/>
</bygroups>
<push state="nbtobjectvalue"/>
</rule>
<rule pattern="([A-Za-z0-9/_!]+)(={)">
<bygroups>
<token type="NameAttribute"/>
<token type="Punctuation"/>
</bygroups>
<push state="argumentlist"/>
</rule>
<rule pattern="([A-Za-z0-9/_!]+)(=)">
<bygroups>
<token type="NameAttribute"/>
<token type="Punctuation"/>
</bygroups>
<push state="argumentvalue"/>
</rule>
<rule>
<include state="simplevalue"/>
</rule>
<rule pattern=",">
<token type="Punctuation"/>
</rule>
<rule pattern="[}\]]">
<token type="Punctuation"/>
<pop depth="1"/>
</rule>
</state>
<state name="root">
<rule pattern="#.*?\n">
<token type="CommentSingle"/>
</rule>
<rule pattern="/?(geteduclientinfo|clearspawnpoint|defaultgamemode|transferserver|toggledownfall|immutableworld|detectredstone|setidletimeout|playanimation|classroommode|spreadplayers|testforblocks|setmaxplayers|setworldspawn|testforblock|worldbuilder|createagent|worldborder|camerashake|advancement|raytracefog|locatebiome|tickingarea|replaceitem|attributes|spawnpoint|difficulty|experience|scoreboard|whitelist|structure|playsound|stopsound|forceload|spectate|gamerule|function|schedule|wsserver|teleport|position|save-off|particle|setblock|datapack|mobevent|transfer|gamemode|save-all|bossbar|enchant|trigger|collect|execute|weather|teammsg|tpagent|banlist|dropall|publish|tellraw|testfor|save-on|destroy|ability|locate|summon|remove|effect|reload|ban-ip|recipe|pardon|detect|music|clear|clone|event|mixer|debug|title|ride|stop|list|turn|data|team|kick|loot|tell|help|give|flog|fill|move|time|seed|kill|save|item|deop|code|tag|ban|msg|say|tp|me|op|xp|w|place)\b">
<token type="KeywordReserved"/>
</rule>
<rule pattern="(@p|@r|@a|@e|@s|@c|@v)">
<token type="KeywordConstant"/>
</rule>
<rule pattern="\[">
<token type="Punctuation"/>
<push state="argumentlist"/>
</rule>
<rule pattern="{">
<token type="Punctuation"/>
<push state="nbtobjectvalue"/>
</rule>
<rule pattern="~">
<token type="NameBuiltin"/>
</rule>
<rule pattern="([a-zA-Z_]+:)?[a-zA-Z_]+\b">
<token type="Text"/>
</rule>
<rule pattern="([a-z]+)(\.)([0-9]+)\b">
<bygroups>
<token type="Text"/>
<token type="Punctuation"/>
<token type="LiteralNumber"/>
</bygroups>
</rule>
<rule pattern="([&lt;&gt;=]|&lt;=|&gt;=)">
<token type="Punctuation"/>
</rule>
<rule>
<include state="simplevalue"/>
</rule>
<rule pattern="\s+">
<token type="TextWhitespace"/>
</rule>
<rule><include state="names"/></rule>
<rule><include state="comments"/></rule>
<rule><include state="literals"/></rule>
<rule><include state="whitespace"/></rule>
<rule><include state="property"/></rule>
<rule><include state="operators"/></rule>
<rule><include state="selectors"/></rule>
</state>
<state name="simplevalue">
<rule pattern="(true|false)">
<token type="KeywordConstant"/>
</rule>
<rule pattern="[01]b">
<token type="LiteralNumber"/>
</rule>
<rule pattern="-?(0|[1-9]\d*)(\.\d+[eE](\+|-)?\d+|[eE](\+|-)?\d+|\.\d+)">
<token type="LiteralNumberFloat"/>
</rule>
<rule pattern="(-?\d+)(\.\.)(-?\d+)">
<bygroups>
<token type="LiteralNumberInteger"/>
<token type="Punctuation"/>
<token type="LiteralNumberInteger"/>
</bygroups>
</rule>
<rule pattern="-?(0|[1-9]\d*)">
<token type="LiteralNumberInteger"/>
</rule>
<rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
<token type="LiteralStringDouble"/>
</rule>
<rule pattern="&#39;[^&#39;]+&#39;">
<token type="LiteralStringSingle"/>
</rule>
<rule pattern="([!#]?)(\w+)">
<bygroups>
<token type="Punctuation"/>
<token type="Text"/>
</bygroups>
</rule>
<state name="names">
<rule pattern="^(\s*)([a-z_]+)"><bygroups><token type="TextWhitespace"/><token type="NameBuiltin"/></bygroups></rule>
<rule pattern="(?&lt;=run)\s+[a-z_]+"><token type="NameBuiltin"/></rule>
<rule pattern="\b[0-9a-fA-F]+(?:-[0-9a-fA-F]+){4}\b"><token type="NameVariable"/></rule>
<rule><include state="resource-name"/></rule>
<rule pattern="[A-Za-z_][\w.#%$]+"><token type="KeywordConstant"/></rule>
<rule pattern="[#%$][\w.#%$]+"><token type="NameVariableMagic"/></rule>
</state>
<state name="nbtobjectattribute">
<rule>
<include state="nbtvalue"/>
</rule>
<rule pattern=":">
<token type="Punctuation"/>
</rule>
<rule pattern=",">
<token type="Punctuation"/>
<pop depth="1"/>
</rule>
<rule pattern="\}">
<token type="Punctuation"/>
<pop depth="2"/>
</rule>
<state name="resource-name">
<rule pattern="#?[a-z_][a-z_.-]*:[a-z0-9_./-]+"><token type="NameFunction"/></rule>
<rule pattern="#?[a-z0-9_\.\-]+\/[a-z0-9_\.\-\/]+"><token type="NameFunction"/></rule>
</state>
<state name="whitespace">
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
</state>
<state name="comments">
<rule pattern="^\s*(#[&gt;!])"><token type="CommentMultiline"/><push state="comments.block" state="comments.block.emphasized"/></rule>
<rule pattern="#.*$"><token type="CommentSingle"/></rule>
</state>
<state name="comments.block">
<rule pattern="^\s*#[&gt;!]"><token type="CommentMultiline"/><push state="comments.block.emphasized"/></rule>
<rule pattern="^\s*#"><token type="CommentMultiline"/><push state="comments.block.normal"/></rule>
<rule><pop depth="1"/></rule>
</state>
<state name="comments.block.normal">
<rule><include state="comments.block.special"/></rule>
<rule pattern="\S+"><token type="CommentMultiline"/></rule>
<rule pattern="\n"><token type="Text"/><pop depth="1"/></rule>
<rule><include state="whitespace"/></rule>
</state>
<state name="comments.block.emphasized">
<rule><include state="comments.block.special"/></rule>
<rule pattern="\S+"><token type="LiteralStringDoc"/></rule>
<rule pattern="\n"><token type="Text"/><pop depth="1"/></rule>
<rule><include state="whitespace"/></rule>
</state>
<state name="comments.block.special">
<rule pattern="@\S+"><token type="NameDecorator"/></rule>
<rule><include state="resource-name"/></rule>
<rule pattern="[#%$][\w.#%$]+"><token type="NameVariableMagic"/></rule>
</state>
<state name="operators">
<rule pattern="[\-~%^?!+*&lt;&gt;\\/|&amp;=.]"><token type="Operator"/></rule>
</state>
<state name="literals">
<rule pattern="\.\."><token type="Literal"/></rule>
<rule pattern="(true|false)"><token type="KeywordPseudo"/></rule>
<rule pattern="[A-Za-z_]+"><token type="NameVariableClass"/></rule>
<rule pattern="[0-7]b"><token type="LiteralNumberByte"/></rule>
<rule pattern="[+-]?\d*\.?\d+([eE]?[+-]?\d+)?[df]?\b"><token type="LiteralNumberFloat"/></rule>
<rule pattern="[+-]?\d+\b"><token type="LiteralNumberInteger"/></rule>
<rule pattern="&quot;"><token type="LiteralStringDouble"/><push state="literals.string-double"/></rule>
<rule pattern="&#x27;"><token type="LiteralStringSingle"/><push state="literals.string-single"/></rule>
</state>
<state name="literals.string-double">
<rule pattern="\\."><token type="LiteralStringEscape"/></rule>
<rule pattern="[^\\&quot;\n]+"><token type="LiteralStringDouble"/></rule>
<rule pattern="&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
</state>
<state name="literals.string-single">
<rule pattern="\\."><token type="LiteralStringEscape"/></rule>
<rule pattern="[^\\&#x27;\n]+"><token type="LiteralStringSingle"/></rule>
<rule pattern="&#x27;"><token type="LiteralStringSingle"/><pop depth="1"/></rule>
</state>
<state name="selectors">
<rule pattern="@[a-z]"><token type="NameVariable"/></rule>
</state>
<state name="property">
<rule pattern="\{"><token type="Punctuation"/><push state="property.curly" state="property.key"/></rule>
<rule pattern="\["><token type="Punctuation"/><push state="property.square" state="property.key"/></rule>
</state>
<state name="property.curly">
<rule><include state="whitespace"/></rule>
<rule><include state="property"/></rule>
<rule pattern="\}"><token type="Punctuation"/><pop depth="1"/></rule>
</state>
<state name="property.square">
<rule><include state="whitespace"/></rule>
<rule><include state="property"/></rule>
<rule pattern="\]"><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern=","><token type="Punctuation"/></rule>
</state>
<state name="property.key">
<rule><include state="whitespace"/></rule>
<rule pattern="#?[a-z_][a-z_\.\-]*\:[a-z0-9_\.\-/]+(?=\s*\=)"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
<rule pattern="#?[a-z_][a-z0-9_\.\-/]+"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
<rule pattern="[A-Za-z_\-\+]+"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
<rule pattern="&quot;"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
<rule pattern="&#x27;"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
<rule pattern="-?\d+"><token type="LiteralNumberInteger"/><push state="property.delimiter"/></rule>
<rule><pop depth="1"/></rule>
</state>
<state name="property.key.string-double">
<rule pattern="\\."><token type="LiteralStringEscape"/></rule>
<rule pattern="[^\\&quot;\n]+"><token type="NameAttribute"/></rule>
<rule pattern="&quot;"><token type="NameAttribute"/><pop depth="1"/></rule>
</state>
<state name="property.key.string-single">
<rule pattern="\\."><token type="LiteralStringEscape"/></rule>
<rule pattern="[^\\&#x27;\n]+"><token type="NameAttribute"/></rule>
<rule pattern="&#x27;"><token type="NameAttribute"/><pop depth="1"/></rule>
</state>
<state name="property.delimiter">
<rule><include state="whitespace"/></rule>
<rule pattern="[:=]!?"><token type="Punctuation"/><push state="property.value"/></rule>
<rule pattern=","><token type="Punctuation"/></rule>
<rule><pop depth="1"/></rule>
</state>
<state name="property.value">
<rule><include state="whitespace"/></rule>
<rule pattern="#?[a-z_][a-z_\.\-]*\:[a-z0-9_\.\-/]+"><token type="NameTag"/></rule>
<rule pattern="#?[a-z_][a-z0-9_\.\-/]+"><token type="NameTag"/></rule>
<rule><include state="literals"/></rule>
<rule><include state="property"/></rule>
<rule><pop depth="1"/></rule>
</state>
</rules>
</lexer>

View File

@@ -106,7 +106,7 @@
</bygroups>
<push state="interpol"/>
</rule>
<rule pattern="(&amp;&amp;|&gt;=|&lt;=|\+\+|-&gt;|!=|\|\||//|==|@|!|\+|\?|&lt;|\.|&gt;|\*)">
<rule pattern="(&amp;&amp;|&gt;=|&lt;=|\+\+|-&gt;|!=|=|\|\||//|==|@|!|\+|\?|&lt;|\.|&gt;|\*)">
<token type="Operator"/>
</rule>
<rule pattern="[;:]">

lexers/nsis.xml (new file)

@@ -0,0 +1,59 @@
<lexer>
<config>
<name>NSIS</name>
<alias>nsis</alias>
<alias>nsi</alias>
<alias>nsh</alias>
<filename>*.nsi</filename>
<filename>*.nsh</filename>
<mime_type>text/x-nsis</mime_type>
<case_insensitive>true</case_insensitive>
<not_multiline>true</not_multiline>
</config>
<rules>
<state name="root">
<rule pattern="([;#].*)(\n)"><bygroups><token type="Comment"/><token type="TextWhitespace"/></bygroups></rule>
<rule pattern="&#x27;.*?&#x27;"><token type="LiteralStringSingle"/></rule>
<rule pattern="&quot;"><token type="LiteralStringDouble"/><push state="str_double"/></rule>
<rule pattern="`"><token type="LiteralStringBacktick"/><push state="str_backtick"/></rule>
<rule><include state="macro"/></rule>
<rule><include state="interpol"/></rule>
<rule><include state="basic"/></rule>
<rule pattern="\$\{[a-z_|][\w|]*\}"><token type="KeywordPseudo"/></rule>
<rule pattern="/[a-z_]\w*"><token type="NameAttribute"/></rule>
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
<rule pattern="[\w.]+"><token type="Text"/></rule>
</state>
<state name="basic">
<rule pattern="(\n)(Function)(\s+)([._a-z][.\w]*)\b"><bygroups><token type="TextWhitespace"/><token type="Keyword"/><token type="TextWhitespace"/><token type="NameFunction"/></bygroups></rule>
<rule pattern="\b([_a-z]\w*)(::)([a-z][a-z0-9]*)\b"><bygroups><token type="KeywordNamespace"/><token type="Punctuation"/><token type="NameFunction"/></bygroups></rule>
<rule pattern="\b([_a-z]\w*)(:)"><bygroups><token type="NameLabel"/><token type="Punctuation"/></bygroups></rule>
<rule pattern="(\b[ULS]|\B)([!&lt;&gt;=]?=|\&lt;\&gt;?|\&gt;)\B"><token type="Operator"/></rule>
<rule pattern="[|+-]"><token type="Operator"/></rule>
<rule pattern="\\"><token type="Punctuation"/></rule>
<rule pattern="\b(Abort|Add(?:BrandingImage|Size)|Allow(?:RootDirInstall|SkipFiles)|AutoCloseWindow|BG(?:Font|Gradient)|BrandingText|BringToFront|Call(?:InstDLL)?|(?:Sub)?Caption|ChangeUI|CheckBitmap|ClearErrors|CompletedText|ComponentText|CopyFiles|CRCCheck|Create(?:Directory|Font|Shortcut)|Delete(?:INI(?:Sec|Str)|Reg(?:Key|Value))?|DetailPrint|DetailsButtonText|Dir(?:Show|Text|Var|Verify)|(?:Disabled|Enabled)Bitmap|EnableWindow|EnumReg(?:Key|Value)|Exch|Exec(?:Shell|Wait)?|ExpandEnvStrings|File(?:BufSize|Close|ErrorText|Open|Read(?:Byte)?|Seek|Write(?:Byte)?)?|Find(?:Close|First|Next|Window)|FlushINI|Function(?:End)?|Get(?:CurInstType|CurrentAddress|DlgItem|DLLVersion(?:Local)?|ErrorLevel|FileTime(?:Local)?|FullPathName|FunctionAddress|InstDirError|LabelAddress|TempFileName)|Goto|HideWindow|Icon|If(?:Abort|Errors|FileExists|RebootFlag|Silent)|InitPluginsDir|Install(?:ButtonText|Colors|Dir(?:RegKey)?)|Inst(?:ProgressFlags|Type(?:[GS]etText)?)|Int(?:CmpU?|Fmt|Op)|IsWindow|LangString(?:UP)?|License(?:BkColor|Data|ForceSelection|LangString|Text)|LoadLanguageFile|LockWindow|Log(?:Set|Text)|MessageBox|MiscButtonText|Name|Nop|OutFile|(?:Uninst)?Page(?:Ex(?:End)?)?|PluginDir|Pop|Push|Quit|Read(?:(?:Env|INI|Reg)Str|RegDWORD)|Reboot|(?:Un)?RegDLL|Rename|RequestExecutionLevel|ReserveFile|Return|RMDir|SearchPath|Section(?:Divider|End|(?:(?:Get|Set)(?:Flags|InstTypes|Size|Text))|Group(?:End)?|In)?|SendMessage|Set(?:AutoClose|BrandingImage|Compress(?:ionLevel|or(?:DictSize)?)?|CtlColors|CurInstType|DatablockOptimize|DateSave|Details(?:Print|View)|Error(?:s|Level)|FileAttributes|Font|OutPath|Overwrite|PluginUnload|RebootFlag|ShellVarContext|Silent|StaticBkColor)|Show(?:(?:I|Uni)nstDetails|Window)|Silent(?:Un)?Install|Sleep|SpaceTexts|Str(?:CmpS?|Cpy|Len)|SubSection(?:End)?|Uninstall(?:ButtonText|(?:Sub)?Caption|EXEName|Icon|Text)|UninstPage|Var|VI(?:AddVersionKey|ProductVersion)|WindowIcon|Write(?:INIStr|Reg(:?Bin|DWORD|(?:Expand)?Str)|Uninstaller)|XPStyle)\b"><token type="Keyword"/></rule>
<rule pattern="\b(CUR|END|(?:FILE_ATTRIBUTE_)?(?:ARCHIVE|HIDDEN|NORMAL|OFFLINE|READONLY|SYSTEM|TEMPORARY)|HK(CC|CR|CU|DD|LM|PD|U)|HKEY_(?:CLASSES_ROOT|CURRENT_(?:CONFIG|USER)|DYN_DATA|LOCAL_MACHINE|PERFORMANCE_DATA|USERS)|ID(?:ABORT|CANCEL|IGNORE|NO|OK|RETRY|YES)|MB_(?:ABORTRETRYIGNORE|DEFBUTTON[1-4]|ICON(?:EXCLAMATION|INFORMATION|QUESTION|STOP)|OK(?:CANCEL)?|RETRYCANCEL|RIGHT|SETFOREGROUND|TOPMOST|USERICON|YESNO(?:CANCEL)?)|SET|SHCTX|SW_(?:HIDE|SHOW(?:MAXIMIZED|MINIMIZED|NORMAL))|admin|all|auto|both|bottom|bzip2|checkbox|colored|current|false|force|hide|highest|if(?:diff|newer)|lastused|leave|left|listonly|lzma|nevershow|none|normal|off|on|pop|push|radiobuttons|right|show|silent|silentlog|smooth|textonly|top|true|try|user|zlib)\b"><token type="NameConstant"/></rule>
</state>
<state name="macro">
<rule pattern="\!(addincludedir(?:dir)?|addplugindir|appendfile|cd|define|delfilefile|echo(?:message)?|else|endif|error|execute|if(?:macro)?n?(?:def)?|include|insertmacro|macro(?:end)?|packhdr|search(?:parse|replace)|system|tempfilesymbol|undef|verbose|warning)\b"><token type="CommentPreproc"/></rule>
</state>
<state name="interpol">
<rule pattern="\$(R?[0-9])"><token type="NameBuiltinPseudo"/></rule>
<rule pattern="\$(ADMINTOOLS|APPDATA|CDBURN_AREA|COOKIES|COMMONFILES(?:32|64)|DESKTOP|DOCUMENTS|EXE(?:DIR|FILE|PATH)|FAVORITES|FONTS|HISTORY|HWNDPARENT|INTERNET_CACHE|LOCALAPPDATA|MUSIC|NETHOOD|PICTURES|PLUGINSDIR|PRINTHOOD|PROFILE|PROGRAMFILES(?:32|64)|QUICKLAUNCH|RECENT|RESOURCES(?:_LOCALIZED)?|SENDTO|SM(?:PROGRAMS|STARTUP)|STARTMENU|SYSDIR|TEMP(?:LATES)?|VIDEOS|WINDIR|\{NSISDIR\})"><token type="NameBuiltin"/></rule>
<rule pattern="\$(CMDLINE|INSTDIR|OUTDIR|LANGUAGE)"><token type="NameVariableGlobal"/></rule>
<rule pattern="\$[a-z_]\w*"><token type="NameVariable"/></rule>
</state>
<state name="str_double">
<rule pattern="&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
<rule pattern="\$(\\[nrt&quot;]|\$)"><token type="LiteralStringEscape"/></rule>
<rule><include state="interpol"/></rule>
<rule pattern="[^&quot;]+"><token type="LiteralStringDouble"/></rule>
</state>
<state name="str_backtick">
<rule pattern="`"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
<rule pattern="\$(\\[nrt&quot;]|\$)"><token type="LiteralStringEscape"/></rule>
<rule><include state="interpol"/></rule>
<rule pattern="[^`]+"><token type="LiteralStringDouble"/></rule>
</state>
</rules>
</lexer>


@@ -41,6 +41,14 @@
<rule pattern="\b(as|assert|begin|class|constraint|do|done|downto|else|end|exception|external|false|for|fun|function|functor|if|in|include|inherit|initializer|lazy|let|match|method|module|mutable|new|object|of|open|private|raise|rec|sig|struct|then|to|true|try|type|value|val|virtual|when|while|with)\b">
<token type="Keyword"/>
</rule>
<rule pattern="({([a-z_]*)\|)([\s\S]+?)(?=\|\2})(\|\2})">
<bygroups>
<token type="LiteralStringAffix"/>
<token type="Ignore"/>
<token type="LiteralString"/>
<token type="LiteralStringAffix"/>
</bygroups>
</rule>
<rule pattern="(~|\}|\|]|\||\{&lt;|\{|`|_|]|\[\||\[&gt;|\[&lt;|\[|\?\?|\?|&gt;\}|&gt;]|&gt;|=|&lt;-|&lt;|;;|;|:&gt;|:=|::|:|\.\.|\.|-&gt;|-\.|-|,|\+|\*|\)|\(|&amp;&amp;|&amp;|#|!=)">
<token type="Operator"/>
</rule>


@@ -51,6 +51,20 @@
<rule pattern = "\#[a-zA-Z_]+\b">
<token type = "NameDecorator"/>
</rule>
<rule pattern = "^\#\+\w+\s*$">
<token type = "NameAttribute"/>
</rule>
<rule pattern = "^(\#\+\w+)(\s+)(\!)?([A-Za-z0-9-_!]+)(?:(,)(\!)?([A-Za-z0-9-_!]+))*\s*$">
<bygroups>
<token type = "NameAttribute"/>
<token type = "TextWhitespace"/>
<token type = "Operator"/>
<token type = "Name"/>
<token type = "Punctuation"/>
<token type = "Operator"/>
<token type = "Name"/>
</bygroups>
</rule>
<rule pattern = "\@(\([a-zA-Z_]+\b\s*.*\)|\(?[a-zA-Z_]+\)?)">
<token type = "NameAttribute"/>
</rule>

lexers/snbt.xml (new file)

@@ -0,0 +1,57 @@
<lexer>
<config>
<name>SNBT</name>
<alias>snbt</alias>
<filename>*.snbt</filename>
<mime_type>text/snbt</mime_type>
</config>
<rules>
<state name="root">
<rule pattern="\{"><token type="Punctuation"/><push state="compound"/></rule>
<rule pattern="[^\{]+"><token type="Text"/></rule>
</state>
<state name="whitespace">
<rule pattern="\s+"><token type="TextWhitespace"/></rule>
</state>
<state name="operators">
<rule pattern="[,:;]"><token type="Punctuation"/></rule>
</state>
<state name="literals">
<rule pattern="(true|false)"><token type="KeywordConstant"/></rule>
<rule pattern="-?\d+[eE]-?\d+"><token type="LiteralNumberFloat"/></rule>
<rule pattern="-?\d*\.\d+[fFdD]?"><token type="LiteralNumberFloat"/></rule>
<rule pattern="-?\d+[bBsSlLfFdD]?"><token type="LiteralNumberInteger"/></rule>
<rule pattern="&quot;"><token type="LiteralStringDouble"/><push state="literals.string_double"/></rule>
<rule pattern="&#x27;"><token type="LiteralStringSingle"/><push state="literals.string_single"/></rule>
</state>
<state name="literals.string_double">
<rule pattern="\\."><token type="LiteralStringEscape"/></rule>
<rule pattern="[^\\&quot;\n]+"><token type="LiteralStringDouble"/></rule>
<rule pattern="&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
</state>
<state name="literals.string_single">
<rule pattern="\\."><token type="LiteralStringEscape"/></rule>
<rule pattern="[^\\&#x27;\n]+"><token type="LiteralStringSingle"/></rule>
<rule pattern="&#x27;"><token type="LiteralStringSingle"/><pop depth="1"/></rule>
</state>
<state name="compound">
<rule pattern="[A-Z_a-z]+"><token type="NameAttribute"/></rule>
<rule><include state="operators"/></rule>
<rule><include state="whitespace"/></rule>
<rule><include state="literals"/></rule>
<rule pattern="\{"><token type="Punctuation"/><push/></rule>
<rule pattern="\["><token type="Punctuation"/><push state="list"/></rule>
<rule pattern="\}"><token type="Punctuation"/><pop depth="1"/></rule>
</state>
<state name="list">
<rule pattern="[A-Z_a-z]+"><token type="NameAttribute"/></rule>
<rule><include state="literals"/></rule>
<rule><include state="operators"/></rule>
<rule><include state="whitespace"/></rule>
<rule pattern="\["><token type="Punctuation"/><push/></rule>
<rule pattern="\{"><token type="Punctuation"/><push state="compound"/></rule>
<rule pattern="\]"><token type="Punctuation"/><pop depth="1"/></rule>
</state>
</rules>
</lexer>


@@ -157,8 +157,20 @@
<rule pattern="(continue|returns|storage|memory|delete|return|throw|break|catch|while|else|from|new|try|for|if|is|as|do|in|_)\b">
<token type="Keyword"/>
</rule>
<rule pattern="assembly\b">
<rule pattern="(assembly)(\s+\()(.+)(\)\s+{)">
<bygroups>
<token type="Keyword"/>
<token type="Text"/>
<token type="LiteralString"/>
<token type="Text"/>
</bygroups>
<push state="assembly"/>
</rule>
<rule pattern="(assembly)(\s+{)">
<bygroups>
<token type="Keyword"/>
<token type="Text"/>
</bygroups>
<push state="assembly"/>
</rule>
<rule pattern="(contract|interface|enum|event|struct)(\s+)([a-zA-Z_]\w*)">
@@ -235,7 +247,7 @@
<token type="Punctuation"/>
<pop depth="1"/>
</rule>
<rule pattern="[(),]">
<rule pattern="[(),.]">
<token type="Punctuation"/>
</rule>
<rule pattern=":=|=:">


@@ -51,6 +51,22 @@
</rule>
</state>
<state name="tag">
<rule>
<include state="jsx"/>
</rule>
<rule pattern=",">
<token type="Punctuation"/>
</rule>
<rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
<token type="LiteralStringDouble"/>
</rule>
<rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
<token type="LiteralStringSingle"/>
</rule>
<rule pattern="`">
<token type="LiteralStringBacktick"/>
<push state="interp"/>
</rule>
<rule>
<include state="commentsandwhitespace"/>
</rule>
@@ -171,7 +187,7 @@
</rule>
<rule pattern="(?=/)">
<token type="Text"/>
<push state="#pop" state="badregex"/>
<push state="badregex"/>
</rule>
<rule>
<pop depth="1"/>

107
lexers/typst.xml Normal file
View File

@@ -0,0 +1,107 @@
<lexer>
<config>
<name>Typst</name>
<alias>typst</alias>
<filename>*.typ</filename>
<mime_type>text/x-typst</mime_type>
</config>
<rules>
<state name="root">
<rule><include state="markup"/></rule>
</state>
<state name="into_code">
<rule pattern="(\#let|\#set|\#show)\b"><token type="KeywordDeclaration"/><push state="inline_code"/></rule>
<rule pattern="(\#import|\#include)\b"><token type="KeywordNamespace"/><push state="inline_code"/></rule>
<rule pattern="(\#if|\#for|\#while|\#export)\b"><token type="KeywordReserved"/><push state="inline_code"/></rule>
<rule pattern="#\{"><token type="Punctuation"/><push state="code"/></rule>
<rule pattern="#\("><token type="Punctuation"/><push state="code"/></rule>
<rule pattern="(#[a-zA-Z_][a-zA-Z0-9_-]*)(\[)"><bygroups><token type="NameFunction"/><token type="Punctuation"/></bygroups><push state="markup"/></rule>
<rule pattern="(#[a-zA-Z_][a-zA-Z0-9_-]*)(\()"><bygroups><token type="NameFunction"/><token type="Punctuation"/></bygroups><push state="code"/></rule>
<rule pattern="(\#true|\#false|\#none|\#auto)\b"><token type="KeywordConstant"/></rule>
<rule pattern="#[a-zA-Z_][a-zA-Z0-9_]*"><token type="NameVariable"/></rule>
<rule pattern="#0x[0-9a-fA-F]+"><token type="LiteralNumberHex"/></rule>
<rule pattern="#0b[01]+"><token type="LiteralNumberBin"/></rule>
<rule pattern="#0o[0-7]+"><token type="LiteralNumberOct"/></rule>
<rule pattern="#[0-9]+[\.e][0-9]+"><token type="LiteralNumberFloat"/></rule>
<rule pattern="#[0-9]+"><token type="LiteralNumberInteger"/></rule>
</state>
<state name="markup">
<rule><include state="comment"/></rule>
<rule pattern="^\s*=+.*$"><token type="GenericHeading"/></rule>
<rule pattern="[*][^*]*[*]"><token type="GenericStrong"/></rule>
<rule pattern="_[^_]*_"><token type="GenericEmph"/></rule>
<rule pattern="\$"><token type="Punctuation"/><push state="math"/></rule>
<rule pattern="`[^`]*`"><token type="LiteralStringBacktick"/></rule>
<rule pattern="^(\s*)(-)(\s+)"><bygroups><token type="TextWhitespace"/><token type="Punctuation"/><token type="TextWhitespace"/></bygroups></rule>
<rule pattern="^(\s*)(\+)(\s+)"><bygroups><token type="TextWhitespace"/><token type="Punctuation"/><token type="TextWhitespace"/></bygroups></rule>
<rule pattern="^(\s*)([0-9]+\.)"><bygroups><token type="TextWhitespace"/><token type="Punctuation"/></bygroups></rule>
<rule pattern="^(\s*)(/)(\s+)([^:]+)(:)"><bygroups><token type="TextWhitespace"/><token type="Punctuation"/><token type="TextWhitespace"/><token type="NameVariable"/><token type="Punctuation"/></bygroups></rule>
<rule pattern="&lt;[a-zA-Z_][a-zA-Z0-9_-]*&gt;"><token type="NameLabel"/></rule>
<rule pattern="@[a-zA-Z_][a-zA-Z0-9_-]*"><token type="NameLabel"/></rule>
<rule pattern="\\#"><token type="Text"/></rule>
<rule><include state="into_code"/></rule>
<rule pattern="```(?:.|\n)*?```"><token type="LiteralStringBacktick"/></rule>
<rule pattern="https?://[0-9a-zA-Z~/%#&amp;=\&#x27;,;.+?]*"><token type="GenericEmph"/></rule>
<rule pattern="(\-\-\-|\\|\~|\-\-|\.\.\.)\B"><token type="Punctuation"/></rule>
<rule pattern="\\\["><token type="Punctuation"/></rule>
<rule pattern="\\\]"><token type="Punctuation"/></rule>
<rule pattern="\["><token type="Punctuation"/><push/></rule>
<rule pattern="\]"><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern="[ \t]+\n?|\n"><token type="TextWhitespace"/></rule>
<rule pattern="((?![*_$`&lt;@\\#\] ]|https?://).)+"><token type="Text"/></rule>
</state>
<state name="math">
<rule><include state="comment"/></rule>
<rule pattern="(\\_|\\\^|\\\&amp;)"><token type="Text"/></rule>
<rule pattern="(_|\^|\&amp;|;)"><token type="Punctuation"/></rule>
<rule pattern="(\+|/|=|\[\||\|\]|\|\||\*|:=|::=|\.\.\.|&#x27;|\-|=:|!=|&gt;&gt;|&gt;=|&gt;&gt;&gt;|&lt;&lt;|&lt;=|&lt;&lt;&lt;|\-&gt;|\|\-&gt;|=&gt;|\|=&gt;|==&gt;|\-\-&gt;|\~\~&gt;|\~&gt;|&gt;\-&gt;|\-&gt;&gt;|&lt;\-|&lt;==|&lt;\-\-|&lt;\~\~|&lt;\~|&lt;\-&lt;|&lt;&lt;\-|&lt;\-&gt;|&lt;=&gt;|&lt;==&gt;|&lt;\-\-&gt;|&gt;|&lt;|\~|:|\|)"><token type="Operator"/></rule>
<rule pattern="\\"><token type="Punctuation"/></rule>
<rule pattern="\\\$"><token type="Punctuation"/></rule>
<rule pattern="\$"><token type="Punctuation"/><pop depth="1"/></rule>
<rule><include state="into_code"/></rule>
<rule pattern="([a-zA-Z][a-zA-Z0-9-]*)(\s*)(\()"><bygroups><token type="NameFunction"/><token type="TextWhitespace"/><token type="Punctuation"/></bygroups></rule>
<rule pattern="([a-zA-Z][a-zA-Z0-9-]*)(:)"><bygroups><token type="NameVariable"/><token type="Punctuation"/></bygroups></rule>
<rule pattern="([a-zA-Z][a-zA-Z0-9-]*)"><token type="NameVariable"/></rule>
<rule pattern="[0-9]+(\.[0-9]+)?"><token type="LiteralNumber"/></rule>
<rule pattern="\.{1,3}|\(|\)|,|\{|\}"><token type="Punctuation"/></rule>
<rule pattern="&quot;[^&quot;]*&quot;"><token type="LiteralStringDouble"/></rule>
<rule pattern="[ \t\n]+"><token type="TextWhitespace"/></rule>
</state>
<state name="comment">
<rule pattern="//.*$"><token type="CommentSingle"/></rule>
<rule pattern="/[*](.|\n)*?[*]/"><token type="CommentMultiline"/></rule>
</state>
<state name="code">
<rule><include state="comment"/></rule>
<rule pattern="\["><token type="Punctuation"/><push state="markup"/></rule>
<rule pattern="\(|\{"><token type="Punctuation"/><push state="code"/></rule>
<rule pattern="\)|\}"><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern="&quot;[^&quot;]*&quot;"><token type="LiteralStringDouble"/></rule>
<rule pattern=",|\.{1,2}"><token type="Punctuation"/></rule>
<rule pattern="="><token type="Operator"/></rule>
<rule pattern="(and|or|not)\b"><token type="OperatorWord"/></rule>
<rule pattern="=&gt;|&lt;=|==|!=|&gt;|&lt;|-=|\+=|\*=|/=|\+|-|\\|\*"><token type="Operator"/></rule>
<rule pattern="([a-zA-Z_][a-zA-Z0-9_-]*)(:)"><bygroups><token type="NameVariable"/><token type="Punctuation"/></bygroups></rule>
<rule pattern="([a-zA-Z_][a-zA-Z0-9_-]*)(\()"><bygroups><token type="NameFunction"/><token type="Punctuation"/></bygroups><push state="code"/></rule>
<rule pattern="(as|break|export|continue|else|for|if|in|return|while)\b"><token type="KeywordReserved"/></rule>
<rule pattern="(import|include)\b"><token type="KeywordNamespace"/></rule>
<rule pattern="(auto|none|true|false)\b"><token type="KeywordConstant"/></rule>
<rule pattern="([0-9.]+)(mm|pt|cm|in|em|fr|%)"><bygroups><token type="LiteralNumber"/><token type="KeywordReserved"/></bygroups></rule>
<rule pattern="0x[0-9a-fA-F]+"><token type="LiteralNumberHex"/></rule>
<rule pattern="0b[01]+"><token type="LiteralNumberBin"/></rule>
<rule pattern="0o[0-7]+"><token type="LiteralNumberOct"/></rule>
<rule pattern="[0-9]+[\.e][0-9]+"><token type="LiteralNumberFloat"/></rule>
<rule pattern="[0-9]+"><token type="LiteralNumberInteger"/></rule>
<rule pattern="(let|set|show)\b"><token type="KeywordDeclaration"/></rule>
<rule pattern="([a-zA-Z_][a-zA-Z0-9_-]*)"><token type="NameVariable"/></rule>
<rule pattern="[ \t\n]+"><token type="TextWhitespace"/></rule>
<rule pattern=":"><token type="Punctuation"/></rule>
</state>
<state name="inline_code">
<rule pattern=";\b"><token type="Punctuation"/><pop depth="1"/></rule>
<rule pattern="\n"><token type="TextWhitespace"/><pop depth="1"/></rule>
<rule><include state="code"/></rule>
</state>
</rules>
</lexer>
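A short sketch of tokenizing Typst markup with this lexer; the heading, strong text, and `#let` declaration below should exercise the markup, into_code, and inline_code states respectively:

    lexer = Tartrazine.lexer(name: "typst")
    lexer.tokenizer("= Intro\nSome *bold* text and #let x = 1\n").each do |token|
      puts token
    end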

283
lexers/webvtt.xml Normal file
View File

@@ -0,0 +1,283 @@
<lexer>
<config>
<name>WebVTT</name>
<alias>vtt</alias>
<filename>*.vtt</filename>
<mime_type>text/vtt</mime_type>
</config>
<!--
The WebVTT spec refers to a WebVTT line terminator as either CRLF, CR or LF.
(https://www.w3.org/TR/webvtt1/#webvtt-line-terminator) However, with this
definition it is unclear whether CRLF is one line terminator (CRLF) or two
line terminators (CR and LF).
To work around this ambiguity, only CRLF and LF are considered line terminators.
To my knowledge only classic Mac OS uses CR as a line terminator, so the lexer should
still work for most files.
-->
<rules>
<!-- https://www.w3.org/TR/webvtt1/#webvtt-file-body -->
<state name="root">
<rule pattern="(\AWEBVTT)((?:[ \t][^\r\n]*)?(?:\r?\n){2,})">
<bygroups>
<token type="Keyword" />
<token type="Text" />
</bygroups>
</rule>
<rule pattern="(^REGION)([ \t]*$)">
<bygroups>
<token type="Keyword" />
<token type="Text" />
</bygroups>
<push state="region-settings-list" />
</rule>
<rule
pattern="(^STYLE)([ \t]*$)((?:(?!&#45;&#45;&gt;)[\s\S])*?)((?:\r?\n){2})">
<bygroups>
<token type="Keyword" />
<token type="Text" />
<using lexer="CSS" />
<token type="Text" />
</bygroups>
</rule>
<rule>
<include state="comment" />
</rule>
<rule
pattern="(?=((?![^\r\n]*&#45;&#45;&gt;)[^\r\n]*\r?\n)?(\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3}[ \t]+&#45;&#45;&gt;[ \t]+(\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})"
>
<push state="cues" />
</rule>
</state>
<!-- https://www.w3.org/TR/webvtt1/#webvtt-region-settings-list -->
<state name="region-settings-list">
<rule pattern="(?: |\t|\r?\n(?!\r?\n))+">
<token type="Text" />
</rule>
<rule pattern="(?:\r?\n){2}">
<token type="Text" />
<pop depth="1" />
</rule>
<rule pattern="(id)(:)(?!&#45;&#45;&gt;)(\S+)">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="Literal" />
</bygroups>
</rule>
<rule pattern="(width)(:)((?:[1-9]?\d|100)(?:\.\d+)?)(%)">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="Literal" />
<token type="KeywordType" />
</bygroups>
</rule>
<rule pattern="(lines)(:)(\d+)">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="Literal" />
</bygroups>
</rule>
<rule
pattern="(regionanchor|viewportanchor)(:)((?:[1-9]?\d|100)(?:\.\d+)?)(%)(,)((?:[1-9]?\d|100)(?:\.\d+)?)(%)">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="Literal" />
<token type="KeywordType" />
<token type="Punctuation" />
<token type="Literal" />
<token type="KeywordType" />
</bygroups>
</rule>
<rule pattern="(scroll)(:)(up)">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="KeywordConstant" />
</bygroups>
</rule>
</state>
<!-- https://www.w3.org/TR/webvtt1/#webvtt-comment-block -->
<state name="comment">
<rule
pattern="^NOTE( |\t|\r?\n)((?!&#45;&#45;&gt;)[\s\S])*?(?:(\r?\n){2}|\Z)">
<token type="Comment" />
</rule>
</state>
<!--
"Zero or more WebVTT cue blocks and WebVTT comment blocks separated from each other by one or more
WebVTT line terminators." (https://www.w3.org/TR/webvtt1/#file-structure)
-->
<state name="cues">
<rule
pattern="(?:((?!&#45;&#45;&gt;)[^\r\n]+)?(\r?\n))?((?:\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})([ \t]+)(&#45;&#45;&gt;)([ \t]+)((?:\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})([ \t]*)">
<bygroups>
<token type="Name" />
<token type="Text" />
<token type="LiteralDate" />
<token type="Text" />
<token type="Operator" />
<token type="Text" />
<token type="LiteralDate" />
<token type="Text" />
</bygroups>
<push state="cue-settings-list" />
</rule>
<rule>
<include state="comment" />
</rule>
</state>
<!-- https://www.w3.org/TR/webvtt1/#webvtt-cue-settings-list -->
<state name="cue-settings-list">
<rule pattern="[ \t]+">
<token type="Text" />
</rule>
<rule pattern="(vertical)(:)?(rl|lr)?">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="KeywordConstant" />
</bygroups>
</rule>
<rule
pattern="(line)(:)?(?:(?:((?:[1-9]?\d|100)(?:\.\d+)?)(%)|(-?\d+))(?:(,)(start|center|end))?)?">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="Literal" />
<token type="KeywordType" />
<token type="Literal" />
<token type="Punctuation" />
<token type="KeywordConstant" />
</bygroups>
</rule>
<rule
pattern="(position)(:)?(?:(?:((?:[1-9]?\d|100)(?:\.\d+)?)(%)|(-?\d+))(?:(,)(line-left|center|line-right))?)?">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="Literal" />
<token type="KeywordType" />
<token type="Literal" />
<token type="Punctuation" />
<token type="KeywordConstant" />
</bygroups>
</rule>
<rule pattern="(size)(:)?(?:((?:[1-9]?\d|100)(?:\.\d+)?)(%))?">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="Literal" />
<token type="KeywordType" />
</bygroups>
</rule>
<rule pattern="(align)(:)?(start|center|end|left|right)?">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="KeywordConstant" />
</bygroups>
</rule>
<rule pattern="(region)(:)?((?![^\r\n]*&#45;&#45;&gt;(?=[ \t]+?))[^ \t\r\n]+)?">
<bygroups>
<token type="Keyword" />
<token type="Punctuation" />
<token type="Literal" />
</bygroups>
</rule>
<rule
pattern="(?=\r?\n)">
<push state="cue-payload" />
</rule>
</state>
<!-- https://www.w3.org/TR/webvtt1/#cue-payload -->
<state name="cue-payload">
<rule pattern="(\r?\n){2,}">
<token type="Text" />
<pop depth="2" />
</rule>
<rule pattern="[^&lt;&amp;]+?">
<token type="Text" />
</rule>
<rule pattern="&amp;(#\d+|#x[0-9A-Fa-f]+|[a-zA-Z0-9]+);">
<token type="Text" />
</rule>
<rule pattern="(?=&lt;)">
<token type="Text" />
<push state="cue-span-tag" />
</rule>
</state>
<state name="cue-span-tag">
<rule
pattern="&lt;(?=c|i|b|u|ruby|rt|v|lang|(?:\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})">
<token type="Punctuation" />
<push state="cue-span-start-tag-name" />
</rule>
<rule pattern="(&lt;/)(c|i|b|u|ruby|rt|v|lang)">
<bygroups>
<token type="Punctuation" />
<token type="NameTag" />
</bygroups>
</rule>
<rule pattern="&gt;">
<token type="Punctuation" />
<pop depth="1" />
</rule>
</state>
<state name="cue-span-start-tag-name">
<rule pattern="(c|i|b|u|ruby|rt)|((?:\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})">
<bygroups>
<token type="NameTag" />
<token type="LiteralDate" />
</bygroups>
<push state="cue-span-classes-without-annotations" />
</rule>
<rule pattern="v|lang">
<token type="NameTag" />
<push state="cue-span-classes-with-annotations" />
</rule>
</state>
<state name="cue-span-classes-without-annotations">
<rule>
<include state="cue-span-classes" />
</rule>
<rule pattern="(?=&gt;)">
<pop depth="2" />
</rule>
</state>
<state name="cue-span-classes-with-annotations">
<rule>
<include state="cue-span-classes" />
</rule>
<rule pattern="(?=[ \t])">
<push state="cue-span-start-tag-annotations" />
</rule>
</state>
<state name="cue-span-classes">
<rule pattern="(\.)([^ \t\n\r&amp;&lt;&gt;\.]+)">
<bygroups>
<token type="Punctuation" />
<token type="NameTag" />
</bygroups>
</rule>
</state>
<state name="cue-span-start-tag-annotations">
<rule
pattern="[ \t](?:[^\n\r&amp;&gt;]|&amp;(?:#\d+|#x[0-9A-Fa-f]+|[a-zA-Z0-9]+);)+">
<token type="Text" />
</rule>
<rule pattern="(?=&gt;)">
<token type="Text" />
<pop depth="3" />
</rule>
</state>
</rules>
</lexer>
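A minimal cue, fed through the `vtt` alias declared in the config above, should walk through the root, cues, cue-settings-list, and cue-payload states; a sketch:

    vtt = "WEBVTT\n\n00:01.000 --> 00:04.000\nHello <b>world</b>\n"
    Tartrazine.lexer(name: "vtt").tokenizer(vtt).each { |token| puts token }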

View File

@@ -53,7 +53,7 @@
<bygroups>
<token type="Punctuation"/>
<token type="LiteralStringDoc"/>
<token type="TextWhitespace"/>
<token type="Ignore"/>
</bygroups>
</rule>
<rule pattern="(false|False|FALSE|true|True|TRUE|null|Off|off|yes|Yes|YES|OFF|On|ON|no|No|on|NO|n|N|Y|y)\b">

View File

@@ -38,6 +38,12 @@ for fname in glob.glob("lexers/*.xml"):
lexer_by_filename[filename].add(lexer_name)
with open("src/constants/lexers.cr", "w") as f:
# Crystal doesn't come from an xml file
lexer_by_name["crystal"] = "crystal"
lexer_by_name["cr"] = "crystal"
lexer_by_filename["*.cr"] = ["crystal"]
lexer_by_mimetype["text/x-crystal"] = "crystal"
f.write("module Tartrazine\n")
f.write(" LEXERS_BY_NAME = {\n")
for k in sorted(lexer_by_name.keys()):

View File

@@ -1,5 +1,5 @@
name: tartrazine
version: 0.7.0
version: 0.12.0
authors:
- Roberto Alsina <roberto.alsina@gmail.com>
@@ -18,6 +18,10 @@ dependencies:
github: ralsina/sixteen
docopt:
github: chenkovsky/docopt.cr
stumpy_utils:
github: stumpycr/stumpy_utils
stumpy_png:
github: stumpycr/stumpy_png
crystal: ">= 1.13.0"

View File

@@ -1 +1 @@
.e {color: #aa0000;background-color: #ffaaaa;}.b {background-color: #f0f3f3;tab-size: 8;}.k {color: #006699;font-weight: bold;}.kp {font-weight: 600;}.kt {color: #007788;}.na {color: #330099;}.nb {color: #336666;}.nc {color: #00aa88;font-weight: bold;}.nc {color: #336600;}.nd {color: #9999ff;}.ne {color: #999999;font-weight: bold;}.ne {color: #cc0000;font-weight: bold;}.nf {color: #cc00ff;}.nl {color: #9999ff;}.nn {color: #00ccff;font-weight: bold;}.nt {color: #330099;font-weight: bold;}.nv {color: #003333;}.ls {color: #cc3300;}.lsd {font-style: italic;}.lse {color: #cc3300;font-weight: bold;}.lsi {color: #aa0000;}.lso {color: #cc3300;}.lsr {color: #33aaaa;}.lss {color: #ffcc33;}.ln {color: #ff6600;}.o {color: #555555;}.ow {color: #000000;font-weight: bold;}.c {color: #0099ff;font-style: italic;}.cs {font-weight: bold;}.cp {color: #009999;font-style: normal;}.gd {background-color: #ffcccc;border: 1px solid #cc0000;}.ge {font-style: italic;}.ge {color: #ff0000;}.gh {color: #003300;font-weight: bold;}.gi {background-color: #ccffcc;border: 1px solid #00cc00;}.go {color: #aaaaaa;}.gp {color: #000099;font-weight: bold;}.gs {font-weight: bold;}.gs {color: #003300;font-weight: bold;}.gt {color: #99cc66;}.gu {text-decoration: underline;}.tw {color: #bbbbbb;}.lh {}
.e {color: #aa0000;background-color: #ffaaaa;}.b {background-color: #f0f3f3;tab-size: 8;}.k {color: #006699;font-weight: 600;}.kp {}.kt {color: #007788;}.na {color: #330099;}.nb {color: #336666;}.nc {color: #00aa88;font-weight: 600;}.nc {color: #336600;}.nd {color: #9999ff;}.ne {color: #999999;font-weight: 600;}.ne {color: #cc0000;font-weight: 600;}.nf {color: #cc00ff;}.nl {color: #9999ff;}.nn {color: #00ccff;font-weight: 600;}.nt {color: #330099;font-weight: 600;}.nv {color: #003333;}.ls {color: #cc3300;}.lsd {font-style: italic;}.lse {color: #cc3300;font-weight: 600;}.lsi {color: #aa0000;}.lso {color: #cc3300;}.lsr {color: #33aaaa;}.lss {color: #ffcc33;}.ln {color: #ff6600;}.o {color: #555555;}.ow {color: #000000;font-weight: 600;}.c {color: #0099ff;font-style: italic;}.cs {font-weight: 600;}.cp {color: #009999;font-style: normal;}.gd {background-color: #ffcccc;border: 1px solid #cc0000;}.ge {font-style: italic;}.ge {color: #ff0000;}.gh {color: #003300;font-weight: 600;}.gi {background-color: #ccffcc;border: 1px solid #00cc00;}.go {color: #aaaaaa;}.gp {color: #000099;font-weight: 600;}.gs {font-weight: 600;}.gs {color: #003300;font-weight: 600;}.gt {color: #99cc66;}.gu {text-decoration: underline;}.tw {color: #bbbbbb;}.lh {}

View File

@@ -1 +1 @@
.b {color: #b7b7b7;background-color: #101010;font-weight: bold;tab-size: 8;}.lh {color: #8eaaaa;background-color: #232323;}.t {color: #b7b7b7;}.e {color: #de6e6e;}.c {color: #333333;}.cp {color: #876c4f;}.cpf {color: #5f8787;}.k {color: #d69094;}.kt {color: #de6e6e;}.na {color: #8eaaaa;}.nb {color: #de6e6e;}.nbp {color: #de6e6e;}.nc {color: #8eaaaa;}.nc {color: #dab083;}.nd {color: #dab083;}.nf {color: #8eaaaa;}.nn {color: #8eaaaa;}.nt {color: #d69094;}.nv {color: #8eaaaa;}.nvi {color: #de6e6e;}.ln {color: #dab083;}.o {color: #60a592;}.ow {color: #d69094;}.l {color: #5f8787;}.ls {color: #5f8787;}.lsi {color: #876c4f;}.lsr {color: #60a592;}.lss {color: #dab083;}
.b {color: #b7b7b7;background-color: #101010;font-weight: 600;tab-size: 8;}.lh {color: #8eaaaa;background-color: #232323;}.t {color: #b7b7b7;}.e {color: #de6e6e;}.c {color: #333333;}.cp {color: #876c4f;}.cpf {color: #5f8787;}.k {color: #d69094;}.kt {color: #de6e6e;}.na {color: #8eaaaa;}.nb {color: #de6e6e;}.nbp {color: #de6e6e;}.nc {color: #8eaaaa;}.nc {color: #dab083;}.nd {color: #dab083;}.nf {color: #8eaaaa;}.nn {color: #8eaaaa;}.nt {color: #d69094;}.nv {color: #8eaaaa;}.nvi {color: #de6e6e;}.ln {color: #dab083;}.o {color: #60a592;}.ow {color: #d69094;}.l {color: #5f8787;}.ls {color: #5f8787;}.lsi {color: #876c4f;}.lsr {color: #60a592;}.lss {color: #dab083;}

View File

@@ -0,0 +1 @@
puts "Hello Crystal!"

View File

@@ -0,0 +1 @@
[{"type":"Text","value":"puts "},{"type":"LiteralString","value":"\"Hello Crystal!\""},{"type":"Text","value":"\n"}]

View File

@@ -1,413 +0,0 @@
require "./constants/lexers"
require "./heuristics"
require "baked_file_system"
require "crystal/syntax_highlighter"
module Tartrazine
class LexerFiles
extend BakedFileSystem
bake_folder "../lexers", __DIR__
end
# Get the lexer object for a language name
# FIXME: support mimetypes
def self.lexer(name : String? = nil, filename : String? = nil, mimetype : String? = nil) : BaseLexer
return lexer_by_name(name) if name && name != "autodetect"
return lexer_by_filename(filename) if filename
return lexer_by_mimetype(mimetype) if mimetype
RegexLexer.from_xml(LexerFiles.get("/#{LEXERS_BY_NAME["plaintext"]}.xml").gets_to_end)
end
private def self.lexer_by_mimetype(mimetype : String) : BaseLexer
lexer_file_name = LEXERS_BY_MIMETYPE.fetch(mimetype, nil)
raise Exception.new("Unknown mimetype: #{mimetype}") if lexer_file_name.nil?
RegexLexer.from_xml(LexerFiles.get("/#{lexer_file_name}.xml").gets_to_end)
end
private def self.lexer_by_name(name : String) : BaseLexer
return CrystalLexer.new if name == "crystal"
lexer_file_name = LEXERS_BY_NAME.fetch(name.downcase, nil)
return create_delegating_lexer(name) if lexer_file_name.nil? && name.includes? "+"
raise Exception.new("Unknown lexer: #{name}") if lexer_file_name.nil?
RegexLexer.from_xml(LexerFiles.get("/#{lexer_file_name}.xml").gets_to_end)
end
private def self.lexer_by_filename(filename : String) : BaseLexer
if filename.ends_with?(".cr")
return CrystalLexer.new
end
candidates = Set(String).new
LEXERS_BY_FILENAME.each do |k, v|
candidates += v.to_set if File.match?(k, File.basename(filename))
end
case candidates.size
when 0
lexer_file_name = LEXERS_BY_NAME["plaintext"]
when 1
lexer_file_name = candidates.first
else
lexer_file_name = self.lexer_by_content(filename)
begin
return self.lexer(lexer_file_name)
rescue ex : Exception
raise Exception.new("Multiple lexers match the filename: #{candidates.to_a.join(", ")}, heuristics suggest #{lexer_file_name} but there is no matching lexer.")
end
end
RegexLexer.from_xml(LexerFiles.get("/#{lexer_file_name}.xml").gets_to_end)
end
private def self.lexer_by_content(fname : String) : String?
h = Linguist::Heuristic.from_yaml(LexerFiles.get("/heuristics.yml").gets_to_end)
result = h.run(fname, File.read(fname))
case result
when Nil
raise Exception.new "No lexer found for #{fname}"
when String
result.as(String)
when Array(String)
result.first
end
end
private def self.create_delegating_lexer(name : String) : BaseLexer
language, root = name.split("+", 2)
language_lexer = lexer(language)
root_lexer = lexer(root)
DelegatingLexer.new(language_lexer, root_lexer)
end
# Return a list of all lexers
def self.lexers : Array(String)
LEXERS_BY_NAME.keys.sort!
end
# A token, the output of the tokenizer
alias Token = NamedTuple(type: String, value: String)
abstract class BaseTokenizer
end
class Tokenizer < BaseTokenizer
include Iterator(Token)
property lexer : BaseLexer
property text : Bytes
property pos : Int32 = 0
@dq = Deque(Token).new
property state_stack = ["root"]
def initialize(@lexer : BaseLexer, text : String, secondary = false)
# Respect the `ensure_nl` config option
if text.size > 0 && text[-1] != '\n' && @lexer.config[:ensure_nl] && !secondary
text += "\n"
end
@text = text.to_slice
end
def next : Iterator::Stop | Token
if @dq.size > 0
return @dq.shift
end
if pos == @text.size
return stop
end
matched = false
while @pos < @text.size
@lexer.states[@state_stack.last].rules.each do |rule|
matched, new_pos, new_tokens = rule.match(@text, @pos, self)
if matched
@pos = new_pos
split_tokens(new_tokens).each { |token| @dq << token }
break
end
end
if !matched
if @text[@pos] == 10u8
@dq << {type: "Text", value: "\n"}
@state_stack = ["root"]
else
@dq << {type: "Error", value: String.new(@text[@pos..@pos])}
end
@pos += 1
break
end
end
self.next
end
# If a token contains newlines, split it into multiple tokens, one per line
def split_tokens(tokens : Array(Token)) : Array(Token)
split_tokens = [] of Token
tokens.each do |token|
if token[:value].includes?("\n")
values = token[:value].split("\n")
values.each_with_index do |value, index|
value += "\n" if index < values.size - 1
split_tokens << {type: token[:type], value: value}
end
else
split_tokens << token
end
end
split_tokens
end
end
alias BaseLexer = Lexer
abstract class Lexer
property config = {
name: "",
priority: 0.0,
case_insensitive: false,
dot_all: false,
not_multiline: false,
ensure_nl: false,
}
property states = {} of String => State
def tokenizer(text : String, secondary = false) : BaseTokenizer
Tokenizer.new(self, text, secondary)
end
end
# This implements a lexer for Pygments RegexLexers as expressed
# in Chroma's XML serialization.
#
# For explanations of what actions and states do,
# the Pygments documentation is a good place to start.
# https://pygments.org/docs/lexerdevelopment/
class RegexLexer < BaseLexer
# Collapse consecutive tokens of the same type for easier comparison
# and smaller output
def self.collapse_tokens(tokens : Array(Tartrazine::Token)) : Array(Tartrazine::Token)
result = [] of Tartrazine::Token
tokens = tokens.reject { |token| token[:value] == "" }
tokens.each do |token|
if result.empty?
result << token
next
end
last = result.last
if last[:type] == token[:type]
new_token = {type: last[:type], value: last[:value] + token[:value]}
result.pop
result << new_token
else
result << token
end
end
result
end
def self.from_xml(xml : String) : Lexer
l = RegexLexer.new
lexer = XML.parse(xml).first_element_child
if lexer
config = lexer.children.find { |node|
node.name == "config"
}
if config
l.config = {
name: xml_to_s(config, name) || "",
priority: xml_to_f(config, priority) || 0.0,
not_multiline: xml_to_s(config, not_multiline) == "true",
dot_all: xml_to_s(config, dot_all) == "true",
case_insensitive: xml_to_s(config, case_insensitive) == "true",
ensure_nl: xml_to_s(config, ensure_nl) == "true",
}
end
rules = lexer.children.find { |node|
node.name == "rules"
}
if rules
# Rules contains states 🤷
rules.children.select { |node|
node.name == "state"
}.each do |state_node|
state = State.new
state.name = state_node["name"]
if l.states.has_key?(state.name)
raise Exception.new("Duplicate state: #{state.name}")
else
l.states[state.name] = state
end
# And states contain rules 🤷
state_node.children.select { |node|
node.name == "rule"
}.each do |rule_node|
case rule_node["pattern"]?
when nil
if rule_node.first_element_child.try &.name == "include"
rule = IncludeStateRule.new(rule_node)
else
rule = UnconditionalRule.new(rule_node)
end
else
rule = Rule.new(rule_node,
multiline: !l.config[:not_multiline],
dotall: l.config[:dot_all],
ignorecase: l.config[:case_insensitive])
end
state.rules << rule
end
end
end
end
l
end
end
# A lexer that takes two lexers as arguments: a root lexer
# and a language lexer. Everything is scanned using the
# language lexer; afterwards all `Other` tokens are lexed
# using the root lexer.
#
# This is useful for things like template languages, where
# you have Jinja + HTML or Jinja + CSS and so on.
class DelegatingLexer < Lexer
property language_lexer : BaseLexer
property root_lexer : BaseLexer
def initialize(@language_lexer : BaseLexer, @root_lexer : BaseLexer)
end
def tokenizer(text : String, secondary = false) : DelegatingTokenizer
DelegatingTokenizer.new(self, text, secondary)
end
end
# This Tokenizer works with a DelegatingLexer. It first tokenizes
# using the language lexer, and "Other" tokens are tokenized using
# the root lexer.
class DelegatingTokenizer < BaseTokenizer
include Iterator(Token)
@dq = Deque(Token).new
@language_tokenizer : BaseTokenizer
def initialize(@lexer : DelegatingLexer, text : String, secondary = false)
# Respect the `ensure_nl` config option
if text.size > 0 && text[-1] != '\n' && @lexer.config[:ensure_nl] && !secondary
text += "\n"
end
@language_tokenizer = @lexer.language_lexer.tokenizer(text, true)
end
def next : Iterator::Stop | Token
if @dq.size > 0
return @dq.shift
end
token = @language_tokenizer.next
if token.is_a? Iterator::Stop
return stop
elsif token.as(Token).[:type] == "Other"
root_tokenizer = @lexer.root_lexer.tokenizer(token.as(Token).[:value], true)
root_tokenizer.each do |root_token|
@dq << root_token
end
else
@dq << token.as(Token)
end
self.next
end
end
# A Lexer state. A state has a name and a list of rules.
# The state machine has a state stack containing references
# to states to decide which rules to apply.
struct State
property name : String = ""
property rules = [] of BaseRule
def +(other : State)
new_state = State.new
new_state.name = Random.base58(8)
new_state.rules = rules + other.rules
new_state
end
end
class CustomCrystalHighlighter < Crystal::SyntaxHighlighter
@tokens = [] of Token
def render_delimiter(&block)
@tokens << {type: "LiteralString", value: block.call.to_s}
end
def render_interpolation(&block)
@tokens << {type: "LiteralStringInterpol", value: "\#{"}
@tokens << {type: "Text", value: block.call.to_s}
@tokens << {type: "LiteralStringInterpol", value: "}"}
end
def render_string_array(&block)
@tokens << {type: "LiteralString", value: block.call.to_s}
end
# ameba:disable Metrics/CyclomaticComplexity
def render(type : TokenType, value : String)
case type
when .comment?
@tokens << {type: "Comment", value: value}
when .number?
@tokens << {type: "LiteralNumber", value: value}
when .char?
@tokens << {type: "LiteralStringChar", value: value}
when .symbol?
@tokens << {type: "LiteralStringSymbol", value: value}
when .const?
@tokens << {type: "NameConstant", value: value}
when .string?
@tokens << {type: "LiteralString", value: value}
when .ident?
@tokens << {type: "NameVariable", value: value}
when .keyword?, .self?
@tokens << {type: "NameKeyword", value: value}
when .primitive_literal?
@tokens << {type: "Literal", value: value}
when .operator?
@tokens << {type: "Operator", value: value}
when Crystal::SyntaxHighlighter::TokenType::DELIMITED_TOKEN, Crystal::SyntaxHighlighter::TokenType::DELIMITER_START, Crystal::SyntaxHighlighter::TokenType::DELIMITER_END
@tokens << {type: "LiteralString", value: value}
else
@tokens << {type: "Text", value: value}
end
end
end
class CrystalTokenizer < Tartrazine::BaseTokenizer
include Iterator(Token)
@hl = CustomCrystalHighlighter.new
@lexer : BaseLexer
@iter : Iterator(Token)
# delegate next, to: @iter
def initialize(@lexer : BaseLexer, text : String, secondary = false)
# Respect the `ensure_nl` config option
if text.size > 0 && text[-1] != '\n' && @lexer.config[:ensure_nl] && !secondary
text += "\n"
end
# Just do the tokenizing
@hl.highlight(text)
@iter = @hl.@tokens.each
end
def next : Iterator::Stop | Token
@iter.next
end
end
class CrystalLexer < BaseLexer
def tokenizer(text : String, secondary = false) : BaseTokenizer
CrystalTokenizer.new(self, text, secondary)
end
end
end
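For reference, the selection API implemented above resolves lexers four ways; a sketch (the file name is hypothetical):

    Tartrazine.lexer(name: "crystal")            # by name
    Tartrazine.lexer(filename: "config.yml")     # by filename pattern
    Tartrazine.lexer(mimetype: "text/x-crystal") # by MIME type
    Tartrazine.lexer(name: "jinja+yaml")         # DelegatingLexer: jinja over yaml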

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,5 @@
require "./spec_helper"
require "digest/sha1"
# These are the testcases from Pygments
testcases = Dir.glob("#{__DIR__}/tests/**/*txt").sort
@@ -46,8 +47,13 @@ known_bad = {
"#{__DIR__}/tests/bash_session/test_newline_in_ls_no_ps2.txt",
"#{__DIR__}/tests/bash_session/test_newline_in_ls_ps2.txt",
"#{__DIR__}/tests/bash_session/test_virtualenv.txt",
"#{__DIR__}/tests/mcfunction/commenting.txt",
"#{__DIR__}/tests/mcfunction/coordinates.txt",
"#{__DIR__}/tests/mcfunction/data.txt",
"#{__DIR__}/tests/mcfunction/difficult_1.txt",
"#{__DIR__}/tests/mcfunction/multiline.txt",
"#{__DIR__}/tests/mcfunction/selectors.txt",
"#{__DIR__}/tests/mcfunction/simple.txt",
}
# Tests that fail because of a limitation in PCRE2
@@ -103,6 +109,7 @@ describe Tartrazine do
)
end
end
describe "to_ansi" do
it "should do basic highlighting" do
ansi = Tartrazine.to_ansi("puts 'Hello, World!'", "ruby")
@@ -114,11 +121,29 @@ describe Tartrazine do
)
else
ansi.should eq(
"\e[38;2;171;70;66mputs\e[0m\e[38;2;216;216;216m \e[0m'Hello, World!'"
"\e[38;2;171;70;66mputs\e[0m\e[38;2;216;216;216m \e[0m\e[38;2;161;181;108m'Hello, World!'\e[0m"
)
end
end
end
describe "to_svg" do
it "should do basic highlighting" do
svg = Tartrazine.to_svg("puts 'Hello, World!'", "ruby", standalone: false)
svg.should eq(
"<text x=\"0\" y=\"19\" xml:space=\"preserve\"><tspan fill=\"#ab4642\">puts</tspan><tspan fill=\"#d8d8d8\"> </tspan><tspan fill=\"#a1b56c\">&#39;Hello, World!&#39;</tspan></text>"
)
end
end
describe "to_png" do
it "should do basic highlighting" do
png = Digest::SHA1.hexdigest(Tartrazine.to_png("puts 'Hello, World!'", "ruby"))
png.should eq(
"62d419dcd263fffffc265a0f04c156dc2530c362"
)
end
end
end
# Helper that creates lexer and tokenizes

View File

@@ -471,7 +471,7 @@ module Tartrazine
"application/x-fennel" => "fennel",
"application/x-fish" => "fish",
"application/x-forth" => "forth",
"application/x-gdscript" => "gdscript",
"application/x-gdscript" => "gdscript3",
"application/x-hcl" => "hcl",
"application/x-hy" => "hy",
"application/x-javascript" => "javascript",
@@ -594,7 +594,7 @@ module Tartrazine
"text/x-fortran" => "fortran",
"text/x-fsharp" => "fsharp",
"text/x-gas" => "gas",
"text/x-gdscript" => "gdscript",
"text/x-gdscript" => "gdscript3",
"text/x-gherkin" => "gherkin",
"text/x-gleam" => "gleam",
"text/x-glslsrc" => "glsl",

View File

@@ -17,12 +17,19 @@ module Tartrazine
end
def format(text : String, lexer : Lexer) : String
raise Exception.new("Not implemented")
outp = String::Builder.new("")
format(text, lexer, outp)
outp.to_s
end
# Return the styles, if the formatter supports it.
def style_defs : String
raise Exception.new("Not implemented")
end
# Is this line in the highlighted ranges?
def highlighted?(line : Int) : Bool
highlight_lines.any?(&.includes?(line))
end
end
end
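With `highlighted?` hoisted into the base class, any formatter that exposes `highlight_lines` (the new SVG formatter below declares it as `Array(Range(Int32, Int32))`) can answer per-line queries; a sketch:

    svg = Tartrazine::Svg.new(highlight_lines: [3..5, 12..12])
    svg.highlighted?(4) # => true
    svg.highlighted?(6) # => false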

View File

@@ -20,12 +20,6 @@ module Tartrazine
"#{i + 1}".rjust(4).ljust(5)
end
def format(text : String, lexer : BaseLexer) : String
outp = String::Builder.new("")
format(text, lexer, outp)
outp.to_s
end
def format(text : String, lexer : BaseLexer, outp : IO) : Nil
tokenizer = lexer.tokenizer(text)
i = 0
@@ -40,8 +34,6 @@ module Tartrazine
end
def colorize(text : String, token : String) : String
style = theme.styles.fetch(token, nil)
return text if style.nil?
if theme.styles.has_key?(token)
s = theme.styles[token]
else

View File

@@ -106,8 +106,7 @@ module Tartrazine
# These are true/false/nil
outp << "border: none;" if style.border == false
outp << "font-weight: bold;" if style.bold
outp << "font-weight: #{@weight_of_bold};" if style.bold == false
outp << "font-weight: #{@weight_of_bold};" if style.bold
outp << "font-style: italic;" if style.italic
outp << "font-style: normal;" if style.italic == false
outp << "text-decoration: underline;" if style.underline
@@ -134,10 +133,5 @@ module Tartrazine
end
class_prefix + Abbreviations[token]
end
# Is this line in the highlighted ranges?
def highlighted?(line : Int) : Bool
highlight_lines.any?(&.includes?(line))
end
end
end

117
src/formatters/png.cr Normal file
View File

@@ -0,0 +1,117 @@
require "../formatter"
require "compress/gzip"
require "digest/sha1"
require "stumpy_png"
require "stumpy_utils"
module Tartrazine
def self.to_png(text : String, language : String,
theme : String = "default-dark",
line_numbers : Bool = false) : String
buf = IO::Memory.new
Tartrazine::Png.new(
theme: Tartrazine.theme(theme),
line_numbers: line_numbers
).format(text, Tartrazine.lexer(name: language), buf)
buf.to_s
end
class FontFiles
extend BakedFileSystem
bake_folder "../../fonts", __DIR__
end
class Png < Formatter
include StumpyPNG
property? line_numbers : Bool = false
@font_regular : PCFParser::Font
@font_bold : PCFParser::Font
@font_oblique : PCFParser::Font
@font_bold_oblique : PCFParser::Font
@font_width = 15
@font_height = 24
def initialize(@theme : Theme = Tartrazine.theme("default-dark"), @line_numbers : Bool = false)
@font_regular = load_font("/courier-regular.pcf.gz")
@font_bold = load_font("/courier-bold.pcf.gz")
@font_oblique = load_font("/courier-oblique.pcf.gz")
@font_bold_oblique = load_font("/courier-bold-oblique.pcf.gz")
end
private def load_font(name : String) : PCFParser::Font
compressed = FontFiles.get(name)
uncompressed = Compress::Gzip::Reader.open(compressed) do |gzip|
gzip.gets_to_end
end
PCFParser::Font.new(IO::Memory.new uncompressed)
end
private def line_label(i : Int32) : String
"#{i + 1}".rjust(4).ljust(5)
end
def format(text : String, lexer : BaseLexer, outp : IO) : Nil
# Create canvas of correct size
lines = text.split("\n")
canvas_height = lines.size * @font_height
canvas_width = lines.max_of(&.size)
canvas_width += 5 if line_numbers?
canvas_width *= @font_width
bg_color = RGBA.from_hex("##{theme.styles["Background"].background.try &.hex}")
canvas = Canvas.new(canvas_width, canvas_height, bg_color)
tokenizer = lexer.tokenizer(text)
x = 0
y = @font_height
i = 0
if line_numbers?
canvas.text(x, y, line_label(i), @font_regular, RGBA.from_hex("##{theme.styles["Background"].color.try &.hex}"))
x += 5 * @font_width
end
tokenizer.each do |token|
font, color = token_style(token[:type])
# These fonts are very limited
t = token[:value].gsub(/[^[:ascii:]]/, "?")
canvas.text(x, y, t.rstrip("\n"), font, color)
if token[:value].includes?("\n")
x = 0
y += @font_height
i += 1
if line_numbers?
canvas.text(x, y, line_label(i), @font_regular, RGBA.from_hex("##{theme.styles["Background"].color.try &.hex}"))
x += 4 * @font_width
end
end
x += token[:value].size * @font_width
end
StumpyPNG.write(canvas, outp)
end
def token_style(token : String) : {PCFParser::Font, RGBA}
if theme.styles.has_key?(token)
s = theme.styles[token]
else
# Themes don't contain information for each specific
# token type. However, they may contain information
# for a parent style. Worst case, we go to the root
# (Background) style.
s = theme.styles[theme.style_parents(token).reverse.find { |parent|
theme.styles.has_key?(parent)
}]
end
color = RGBA.from_hex("##{theme.styles["Background"].color.try &.hex}")
color = RGBA.from_hex("##{s.color.try &.hex}") if s.color
return {@font_bold_oblique, color} if s.bold && s.italic
return {@font_bold, color} if s.bold
return {@font_oblique, color} if s.italic
return {@font_regular, color}
end
end
end
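`to_png` returns the PNG bytes as a String, so the high-level call is just (output file name hypothetical):

    png = Tartrazine.to_png("puts 42", "crystal", line_numbers: true)
    File.write("hello.png", png)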

129
src/formatters/svg.cr Normal file
View File

@@ -0,0 +1,129 @@
require "../constants/token_abbrevs.cr"
require "../formatter"
require "html"
module Tartrazine
def self.to_svg(text : String, language : String,
theme : String = "default-dark",
standalone : Bool = true,
line_numbers : Bool = false) : String
Tartrazine::Svg.new(
theme: Tartrazine.theme(theme),
standalone: standalone,
line_numbers: line_numbers
).format(text, Tartrazine.lexer(name: language))
end
class Svg < Formatter
property highlight_lines : Array(Range(Int32, Int32)) = [] of Range(Int32, Int32)
property line_number_id_prefix : String = "line-"
property line_number_start : Int32 = 1
property tab_width = 8
property? line_numbers : Bool = false
property? linkable_line_numbers : Bool = true
property? standalone : Bool = false
property weight_of_bold : Int32 = 600
property fs : Int32
property ystep : Int32
property theme : Theme
def initialize(@theme : Theme = Tartrazine.theme("default-dark"), *,
@highlight_lines = [] of Range(Int32, Int32),
@class_prefix : String = "",
@line_number_id_prefix = "line-",
@line_number_start = 1,
@tab_width = 8,
@line_numbers : Bool = false,
@linkable_line_numbers : Bool = true,
@standalone : Bool = false,
@weight_of_bold : Int32 = 600,
@font_family : String = "monospace",
@font_size : String = "14px")
if font_size.ends_with? "px"
@fs = font_size[0...-2].to_i
else
@fs = font_size.to_i
end
@ystep = @fs + 5
end
def format(text : String, lexer : BaseLexer, io : IO) : Nil
pre, post = wrap_standalone
io << pre if standalone?
format_text(text, lexer, io)
io << post if standalone?
end
# Wrap text into a full HTML document, including the CSS for the theme
def wrap_standalone
output = String.build do |outp|
outp << %(<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN" "http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg xmlns="http://www.w3.org/2000/svg">
<g font-family="#{self.@font_family}" font-size="#{self.@font_size}">)
end
{output.to_s, "</g></svg>"}
end
private def line_label(i : Int32, x : Int32, y : Int32) : String
line_label = "#{i + 1}".rjust(4).ljust(5)
line_style = highlighted?(i + 1) ? "font-weight=\"#{@weight_of_bold}\"" : ""
line_id = linkable_line_numbers? ? "id=\"#{line_number_id_prefix}#{i + 1}\"" : ""
%(<text #{line_style} #{line_id} x="#{4*ystep}" y="#{y}" text-anchor="end">#{line_label}</text>)
end
def format_text(text : String, lexer : BaseLexer, outp : IO)
x = 0
y = ystep
i = 0
line_x = x
line_x += 5 * ystep if line_numbers?
tokenizer = lexer.tokenizer(text)
outp << line_label(i, x, y) if line_numbers?
outp << %(<text x="#{line_x}" y="#{y}" xml:space="preserve">)
tokenizer.each do |token|
if token[:value].ends_with? "\n"
outp << "<tspan #{get_style(token[:type])}>#{HTML.escape(token[:value][0...-1])}</tspan>"
outp << "</text>"
x = 0
y += ystep
i += 1
outp << line_label(i, x, y) if line_numbers?
outp << %(<text x="#{line_x}" y="#{y}" xml:space="preserve">)
else
outp << "<tspan#{get_style(token[:type])}>#{HTML.escape(token[:value])}</tspan>"
x += token[:value].size * ystep
end
end
outp << "</text>"
end
# Given a token type, return the style.
def get_style(token : String) : String
if !theme.styles.has_key? token
# Themes don't contain information for each specific
# token type. However, they may contain information
# for a parent style. Worst case, we go to the root
# (Background) style.
parent = theme.style_parents(token).reverse.find { |dad|
theme.styles.has_key?(dad)
}
theme.styles[token] = theme.styles[parent]
end
output = String.build do |outp|
style = theme.styles[token]
outp << " fill=\"##{style.color.try &.hex}\"" if style.color
# No support for background color or border in SVG
outp << " font-weight=\"#{@weight_of_bold}\"" if style.bold
outp << " font-weight=\"normal\"" if style.bold == false
outp << " font-style=\"italic\"" if style.italic
outp << " font-style=\"normal\"" if style.italic == false
outp << " text-decoration=\"underline\"" if style.underline
outp << " text-decoration=\"none" if style.underline == false
end
output
end
end
end
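The matching high-level call for SVG, with `standalone: true` wrapping the output in a complete SVG document (output file name hypothetical):

    File.write("hello.svg", Tartrazine.to_svg("puts 42", "crystal", standalone: true))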

View File

@@ -6,11 +6,21 @@ require "crystal/syntax_highlighter"
module Tartrazine
class LexerFiles
extend BakedFileSystem
macro bake_selected_lexers
{% for lexer in env("TT_LEXERS").split "," %}
bake_file {{ lexer }}+".xml", {{ read_file "#{__DIR__}/../lexers/" + lexer + ".xml" }}
{% end %}
end
{% if flag?(:nolexers) %}
bake_selected_lexers
{% else %}
bake_folder "../lexers", __DIR__
{% end %}
end
# Get the lexer object for a language name
# FIXME: support mimetypes
def self.lexer(name : String? = nil, filename : String? = nil, mimetype : String? = nil) : BaseLexer
return lexer_by_name(name) if name && name != "autodetect"
return lexer_by_filename(filename) if filename
@@ -33,6 +43,8 @@ module Tartrazine
raise Exception.new("Unknown lexer: #{name}") if lexer_file_name.nil?
RegexLexer.from_xml(LexerFiles.get("/#{lexer_file_name}.xml").gets_to_end)
rescue ex : BakedFileSystem::NoSuchFileError
raise Exception.new("Unknown lexer: #{name}")
end
private def self.lexer_by_filename(filename : String) : BaseLexer
@@ -84,7 +96,8 @@ module Tartrazine
# Return a list of all lexers
def self.lexers : Array(String)
LEXERS_BY_NAME.keys.sort!
file_map = LexerFiles.files.map(&.path)
LEXERS_BY_NAME.keys.select { |k| file_map.includes?("/#{k}.xml") }.sort!
end
# A token, the output of the tokenizer
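`bake_selected_lexers` reads the comma-separated `TT_LEXERS` variable at compile time and only takes effect under the `nolexers` flag, so a trimmed binary would presumably be built along these lines (the exact invocation is an assumption):

    TT_LEXERS=crystal,yaml crystal build src/main.cr -Dnolexers -o tartrazine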

View File

@@ -4,6 +4,10 @@ require "./tartrazine"
HELP = <<-HELP
tartrazine: a syntax highlighting tool
You can use the CLI to generate HTML, terminal, JSON, SVG or PNG output
from a source file using different themes.
Keep in mind that not all formatters support all features.
Usage:
tartrazine (-h, --help)
tartrazine FILE -f html [-t theme][--standalone][--line-numbers]
@@ -11,6 +15,10 @@ Usage:
tartrazine -f html -t theme --css
tartrazine FILE -f terminal [-t theme][-l lexer][--line-numbers]
[-o output]
tartrazine FILE -f svg [-t theme][--standalone][--line-numbers]
[-l lexer][-o output]
tartrazine FILE -f png [-t theme][--line-numbers]
[-l lexer][-o output]
tartrazine FILE -f json [-o output]
tartrazine --list-themes
tartrazine --list-lexers
@@ -18,7 +26,7 @@ Usage:
tartrazine --version
Options:
-f <formatter> Format to use (html, terminal, json)
-f <formatter>    Format to use (html, terminal, json, svg, png)
-t <theme> Theme to use, see --list-themes [default: default-dark]
-l <lexer> Lexer (language) to use, see --list-lexers. Use more than
one lexer with "+" (e.g. jinja+yaml) [default: autodetect]
@@ -71,6 +79,15 @@ if options["-f"]
formatter.theme = theme
when "json"
formatter = Tartrazine::Json.new
when "svg"
formatter = Tartrazine::Svg.new
formatter.standalone = options["--standalone"] != nil
formatter.line_numbers = options["--line-numbers"] != nil
formatter.theme = theme
when "png"
formatter = Tartrazine::Png.new
formatter.line_numbers = options["--line-numbers"] != nil
formatter.theme = theme
else
puts "Invalid formatter: #{formatter}"
exit 1

View File

@@ -11,7 +11,20 @@ module Tartrazine
struct ThemeFiles
extend BakedFileSystem
macro bake_selected_themes
{% if env("TT_THEMES") %}
{% for theme in env("TT_THEMES").split "," %}
bake_file {{ theme }}+".xml", {{ read_file "#{__DIR__}/../styles/" + theme + ".xml" }}
{% end %}
{% end %}
end
{% if flag?(:nothemes) %}
bake_selected_themes
{% else %}
bake_folder "../styles", __DIR__
{% end %}
end
def self.theme(name : String) : Theme

View File

@@ -1,44 +1,39 @@
<style name="github">
<entry type="Error" style="#a61717 bg:#e3d2d2"/>
<entry type="Error" style="#f6f8fa bg:#82071e"/>
<entry type="Background" style="bg:#ffffff"/>
<entry type="Keyword" style="bold #000000"/>
<entry type="KeywordType" style="bold #445588"/>
<entry type="NameAttribute" style="#008080"/>
<entry type="NameBuiltin" style="#0086b3"/>
<entry type="NameBuiltinPseudo" style="#999999"/>
<entry type="NameClass" style="bold #445588"/>
<entry type="NameConstant" style="#008080"/>
<entry type="NameDecorator" style="bold #3c5d5d"/>
<entry type="NameEntity" style="#800080"/>
<entry type="NameException" style="bold #990000"/>
<entry type="NameFunction" style="bold #990000"/>
<entry type="Keyword" style="#cf222e"/>
<entry type="KeywordType" style="#cf222e"/>
<entry type="NameAttribute" style="#1f2328"/>
<entry type="NameBuiltin" style="#6639ba"/>
<entry type="NameBuiltinPseudo" style="#6a737d"/>
<entry type="NameClass" style="#1f2328"/>
<entry type="NameConstant" style="#0550ae"/>
<entry type="NameDecorator" style="#0550ae"/>
<entry type="NameEntity" style="#6639ba"/>
<entry type="NameFunction" style="#6639ba"/>
<entry type="NameLabel" style="bold #990000"/>
<entry type="NameNamespace" style="#555555"/>
<entry type="NameTag" style="#000080"/>
<entry type="NameVariable" style="#008080"/>
<entry type="NameVariableClass" style="#008080"/>
<entry type="NameVariableGlobal" style="#008080"/>
<entry type="NameVariableInstance" style="#008080"/>
<entry type="LiteralString" style="#dd1144"/>
<entry type="LiteralStringRegex" style="#009926"/>
<entry type="LiteralStringSymbol" style="#990073"/>
<entry type="LiteralNumber" style="#009999"/>
<entry type="Operator" style="bold #000000"/>
<entry type="Comment" style="italic #999988"/>
<entry type="CommentMultiline" style="italic #999988"/>
<entry type="CommentSingle" style="italic #999988"/>
<entry type="CommentSpecial" style="bold italic #999999"/>
<entry type="CommentPreproc" style="bold #999999"/>
<entry type="GenericDeleted" style="#000000 bg:#ffdddd"/>
<entry type="GenericEmph" style="italic #000000"/>
<entry type="GenericError" style="#aa0000"/>
<entry type="GenericHeading" style="#999999"/>
<entry type="GenericInserted" style="#000000 bg:#ddffdd"/>
<entry type="GenericOutput" style="#888888"/>
<entry type="GenericPrompt" style="#555555"/>
<entry type="GenericStrong" style="bold"/>
<entry type="GenericSubheading" style="#aaaaaa"/>
<entry type="GenericTraceback" style="#aa0000"/>
<entry type="NameNamespace" style="#24292e"/>
<entry type="NameOther" style="#1f2328"/>
<entry type="NameTag" style="#0550ae"/>
<entry type="NameVariable" style="#953800"/>
<entry type="NameVariableClass" style="#953800"/>
<entry type="NameVariableGlobal" style="#953800"/>
<entry type="NameVariableInstance" style="#953800"/>
<entry type="LiteralString" style="#0a3069"/>
<entry type="LiteralStringRegex" style="#0a3069"/>
<entry type="LiteralStringSymbol" style="#032f62"/>
<entry type="LiteralNumber" style="#0550ae"/>
<entry type="Operator" style="#0550ae"/>
<entry type="Comment" style="#57606a"/>
<entry type="CommentMultiline" style="#57606a"/>
<entry type="CommentSingle" style="#57606a"/>
<entry type="CommentSpecial" style="#57606a"/>
<entry type="CommentPreproc" style="#57606a"/>
<entry type="GenericDeleted" style="#82071e bg:#ffebe9"/>
<entry type="GenericEmph" style="#1f2328"/>
<entry type="GenericInserted" style="#116329 bg:#dafbe1"/>
<entry type="GenericOutput" style="#1f2328"/>
<entry type="GenericUnderline" style="underline"/>
<entry type="TextWhitespace" style="#bbbbbb"/>
<entry type="Punctuation" style="#1f2328"/>
<entry type="TextWhitespace" style="#ffffff"/>
</style>

10
styles/onesenterprise.xml Normal file
View File

@@ -0,0 +1,10 @@
<style name="onesenterprise">
<entry type="Keyword" style="#ff0000"/>
<entry type="Name" style="#0000ff"/>
<entry type="LiteralString" style="#000000"/>
<entry type="Operator" style="#ff0000"/>
<entry type="Punctuation" style="#ff0000"/>
<entry type="Comment" style="#008000"/>
<entry type="CommentPreproc" style="#963200"/>
<entry type="Text" style="#000000"/>
</style>

42
styles/pygments.xml Normal file
View File

@@ -0,0 +1,42 @@
<style name="pygments">
<entry type="Error" style="border:#ff0000"/>
<entry type="Keyword" style="bold #008000"/>
<entry type="KeywordPseudo" style="nobold"/>
<entry type="KeywordType" style="nobold #b00040"/>
<entry type="NameAttribute" style="#7d9029"/>
<entry type="NameBuiltin" style="#008000"/>
<entry type="NameClass" style="bold #0000ff"/>
<entry type="NameConstant" style="#880000"/>
<entry type="NameDecorator" style="#aa22ff"/>
<entry type="NameEntity" style="bold #999999"/>
<entry type="NameException" style="bold #d2413a"/>
<entry type="NameFunction" style="#0000ff"/>
<entry type="NameLabel" style="#a0a000"/>
<entry type="NameNamespace" style="bold #0000ff"/>
<entry type="NameTag" style="bold #008000"/>
<entry type="NameVariable" style="#19177c"/>
<entry type="LiteralString" style="#ba2121"/>
<entry type="LiteralStringDoc" style="italic"/>
<entry type="LiteralStringEscape" style="bold #bb6622"/>
<entry type="LiteralStringInterpol" style="bold #bb6688"/>
<entry type="LiteralStringOther" style="#008000"/>
<entry type="LiteralStringRegex" style="#bb6688"/>
<entry type="LiteralStringSymbol" style="#19177c"/>
<entry type="LiteralNumber" style="#666666"/>
<entry type="Operator" style="#666666"/>
<entry type="OperatorWord" style="bold #aa22ff"/>
<entry type="Comment" style="italic #408080"/>
<entry type="CommentPreproc" style="noitalic #bc7a00"/>
<entry type="GenericDeleted" style="#a00000"/>
<entry type="GenericEmph" style="italic"/>
<entry type="GenericError" style="#ff0000"/>
<entry type="GenericHeading" style="bold #000080"/>
<entry type="GenericInserted" style="#00a000"/>
<entry type="GenericOutput" style="#888888"/>
<entry type="GenericPrompt" style="bold #000080"/>
<entry type="GenericStrong" style="bold"/>
<entry type="GenericSubheading" style="bold #800080"/>
<entry type="GenericTraceback" style="#0044dd"/>
<entry type="GenericUnderline" style="underline"/>
<entry type="TextWhitespace" style="#bbbbbb"/>
</style>
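Themes baked this way are looked up by name through `Tartrazine.theme` (see the ThemeFiles diff above), so the new style can be exercised with, e.g.:

    formatter = Tartrazine::Svg.new(theme: Tartrazine.theme("pygments"))
    puts formatter.format("puts 42", Tartrazine.lexer(name: "crystal"))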