Old API | Change | New API
---|---|---
**_G** | |
bit32 | Removed | N/A (use bitwise operators)
**buffer** | |
brace_match(pos) | Changed | brace_match(pos, 0)
**lexer** | |
_foldsymbols | Replaced | add_fold_point()
_rules | Replaced | add_rule()
_tokenstyles | Replaced | add_style()
embed_lexer(parent, child, …) | Renamed | parent:embed(child, …)
_RULES[id] | Replaced | get_rule(id)
_RULES[id] = rule | Replaced | modify_rule(id, rule)
N/A | Added | new()
word_match(list, wchars, icase) | Changed | word_match(words, icase)
**ui** | |
set_theme() | Renamed | buffer.set_theme()
**textadept.editing** | |
match_brace | Replaced | N/A (menu function)
N/A | Added | paste()
N/A | Added | paste_reindents
**textadept.session** | |
default_session | Removed | N/A
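For instance, two of the changed calls from the table above would be updated as follows. This is an illustrative sketch, not taken from the manual; `pos` is a placeholder position and the word list is made up:

```lua
-- In editor scripts (e.g. ~/.textadept/init.lua):
-- Textadept 9:  local match = buffer:brace_match(pos)
-- Textadept 10: brace_match() takes a second argument.
local match = buffer:brace_match(pos, 0)

-- In lexers:
-- Textadept 9:  local kw = l.word_match({'foo', 'bar', 'baz'}, nil, true)
-- Textadept 10: word_match() takes a space-separated string of words
-- plus an optional case-insensitivity flag.
local kw = lexer.word_match([[foo bar baz]], true)
```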
Textadept 10 no longer uses a ~/.textadept/properties.lua file. Instead, all buffer settings are made in ~/.textadept/init.lua and apply to the first and any subsequent buffers. (In Textadept 9, any buffer settings made in ~/.textadept/init.lua only applied to the first buffer, so a ~/.textadept/properties.lua was required in order to define buffer settings that would affect subsequent buffers.) Simply copying the contents of your ~/.textadept/properties.lua into ~/.textadept/init.lua should be sufficient.
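For example, buffer settings like the following (the values are illustrative) can now live directly in ~/.textadept/init.lua:

```lua
-- ~/.textadept/init.lua
-- In Textadept 10, these settings apply to all buffers,
-- not just the first one.
buffer.use_tabs = false
buffer.tab_width = 2
```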
- Lexers are now written in a more object-oriented way. Legacy lexers are still supported, but it is recommended that you migrate them.
- The terminal version's key sequence for Ctrl+Space is now `'c '` instead of `'c@'` (a rebinding sketch follows this list).
- Textadept now uses C++11's ECMAScript regex syntax instead of TRE.
- Textadept now requires Mac OSX 10.6 (Snow Leopard) at a minimum. The previous minimum version was 10.5 (Leopard).
- The LuaJIT version of Textadept has been removed. Any LuaJIT-specific features used in external modules will no longer function.
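If you bound Ctrl+Space yourself in the terminal version, the binding needs to move to the new sequence. A minimal sketch, assuming a ~/.textadept/init.lua binding for word autocompletion (the handler shown is illustrative):

```lua
-- Textadept 9 (terminal):  keys['c@'] = ...
-- Textadept 10 (terminal):
keys['c '] = function() textadept.editing.autocomplete('word') end
```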
Legacy lexers are of the form:

```lua
local l = require('lexer')
local token, word_match = l.token, l.word_match
local P, R, S = lpeg.P, lpeg.R, lpeg.S

local M = {_NAME = '?'}

[... token and pattern definitions ...]

M._rules = {
  {'rule', pattern},
  [...]
}

M._tokenstyles = {
  ['token'] = 'style',
  [...]
}

M._foldsymbols = {
  _patterns = {...},
  ['token'] = {['start'] = 1, ['end'] = -1},
  [...]
}

return M
```
While such legacy lexers will be handled just fine without any changes, it is recommended that you migrate yours. The migration process is fairly straightforward:

1. Replace all instances of `l` with `lexer`, as it is better practice and results in less confusion.
2. Replace `local M = {_NAME = '?'}` with `local lex = lexer.new('?')`, where `?` is the name of your legacy lexer. At the end of the lexer, change `return M` to `return lex`.
3. Instead of defining rules towards the end of your lexer, define your rules as you define your tokens and patterns, using `lex:add_rule()`.
4. Similarly, any custom token names should have their styles immediately defined using `lex:add_style()`.
5. Convert any table arguments passed to `lexer.word_match()` to a space-separated string of words.
6. Replace any calls to `lexer.embed(M, child, ...)` and `lexer.embed(parent, M, ...)` with `lex:embed(child, ...)` and `parent:embed(lex, ...)`, respectively.
7. Define fold points with simple calls to `lex:add_fold_point()`. No need to mess with Lua patterns anymore.
8. Any legacy lexer options such as `M._FOLDBYINDENTATION`, `M._LEXBYLINE`, `M._lexer`, etc. should be added as table options to `lexer.new()`.
9. Any external lexer rule fetching and replacing via `lexer._RULES` should be changed to use `lexer.get_rule()` and `lexer.modify_rule()`.

As an example, consider the following sample legacy lexer:
```lua
local l = require('lexer')
local token, word_match = l.token, l.word_match
local P, R, S = lpeg.P, lpeg.R, lpeg.S

local M = {_NAME = 'legacy'}

local ws = token(l.WHITESPACE, l.space^1)
local comment = token(l.COMMENT, '#' * l.nonnewline^0)
local string = token(l.STRING, l.delimited_range('"'))
local number = token(l.NUMBER, l.float + l.integer)
local keyword = token(l.KEYWORD, word_match{'foo', 'bar', 'baz'})
local custom = token('custom', P('quux'))
local identifier = token(l.IDENTIFIER, l.word)
local operator = token(l.OPERATOR, S('+-*/%^=<>,.()[]{}'))

M._rules = {
  {'whitespace', ws},
  {'keyword', keyword},
  {'custom', custom},
  {'identifier', identifier},
  {'string', string},
  {'comment', comment},
  {'number', number},
  {'operator', operator}
}

M._tokenstyles = {
  ['custom'] = l.STYLE_KEYWORD..',bold'
}

M._foldsymbols = {
  _patterns = {'[{}]'},
  [l.OPERATOR] = {['{'] = 1, ['}'] = -1}
}

return M
```
Following the migration steps would yield:
```lua
local lexer = require('lexer')
local token, word_match = lexer.token, lexer.word_match
local P, R, S = lpeg.P, lpeg.R, lpeg.S

local lex = lexer.new('legacy')

lex:add_rule('whitespace', token(lexer.WHITESPACE, lexer.space^1))
lex:add_rule('keyword', token(lexer.KEYWORD, word_match[[foo bar baz]]))
lex:add_rule('custom', token('custom', P('quux')))
lex:add_style('custom', lexer.STYLE_KEYWORD..',bold')
lex:add_rule('identifier', token(lexer.IDENTIFIER, lexer.word))
lex:add_rule('string', token(lexer.STRING, lexer.delimited_range('"')))
lex:add_rule('comment', token(lexer.COMMENT, '#' * lexer.nonnewline^0))
lex:add_rule('number', token(lexer.NUMBER, lexer.float + lexer.integer))
lex:add_rule('operator', token(lexer.OPERATOR, S('+-*/%^=<>,.()[]{}')))

lex:add_fold_point(lexer.OPERATOR, '{', '}')

return lex
```
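Step 8 above maps the old `M._*` options onto `lexer.new()`'s second argument. A hedged sketch of what that looks like (verify the exact option names against your version's lexer documentation; `parent` is a placeholder for an already-loaded lexer):

```lua
-- Legacy: M._FOLDBYINDENTATION = true; M._LEXBYLINE = true; M._lexer = parent
-- Migrated equivalent:
local lex = lexer.new('legacy', {
  fold_by_indentation = true, -- was M._FOLDBYINDENTATION
  lex_by_line = true,         -- was M._LEXBYLINE
  inherit = parent            -- was M._lexer
})
```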
There might be some slight overhead when initializing a lexer, but loading a file from disk into Scintilla is usually more expensive. On modern computer systems, I see no difference in speed between Lua lexers and Scintilla's C++ ones. Optimize lexers for speed by re-arranging `lexer.add_rule()` calls so that the most common rules match first. Do keep in mind that order matters for similar rules.
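For instance, in a comment-heavy language it could pay to register the comment rule before rarer rules. An illustrative reordering of the example above:

```lua
-- Most frequent tokens first; order still matters for overlapping rules
-- (e.g. keywords must be added before the generic identifier rule).
lex:add_rule('whitespace', token(lexer.WHITESPACE, lexer.space^1))
lex:add_rule('comment', token(lexer.COMMENT, '#' * lexer.nonnewline^0))
lex:add_rule('keyword', token(lexer.KEYWORD, word_match[[foo bar baz]]))
lex:add_rule('identifier', token(lexer.IDENTIFIER, lexer.word))
```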
In some cases, folding may be far more expensive than lexing, particularly in lexers with a lot of potential fold points. If your lexer is exhibiting signs of slowness, try disabling folding in your text editor first. If that speeds things up, you can try reducing the number of fold points you added, overriding `lexer.fold()` with your own implementation, or simply eliminating folding support from your lexer.
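As a last resort, a lexer can stub folding out entirely. A minimal sketch, assuming `fold()` returns a table of per-line fold levels (check the exact signature in your version's lexer documentation before relying on this):

```lua
-- Disable folding by computing no fold levels at all.
function lex:fold(text, start_pos, start_line, start_level)
  return {} -- an empty fold-level table: nothing gets folded
end
```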
Embedded preprocessor languages like PHP cannot completely embed in their parent languages, because the parent's tokens do not support start and end rules. This mostly goes unnoticed, but code like `<div id="<?php echo $id; ?>">` will not style correctly.
Errors in lexers can be tricky to debug. Lexers print Lua errors to `io.stderr` and `_G.print()` statements to `io.stdout`. Running your editor from a terminal is the easiest way to see errors as they occur.
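Since `print()` output goes to stdout, a temporary match-time trace can help pinpoint a misbehaving pattern. A hedged sketch using a hypothetical `traced()` helper (not part of the lexer API):

```lua
local lpeg = require('lpeg')

-- Wrap an LPeg pattern so every successful match reports its end position.
local function traced(patt, name)
  return patt * lpeg.P(function(_, pos)
    print(string.format('[%s] matched; now at byte %d', name, pos))
    return pos -- a match-time function returning `pos` consumes nothing
  end)
end

-- Usage while debugging (remove before shipping the lexer):
lex:add_rule('comment',
  token(lexer.COMMENT, traced('#' * lexer.nonnewline^0, 'comment')))
```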
Poorly written lexers can crash Scintilla (and thus the application containing it), so unsaved data might be lost. However, I have only observed these crashes during early lexer development, when syntax errors or pattern errors are present. Once a lexer actually starts styling text (either correctly or incorrectly, it does not matter), I have not observed any crashes.