From: cryo shock
Newsgroups: gmane.comp.tex.context
Subject: Migrating ConTeXt's Textadept settings from v9 to v10.
Date: Tue, 21 Aug 2018 11:39:24 +0200
To: mailing list for ConTeXt users
Hi Hans, I'd surely love to help you adapt ConTeXt's settings from v9 to v10. I hope that the following quotations will help you enough. I also hope that the format doesn't get mixed up when I send this mail.

--------------------- QUOTATION start
source: https://foicica.com/textadept/manual.html#Migration.Guides

Migration Guides

Textadept 9 to 10

API Changes

Old API                            Change    New API

_G
  bit32                            Removed   N/A (use bitwise operators)
buffer
  brace_match(pos)                 Changed   brace_match(pos, 0)
lexer
  _foldsymbols                     Replaced  add_fold_point()
  _rules                           Replaced  add_rule()
  _tokenstyles                     Replaced  add_style()
  embed_lexer(parent, child, …)    Renamed   parent:embed(child, …)
  _RULES[id]                       Replaced  get_rule(id)
  _RULES[id] = rule                Replaced  modify_rule(id, rule)
  N/A                              Added     new()
  word_match(list, wchars, icase)  Changed   word_match(words, icase)
ui
  set_theme                        Renamed   buffer.set_theme()
textadept.editing
  match_brace                      Replaced  N/A (menu function)
  N/A                              Added     paste()
  N/A                              Added     paste_reindents
textadept.session
  default_session                  Removed
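The bit32 removal follows from Textadept 10 moving to Lua 5.3, which has native bitwise operators. A minimal sketch of that migration (the set_flag helper is hypothetical, purely for illustration):

```lua
-- Hypothetical helper showing the bit32 -> native-operator change.
-- Textadept 9 bundled Lua 5.2, so flag twiddling went through bit32;
-- Lua 5.3 (Textadept 10) has |, &, ~, << and >> built in.
local function set_flag(flags, bit)
  -- Textadept 9 style: return bit32.bor(flags, bit32.lshift(1, bit))
  return flags | (1 << bit)  -- Textadept 10 style
end

print(set_flag(0x0F, 4))  -- 31 (0x1F)
```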

Configuration Changes

Textadept 10 no longer uses a ~/.textadept/properties.lua file. Instead, all buffer settings are made in ~/.textadept/init.lua, and apply to the first and any subsequent buffers. (In Textadept 9, any buffer settings made in ~/.textadept/init.lua only applied to the first buffer, so a ~/.textadept/properties.lua was required in order to define buffer settings that would affect subsequent buffers.)

Simply copying the contents of your ~/.textadept/properties.lua into ~/.textadept/init.lua should be sufficient.
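Concretely, buffer settings that used to live in ~/.textadept/properties.lua can now sit directly in ~/.textadept/init.lua. A minimal sketch (the particular settings below are just examples, not part of the guide):

```lua
-- ~/.textadept/init.lua (Textadept 10): these buffer settings now apply
-- to every buffer, not only the first one as in Textadept 9.
buffer.tab_width = 2
buffer.use_tabs = false
buffer.wrap_mode = buffer.WRAP_WHITESPACE
```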

Lexer Changes

Lexers are now written in a more object-oriented way. Legacy lexers are still supported, but it is recommended that you migrate them.

Key Bindings Changes

The terminal version's key sequence for Ctrl+Space is now 'c ' instead of 'c@'.
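In practice this means any terminal-version binding registered under 'c@' needs its key string updated in ~/.textadept/init.lua. A sketch (the bound action here is only an example, not prescribed by the guide):

```lua
-- Textadept 9 (terminal): keys['c@'] = ...
-- Textadept 10 (terminal): Ctrl+Space is now the string 'c ':
keys['c '] = function() textadept.editing.autocomplete('word') end
```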

Regex Changes

Textadept now uses C++11's ECMAScript regex syntax instead of TRE.

Mac OSX System Requirements

Textadept now requires Mac OSX 10.6 (Snow Leopard) at a minimum. The previous minimum version was 10.5 (Leopard).

LuaJIT Changes

The LuaJIT version of Textadept has been removed. Any LuaJIT-specific features used in external modules will no longer function.


--------------------- QUOTATION end

I also found this here:

--------------------- QUOTATION start
source: https://foicica.com/textadept/api.html#lexer.Migrating.Legacy.Lexers

Migrating Legacy Lexers

Legacy lexers are of the form:

local l = require('lexer')
local token, word_match = l.token, l.word_match
local P, R, S = lpeg.P, lpeg.R, lpeg.S

local M = {_NAME = '?'}

[... token and pattern definitions ...]

M._rules = {
  {'rule', pattern},
  [...]
}

M._tokenstyles = {
  ['token'] = 'style',
  [...]
}

M._foldsymbols = {
  _patterns = {...},
  ['token'] = {['start'] = 1, ['end'] = -1},
  [...]
}

return M

While such legacy lexers will be handled just fine without any changes, it is recommended that you migrate yours. The migration process is fairly straightforward:

  1. Replace all instances of l with lexer, as it's better practice and results in less confusion.
  2. Replace local M = {_NAME = '?'} with local lex = lexer.new('?'), where ? is the name of your legacy lexer. At the end of the lexer, change return M to return lex.
  3. Instead of defining rules towards the end of your lexer, define your rules as you define your tokens and patterns using lex:add_rule().
  4. Similarly, any custom token names should have their styles immediately defined using lex:add_style().
  5. Convert any table arguments passed to lexer.word_match() to a space-separated string of words.
  6. Replace any calls to lexer.embed(M, child, ...) and lexer.embed(parent, M, ...) with lex:embed(child, ...) and parent:embed(lex, ...), respectively.
  7. Define fold points with simple calls to lex:add_fold_point(). No need to mess with Lua patterns anymore.
  8. Any legacy lexer options such as M._FOLDBYINDENTATION, M._LEXBYLINE, M._lexer, etc. should be added as table options to lexer.new().
  9. Any external lexer rule fetching and/or modifications via lexer._RULES should be changed to use lexer.get_rule() and lexer.modify_rule().

As an example, consider the following sample legacy lexer:

local l = require('lexer')
local token, word_match = l.token, l.word_match
local P, R, S = lpeg.P, lpeg.R, lpeg.S

local M = {_NAME = 'legacy'}

local ws = token(l.WHITESPACE, l.space^1)
local comment = token(l.COMMENT, '#' * l.nonnewline^0)
local string = token(l.STRING, l.delimited_range('"'))
local number = token(l.NUMBER, l.float + l.integer)
local keyword = token(l.KEYWORD, word_match{'foo', 'bar', 'baz'})
local custom = token('custom', P('quux'))
local identifier = token(l.IDENTIFIER, l.word)
local operator = token(l.OPERATOR, S('+-*/%^=<>,.()[]{}'))

M._rules = {
  {'whitespace', ws},
  {'keyword', keyword},
  {'custom', custom},
  {'identifier', identifier},
  {'string', string},
  {'comment', comment},
  {'number', number},
  {'operator', operator}
}

M._tokenstyles = {
  ['custom'] = l.STYLE_KEYWORD..',bold'
}

M._foldsymbols = {
  _patterns = {'[{}]'},
  [l.OPERATOR] = {['{'] = 1, ['}'] = -1}
}

return M

Following the migration steps would yield:

local lexer = require('lexer')
local token, word_match = lexer.token, lexer.word_match
local P, R, S = lpeg.P, lpeg.R, lpeg.S

local lex = lexer.new('legacy')

lex:add_rule('whitespace', token(lexer.WHITESPACE, lexer.space^1))
lex:add_rule('keyword', token(lexer.KEYWORD, word_match[[foo bar baz]]))
lex:add_rule('custom', token('custom', P('quux')))
lex:add_style('custom', lexer.STYLE_KEYWORD..',bold')
lex:add_rule('identifier', token(lexer.IDENTIFIER, lexer.word))
lex:add_rule('string', token(lexer.STRING, lexer.delimited_range('"')))
lex:add_rule('comment', token(lexer.COMMENT, '#' * lexer.nonnewline^0))
lex:add_rule('number', token(lexer.NUMBER, lexer.float + lexer.integer))
lex:add_rule('operator', token(lexer.OPERATOR, S('+-*/%^=<>,.()[]{}')))

lex:add_fold_point(lexer.OPERATOR, '{', '}')

return lex

Considerations

Performance

There might be some slight overhead when initializing a lexer, but loading a file from disk into Scintilla is usually more expensive. On modern computer systems, I see no difference in speed between Lua lexers and Scintilla's C++ ones. Optimize lexers for speed by re-arranging lexer.add_rule() calls so that the most common rules match first. Do keep in mind that order matters for similar rules.
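As a sketch, reordering the add_rule() calls of the migrated sample lexer might look like this, assuming comments dominate the documents being lexed (that assumption, and the ordering chosen, are mine, not the guide's):

```lua
-- Fragment of a lexer.new('legacy')-style lexer: register the rule that
-- matches most often first, so fewer alternatives are tried per token.
lex:add_rule('whitespace', token(lexer.WHITESPACE, lexer.space^1))
lex:add_rule('comment', token(lexer.COMMENT, '#' * lexer.nonnewline^0))
-- 'keyword' must still precede 'identifier': both match word-like text,
-- and for similar rules the order determines which one wins.
lex:add_rule('keyword', token(lexer.KEYWORD, word_match[[foo bar baz]]))
lex:add_rule('identifier', token(lexer.IDENTIFIER, lexer.word))
```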

In some cases, folding may be far more expensive than lexing, particularly in lexers with a lot of potential fold points. If your lexer is exhibiting signs of slowness, try disabling folding in your text editor first. If that speeds things up, you can try reducing the number of fold points you added, overriding lexer.fold() with your own implementation, or simply eliminating folding support from your lexer.

Limitations

Embedded preprocessor languages like PHP cannot completely embed in their parent languages in that the parent's tokens do not support start and end rules. This mostly goes unnoticed, but code like

<div id="<?php echo $id; ?>">

will not style correctly.

Troubleshooting

Errors in lexers can be tricky to debug. Lexers print Lua errors to io.stderr and _G.print() statements to io.stdout. Running your editor from a terminal is the easiest way to see errors as they occur.

Risks

Poorly written lexers have the ability to crash Scintilla (and thus its containing application), so unsaved data might be lost. However, I have only observed these crashes in early lexer development, when syntax errors or pattern errors are present. Once the lexer actually starts styling text (either correctly or incorrectly, it does not matter), I have not observed any crashes.

--------------------- QUOTATION end

If you could use anything else, let me know.

Cheers, L.
___________________________________________________________________________________
If your question is of interest to others as well, please add an entry to the Wiki!

maillist : ntg-context@ntg.nl / http://www.ntg.nl/mailman/listinfo/ntg-context
webpage  : http://www.pragma-ade.nl / http://context.aanhet.net
archive  : https://bitbucket.org/phg/context-mirror/commits/
wiki     : http://contextgarden.net