# remark-parse
Parser for unified. Parses Markdown to mdast syntax trees. Used in the remark processor but can be used on its own as well. Can be extended to change how Markdown is parsed.
## Sponsors

Gatsby 🥇 · Vercel 🥇 · Netlify · Holloway · ThemeIsle 🥉 · BoostIO 🥉 · Expo 🥉 · You?
## Install

npm:

```sh
npm install remark-parse
```
## Use

```js
var unified = require('unified')
var createStream = require('unified-stream')
var markdown = require('remark-parse')
var remark2rehype = require('remark-rehype')
var html = require('rehype-stringify')

var processor = unified()
  .use(markdown, {commonmark: true})
  .use(remark2rehype)
  .use(html)

process.stdin.pipe(createStream(processor)).pipe(process.stdout)
```
See unified for more examples »
## Contents

- [Install](#install)
- [Use](#use)
- [API](#api)
- [Extending the Parser](#extending-the-parser)
- [Security](#security)
- [Contribute](#contribute)
## API
### `processor().use(parse[, options])`

Configure the processor to read Markdown as input and process mdast syntax trees.
#### `options`

Options can be passed directly, or passed later through `processor.data()`.
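For example, the following two processors are configured the same way (a minimal sketch, assuming unified's `data('settings', …)` convention for parser settings):

```js
var unified = require('unified')
var markdown = require('remark-parse')

// Options passed directly to the plugin:
unified().use(markdown, {commonmark: true})

// …or set later through `processor.data()`; they are read when parsing:
unified().use(markdown).data('settings', {commonmark: true})
```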
##### `options.gfm`

GFM mode (`boolean`, default: `true`).

```markdown
hello ~~hi~~ world
```

Turns on:

- Fenced code blocks
- Autolinking of URLs
- Deletions (strikethrough)
- Task lists
- Tables
##### `options.commonmark`

CommonMark mode (`boolean`, default: `false`).

```markdown
This is a paragraph
and this is also part of the preceding paragraph.
```

Allows:
- Empty lines to split block quotes
- Parentheses (`(` and `)`) around link and image titles
- Any escaped ASCII punctuation character
- Closing parenthesis (`)`) as an ordered list marker
- URL definitions in block quotes
Disallows:
- Indented code blocks directly following a paragraph
- ATX headings (`# Hash headings`) without spacing after opening hashes or before closing hashes
- Setext headings (`Underline headings\n---`) when following a paragraph
- Newlines in link and image titles
- White space in link and image URLs in auto-links (links in brackets, `<` and `>`)
- Lazy block quote continuation, lines not preceded by a greater than character (`>`), for lists, code, and thematic breaks
##### `options.pedantic`

⚠️ Pedantic was previously used to mimic old-style Markdown: no tables, no fenced code, and many bugs. It still “works”, but please do not use it; it will be removed in the future.
##### `options.blocks`

Blocks (`Array.<string>`, default: a list of block HTML elements).
```markdown
<block>foo
</block>
```

Defines which HTML elements are seen as block level.
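A minimal sketch of overriding the list; the element names below are made up for illustration, and passing the option presumably replaces the default list rather than extending it:

```js
var unified = require('unified')
var markdown = require('remark-parse')

// Treat only these (hypothetical) element names as block-level HTML.
unified().use(markdown, {blocks: ['section', 'custom-block']})
```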
### `parse.Parser`

Access to the parser, if you need it.
## Extending the `Parser`
Typically, using transformers to manipulate a syntax tree produces the desired output. Sometimes, such as when introducing new syntactic entities with a certain precedence, interfacing with the parser is necessary.
If the `remark-parse` plugin is used, it adds a `Parser` constructor function to the processor. Other plugins can add tokenizers to its prototype to change how Markdown is parsed.
The below plugin adds a tokenizer for at-mentions.
```js
module.exports = mentions

function mentions() {
  var Parser = this.Parser
  var tokenizers = Parser.prototype.inlineTokenizers
  var methods = Parser.prototype.inlineMethods

  // Add an inline tokenizer (defined in the following example).
  tokenizers.mention = tokenizeMention

  // Run it just before `text`.
  methods.splice(methods.indexOf('text'), 0, 'mention')
}
```
### `Parser#blockTokenizers`

Map of names to tokenizers (`Object.<Function>`). These tokenizers (such as `fencedCode`, `table`, and `paragraph`) eat from the start of a value to a line ending.

See `#blockMethods` below for a list of methods that are included by default.
### `Parser#blockMethods`

List of `blockTokenizers` names (`Array.<string>`). Specifies the order in which tokenizers run.
Precedence of default block methods is as follows:

- `blankLine`
- `indentedCode`
- `fencedCode`
- `blockquote`
- `atxHeading`
- `thematicBreak`
- `list`
- `setextHeading`
- `html`
- `definition`
- `table`
- `paragraph`
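A quick way to inspect (or reorder) this list at runtime is through the exported `Parser`; a minimal sketch:

```js
var Parser = require('remark-parse').Parser

// Logs the block tokenizer names in the order they run.
console.log(Parser.prototype.blockMethods)
```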
### `Parser#inlineTokenizers`

Map of names to tokenizers (`Object.<Function>`). These tokenizers (such as `url`, `reference`, and `emphasis`) eat from the start of a value. To increase performance, they depend on locators.

See `#inlineMethods` below for a list of methods that are included by default.
### `Parser#inlineMethods`

List of `inlineTokenizers` names (`Array.<string>`). Specifies the order in which tokenizers run.
Precedence of default inline methods is as follows:

- `escape`
- `autoLink`
- `url`
- `email`
- `html`
- `link`
- `reference`
- `strong`
- `emphasis`
- `deletion`
- `code`
- `break`
- `text`
### `function tokenizer(eat, value, silent)`
There are two types of tokenizers: block level and inline level. Both are functions, and work the same, but inline tokenizers must have a locator.
The following example shows an inline tokenizer that is added by the mentions plugin above.
```js
tokenizeMention.notInLink = true
tokenizeMention.locator = locateMention

function tokenizeMention(eat, value, silent) {
  var match = /^@(\w+)/.exec(value)

  if (match) {
    if (silent) {
      return true
    }

    return eat(match[0])({
      type: 'link',
      url: 'https://social-network/' + match[1],
      children: [{type: 'text', value: match[0]}]
    })
  }
}
```
Tokenizers test whether a document starts with a certain syntactic entity. In silent mode, they return whether that test passes. In normal mode, they consume that token, a process which is called “eating”.
Locators enable inline tokenizers to function faster by providing information on where the next entity may occur.
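Putting the pieces together, the mentions plugin and tokenizer sketched above could be tried like this (`mentions`, `tokenizeMention`, and `locateMention` are the earlier examples):

```js
var unified = require('unified')
var markdown = require('remark-parse')

// `mentions` is the plugin from the earlier example.
var tree = unified().use(markdown).use(mentions).parse('Hello @alice!')

// The resulting mdast tree contains a link node for `@alice`,
// produced by `tokenizeMention`.
console.log(JSON.stringify(tree, null, 2))
```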
#### Signatures

- `Node? = tokenizer(eat, value)`
- `boolean? = tokenizer(eat, value, silent)`
#### Parameters

- `eat` (`Function`) — Eat, when applicable, an entity
- `value` (`string`) — Value which may start an entity
- `silent` (`boolean`, optional) — Whether to detect or consume
Properties
locator
(Function
) — Required for inline tokenizersonlyAtStart
(boolean
) — Whether nodes can only be found at the beginning of the documentnotInBlock
(boolean
) — Whether nodes cannot be in block quotes or listsnotInList
(boolean
) — Whether nodes cannot be in listsnotInLink
(boolean
) — Whether nodes cannot be in links
#### Returns

- `boolean?`, in silent mode — whether a node can be found at the start of `value`
- `Node?`, in normal mode — a node, if one can be found at the start of `value`
### `tokenizer.locator(value, fromIndex)`
Locators are required for inline tokenizers. Their role is to keep parsing performant.
The following example shows a locator that is added by the mentions tokenizer above.
```js
function locateMention(value, fromIndex) {
  return value.indexOf('@', fromIndex)
}
```
Locators enable inline tokenizers to function faster by providing information on where the next entity may occur. Locators may be wrong; it’s fine if there isn’t actually a node at the index they return.
#### Parameters

- `value` (`string`) — Value which may contain an entity
- `fromIndex` (`number`) — Position to start searching at

#### Returns

`number` — Index at which an entity may start, or `-1` otherwise.
### `eat(subvalue)`

```js
var add = eat('foo')
```

Eat `subvalue`, which is a string at the start of the tokenized `value`.
Parameters
subvalue
(string
) - Value to eat
#### Returns

`add`.
### `add(node[, parent])`

```js
var add = eat('foo')

add({type: 'text', value: 'foo'})
```

Add positional information to `node` and add `node` to `parent`.
Parameters
node
(Node
) - Node to patch position on and to addparent
(Parent
, optional) - Place to addnode
to in the syntax tree. Defaults to the currently processed node
#### Returns

`Node` — The given `node`.
### `add.test()`

Get the positional information that would be patched on `node` by `add`.
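A sketch of how this might be used inside a tokenizer, to peek at the position before actually adding a node:

```js
// Inside a tokenizer: `eat` is the function passed to it.
var add = eat('foo')

// Logs the positional information that `add` would patch onto a node,
// without adding anything to the tree yet.
console.log(add.test())
```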
#### Returns

The positional information (a unist `Position`) that `add` would patch.
### `add.reset(node[, parent])`

`add`, but resets the internal position. Useful, for example, in lists, where the same content is first eaten for a list, and later for list items.
#### Parameters

- `node` (`Node`) — Node to patch position on and insert
- `parent` (`Node`, optional) — Place to add `node` to in the syntax tree. Defaults to the currently processed node
#### Returns

`Node` — The given `node`.
### Turning off a tokenizer
In some situations, you may want to turn off a tokenizer to avoid parsing that syntactic feature.
Preferably, use the `remark-disable-tokenizers` plugin to turn off tokenizers.
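For instance (a sketch: it is assumed here that the plugin accepts lists of block and inline tokenizer names, so check its readme for the exact options):

```js
var unified = require('unified')
var markdown = require('remark-parse')
var disableTokenizers = require('remark-disable-tokenizers')

// Assumed options shape: names of block tokenizers to disable.
unified()
  .use(markdown)
  .use(disableTokenizers, {block: ['indentedCode']})
```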
Alternatively, this can be done by replacing the tokenizer in `blockTokenizers` (or `blockMethods`) or `inlineTokenizers` (or `inlineMethods`).
The following example turns off indented code blocks:
```js
remarkParse.Parser.prototype.blockTokenizers.indentedCode = indentedCode

function indentedCode() {
  return true
}
```
## Security

As Markdown is sometimes used for HTML, and improper use of HTML can open you up to a cross-site scripting (XSS) attack, use of remark can also be unsafe. When going to HTML, use remark in combination with the rehype ecosystem, and use `rehype-sanitize` to make the tree safe.
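A minimal sketch of such a pipeline (package names as published on npm):

```js
var unified = require('unified')
var markdown = require('remark-parse')
var remark2rehype = require('remark-rehype')
var sanitize = require('rehype-sanitize')
var html = require('rehype-stringify')

// Parse Markdown, transform to hast, sanitize that tree, then serialize HTML.
var processor = unified()
  .use(markdown)
  .use(remark2rehype)
  .use(sanitize)
  .use(html)
```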
Use of remark plugins could also open you up to other attacks. Carefully assess each plugin and the risks involved in using them.
## Contribute

See `contributing.md` in `remarkjs/.github` for ways to get started. See `support.md` for ways to get help. Ideas for new plugins and tools can be posted in `remarkjs/ideas`.
A curated list of awesome remark resources can be found in awesome remark.
This project has a code of conduct. By interacting with this repository, organization, or community you agree to abide by its terms.