Fast, consistent tokenization of natural language text
This is a package for converting natural language text into tokens. It includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, tweets, Penn Treebank, and regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers have a consistent interface, and the package is built on the stringi and Rcpp packages for fast yet correct tokenization in UTF-8 encoding.
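To make the idea of shingled n-grams concrete, here is a minimal Python sketch of what word n-gram shingling produces: overlapping windows of n consecutive words. This is only an illustration of the concept; the package itself implements its tokenizers in C++ via Rcpp.

```python
def shingled_ngrams(text, n=2):
    """Illustrative word n-gram shingling: every run of n
    consecutive words, with windows overlapping by n - 1 words."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

print(shingled_ngrams("fast consistent tokenization of text", 2))
# → ['fast consistent', 'consistent tokenization', 'tokenization of', 'of text']
```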
- Website: https://lincolnmullen.com/software/tokenizers/
- License: Expat
- Package source: gnu/packages/cran.scm
Install r-tokenizers 0.2.3 as follows:

guix install r-tokenizers@0.2.3

Or install the latest version:

guix install r-tokenizers
You can also install packages in augmented, pure, or containerized environments, for development or simply to try them out without polluting your user profile. See the guix shell documentation for more information.
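As one way this can look in practice (assuming guix is installed), a throwaway environment with R and the package might be spawned like this; the exact packages to include are up to you:

```shell
# Spawn a temporary environment containing R and r-tokenizers,
# run a one-off expression, then discard it; nothing is added
# to the user profile.
guix shell r r-tokenizers -- \
  Rscript -e 'library(tokenizers); tokenize_words("Fast, consistent tokenization")'
```

Adding --pure or --container to guix shell further isolates the environment from the host profile.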