
r-tokenizers 0.3.0

Fast, consistent tokenization of natural language text

This package converts natural language text into tokens. It includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, tweets, Penn Treebank tokens, and regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers share a consistent interface, and the package is built on the stringi and Rcpp packages for fast yet correct tokenization of UTF-8 text.
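As a minimal sketch of that consistent interface: each tokenizer takes a character vector and returns a list with one element per input document. The calls below use tokenize_words, tokenize_ngrams, and count_words, which the package exports; the sample sentence is made up.

library(tokenizers)

text <- "The quick brown fox jumps over the lazy dog."

# Each tokenizer accepts a character vector and returns a list
# containing one character vector of tokens per input document.
tokenize_words(text)

# The other tokenizers follow the same pattern, e.g. bigrams
# via the n-gram tokenizer:
tokenize_ngrams(text, n = 2)

# The counting helpers use the same interface:
count_words(text)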

Installation

Install r-tokenizers 0.3.0 as follows:

guix install r-tokenizers@0.3.0

Or install the latest version:

guix install r-tokenizers

You can also install packages in augmented, pure, or containerized environments, whether for development or simply to try them out without polluting your user profile. See the guix shell documentation for more information.
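For example, the following spawns a throwaway environment containing R and this package and starts an R session inside it (a sketch; --pure clears inherited environment variables, and everything after -- is the command run inside the environment):

guix shell --pure r r-tokenizers -- R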