cURL Python Ruby Go Node.js PHP

Introduction

Get entities using the en_core_web_lg pre-trained model:

curl "https://api.nlpcloud.io/v1/en_core_web_lg/entities" \
  -H "Authorization: Token 4eC39HqLyjWDarjtT1zdp7dc" \
  -X POST \
  -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
import nlpcloud

client = nlpcloud.Client("en_core_web_lg", "4eC39HqLyjWDarjtT1zdp7dc")
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
require 'nlpcloud'

client = NLPCloud::Client.new('en_core_web_lg','4eC39HqLyjWDarjtT1zdp7dc')
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("en_core_web_lg", "4eC39HqLyjWDarjtT1zdp7dc")
    // Returns an Entities struct.
    client.Entities("John Doe has been working for Microsoft in Seattle since 1999.")
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('en_core_web_lg','4eC39HqLyjWDarjtT1zdp7dc')

// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
<?php
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('en_core_web_lg','4eC39HqLyjWDarjtT1zdp7dc');
# Returns a json object.
$client->entities('John Doe has been working for Microsoft in Seattle since 1999.');
?>

Get entities using your own model with ID 7894:

curl "https://api.nlpcloud.io/v1/custom_model/7894/entities" \
  -H "Authorization: Token 4eC39HqLyjWDarjtT1zdp7dc" \
  -X POST \
  -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
import nlpcloud

client = nlpcloud.Client("custom_model/7894", "4eC39HqLyjWDarjtT1zdp7dc")
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
require 'nlpcloud'

client = NLPCloud::Client.new('custom_model/7894','4eC39HqLyjWDarjtT1zdp7dc')
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("custom_model/7894", "4eC39HqLyjWDarjtT1zdp7dc")
    // Returns an Entities struct.
    client.Entities("John Doe has been working for Microsoft in Seattle since 1999.")
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('custom_model/7894','4eC39HqLyjWDarjtT1zdp7dc')

client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
<?php
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('custom_model/7894','4eC39HqLyjWDarjtT1zdp7dc');
# Returns a json object.
$client->entities('John Doe has been working for Microsoft in Seattle since 1999.');
?>

Output:

{
  "entities": [
    {
      "start": 0,
      "end": 8,
      "type": "PERSON",
      "text": "John Doe"
    },
    {
      "start": 30,
      "end": 39,
      "type": "ORG",
      "text": "Microsoft"
    },
    {
      "start": 43,
      "end": 50,
      "type": "GPE",
      "text": "Seattle"
    },
    {
      "start": 57,
      "end": 61,
      "type": "DATE",
      "text": "1999"
    }
  ]
}

Welcome to the NLP Cloud API documentation.

All your Natural Language Processing tasks in one single API, suited for production:

Use Case Model Used
Named Entity Recognition (NER): extract and tag relevant entities from a text, like names, companies, countries... (see endpoint) All the large spaCy models are available (15 languages).
Classification: send a text with possible labels, and let the model apply the right labels to your sentence (see endpoint) We use Facebook's Bart Large MNLI model with PyTorch and Hugging Face transformers.
Summarization: send a text, and get a shorter text keeping only the essential information (see endpoint) We use Facebook's Bart Large CNN model with PyTorch and Hugging Face transformers.
Question answering: send a piece of text as a context, and ask questions about anything related to this context (see endpoint) We use Deepset's Roberta Base Squad 2 model with PyTorch and Hugging Face transformers.
Sentiment analysis: determine whether a text is rather positive or negative (see endpoint) We use the DistilBERT Base Uncased Finetuned SST-2 model with PyTorch and Hugging Face transformers.
Translation: translate text from one language to another (see endpoint) Several Helsinki NLP Opus MT models are available (6 languages) with PyTorch and Hugging Face transformers.
Language Detection: detect one or several languages from a text (see endpoint) We use Python's LangDetect library.
Part-Of-Speech (POS) tagging: assign parts of speech to each word of your text (see endpoint) All the large spaCy models are available (15 languages).
Tokenization: extract tokens from a text (see endpoint) All the large spaCy models are available (15 languages).

All these models can be used for free with a maximum of 3 requests per minute. For more requests (i.e. for production use), please see the paid plans.
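On the free tier, it can be convenient to throttle requests client-side so you never hit the limit. Below is a minimal sketch of such a limiter; the `Throttle` class is a hypothetical helper, not part of any official NLP Cloud client, and the injectable clock/sleep exist only so the logic can be tested without real waiting.

```python
import time

class Throttle:
    """Space out calls so at most `per_minute` requests are sent each minute.

    Hypothetical client-side helper (not part of any official library).
    `clock` and `sleep` are injectable to make the logic testable.
    """

    def __init__(self, per_minute=3, clock=time.monotonic, sleep=time.sleep):
        self.interval = 60.0 / per_minute  # e.g. 20 s between calls for 3/min
        self.clock = clock
        self.sleep = sleep
        self.last = None

    def wait(self):
        """Block until it is safe to send the next request."""
        if self.last is not None:
            remaining = self.interval - (self.clock() - self.last)
            if remaining > 0:
                self.sleep(remaining)
        self.last = self.clock()
```

Call `wait()` before each API request; the first call returns immediately, later calls sleep just long enough to stay under the limit.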

If you have not done so yet, please retrieve a free API token from your dashboard. Also, do not hesitate to contact us: [email protected].

If you have feedback about the API, the documentation, or the client libraries, please let us know!

See on the right a full example retrieving entities from a block of text, using both the spaCy pre-trained en_core_web_lg model, and your own custom_model/7894 model. And the same example below using Postman:

Authentication example with Postman

NER example with Postman

You can upload your own spaCy and Hugging Face transformers-based models in your dashboard.

If you have a large batch of requests to process you can also use batch processing.

Here are the current versions of the libraries used under the hood:

Lib Version
spaCy 3.0.1
PyTorch 1.7.1
TensorFlow 2.4.1
Transformers 4.3.2
LangDetect 1.0.8

Set Up

Client Installation

If you are using one of our client libraries, here is how to install them.

Python

Install with pip.

pip install nlpcloud

More details on the source repo: https://github.com/nlpcloud/nlpcloud-python

Ruby

Install with gem.

gem install nlpcloud

More details on the source repo: https://github.com/nlpcloud/nlpcloud-ruby

Go

Install with go get.

go get -u github.com/nlpcloud/nlpcloud-go

More details on the source repo: https://github.com/nlpcloud/nlpcloud-go

Node.js

Install with NPM.

npm install nlpcloud --save

More details on the source repo: https://github.com/nlpcloud/nlpcloud-js

PHP

Install with Composer.

Create a composer.json file containing at least the following:

{"require": {"nlpcloud/nlpcloud-client": "*"}}

Then launch the following:

composer install

More details on the source repo: https://github.com/nlpcloud/nlpcloud-php

Authentication

Replace with your token:

curl "https://api.nlpcloud.io/v1/<model>/<endpoint>" \
  -H "Authorization: Token <token>"
import nlpcloud

client = nlpcloud.Client("<model>", "<token>")
require 'nlpcloud'

client = NLPCloud::Client.new('<model>','<token>')
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("<model>", "<token>")
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model>','<token>')
use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model>','<token>');

Add your API token after the Token keyword in an Authorization header. You should include this header in all your requests: Authorization: Token <token>.

Here is an example using Postman (Postman automatically adds headers to the requests; you should at least keep the Host header, otherwise you will get a 400 error):

Authentication example with Postman

If not done yet, please get a free API token in your dashboard.

All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.
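If you are calling the API without a client library, the required headers can be built with a tiny helper. This is a sketch only (the `auth_headers` function is hypothetical, not part of any official client); the actual HTTP call is omitted.

```python
def auth_headers(token):
    """Build the headers every API request needs: the token goes after the
    `Token` keyword in an Authorization header, and the body is JSON."""
    return {
        "Authorization": "Token " + token,
        "Content-Type": "application/json",
    }
```

You would then pass these headers to your HTTP library of choice on every request.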

Versioning

Replace with the right API version:

curl "https://api.nlpcloud.io/<version>/<model>/<endpoint>"
# The latest API version is automatically set by the library.
# The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.

The latest API version is v1.

The API version comes right after the domain name, and before the model name.
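The URL layout above is mechanical, so it can be composed with a one-line helper. A minimal sketch (the `build_url` function is hypothetical, for illustration only):

```python
def build_url(model, endpoint, version="v1"):
    """Compose an endpoint URL: the API version comes right after the
    domain name, then the model name, then the endpoint."""
    return "/".join(["https://api.nlpcloud.io", version, model, endpoint])
```

Note that a custom model name like custom_model/7894 slots into the same position as a pre-trained model name.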

Encoding

POST JSON data:

curl "https://api.nlpcloud.io/v1/<model>/<endpoint>" \
  -X POST \
  -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
# Encoding is automatically handled by the library.
# Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.

You should send JSON encoded data in POST requests.

Here is an example using Postman:

Encoding with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.
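When building the request body by hand, letting a JSON library do the escaping is safer than escaping quotes yourself. A minimal Python sketch:

```python
import json

# json.dumps escapes the inner double quotes (") as \" so the
# resulting body decodes cleanly on the API side.
payload = json.dumps({"text": 'She said "hello" to John.'})
```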

Models

Replace with the right pre-trained model:

curl "https://api.nlpcloud.io/v1/<model>/<endpoint>"
# Set the model during client initialization.
client = nlpcloud.Client("<model>", "<token>")
client = NLPCloud::Client.new('<model>','<token>')
client := nlpcloud.NewClient("<model>", "<token>")
const client = new NLPCloudClient('<model>', '<token>')
$client = new \NLPCloud\NLPCloud('<model>','<token>');

Example: spaCy's pre-trained en_core_web_lg model for Named Entity Recognition (NER):

curl "https://api.nlpcloud.io/v1/en_core_web_lg/entities"
client = nlpcloud.Client("en_core_web_lg", "<token>")
client = NLPCloud::Client.new('en_core_web_lg','<token>')
client := nlpcloud.NewClient("en_core_web_lg", "<token>")
const client = new NLPCloudClient('en_core_web_lg', '<token>')
$client = new \NLPCloud\NLPCloud('en_core_web_lg','<your token>');

Example: your own spaCy model with ID 7894 for Named Entity Recognition (NER):

curl "https://api.nlpcloud.io/v1/custom_model/7894/entities"
client = nlpcloud.Client("custom_model/7894", "<token>")
client = NLPCloud::Client.new('custom_model/7894','<token>')
client := nlpcloud.NewClient("custom_model/7894", "<token>")
const client = new NLPCloudClient('custom_model/7894', '<token>')
$client = new \NLPCloud\NLPCloud('custom_model/7894','<your token>');

We selected the best state-of-the-art pre-trained models from spaCy and Hugging Face in order to perform Named Entity Recognition (NER), text classification, text summarization, sentiment analysis, question answering, and Part-of-Speech (POS) tagging.

You can also use your own spaCy and Hugging Face transformers-based models by uploading them in your dashboard.

The name of the model comes right after the API version, and before the name of the endpoint.

If you are using your own spaCy or transformers-based model, the model name is made up of two parts: custom_model and the ID of your model. For example, if your model ID is 7894, you should use custom_model/7894. Your model ID appears in your dashboard once you upload the model and the instance creation is finished.

On the right are examples performing Named Entity Recognition (NER) with spaCy's en_core_web_lg model, and another doing the same thing with your own spaCy model with ID 7894 (the ID of a custom model can be retrieved from your dashboard).

Models List

Here is a comprehensive list of all the pre-trained models supported by the NLP Cloud API:

Name Description Versions
en_core_web_lg: spaCy's English Large See on spaCy spaCy 3.0.1
fr_core_news_lg: spaCy's French Large See on spaCy spaCy 3.0.1
zh_core_web_lg: spaCy's Chinese Large See on spaCy spaCy 3.0.1
da_core_news_lg: spaCy's Danish Large See on spaCy spaCy 3.0.1
nl_core_news_lg: spaCy's Dutch Large See on spaCy spaCy 3.0.1
de_core_news_lg: spaCy's German Large See on spaCy spaCy 3.0.1
el_core_news_lg: spaCy's Greek Large See on spaCy spaCy 3.0.1
it_core_news_lg: spaCy's Italian Large See on spaCy spaCy 3.0.1
ja_core_news_lg: spaCy's Japanese Large See on spaCy spaCy 3.0.1
lt_core_news_lg: spaCy's Lithuanian Large See on spaCy spaCy 3.0.1
nb_core_news_lg: spaCy's Norwegian Bokmål Large See on spaCy spaCy 3.0.1
pl_core_news_lg: spaCy's Polish Large See on spaCy spaCy 3.0.1
pt_core_news_lg: spaCy's Portuguese Large See on spaCy spaCy 3.0.1
ro_core_news_lg: spaCy's Romanian Large See on spaCy spaCy 3.0.1
es_core_news_lg: spaCy's Spanish Large See on spaCy spaCy 3.0.1
bart-large-mnli: Facebook's Bart Large MNLI See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
bart-large-cnn: Facebook's Bart Large CNN See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
roberta-base-squad2: Deepset's Roberta Base Squad 2 See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
distilbert-base-uncased-finetuned-sst-2-english: Distilbert Finetuned SST 2 See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-en-fr: Helsinki NLP's Opus MT English to French See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-fr-en: Helsinki NLP's Opus MT French to English See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-en-es: Helsinki NLP's Opus MT English to Spanish See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-es-en: Helsinki NLP's Opus MT Spanish to English See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-en-de: Helsinki NLP's Opus MT English to German See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-de-en: Helsinki NLP's Opus MT German to English See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-en-nl: Helsinki NLP's Opus MT English to Dutch See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-nl-en: Helsinki NLP's Opus MT Dutch to English See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-en-zh: Helsinki NLP's Opus MT English to Chinese See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-zh-en: Helsinki NLP's Opus MT Chinese to English See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-en-ru: Helsinki NLP's Opus MT English to Russian See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
opus-mt-ru-en: Helsinki NLP's Opus MT Russian to English See on Hugging Face PyTorch 1.7.1 / Transformers 4.3.2
python-langdetect: Python LangDetect library See on Pypi LangDetect 1.0.8

Upload Your Transformers-Based Model

You can use your own transformers-based models.

Hugging Face transformers-based models can either be based on PyTorch or TensorFlow. Here is a how-to for each case. If you are unsure which library your model is using, please contact us so we can give you a hand.

If you experience difficulties, do not hesitate to contact us, it will be a pleasure to help!

PyTorch Transformers

Save your model to disk as a .pt file:

torch.save(model, 'model.pt')

Save your model to disk using the PyTorch torch.save(model, 'model.pt') command.

Then upload your .pt file in your dashboard.

TensorFlow Transformers

Save your model to disk in SavedModel format

model.save('/path/to/exported/model')

Turn your exported model into a zipped archive

zip -r zipped_archive /path/to/exported/model

Save your model to disk using the SavedModel format: model.save('/path/to/exported/model').

Then compress the newly created directory using Zip.

Finally, upload your Zip file in your dashboard.

Upload Your spaCy Model

Export in Python script:

nlp.to_disk("/path")

Package:

python -m spacy package /path/to/exported/model /path/to/packaged/model

Archive as .tar.gz:

# Go to /path/to/packaged/model
python setup.py sdist

Or archive as .whl:

# Go to /path/to/packaged/model
python setup.py bdist_wheel

You can use your own spaCy models.

Upload your custom spaCy model in your dashboard, but first you need to export it and package it as a Python module.

Here is what you should do:

  1. Export your model to disk using the spaCy to_disk("/path") command.
  2. Package your exported model using the spacy package command.
  3. Archive your packaged model either as a .tar.gz archive using python setup.py sdist or as a Python wheel using python setup.py bdist_wheel (both formats are accepted).
  4. Retrieve your archive in the newly created dist folder and upload it in your dashboard.

If you experience difficulties, do not hesitate to contact us, it will be a pleasure to help!

Endpoints

Entities

Input:

curl "https://api.nlpcloud.io/v1/<spacy_model_name>/entities" \
  -H "Authorization: Token <token>" \
  -X POST \
  -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
import nlpcloud

client = nlpcloud.Client("<spacy_model_name>", "<token>")
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
require 'nlpcloud'

client = NLPCloud::Client.new('<spacy_model_name>','<token>')
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<spacy_model_name>", "<token>")
// Returns an Entities struct.
client.Entities("John Doe has been working for Microsoft in Seattle since 1999.")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<spacy_model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.entities('John Doe has been working for Microsoft in Seattle since 1999.')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<spacy_model_name>','<token>');
# Returns a json object.
$client->entities('John Doe has been working for Microsoft in Seattle since 1999.');

Output (using the en_core_web_lg (English) model for the example):

{
  "entities": [
    {
      "start": 0,
      "end": 8,
      "type": "PERSON",
      "text": "John Doe"
    },
    {
      "start": 30,
      "end": 39,
      "type": "ORG",
      "text": "Microsoft"
    },
    {
      "start": 43,
      "end": 50,
      "type": "GPE",
      "text": "Seattle"
    },
    {
      "start": 57,
      "end": 61,
      "type": "DATE",
      "text": "1999"
    }
  ]
}

This endpoint uses any spaCy model to perform Named Entity Recognition (NER). It can be either a spaCy pre-trained model or your own spaCy or transformers-based custom model. Give a block of text to the model and it will try to extract entities from it, like persons, organizations, countries...

See the spaCy named entity recognition documentation for more details.

All the spaCy pre-trained models can be used with this endpoint (see the models section for more details).

Each spaCy pre-trained model has a list of supported built-in entities it is able to extract. For example, the entities supported by the en_core_web_lg model are listed on its spaCy model page.

Here is an example using Postman:

NER example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/<spacy_model_name>/entities

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text you want to analyze

Output

This endpoint returns a JSON object containing an array of entities. Each entity is an object made up of the following:

Key Type Description
text string The content of the entity
type string The type of entity (PERSON, ORG, etc.)
start integer The position of the 1st character of the entity (starting at 0)
end integer The position of the 1st character after the entity
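Because end points at the first character after the entity, the offsets map directly onto Python's exclusive slicing. A quick sketch using the sample response above:

```python
text = "John Doe has been working for Microsoft in Seattle since 1999."
entities = [
    {"start": 0, "end": 8, "type": "PERSON", "text": "John Doe"},
    {"start": 30, "end": 39, "type": "ORG", "text": "Microsoft"},
    {"start": 43, "end": 50, "type": "GPE", "text": "Seattle"},
    {"start": 57, "end": 61, "type": "DATE", "text": "1999"},
]

# `end` is exclusive, so a plain slice recovers each entity verbatim.
spans = [text[e["start"]:e["end"]] for e in entities]
```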

Classification

Input:

curl "https://api.nlpcloud.io/v1/bart-large-mnli/classification" \
  -H "Authorization: Token <token>" \
  -X POST \
  -d '{
    "text":"John Doe is a Go Developer at Google. He has been working there for 10 years and has been awarded employee of the year",
    "labels":["job", "nature", "space"],
    "multi_class": true
  }'
import nlpcloud

client = nlpcloud.Client("bart-large-mnli", "<token>")
# Returns a json object.
client.classification("""John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.""",
  ["job", "nature", "space"],
  True)
require 'nlpcloud'

client = NLPCloud::Client.new('bart-large-mnli','<token>')
# Returns a json object.
client.classification("John Doe is a Go Developer at Google.
  He has been working there for 10 years and has been 
  awarded employee of the year.",
  ["job", "nature", "space"],
  true)
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("bart-large-mnli", "<token>")
// Returns a Classification struct.
client.Classification(`John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.`,
  []string{"job", "nature", "space"},
  true)
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('bart-large-mnli','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.classification(`John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.`,
  ["job", "nature", "space"],
  true)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('bart-large-mnli','<token>');
# Returns a json object.
$client->classification("John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.",
  array("job", "nature", "space"),
  true);

Output:

{
  "labels":["job", "space", "nature"],
  "scores":[0.9258800745010376, 0.1938474327325821, 0.010988450609147549]
}

This endpoint uses Facebook's Bart Large MNLI model to perform classification on a piece of text. It can also use your own transformers-based custom model (replace bart-large-mnli with the ID of your model in the URL).

Pass your text along with a list of labels. The model will give a score to each label. The higher the score, the more likely the text is related to this label.

You also need to specify whether more than one label can apply to your text, by passing the multi_class boolean.

Here is an example using Postman:

Classification example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/bart-large-mnli/classification

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text you want to analyze
labels array A list of labels you want to classify your text with
multi_class boolean Whether multiple labels should be applied to your text

Output

This endpoint returns a JSON object containing a list of labels along with a list of scores. Order matters. For example, the second score in the list corresponds to the second label.

Key Type Description
labels array of strings The labels you passed in your request
scores array of floats The scores applied to each label. Each score goes from 0 to 1. The higher the score, the more likely the text relates to the label.
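Since the two lists are index-aligned, pairing them is a one-liner. A minimal sketch (the `ranked_labels` helper is hypothetical, not part of any client library):

```python
def ranked_labels(labels, scores):
    """Pair each label with the score at the same index, best first."""
    return sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)

# Using the sample output above:
ranked = ranked_labels(
    ["job", "space", "nature"],
    [0.9258800745010376, 0.1938474327325821, 0.010988450609147549],
)
```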

Sentiment Analysis

Input:

curl "https://api.nlpcloud.io/v1/distilbert-base-uncased-finetuned-sst-2-english/sentiment" \
  -H "Authorization: Token <token>" \
  -X POST -d '{"text":"NLP Cloud proposes an amazing service!"}'
import nlpcloud

client = nlpcloud.Client("distilbert-base-uncased-finetuned-sst-2-english", "<token>")
# Returns a json object.
client.sentiment("NLP Cloud proposes an amazing service!")
require 'nlpcloud'

client = NLPCloud::Client.new('distilbert-base-uncased-finetuned-sst-2-english','<token>')
# Returns a json object.
client.sentiment("NLP Cloud proposes an amazing service!")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("distilbert-base-uncased-finetuned-sst-2-english", "<token>")
// Returns a Sentiment struct.
client.Sentiment("NLP Cloud proposes an amazing service!")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('distilbert-base-uncased-finetuned-sst-2-english','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.sentiment('NLP Cloud proposes an amazing service!')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('distilbert-base-uncased-finetuned-sst-2-english','<token>');
# Returns a json object.
$client->sentiment("NLP Cloud proposes an amazing service!");

Output:

{
  "scored_labels":[
    {
      "label":"POSITIVE",
      "score":0.9996881484985352
    }
  ]
}

This endpoint uses the DistilBERT Base Uncased Finetuned SST-2 model to perform sentiment analysis on a piece of text. It can also use your own transformers-based custom model (replace distilbert-base-uncased-finetuned-sst-2-english with the ID of your model in the URL).

Pass your text and let the model apply a POSITIVE or NEGATIVE label, with a score. The higher the score, the more confident the model is about the label.

Here is an example using Postman:

Sentiment analysis example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/distilbert-base-uncased-finetuned-sst-2-english/sentiment

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text you want to analyze

Output

This endpoint returns a JSON object containing a list of labels called scored_labels.

Key Type Description
scored_labels array of objects The returned scored labels. It contains one or two scored labels.

Each scored label is an object made up of the following elements:

Key Type Description
label string POSITIVE or NEGATIVE
score float The score applied to the label. It goes from 0 to 1. The higher the score, the more confident the model is about the detected sentiment.
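When scored_labels contains two entries, the dominant sentiment is simply the one with the highest score. A sketch (the `top_sentiment` helper is hypothetical):

```python
def top_sentiment(response):
    """Return (label, score) for the highest-scoring scored label."""
    best = max(response["scored_labels"], key=lambda sl: sl["score"])
    return best["label"], best["score"]

# With the sample output above:
label, score = top_sentiment({
    "scored_labels": [{"label": "POSITIVE", "score": 0.9996881484985352}]
})
```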

Question Answering

Input:

curl "https://api.nlpcloud.io/v1/roberta-base-squad2/question" \
  -H "Authorization: Token <token>" \
  -X POST -d '{
    "context":"French president Emmanuel Macron said the country was at war with an invisible, elusive enemy, and the measures were unprecedented, but circumstances demanded them.",
    "question":"Who is the French president?"
  }'
import nlpcloud

client = nlpcloud.Client("roberta-base-squad2", "<token>")
# Returns a json object.
client.question("""French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.""",
  "Who is the French president?")
require 'nlpcloud'

client = NLPCloud::Client.new('roberta-base-squad2','<token>')
# Returns a json object.
client.question("French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.",
  "Who is the French president?")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("roberta-base-squad2", "<token>")
// Returns a Question struct.
client.Question(`French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.`,
  "Who is the French president?")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('roberta-base-squad2','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.question(`French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.`,
  `Who is the French president?`)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('roberta-base-squad2','<token>');
# Returns a json object.
$client->question("French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.",
  "Who is the French president?");

Output:

{
  "answer":"Emmanuel Macron",
  "score":0.9595934152603149,
  "start":17,
  "end":32
}

This endpoint uses Deepset's Roberta Base Squad 2 model to answer questions based on a context. It can also use your own transformers-based custom model (replace roberta-base-squad2 with the ID of your model in the URL).

Pass your context and your question, and the model will return the answer along with a score (the higher the score, the more accurate the answer is), and the position of the answer in the context.

Here is an example using Postman:

Question answering example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/roberta-base-squad2/question

POST Values

These values must be encoded as JSON.

Key Type Description
context string The block of text that the model will use in order to find an answer to your question
question string The question you want to ask

Output

This endpoint returns a JSON object containing the following elements:

Key Type Description
answer string The answer to your question
score float The accuracy of the answer. It goes from 0 to 1. The higher the score, the more accurate the answer is.
start integer The position of the first character of the answer in your context.
end integer The position of the first character after the answer in your context.
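As with the entities endpoint, start and end index into the context you sent, with end exclusive, so a plain slice recovers the answer. A sketch using the sample above:

```python
context = ("French president Emmanuel Macron said the country was at war "
           "with an invisible, elusive enemy, and the measures were "
           "unprecedented, but circumstances demanded them.")

answer, start, end = "Emmanuel Macron", 17, 32

# `end` is the position of the first character after the answer,
# so slicing the context yields the answer verbatim.
extracted = context[start:end]
```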

Summarization

Input:

curl "https://api.nlpcloud.io/v1/bart-large-cnn/summarization" \
  -H "Authorization: Token <token>" \
  -X POST -d '{"text":"The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."}'
import nlpcloud

client = nlpcloud.Client("bart-large-cnn", "<token>")
# Returns a json object.
client.summarization("""The tower is 324 metres (1,063 ft) tall, 
  about the same height as an 81-storey building, and the tallest structure in Paris. 
  Its base is square, measuring 125 metres (410 ft) on each side. During its construction, 
  the Eiffel Tower surpassed the Washington Monument to become the tallest man-made 
  structure in the world, a title it held for 41 years until the Chrysler Building 
  in New York City was finished in 1930. It was the first structure to reach a 
  height of 300 metres. Due to the addition of a broadcasting aerial at the top of 
  the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). 
  Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure 
  in France after the Millau Viaduct.""")
require 'nlpcloud'

client = NLPCloud::Client.new('bart-large-cnn','<token>')
# Returns a json object.
client.summarization("The tower is 324 metres (1,063 ft) tall,  
  about the same height as an 81-storey building, and the tallest structure in Paris. 
  Its base is square, measuring 125 metres (410 ft) on each side. During its construction, 
  the Eiffel Tower surpassed the Washington Monument to become the tallest man-made 
  structure in the world, a title it held for 41 years until the Chrysler Building 
  in New York City was finished in 1930. It was the first structure to reach a 
  height of 300 metres. Due to the addition of a broadcasting aerial at the top of 
  the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). 
  Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure 
  in France after the Millau Viaduct.")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("bart-large-cnn", "<token>")
// Returns a Summarization struct.
client.Summarization(`The tower is 324 metres (1,063 ft) tall, 
  about the same height as an 81-storey building, and the tallest structure in Paris. 
  Its base is square, measuring 125 metres (410 ft) on each side. During its construction, 
  the Eiffel Tower surpassed the Washington Monument to become the tallest man-made 
  structure in the world, a title it held for 41 years until the Chrysler Building 
  in New York City was finished in 1930. It was the first structure to reach a 
  height of 300 metres. Due to the addition of a broadcasting aerial at the top of 
  the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). 
  Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure 
  in France after the Millau Viaduct.`)
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('bart-large-cnn','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.summarization(`The tower is 324 metres (1,063 ft) tall, 
  about the same height as an 81-storey building, and the tallest structure in Paris. 
  Its base is square, measuring 125 metres (410 ft) on each side. During its construction, 
  the Eiffel Tower surpassed the Washington Monument to become the tallest man-made 
  structure in the world, a title it held for 41 years until the Chrysler Building 
  in New York City was finished in 1930. It was the first structure to reach a 
  height of 300 metres. Due to the addition of a broadcasting aerial at the top of 
  the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). 
  Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure 
  in France after the Millau Viaduct.`)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('bart-large-cnn','<token>');
# Returns a json object.
$client->summarization("The tower is 324 metres (1,063 ft) tall, 
  about the same height as an 81-storey building, and the tallest structure in Paris. 
  Its base is square, measuring 125 metres (410 ft) on each side. During its construction, 
  the Eiffel Tower surpassed the Washington Monument to become the tallest man-made 
  structure in the world, a title it held for 41 years until the Chrysler Building 
  in New York City was finished in 1930. It was the first structure to reach a 
  height of 300 metres. Due to the addition of a broadcasting aerial at the top of 
  the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). 
  Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure 
  in France after the Millau Viaduct.");

Output:

{
  "summary_text": "The tower is 324 metres (1,063 ft) tall, 
  about the same height as an 81-storey building. Its base is square, 
  measuring 125 metres (410 ft) on each side. During its construction, 
  the Eiffel Tower surpassed the Washington Monument to become the 
  tallest man-made structure in the world."
}

This endpoint uses Facebook's Bart Large CNN model to summarize a block of text. It performs "extractive" summarization rather than "abstractive" summarization, meaning that no new sentences are generated: the least useful sentences are simply removed. It can also use your own transformers-based custom model (replace bart-large-cnn with the ID of your model in the URL).

Pass your block of text, and the model will return a summary.

Here is an example using Postman:

Summarization example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/bart-large-cnn/summarization

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text that you want to summarize

Output

This endpoint returns a JSON object containing the following elements:

Key Type Description
summary_text string The summary of your text

Translation

Input:

curl "https://api.nlpcloud.io/v1/<translation_model_name>/translation" \
  -H "Authorization: Token <token>" \
  -X POST -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
import nlpcloud

client = nlpcloud.Client("<translation_model_name>", "<token>")
# Returns a json object.
client.translation("John Doe has been working for Microsoft in Seattle since 1999.")
require 'nlpcloud'

client = NLPCloud::Client.new('<translation_model_name>','<token>')
# Returns a json object.
client.translation("John Doe has been working for Microsoft in Seattle since 1999.")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<translation_model_name>", "<token>")
// Returns a Translation struct.
client.Translation("John Doe has been working for Microsoft in Seattle since 1999.")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<translation_model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.translation(`John Doe has been working for Microsoft in Seattle since 1999.`)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<translation_model_name>','<token>');
# Returns a json object.
$client->translation("John Doe has been working for Microsoft in Seattle since 1999.");

Output (using the opus-mt-en-fr (English to French) model for the example):

{
  "translation_text": "John Doe travaille pour Microsoft à Seattle depuis 1999."
}

This endpoint uses Helsinki NLP's Opus MT models to translate text. Pass your block of text, and the model will return a translation. It can also use your own transformers-based custom model (replace the model name with the ID of your model in the URL).

Here are all the Helsinki NLP's Opus MT pre-trained models you can use:

We are planning to add many more models for translation in the future depending on customer requests. So if your use case is not listed above, please let us know and we will add it promptly (it should take about 1 day).

Here is an example of English to French translation with the opus-mt-en-fr model, using Postman:

Translation example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/<translation_model_name>/translation

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text that you want to translate

Output

This endpoint returns a JSON object containing the following elements:

Key Type Description
translation_text string The translation of your text

Language Detection

Input:

curl "https://api.nlpcloud.io/v1/python-langdetect/langdetection" \
  -H "Authorization: Token <token>" \
  -X POST -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999. Et il parle aussi un peu français."}'
# Not implemented yet.
# Not implemented yet.
// Not implemented yet.
// Not implemented yet.
# Not implemented yet.

Output:

{
  "languages": [
    {
      "en": 0.7142834369645996
    },
    {
      "fr": 0.28571521669868466
    }
  ]
}

This endpoint uses Python's LangDetect library to detect the languages in a text. It returns an array of all the languages detected in the text, with their likelihood. The results are sorted by likelihood, so the first language in the array is the most likely. Languages are identified by their two-character ISO codes.

This endpoint does not use deep learning under the hood, so the response time is extremely fast.

Here is an example of language detection using Postman:

Language detection example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/python-langdetect/langdetection

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text containing one or more languages you want to detect

Output

This endpoint returns a JSON object containing a languages array. Each element of the array contains a detected language and its likelihood, with the most likely language first:

Key Type Description
languages array of objects. Each object has a string as key and float as value The list of detected languages (in 2 characters ISO format) with their likelihood
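Each entry in languages is an object with a single language code as key, so extracting the most likely language takes one extra step. A sketch using the sample output above (since the client libraries do not cover this endpoint yet, you can call it with any HTTP client, as in the cURL example):

```python
# Sample output from the langdetection endpoint, as shown above:
result = {
    "languages": [
        {"en": 0.7142834369645996},
        {"fr": 0.28571521669868466},
    ]
}

# The list is sorted by likelihood, so the most likely language
# is the single key of the first entry:
most_likely = next(iter(result["languages"][0]))
print(most_likely)  # "en"
```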

Dependencies

Input:

curl "https://api.nlpcloud.io/v1/<spacy_model_name>/dependencies" \
  -H "Authorization: Token <token>" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google"}'
import nlpcloud

client = nlpcloud.Client("<spacy_model_name>", "<token>")
# Returns a json object.
client.dependencies("John Doe is a Go Developer at Google")
require 'nlpcloud'

client = NLPCloud::Client.new('<spacy_model_name>','<token>')
# Returns a json object.
client.dependencies("John Doe is a Go Developer at Google")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<spacy_model_name>", "<token>")
// Returns a Dependencies struct.
client.Dependencies("John Doe is a Go Developer at Google")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<spacy_model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.dependencies('John Doe is a Go Developer at Google')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<spacy_model_name>','<token>');
# Returns a json object.
$client->dependencies("John Doe is a Go Developer at Google");

Output:

{
  "words": [
    {
      "text": "John",
      "tag": "NNP"
    },
    {
      "text": "Doe",
      "tag": "NNP"
    },
    {
      "text": "is",
      "tag": "VBZ"
    },
    {
      "text": "a",
      "tag": "DT"
    },
    {
      "text": "Go",
      "tag": "NNP"
    },
    {
      "text": "Developer",
      "tag": "NN"
    },
    {
      "text": "at",
      "tag": "IN"
    },
    {
      "text": "Google",
      "tag": "NNP"
    }
  ],
  "arcs": [
    {
      "start": 0,
      "end": 1,
      "label": "compound",
      "text": "John",
      "dir": "left"
    },
    {
      "start": 1,
      "end": 2,
      "label": "nsubj",
      "text": "Doe",
      "dir": "left"
    },
    {
      "start": 3,
      "end": 5,
      "label": "det",
      "text": "a",
      "dir": "left"
    },
    {
      "start": 4,
      "end": 5,
      "label": "compound",
      "text": "Go",
      "dir": "left"
    },
    {
      "start": 2,
      "end": 5,
      "label": "attr",
      "text": "Developer",
      "dir": "right"
    },
    {
      "start": 5,
      "end": 6,
      "label": "prep",
      "text": "at",
      "dir": "right"
    },
    {
      "start": 6,
      "end": 7,
      "label": "pobj",
      "text": "Google",
      "dir": "right"
    }
  ]
}

This endpoint uses any spaCy model (either a spaCy pre-trained model or your own spaCy custom model) to perform Part-of-Speech (POS) tagging, and returns the dependencies (arcs) extracted from the text you pass in.

See the spaCy dependency parsing documentation for more details.

Here are all the spaCy models you can use (see the models section for more details):

Each spaCy pre-trained model has a list of supported built-in part-of-speech tags and dependency labels. For example, the list of tags and dependency labels for the en_core_web_lg model can be found here:

For more details about what these abbreviations mean, see spaCy's glossary.

HTTP Request

POST https://api.nlpcloud.io/v1/<spacy_model_name>/dependencies

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text you want to analyze

Output

This endpoint returns 2 objects: words and arcs.

words contains an array of the following elements:

Key Type Description
text string The content of the word
tag string The part of speech tag for the word (https://spacy.io/api/annotation#pos-tagging)

arcs contains an array of the following elements:

Key Type Description
text string The content of the word
label string The syntactic dependency connecting child to head (https://spacy.io/api/annotation#pos-tagging)
start integer Position of the word if direction of the arc is left. Position of the head if direction of the arc is right.
end integer Position of the head if direction of the arc is left. Position of the word if direction of the arc is right.
dir string Direction of the dependency arc (left or right)
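The meaning of start and end flips with the arc direction, which is easy to get backwards. Here is a sketch that turns a (truncated) sample output into readable child --label--> head pairs:

```python
# Sample output from the dependencies endpoint (truncated to two arcs):
words = ["John", "Doe", "is", "a", "Go", "Developer", "at", "Google"]
arcs = [
    {"start": 0, "end": 1, "label": "compound", "text": "John", "dir": "left"},
    {"start": 6, "end": 7, "label": "pobj", "text": "Google", "dir": "right"},
]

# Per the table above: for a left arc the word is at `start` and its head
# at `end`; for a right arc the head is at `start` and the word at `end`.
for arc in arcs:
    if arc["dir"] == "left":
        child, head = words[arc["start"]], words[arc["end"]]
    else:
        head, child = words[arc["start"]], words[arc["end"]]
    print(f'{child} --{arc["label"]}--> {head}')
```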

Sentence Dependencies

Input:

curl "https://api.nlpcloud.io/v1/<spacy_model_name>/sentence-dependencies" \
  -H "Authorization: Token <token>" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google. Before that, he worked at Microsoft."}'
import nlpcloud

client = nlpcloud.Client("<spacy_model_name>", "<token>")
# Returns a json object.
client.sentence_dependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
require 'nlpcloud'

client = NLPCloud::Client.new('<spacy_model_name>','<token>')
# Returns a json object.
client.sentence_dependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<spacy_model_name>", "<token>")
// Returns a SentenceDependencies struct.
client.SentenceDependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<spacy_model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.sentenceDependencies('John Doe is a Go Developer at Google. Before that, he worked at Microsoft.')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<spacy_model_name>','<token>');
# Returns a json object.
$client->sentenceDependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.");

Output:

{
  "sentence_dependencies": [
    {
      "sentence": "John Doe is a Go Developer at Google.",
      "dependencies": {
        "words": [
          {
            "text": "John",
            "tag": "NNP"
          },
          {
            "text": "Doe",
            "tag": "NNP"
          },
          {
            "text": "is",
            "tag": "VBZ"
          },
          {
            "text": "a",
            "tag": "DT"
          },
          {
            "text": "Go",
            "tag": "NNP"
          },
          {
            "text": "Developer",
            "tag": "NN"
          },
          {
            "text": "at",
            "tag": "IN"
          },
          {
            "text": "Google",
            "tag": "NNP"
          },
          {
            "text": ".",
            "tag": "."
          }
        ],
        "arcs": [
          {
            "start": 0,
            "end": 1,
            "label": "compound",
            "text": "John",
            "dir": "left"
          },
          {
            "start": 1,
            "end": 2,
            "label": "nsubj",
            "text": "Doe",
            "dir": "left"
          },
          {
            "start": 3,
            "end": 5,
            "label": "det",
            "text": "a",
            "dir": "left"
          },
          {
            "start": 4,
            "end": 5,
            "label": "compound",
            "text": "Go",
            "dir": "left"
          },
          {
            "start": 2,
            "end": 5,
            "label": "attr",
            "text": "Developer",
            "dir": "right"
          },
          {
            "start": 5,
            "end": 6,
            "label": "prep",
            "text": "at",
            "dir": "right"
          },
          {
            "start": 6,
            "end": 7,
            "label": "pobj",
            "text": "Google",
            "dir": "right"
          },
          {
            "start": 2,
            "end": 8,
            "label": "punct",
            "text": ".",
            "dir": "right"
          }
        ]
      }
    },
    {
      "sentence": "Before that, he worked at Microsoft.",
      "dependencies": {
        "words": [
          {
            "text": "Before",
            "tag": "IN"
          },
          {
            "text": "that",
            "tag": "DT"
          },
          {
            "text": ",",
            "tag": ","
          },
          {
            "text": "he",
            "tag": "PRP"
          },
          {
            "text": "worked",
            "tag": "VBD"
          },
          {
            "text": "at",
            "tag": "IN"
          },
          {
            "text": "Microsoft",
            "tag": "NNP"
          },
          {
            "text": ".",
            "tag": "."
          }
        ],
        "arcs": [
          {
            "start": 9,
            "end": 13,
            "label": "prep",
            "text": "Before",
            "dir": "left"
          },
          {
            "start": 9,
            "end": 10,
            "label": "pobj",
            "text": "that",
            "dir": "right"
          },
          {
            "start": 11,
            "end": 13,
            "label": "punct",
            "text": ",",
            "dir": "left"
          },
          {
            "start": 12,
            "end": 13,
            "label": "nsubj",
            "text": "he",
            "dir": "left"
          },
          {
            "start": 13,
            "end": 14,
            "label": "prep",
            "text": "at",
            "dir": "right"
          },
          {
            "start": 14,
            "end": 15,
            "label": "pobj",
            "text": "Microsoft",
            "dir": "right"
          },
          {
            "start": 13,
            "end": 16,
            "label": "punct",
            "text": ".",
            "dir": "right"
          }
        ]
      }
    }
  ]
}

This endpoint uses a spaCy model (either a spaCy pre-trained model or your own spaCy custom model) to perform Part-of-Speech (POS) tagging, and returns the dependencies (arcs) extracted from the text you pass in, sentence by sentence.

See the spaCy dependency parsing documentation for more details.

Here are all the spaCy models you can use (see the models section for more details):

Each spaCy pre-trained model has a list of supported built-in part-of-speech tags and dependency labels. For example, the list of tags and dependency labels for the en_core_web_lg model can be found here:

For more details about what these abbreviations mean, see spaCy's glossary.

HTTP Request

POST https://api.nlpcloud.io/v1/<spacy_model_name>/sentence-dependencies

POST Values

These values must be encoded as JSON.

Parameter Type Description
text string The block of text containing parts of speech to extract

Output

This endpoint returns a sentence_dependencies object containing an array of sentence dependencies objects. Each sentence dependency object contains the following:

Key Type Description
sentence string The sentence being analyzed
dependencies object An object containing the words and arcs

words contains an array of the following elements:

Key Type Description
text string The content of the word
tag string The part of speech tag for the word (https://spacy.io/api/annotation#pos-tagging)

arcs contains an array of the following elements:

Key Type Description
text string The content of the word
label string The syntactic dependency connecting child to head (https://spacy.io/api/annotation#pos-tagging)
start integer Position of the word if direction of the arc is left. Position of the head if direction of the arc is right.
end integer Position of the head if direction of the arc is left. Position of the word if direction of the arc is right.
dir string Direction of the dependency arc (left or right)

Tokens

Input:

curl "https://api.nlpcloud.io/v1/<spacy_model_name>/tokens" \
  -H "Authorization: Token <token>" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google."}'
# Not available yet.
# Not available yet.
// Not available yet.
// Not available yet.
# Not available yet.

Output:

{
  "tokens": [
    {
      "start": 0,
      "end": 4,
      "index": 1,
      "text": "John",
      "ws_after": true
    },
    {
      "start": 5,
      "end": 7,
      "index": 2,
      "text": "is",
      "ws_after": true
    },
    {
      "start": 8,
      "end": 9,
      "index": 3,
      "text": "a",
      "ws_after": true
    },
    {
      "start": 10,
      "end": 12,
      "index": 4,
      "text": "Go",
      "ws_after": true
    },
    {
      "start": 13,
      "end": 22,
      "index": 5,
      "text": "Developer",
      "ws_after": true
    },
    {
      "start": 23,
      "end": 25,
      "index": 6,
      "text": "at",
      "ws_after": true
    },
    {
      "start": 26,
      "end": 32,
      "index": 7,
      "text": "Google",
      "ws_after": false
    },
    {
      "start": 32,
      "end": 33,
      "index": 8,
      "text": ".",
      "ws_after": false
    }
  ]
}

This endpoint uses a spaCy model (either a spaCy pre-trained model or your own spaCy custom model) to tokenize the text you pass in.

See the spaCy tokenization documentation for more details.

Here are all the spaCy models you can use (see the models section for more details):

It returns a list of tokens. Each token is an object made up of several elements. See below for the details.

HTTP Request

POST https://api.nlpcloud.io/v1/<spacy_model_name>/tokens

POST Values

These values must be encoded as JSON.

Parameter Type Description
text string The block of text containing the tokens to extract

Output

This endpoint returns a tokens object containing an array of token objects. Each token object contains the following:

Key Type Description
text string The content of the extracted token
start integer The position of the first character of the token (starting at 0)
end integer The position of the first character after the token
index integer The position of the token in the sentence (starting at 1)
ws_after boolean Whether the token is followed by a whitespace
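The ws_after flag is enough to reconstruct the original spacing of the text. A sketch using a few tokens shaped like the sample output above:

```python
# A few tokens shaped like the output of the tokens endpoint:
tokens = [
    {"start": 23, "end": 25, "index": 6, "text": "at", "ws_after": True},
    {"start": 26, "end": 32, "index": 7, "text": "Google", "ws_after": False},
    {"start": 32, "end": 33, "index": 8, "text": ".", "ws_after": False},
]

# Append a space after each token whose ws_after is true:
rebuilt = "".join(t["text"] + (" " if t["ws_after"] else "") for t in tokens)
print(rebuilt)  # "at Google."
```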

Library Versions

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/versions"
# Returns a json object.
client.lib_versions()
# Returns a json object.
client.lib_versions()
// Returns a LibVersion struct.
client.LibVersions()
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.libVersions()
# Returns a json object.
$client.libVersions()

This endpoint returns the versions of the libraries used under the hood with the model.

Output:

// Example for the bart-large-mnli model:
{
  "pytorch": "1.7.1",
  "transformers": "4.3.2"
}

HTTP Request

GET https://api.nlpcloud.io/v1/<model_name>/versions

Batch Processing

If you have a large batch of requests to process, it is possible to process them all at once thanks to batch processing.

Batch processing is available in your dashboard, in the "Batch Processing" section.

See more details below, and if you have any questions please don't hesitate to ask us for help, it will be a pleasure to assist.

Input

In order to launch a batch processing job, create a comma-separated CSV file containing all the requests you want to process.

The column names should match the names of the fields expected by the API endpoint you are using. For example, if you are sending a batch to the en_core_web_lg/entities endpoint, you should have a single column called text.

Each new row is a new request to process.

In addition to the file, you should also specify the API endpoint you want to use.

Examples

CSV file for batch processing on the en_core_web_lg/entities endpoint:

text
John Doe is working for Microsoft
Max paid his shoes ten dollars
The meeting will take place in NYC at 10am

CSV file for batch processing on the bart-large-mnli/classification endpoint:

text,labels,multi_class
John Doe is working for Microsoft,"['job', 'space', 'nature']",true
Max paid his shoes ten dollars,"['job', 'space', 'nature']",true
The meeting will take place in NYC at 10am,"['job', 'space', 'nature']",true

See CSV files examples on the right.
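Writing the file with a CSV library avoids quoting mistakes such as an unclosed bracket or an unquoted comma. A Python sketch producing a batch file for the classification example above:

```python
import csv
import io

# Rows for the bart-large-mnli/classification endpoint.
# Column names must match the field names the endpoint expects:
rows = [
    {"text": "John Doe is working for Microsoft",
     "labels": "['job', 'space', 'nature']",
     "multi_class": "true"},
]

buf = io.StringIO()  # use open("batch.csv", "w", newline="") to write a real file
writer = csv.DictWriter(buf, fieldnames=["text", "labels", "multi_class"])
writer.writeheader()
writer.writerows(rows)

# The csv module quotes the labels field for you because it contains commas:
print(buf.getvalue())
```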

Output

The output file will be your initial file, but with an additional column called result containing the JSON result from the API.

Lifecycle

Once your file is uploaded, a first formatting analysis is performed. If the CSV file has formatting issues, you will receive an email to inform you.

Once the batch process starts, if processing takes more than one day, you will receive one email per day informing you of the progress on your file.

If an error occurs, the batch processing stops and you receive an email with the results generated up to the error, along with the line number in your CSV file that caused it, and the error itself.

Once the batch processing is finished, you receive an email containing a link for you to download the result file.

Rate Limiting

Rate limiting depends on the plan you subscribed to.

For example, on the free plan you can send up to 3 requests per minute. If you reach the limit, the API returns a 429 HTTP error.
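A simple way to deal with the limit is to back off and retry when the API returns a 429. A minimal, generic sketch (the helper name and delays are illustrative, not part of the client libraries):

```python
import time

def with_retries(send_request, max_retries=3, base_delay=1.0):
    """Retry `send_request` with exponential backoff while it returns
    a response whose status_code is 429 (Too Many Requests)."""
    for attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429:
            return response
        time.sleep(base_delay * (2 ** attempt))
    return response  # still rate limited after all retries
```

You would wrap your HTTP call in a callable, e.g. `with_retries(lambda: requests.post(url, headers=headers, json=payload))`.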

Errors

The NLP Cloud API uses the following HTTP error codes:

Code Meaning
400 Bad Request -- Your request is invalid.
401 Unauthorized -- Your API token is wrong.
403 Forbidden -- You do not have the sufficient rights to access the resource. Please make sure you subscribed to the proper plan that grants you access to this resource.
404 Not Found -- The specified resource could not be found.
405 Method Not Allowed -- You tried to access a resource with an invalid method.
406 Not Acceptable -- You requested a format that isn't JSON.
429 Too Many Requests -- You made too many requests in a short while, please slow down.
500 Internal Server Error -- Sorry, we had a problem with our server. Please try again later.
503 Service Unavailable -- Sorry, we are temporarily offline for maintenance. Please try again later.

If you run into any problem, do not hesitate to contact us: [email protected].