
Introduction

Get entities using the en_core_web_sm pre-trained model:

curl "https://api.nlpcloud.io/v1/en_core_web_sm/entities" \
  -H "Authorization: Token 4eC39HqLyjWDarjtT1zdp7dc" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google"}'
import nlpcloud

client = nlpcloud.Client("en_core_web_sm", "4eC39HqLyjWDarjtT1zdp7dc")
# Returns a json object.
client.entities("John Doe is a Go Developer at Google")
require 'nlpcloud'

client = NLPCloud::Client.new('en_core_web_sm','4eC39HqLyjWDarjtT1zdp7dc')
# Returns a json object.
client.entities("John Doe is a Go Developer at Google")
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("en_core_web_sm", "4eC39HqLyjWDarjtT1zdp7dc")
    // Returns an Entities struct.
    client.Entities("John Doe is a Go Developer at Google")
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('en_core_web_sm','4eC39HqLyjWDarjtT1zdp7dc')

// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.entities("John Doe is a Go Developer at Google")
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
<?php
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('en_core_web_sm','4eC39HqLyjWDarjtT1zdp7dc');
# Returns a json object.
$client->entities('John Doe is a Go Developer at Google');
?>

Get entities using your own model with ID 7894:

curl "https://api.nlpcloud.io/v1/custom_model/7894/entities" \
  -H "Authorization: Token 4eC39HqLyjWDarjtT1zdp7dc" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google"}'
import nlpcloud

client = nlpcloud.Client("custom_model/7894", "4eC39HqLyjWDarjtT1zdp7dc")
# Returns a json object.
client.entities("John Doe is a Go Developer at Google")
require 'nlpcloud'

client = NLPCloud::Client.new('custom_model/7894','4eC39HqLyjWDarjtT1zdp7dc')
# Returns a json object.
client.entities("John Doe is a Go Developer at Google")
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("custom_model/7894", "4eC39HqLyjWDarjtT1zdp7dc")
    // Returns an Entities struct.
    client.Entities("John Doe is a Go Developer at Google")
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('custom_model/7894','4eC39HqLyjWDarjtT1zdp7dc')

client.entities("John Doe is a Go Developer at Google")
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
<?php
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('custom_model/7894','4eC39HqLyjWDarjtT1zdp7dc');
# Returns a json object.
$client->entities('John Doe is a Go Developer at Google');
?>

Output:

[
  {
    "end": 8,
    "start": 0,
    "text": "John Doe",
    "type": "PERSON"
  },
  {
    "end": 25,
    "start": 13,
    "text": "Go Developer",
    "type": "POSITION"
  },
  {
    "end": 35,
    "start": 30,
    "text": "Google",
    "type": "ORG"
  }
]

Welcome to the NLP Cloud API documentation. If you have feedback about the API, the documentation, or the client libraries, please let us know!

If you have not done so yet, please retrieve a free API token from your dashboard. Also, do not hesitate to contact us: [email protected].

On the right is a full example retrieving entities from a block of text, using both the pre-trained spaCy en_core_web_sm model and your own model with ID 7894 (custom_model/7894). You can upload your own spaCy models in your dashboard.

The current spaCy version used by the API for the pre-trained models is 3.0.1. For your custom models, the spaCy version is automatically detected from your model.

For automatic discovery of the API with OpenAPI (v3), please use this openapi.json file.

Set Up

Client Installation

If you are using one of our client libraries, here is how to install them.

Python

Install with pip.

pip install nlpcloud

More details on the source repo: https://github.com/nlpcloud/nlpcloud-python

Ruby

Install with gem.

gem install nlpcloud

More details on the source repo: https://github.com/nlpcloud/nlpcloud-ruby

Go

Install with go get.

go get -u github.com/nlpcloud/nlpcloud-go

More details on the source repo: https://github.com/nlpcloud/nlpcloud-go

Node.js

Install with NPM.

npm install nlpcloud --save

More details on the source repo: https://github.com/nlpcloud/nlpcloud-js

PHP

Install with Composer.

Create a composer.json file containing at least the following:

{
    "require": {
        "nlpcloud/nlpcloud-client": "*"
    }
}

Then launch the following:

composer install

More details on the source repo: https://github.com/nlpcloud/nlpcloud-php

Authentication

Replace with your token:

curl "https://api.nlpcloud.io/v1/en_core_web_sm/entities" \
  -H "Authorization: Token <token>"
import nlpcloud

client = nlpcloud.Client("en_core_web_sm", "<token>")
require 'nlpcloud'

client = NLPCloud::Client.new('en_core_web_sm','<token>')
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("en_core_web_sm", "<token>")
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('en_core_web_sm','<token>')
use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('en_core_web_sm','<token>');

Add your API token after the Token keyword in an Authorization header. You should include this header in all your requests: Authorization: Token <token>.

If not done yet, please get a free API token in your dashboard.

Be sure to keep this token secure! Do not share it in publicly accessible areas such as GitHub, client-side code, and so forth.

All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.

For the sake of simplicity, this token will be omitted in the rest of the documentation.
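
Here is a minimal sketch, using the Python client, of how you can keep the token out of your source code by reading it from an environment variable (the variable name NLPCLOUD_TOKEN is our own choice, not something required by the library):

import os

import nlpcloud

# Read the token from an environment variable so it never ends up in
# version control or client-side code. NLPCLOUD_TOKEN is an assumed
# variable name, not one imposed by the library.
token = os.environ["NLPCLOUD_TOKEN"]

client = nlpcloud.Client("en_core_web_sm", token)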

Versioning

Replace with the right API version:

curl "https://api.nlpcloud.io/<version>/en_core_web_sm/entities"
# The latest API version is automatically set by the library.
# The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.

The latest API version is v1.

The API version comes right after the domain name, and before the model name.

Encoding

POST JSON data:

curl "https://api.nlpcloud.io/v1/custom_model_1/entities" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google"}'
# Encoding is automatically handled by the library.
# Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.

You should send JSON-encoded data in your POST requests.
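
If you are not using one of the client libraries, here is a minimal sketch of the same POST request in Python with the requests package (our own choice of HTTP library, not a dependency of the API):

import requests

# Replace <token> with your own API token.
headers = {"Authorization": "Token <token>"}

# The json= parameter serializes the payload and sets the
# Content-Type: application/json header for you.
response = requests.post(
    "https://api.nlpcloud.io/v1/en_core_web_sm/entities",
    headers=headers,
    json={"text": "John Doe is a Go Developer at Google"},
)
print(response.json())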

Models

Replace with the right pre-trained model:

curl "https://api.nlpcloud.io/v1/<model>/entities"
# Set the model during client initialization.
client = nlpcloud.Client("<model>", "<token>")
client = NLPCloud::Client.new('<model>','<token>')
client := nlpcloud.NewClient("<model>", "<token>")
const client = new NLPCloudClient('<model>', '<token>')
$client = new \NLPCloud\NLPCloud('<model>','<token>');

Example: pre-trained en_core_web_sm model:

curl "https://api.nlpcloud.io/v1/en_core_web_sm/entities"
client = nlpcloud.Client("en_core_web_sm", "<token>")
client = NLPCloud::Client.new('en_core_web_sm','<token>')
client := nlpcloud.NewClient("en_core_web_sm", "<token>")
const client = new NLPCloudClient('en_core_web_sm', '<token>')
$client = new \NLPCloud\NLPCloud('en_core_web_sm','<token>');

Example: your own model with ID 7894:

curl "https://api.nlpcloud.io/v1/custom_model/7894/entities"
client = nlpcloud.Client("custom_model/7894", "<token>")
client = NLPCloud::Client.new('custom_model/7894','<token>')
client := nlpcloud.NewClient("custom_model/7894", "<token>")
const client = new NLPCloudClient('custom_model/7894', '<token>')
$client = new \NLPCloud\NLPCloud('custom_model/7894','<token>');

All the spaCy pre-trained models are available. You can also use your own models by uploading them in your dashboard (see this section about how to export and package your models).

The name of the model comes right after the API version, and before the name of the endpoint.

If you are using your own model, the model name is made up of 2 things: custom_model and the ID of your model. For example, if your model ID is 7894, you should use custom_model/7894. Your model ID appears in your dashboard once you upload the model and the instance creation is finished.

See the examples on the right showing how to select the pre-trained en_core_web_sm model, and how to select your own model with ID 7894.

See the comprehensive list of pre-trained models below.

Upload Model

Export in Python script:

nlp.to_disk("/path")

Package:

python -m spacy package /path/to/exported/model /path/to/packaged/model

Archive as .tar.gz:

# Go to /path/to/packaged/model
python setup.py sdist

Or archive as .whl:

# Go to /path/to/packaged/model
python setup.py bdist_wheel

In order to upload your custom model in your dashboard, you first need to export it and package it as a Python module.

Here is what you should do:

  1. Export your model to disk using the spaCy to_disk("/path") command.
  2. Package your exported model using the spacy package command.
  3. Archive your packaged model either as a .tar.gz archive using python setup.py sdist or as a Python wheel using python setup.py bdist_wheel (both formats are accepted).
  4. Retrieve your archive from the newly created dist folder and upload it in your dashboard.
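
For reference, here is a minimal sketch, in Python, chaining steps 1 to 3 (it assumes spaCy is installed locally and uses en_core_web_sm as a stand-in for your own trained pipeline; the paths are placeholders):

import subprocess
from pathlib import Path

import spacy

# Step 1: export the pipeline to disk with to_disk().
nlp = spacy.load("en_core_web_sm")  # stand-in for your own trained pipeline
nlp.to_disk("./exported_model")

# Step 2: package the exported pipeline as a Python module.
Path("./packaged_model").mkdir(exist_ok=True)
subprocess.run(
    ["python", "-m", "spacy", "package", "./exported_model", "./packaged_model"],
    check=True,
)

# Step 3: build a .tar.gz archive (use bdist_wheel instead for a .whl).
# spacy package creates one subdirectory per packaged pipeline; run setup.py
# from inside it so the archive lands in its dist folder.
package_dir = next(p for p in Path("./packaged_model").iterdir() if p.is_dir())
subprocess.run(["python", "setup.py", "sdist"], cwd=package_dir, check=True)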

If you experience difficulties, do not hesitate to contact us; it will be a pleasure to help!

Endpoints

Entities

Input:

curl "https://api.nlpcloud.io/v1/en_core_web_sm/entities" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google"}'
# Returns a json object.
client.entities("John Doe is a Go Developer at Google")
# Returns a json object.
client.entities("John Doe is a Go Developer at Google")
// Returns an Entities struct.
client.Entities("John Doe is a Go Developer at Google")
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.entities('John Doe is a Go Developer at Google')
# Returns a json object.
$client->entities('John Doe is a Go Developer at Google');

Output:

[
  {
    "end": 8,
    "start": 0,
    "text": "John Doe",
    "type": "PERSON"
  },
  {
    "end": 25,
    "start": 13,
    "text": "Go Developer",
    "type": "POSITION"
  },
  {
    "end": 35,
    "start": 30,
    "text": "Google",
    "type": "ORG"
  }
]

This endpoint returns entities extracted from the passed in text.

See the spaCy named entity recognition documentation for more details.

You should POST your block of text as a JSON object with text as the key and your text as the value.

HTTP Request

POST https://api.nlpcloud.io/v1/en_core_web_sm/entities

POST Parameters

Parameter Description
text The block of text containing entities to extract

Output

A list of entities. Each entity is made up of the following:

Key Description
start The position of the 1st character of the entity
end The position of the last character of the entity
text The content of the entity
type The type of entity (PERSON, ORG, etc.)
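
As a quick illustration of how to consume this output, here is a minimal sketch with the Python client that keeps only the organizations (it assumes the client returns the JSON output above as a Python list, and relies solely on the documented type and text keys):

import nlpcloud

# Replace <token> with your own API token.
client = nlpcloud.Client("en_core_web_sm", "<token>")

entities = client.entities("John Doe is a Go Developer at Google")

# Each entity is a dict with start, end, text and type keys.
organizations = [entity["text"] for entity in entities if entity["type"] == "ORG"]
print(organizations)  # e.g. ['Google']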

Dependencies

Input:

curl "https://api.nlpcloud.io/v1/en_core_web_sm/dependencies" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google"}'
# Returns a json object.
client.dependencies("John Doe is a Go Developer at Google")
# Returns a json object.
client.dependencies("John Doe is a Go Developer at Google")
// Returns a Dependencies struct.
client.Dependencies("John Doe is a Go Developer at Google")
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.dependencies('John Doe is a Go Developer at Google')
# Returns a json object.
$client->dependencies('John Doe is a Go Developer at Google');

Output:

{
  "words": [
    {
      "text": "John",
      "tag": "NNP"
    },
    {
      "text": "Doe",
      "tag": "NNP"
    },
    {
      "text": "is",
      "tag": "VBZ"
    },
    {
      "text": "a",
      "tag": "DT"
    },
    {
      "text": "Go",
      "tag": "NNP"
    },
    {
      "text": "Developer",
      "tag": "NN"
    },
    {
      "text": "at",
      "tag": "IN"
    },
    {
      "text": "Google",
      "tag": "NNP"
    }
  ],
  "arcs": [
    {
      "start": 0,
      "end": 1,
      "label": "compound",
      "text": "John",
      "dir": "left"
    },
    {
      "start": 1,
      "end": 2,
      "label": "nsubj",
      "text": "Doe",
      "dir": "left"
    },
    {
      "start": 3,
      "end": 5,
      "label": "det",
      "text": "a",
      "dir": "left"
    },
    {
      "start": 4,
      "end": 5,
      "label": "compound",
      "text": "Go",
      "dir": "left"
    },
    {
      "start": 2,
      "end": 5,
      "label": "attr",
      "text": "Developer",
      "dir": "right"
    },
    {
      "start": 5,
      "end": 6,
      "label": "prep",
      "text": "at",
      "dir": "right"
    },
    {
      "start": 6,
      "end": 7,
      "label": "pobj",
      "text": "Google",
      "dir": "right"
    }
  ]
}

This endpoint performs part of speech tagging and returns dependencies (arcs) extracted from the passed in text.

See the spaCy dependency parsing documentation for more details.

You should POST your block of text as a JSON object with text as the key and your text as the value.

HTTP Request

POST https://api.nlpcloud.io/v1/en_core_web_sm/dependencies

POST Parameters

Parameter Description
text The block of text to analyze

Output

Two objects: words and arcs.

words contains an array of the following:

Key Description
text The content of the word
tag The part of speech tag for the word (https://spacy.io/api/annotation#pos-tagging)

arcs contains an array of the following:

Key Description
text The content of the word
label The syntactic dependency connecting child to head (https://spacy.io/api/annotation#pos-tagging)
start Position of the word if direction of the arc is left. Position of the head if direction of the arc is right.
end Position of the head if direction of the arc is left. Position of the word if direction of the arc is right.
dir Direction of the dependency arc (left or right)
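
To make the start, end and dir convention concrete, here is a minimal sketch with the Python client that prints each arc as a dependent/head pair (it assumes the client returns the JSON output above as a Python dict):

import nlpcloud

# Replace <token> with your own API token.
client = nlpcloud.Client("en_core_web_sm", "<token>")

result = client.dependencies("John Doe is a Go Developer at Google")
words = result["words"]

for arc in result["arcs"]:
    # Left arc: start is the dependent word, end is the head.
    # Right arc: start is the head, end is the dependent word.
    if arc["dir"] == "left":
        dependent, head = words[arc["start"]], words[arc["end"]]
    else:
        head, dependent = words[arc["start"]], words[arc["end"]]
    print(f"{dependent['text']} --{arc['label']}--> {head['text']}")

With the example output above, the first arc prints John --compound--> Doe.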

Sentence Dependencies

Input:

curl "https://api.nlpcloud.io/v1/en_core_web_sm/sentence-dependencies" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google. Before that, he worked at Microsoft."}'
# Returns a json object.
client.sentence_dependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
# Returns a json object.
client.sentence_dependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
// Returns a SentenceDependencies struct.
client.SentenceDependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.sentenceDependencies('John Doe is a Go Developer at Google. Before that, he worked at Microsoft.')
# Returns a json object.
$client->sentenceDependencies('John Doe is a Go Developer at Google. Before that, he worked at Microsoft.');

Output:

{
  "sentence_dependencies": [
    {
      "sentence": "John Doe is a Go Developer at Google.",
      "dependencies": {
        "words": [
          {
            "text": "John",
            "tag": "NNP"
          },
          {
            "text": "Doe",
            "tag": "NNP"
          },
          {
            "text": "is",
            "tag": "VBZ"
          },
          {
            "text": "a",
            "tag": "DT"
          },
          {
            "text": "Go",
            "tag": "NNP"
          },
          {
            "text": "Developer",
            "tag": "NN"
          },
          {
            "text": "at",
            "tag": "IN"
          },
          {
            "text": "Google",
            "tag": "NNP"
          },
          {
            "text": ".",
            "tag": "."
          }
        ],
        "arcs": [
          {
            "start": 0,
            "end": 1,
            "label": "compound",
            "text": "John",
            "dir": "left"
          },
          {
            "start": 1,
            "end": 2,
            "label": "nsubj",
            "text": "Doe",
            "dir": "left"
          },
          {
            "start": 3,
            "end": 5,
            "label": "det",
            "text": "a",
            "dir": "left"
          },
          {
            "start": 4,
            "end": 5,
            "label": "compound",
            "text": "Go",
            "dir": "left"
          },
          {
            "start": 2,
            "end": 5,
            "label": "attr",
            "text": "Developer",
            "dir": "right"
          },
          {
            "start": 5,
            "end": 6,
            "label": "prep",
            "text": "at",
            "dir": "right"
          },
          {
            "start": 6,
            "end": 7,
            "label": "pobj",
            "text": "Google",
            "dir": "right"
          },
          {
            "start": 2,
            "end": 8,
            "label": "punct",
            "text": ".",
            "dir": "right"
          }
        ]
      }
    },
    {
      "sentence": "Before that, he worked at Microsoft.",
      "dependencies": {
        "words": [
          {
            "text": "Before",
            "tag": "IN"
          },
          {
            "text": "that",
            "tag": "DT"
          },
          {
            "text": ",",
            "tag": ","
          },
          {
            "text": "he",
            "tag": "PRP"
          },
          {
            "text": "worked",
            "tag": "VBD"
          },
          {
            "text": "at",
            "tag": "IN"
          },
          {
            "text": "Microsoft",
            "tag": "NNP"
          },
          {
            "text": ".",
            "tag": "."
          }
        ],
        "arcs": [
          {
            "start": 9,
            "end": 13,
            "label": "prep",
            "text": "Before",
            "dir": "left"
          },
          {
            "start": 9,
            "end": 10,
            "label": "pobj",
            "text": "that",
            "dir": "right"
          },
          {
            "start": 11,
            "end": 13,
            "label": "punct",
            "text": ",",
            "dir": "left"
          },
          {
            "start": 12,
            "end": 13,
            "label": "nsubj",
            "text": "he",
            "dir": "left"
          },
          {
            "start": 13,
            "end": 14,
            "label": "prep",
            "text": "at",
            "dir": "right"
          },
          {
            "start": 14,
            "end": 15,
            "label": "pobj",
            "text": "Microsoft",
            "dir": "right"
          },
          {
            "start": 13,
            "end": 16,
            "label": "punct",
            "text": ".",
            "dir": "right"
          }
        ]
      }
    }
  ]
}

This endpoint performs part of speech tagging and returns dependencies (arcs) extracted from the passed in text, for several sentences.

See the spaCy dependency parsing documentation for more details.

You should POST your block of text as a JSON object with text as the key and your text as the value.

HTTP Request

POST https://api.nlpcloud.io/v1/en_core_web_sm/sentence-dependencies

POST Parameters

Parameter Description
text The block of text to analyze

Output

A sentence_dependencies key containing an array of sentence dependency objects. Each sentence dependency object contains the following:

Key Description
sentence The sentence being analyzed
dependencies An object containing the words and arcs

words contains an array of the following:

Key Description
text The content of the word
tag The part of speech tag for the word (https://spacy.io/api/annotation#pos-tagging)

arcs contains an array of the following:

Key Description
text The content of the word
label The syntactic dependency connecting child to head (https://spacy.io/api/annotation#pos-tagging)
start Position of the word if direction of the arc is left. Position of the head if direction of the arc is right.
end Position of the head if direction of the arc is left. Position of the word if direction of the arc is right.
dir Direction of the dependency arc (left or right)
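
As above, here is a minimal sketch with the Python client that loops over each analyzed sentence (it assumes the client returns the JSON output above as a Python dict):

import nlpcloud

# Replace <token> with your own API token.
client = nlpcloud.Client("en_core_web_sm", "<token>")

result = client.sentence_dependencies(
    "John Doe is a Go Developer at Google. Before that, he worked at Microsoft."
)

for item in result["sentence_dependencies"]:
    words = item["dependencies"]["words"]
    arcs = item["dependencies"]["arcs"]
    print(f"{item['sentence']} ({len(words)} words, {len(arcs)} arcs)")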

Library Versions

Input:

curl "https://api.nlpcloud.io/v1/en_core_web_sm/version"
# Returns a json object.
client.lib_versions()
# Returns a json object.
client.lib_versions()
// Returns a LibVersion struct.
client.LibVersions()
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.libVersions()
# Returns a json object.
$client->libVersions();

This endpoint returns the versions of the libraries used under the hood with the model (for example the spaCy version used).

Output:

{"spacy": "3.0.1"}

HTTP Request

GET https://api.nlpcloud.io/v1/en_core_web_sm/version
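
Unlike the other endpoints, this one is a GET request, so no JSON body is needed. Here is a minimal sketch calling it directly in Python with the requests package (our own choice of HTTP library):

import requests

# Replace <token> with your own API token.
response = requests.get(
    "https://api.nlpcloud.io/v1/en_core_web_sm/version",
    headers={"Authorization": "Token <token>"},
)
print(response.json())  # e.g. {"spacy": "3.0.1"}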

Rate Limiting

Requests are not rate limited for paid plans.

For the free plan, you can make up to 5 requests per minute. If you reach this limit, the API will return a 429 HTTP error.

Please note that these quotas are subject to change. If that is the case, you will be informed in advance by email.
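
If you are on the free plan and occasionally hit this limit, a simple client-side backoff is usually enough. Here is a minimal sketch in Python with the requests package; the retry policy is our own assumption, not something required by the API:

import time

import requests

# Replace <token> with your own API token.
HEADERS = {"Authorization": "Token <token>"}
URL = "https://api.nlpcloud.io/v1/en_core_web_sm/entities"


def post_with_backoff(payload, max_retries=3):
    """POST the payload, waiting and retrying when the API answers 429."""
    for attempt in range(max_retries):
        response = requests.post(URL, headers=HEADERS, json=payload)
        if response.status_code != 429:
            return response
        # Free plan: up to 5 requests per minute, so wait before retrying.
        time.sleep(15 * (attempt + 1))
    return response


response = post_with_backoff({"text": "John Doe is a Go Developer at Google"})
print(response.status_code)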

All Models

Here is a comprehensive list of all the spaCy pre-trained models supported by the NLP Cloud API:

Name Description
en_core_web_sm English small
en_core_web_lg English large
fr_core_news_sm French small
fr_core_news_lg French large
zh_core_web_sm Chinese small
zh_core_web_lg Chinese large
da_core_news_sm Danish small
da_core_news_lg Danish large
nl_core_news_sm Dutch small
nl_core_news_lg Dutch large
de_core_news_sm German small
de_core_news_lg German large
el_core_news_sm Greek small
el_core_news_lg Greek large
it_core_news_sm Italian small
it_core_news_lg Italian large
ja_core_news_sm Japanese small
ja_core_news_lg Japanese large
lt_core_news_sm Lithuanian small
lt_core_news_lg Lithuanian large
nb_core_news_sm Norwegian Bokmål small
nb_core_news_lg Norwegian Bokmål large
pl_core_news_sm Polish small
pl_core_news_lg Polish large
pt_core_news_sm Portuguese small
pt_core_news_lg Portuguese large
ro_core_news_sm Romanian small
ro_core_news_lg Romanian large
es_core_news_sm Spanish small
es_core_news_lg Spanish large

Errors

The NLP Cloud API uses the following error HTTP codes:

Code Meaning
400 Bad Request -- Your request is invalid.
401 Unauthorized -- Your API token is wrong.
403 Forbidden -- You do not have sufficient rights to access the resource. Please make sure you subscribed to the proper plan that grants you access to this resource.
404 Not Found -- The specified resource could not be found.
405 Method Not Allowed -- You tried to access a resource with an invalid method.
406 Not Acceptable -- You requested a format that isn't JSON.
429 Too Many Requests -- You made too many requests in a short while, please slow down.
500 Internal Server Error -- Sorry, we had a problem with our server. Please try again later.
503 Service Unavailable -- Sorry, we are temporarily offline for maintenance. Please try again later.
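
When calling the REST API directly, you can branch on these codes yourself. Here is a minimal sketch in Python with the requests package; reading an error message from a detail field mirrors what the Node.js client exposes as err.response.data.detail, and falling back to the raw body is our own precaution:

import requests

# Replace <token> with your own API token.
response = requests.post(
    "https://api.nlpcloud.io/v1/en_core_web_sm/entities",
    headers={"Authorization": "Token <token>"},
    json={"text": "John Doe is a Go Developer at Google"},
)

if response.ok:
    print(response.json())
elif response.status_code == 429:
    print("Too many requests: slow down and retry later.")
else:
    # Most error responses carry a human-readable message; fall back to the
    # raw body if it is missing or not JSON.
    try:
        detail = response.json().get("detail", response.text)
    except ValueError:
        detail = response.text
    print(f"Error {response.status_code}: {detail}")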

If you run into any problem, do not hesitate to contact us: [email protected].