cURL Python Ruby Go Node.js PHP C#

Introduction

Get entities using the en_core_web_lg pre-trained model:

curl "https://api.nlpcloud.io/v1/en_core_web_lg/entities" \
  -H "Authorization: Token 4eC39HqLyjWDarjtT1zdp7dc" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
import nlpcloud

client = nlpcloud.Client("en_core_web_lg", "4eC39HqLyjWDarjtT1zdp7dc")
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
require 'nlpcloud'

client = NLPCloud::Client.new('en_core_web_lg','4eC39HqLyjWDarjtT1zdp7dc')
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("en_core_web_lg", "4eC39HqLyjWDarjtT1zdp7dc", false)
    // Returns an Entities struct.
    client.Entities("John Doe has been working for Microsoft in Seattle since 1999.")
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('en_core_web_lg','4eC39HqLyjWDarjtT1zdp7dc')

// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
<?php
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('en_core_web_lg','4eC39HqLyjWDarjtT1zdp7dc');
# Returns a json object.
$client->entities('John Doe has been working for Microsoft in Seattle since 1999.');
?>
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Get entities using your own model with ID 7894:

curl "https://api.nlpcloud.io/v1/custom-model/7894/entities" \
  -H "Authorization: Token 4eC39HqLyjWDarjtT1zdp7dc" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
import nlpcloud

client = nlpcloud.Client("custom-model/7894", "4eC39HqLyjWDarjtT1zdp7dc")
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
require 'nlpcloud'

client = NLPCloud::Client.new('custom-model/7894','4eC39HqLyjWDarjtT1zdp7dc')
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("custom-model/7894", "4eC39HqLyjWDarjtT1zdp7dc", false)
    // Returns an Entities struct.
    client.Entities("John Doe has been working for Microsoft in Seattle since 1999.")
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('custom-model/7894','4eC39HqLyjWDarjtT1zdp7dc')

client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
<?php
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('custom-model/7894','4eC39HqLyjWDarjtT1zdp7dc');
# Returns a json object.
$client->entities('John Doe has been working for Microsoft in Seattle since 1999.');
?>
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output:

{
  "entities": [
    {
      "start": 0,
      "end": 8,
      "type": "PERSON",
      "text": "John Doe"
    },
    {
      "start": 30,
      "end": 39,
      "type": "ORG",
      "text": "Microsoft"
    },
    {
      "start": 43,
      "end": 50,
      "type": "GPE",
      "text": "Seattle"
    },
    {
      "start": 57,
      "end": 61,
      "type": "DATE",
      "text": "1999"
    }
  ]
}

Welcome to the NLP Cloud API documentation.

All your Natural Language Processing tasks in one single API, suited for production:

Use Case Model Used
Named Entity Recognition (NER): extract and tag relevant entities from a text, like names, companies, countries... in many languages (see endpoint). All the large spaCy models are available (15 languages).
Classification: send a text with possible categories, and let the model categorize the text for you, in many languages (see endpoint). We are using Facebook's Bart Large MNLI and Joe Davison's XLM Roberta Large XNLI models with PyTorch and Hugging Face transformers.
Summarization: send a text, and get a smaller text keeping essential information only, in many languages (see endpoint). We are using Facebook's Bart Large CNN, Google's Pegasus XSUM, and Michau's T5 Base EN Generate Headline, with PyTorch and Hugging Face transformers.
Question answering: send a piece of text as a context, and ask questions about anything related to this context, in many languages (see endpoint). We are using Deepset's Roberta Base Squad 2 model with PyTorch and Hugging Face transformers.
Sentiment analysis: determine whether a text is rather positive or negative (see endpoint). We are using DistilBERT Base Uncased Finetuned SST-2, Théophile Blard's TF Allociné, Sagorsarker's Codeswitch SpaEng Sentiment Analysis Lince, Daigo's Bert Base Japanese Sentiment, Oliver Guhr's German Sentiment Bert, and Prosus AI's Finbert, with PyTorch, Tensorflow, and Hugging Face transformers.
Text generation: start a sentence and let the model generate the rest for you, in many languages (see endpoint). We are using the GPT-J and GPT-Neo 2.7B models with PyTorch and Hugging Face transformers. They are powerful open-source equivalents of "OpenAI GPT-3".
Translation: translate text from one language to another (see endpoint). Several of Helsinki NLP's Opus MT models are available (7 languages) with PyTorch and Hugging Face transformers.
Language Detection: detect one or several languages from a text (see endpoint). We are simply using Python's Langdetect library.
Part-Of-Speech (POS) tagging: assign parts of speech to each word of your text, in many languages (see endpoint). All the large spaCy models are available (15 languages).
Tokenization: extract tokens from a text, in many languages (see endpoint). All the large spaCy models are available (15 languages).
Lemmatization: extract lemmas from a text, in many languages (see endpoint). All the large spaCy models are available (15 languages).

All these models can be used for free, with a maximum of 3 requests per minute (except GPT-J and GPT-Neo 2.7B, which require a paid plan because of the huge computation costs involved). For more requests (i.e. for production use), please see the paid plans.

If you have not done so yet, please retrieve a free API token from your dashboard, and feel free to test the models on the playground. Also, do not hesitate to contact us: [email protected].

We recommend subscribing to a GPU plan for better performance, especially for computation-intensive models based on Transformers, like summarization, classification, and text generation. We do our best to provide affordable GPU prices.

On the right is a full example retrieving entities from a block of text, using both the pre-trained spaCy en_core_web_lg model and your own custom-model/7894 model. The same example using Postman is shown below:

Authentication example with Postman

NER example with Postman

You can upload your own spaCy and Hugging Face transformers-based models in your dashboard. You can also fine-tune your own models.

In addition to this documentation, you can also read this introduction article and watch this introduction video.

We welcome any feedback about the API, the documentation, and the client libraries, so please let us know!

Set Up

Client Installation

If you are using one of our client libraries, here is how to install them.

Python

Install with pip.

pip install nlpcloud

More details on the source repo: https://github.com/nlpcloud/nlpcloud-python

Ruby

Install with gem.

gem install nlpcloud

More details on the source repo: https://github.com/nlpcloud/nlpcloud-ruby

Go

Install with go get.

go get -u github.com/nlpcloud/nlpcloud-go

More details on the source repo: https://github.com/nlpcloud/nlpcloud-go

Node.js

Install with NPM.

npm install nlpcloud --save

More details on the source repo: https://github.com/nlpcloud/nlpcloud-js

PHP

Install with Composer.

Create a composer.json file containing at least the following:

{"require": {"nlpcloud/nlpcloud-client": "*"}}

Then launch the following:

composer install

More details on the source repo: https://github.com/nlpcloud/nlpcloud-php

C#

This is a client built by the community.

See this repo from DaveCS1 for more details about installation and usage: https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Authentication

Replace with your token:

curl "https://api.nlpcloud.io/v1/<model>/<endpoint>" \
  -H "Authorization: Token <token>"
import nlpcloud

client = nlpcloud.Client("<model>", "<token>")
require 'nlpcloud'

client = NLPCloud::Client.new('<model>','<token>')
package main

import "github.com/nlpcloud/nlpcloud-go"

func main() {
    client := nlpcloud.NewClient("<model>", "<token>", false)
}
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model>','<token>')
use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model>','<token>');
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Add your API token after the Token keyword in an Authorization header. You should include this header in all your requests: Authorization: Token <token>. Alternatively, you can use Bearer instead of Token: Authorization: Bearer <token>.
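If you are calling the API without a client library, it can help to build the headers in one place. A minimal Python sketch; `auth_headers` is a hypothetical helper for illustration, not part of any NLP Cloud library:

```python
def auth_headers(token: str, scheme: str = "Token") -> dict:
    """Build the request headers for an NLP Cloud call.

    `scheme` can be "Token" or "Bearer"; both are accepted by the API.
    """
    return {
        "Authorization": f"{scheme} {token}",
        "Content-Type": "application/json",
    }

print(auth_headers("<token>"))
# {'Authorization': 'Token <token>', 'Content-Type': 'application/json'}
```

The resulting dict can be passed as-is to an HTTP client such as requests (`requests.post(url, headers=..., json=...)`).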

Here is an example using Postman (Postman automatically adds headers to requests; you should at least keep the Host header, otherwise you will get a 400 error):

Authentication example with Postman

If you have not done so yet, please get a free API token from your dashboard.

All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.

Versioning

Replace with the right API version:

curl "https://api.nlpcloud.io/<version>/<model>/<endpoint>"
# The latest API version is automatically set by the library.
# The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.
// The latest API version is automatically set by the library.
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

The latest API version is v1.

The API version comes right after the domain name, and before the model name.

Encoding

POST JSON data:

curl "https://api.nlpcloud.io/v1/<model>/<endpoint>" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
# Encoding is automatically handled by the library.
# Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.
// Encoding is automatically handled by the library.
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

You should send JSON encoded data in POST requests.

Don't forget to set the content-type accordingly: "Content-Type: application/json".

Here is an example using Postman:

Encoding with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.
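When building the request body programmatically, a JSON serializer handles the escaping for you. A quick Python illustration:

```python
import json

text = 'She said "hello" to John.'
payload = json.dumps({"text": text})  # double quotes are escaped automatically

print(payload)
# {"text": "She said \"hello\" to John."}

# Round-tripping restores the original, unescaped text.
assert json.loads(payload)["text"] == text
```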

Models

Replace with the right pre-trained model:

curl "https://api.nlpcloud.io/v1/<model>/<endpoint>"
# Set the model during client initialization.
client = nlpcloud.Client("<model>", "<token>")
client = NLPCloud::Client.new('<model>','<token>')
client := nlpcloud.NewClient("<model>", "<token>", false)
const client = new NLPCloudClient('<model>', '<token>')
$client = new \NLPCloud\NLPCloud('<model>','<token>');
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Example: spaCy's pre-trained en_core_web_lg model for Named Entity Recognition (NER):

curl "https://api.nlpcloud.io/v1/en_core_web_lg/entities"
client = nlpcloud.Client("en_core_web_lg", "<token>")
client = NLPCloud::Client.new('en_core_web_lg','<token>')
client := nlpcloud.NewClient("en_core_web_lg", "<token>", false)
const client = new NLPCloudClient('en_core_web_lg', '<token>')
$client = new \NLPCloud\NLPCloud('en_core_web_lg','<token>');
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Example: your own spaCy model with ID 7894 for Named Entity Recognition (NER):

curl "https://api.nlpcloud.io/v1/custom-model/7894/entities"
client = nlpcloud.Client("custom-model/7894", "<token>")
client = NLPCloud::Client.new('custom-model/7894','<token>')
client := nlpcloud.NewClient("custom-model/7894", "<token>", false)
const client = new NLPCloudClient('custom-model/7894', '<token>')
$client = new \NLPCloud\NLPCloud('custom-model/7894','<token>');
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

We selected the best state-of-the-art pre-trained models from spaCy and Hugging Face in order to perform Named Entity Recognition (NER), text classification, text summarization, sentiment analysis, question answering, and Part-of-Speech (POS) tagging.

You can also use your own spaCy and Hugging Face transformers-based models by uploading them in your dashboard. Please contact us if your model uses another framework.

The name of the model comes right after the API version, and before the name of the endpoint.

If you are using your own spaCy or transformers-based model, the model name is made up of two parts: custom-model and the ID of your model. For example, if your model ID is 7894, you should use custom-model/7894. Your model ID appears in your dashboard once you upload the model and the instance creation is finished.

The examples on the right perform Named Entity Recognition (NER) with spaCy's en_core_web_lg model, and then do the same thing with your own spaCy model with ID 7894 (the ID of a custom model can be retrieved from your dashboard).

Models List

Here is a comprehensive list of all the pre-trained models supported by the NLP Cloud API:

Name Description Libraries
en_core_web_lg: spaCy's English Large See on spaCy spaCy v3
fr_core_news_lg: spaCy's French Large See on spaCy spaCy v3
zh_core_web_lg: spaCy's Chinese Large See on spaCy spaCy v3
da_core_news_lg: spaCy's Danish Large See on spaCy spaCy v3
nl_core_news_lg: spaCy's Dutch Large See on spaCy spaCy v3
de_core_news_lg: spaCy's German Large See on spaCy spaCy v3
el_core_news_lg: spaCy's Greek Large See on spaCy spaCy v3
it_core_news_lg: spaCy's Italian Large See on spaCy spaCy v3
ja_core_news_lg: spaCy's Japanese Large See on spaCy spaCy v3
lt_core_news_lg: spaCy's Lithuanian Large See on spaCy spaCy v3
nb_core_news_lg: spaCy's Norwegian Bokmål Large See on spaCy spaCy v3
pl_core_news_lg: spaCy's Polish Large See on spaCy spaCy v3
pt_core_news_lg: spaCy's Portuguese Large See on spaCy spaCy v3
ro_core_news_lg: spaCy's Romanian Large See on spaCy spaCy v3
es_core_news_lg: spaCy's Spanish Large See on spaCy spaCy v3
bart-large-mnli: Facebook's Bart Large MNLI See on Hugging Face PyTorch / Transformers
xlm-roberta-large-xnli: Joe Davison's XLM Roberta Large XNLI See on Hugging Face PyTorch / Transformers
bart-large-cnn: Facebook's Bart Large CNN See on Hugging Face PyTorch / Transformers
pegasus-xsum: Google's Pegasus XSUM See on Hugging Face PyTorch / Transformers
t5-base-en-generate-headline: Michau's T5 Base EN Generate Headline See on Hugging Face PyTorch / Transformers
roberta-base-squad2: Deepset's Roberta Base Squad 2 See on Hugging Face PyTorch / Transformers
distilbert-base-uncased-finetuned-sst-2-english: Distilbert Finetuned SST 2 See on Hugging Face PyTorch / Transformers
tf-allocine: Théophile Blard's TF Allociné See on Hugging Face Tensorflow / Transformers
codeswitch-spaeng-sentiment-analysis-lince: Sagorsarker's Codeswitch SpaEng Sentiment Analysis Lince See on Hugging Face PyTorch / Transformers
bert-base-japanese-sentiment: Daigo's Bert Base Japanese Sentiment See on Hugging Face PyTorch / Transformers
german-sentiment-bert: Oliver Guhr's German Sentiment Bert See on Hugging Face PyTorch / Transformers
finbert: Prosus AI's Finbert See on Hugging Face PyTorch / Transformers
gpt-j: GPT-J See on Hugging Face PyTorch / Transformers
gpt-neo-27b: GPT-Neo 2.7B See on Hugging Face PyTorch / Transformers
opus-mt-en-fr: Helsinki NLP's Opus MT English to French See on Hugging Face PyTorch / Transformers
opus-mt-fr-en: Helsinki NLP's Opus MT French to English See on Hugging Face PyTorch / Transformers
opus-mt-en-es: Helsinki NLP's Opus MT English to Spanish See on Hugging Face PyTorch / Transformers
opus-mt-es-en: Helsinki NLP's Opus MT Spanish to English See on Hugging Face PyTorch / Transformers
opus-mt-en-de: Helsinki NLP's Opus MT English to German See on Hugging Face PyTorch / Transformers
opus-mt-de-en: Helsinki NLP's Opus MT German to English See on Hugging Face PyTorch / Transformers
opus-mt-en-nl: Helsinki NLP's Opus MT English to Dutch See on Hugging Face PyTorch / Transformers
opus-mt-nl-en: Helsinki NLP's Opus MT Dutch to English See on Hugging Face PyTorch / Transformers
opus-mt-en-zh: Helsinki NLP's Opus MT English to Chinese See on Hugging Face PyTorch / Transformers
opus-mt-zh-en: Helsinki NLP's Opus MT Chinese to English See on Hugging Face PyTorch / Transformers
opus-mt-en-ru: Helsinki NLP's Opus MT English to Russian See on Hugging Face PyTorch / Transformers
opus-mt-ru-en: Helsinki NLP's Opus MT Russian to English See on Hugging Face PyTorch / Transformers
opus-mt-en-ar: Helsinki NLP's Opus MT English to Arabic See on Hugging Face PyTorch / Transformers
opus-mt-ar-en: Helsinki NLP's Opus MT Arabic to English See on Hugging Face PyTorch / Transformers
python-langdetect: Python LangDetect library See on Pypi LangDetect

Upload Your Transformer-Based Model

Save your model to disk

model.save_pretrained('saved_model')

You can use your own transformers-based models.

Save your model to disk in a saved_model directory using the .save_pretrained method: model.save_pretrained('saved_model').

Then compress the newly created saved_model directory using Zip.

Finally, upload your Zip file in your dashboard.
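The save-and-compress steps can be scripted with Python's standard library. A minimal sketch; in practice the saved_model directory is created by model.save_pretrained, so we create an empty stand-in here to keep the snippet runnable:

```python
import os
import shutil

# Stand-in for the directory that model.save_pretrained("saved_model")
# would normally create with the config and weights.
os.makedirs("saved_model", exist_ok=True)

# Compress the directory into saved_model.zip, ready for the dashboard upload.
archive = shutil.make_archive("saved_model", "zip", root_dir=".", base_dir="saved_model")
print(archive)  # saved_model.zip
```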

If your model comes with a custom script, you can send this script to [email protected], together with any relevant instructions necessary to make your model run. If your model must support custom input or output formats, no problem: just let us know so we can adapt the API signature. If we have questions, we will let you know.

If you experience difficulties, do not hesitate to contact us, it will be a pleasure to help!

Upload Your spaCy Model

Export in Python script:

nlp.to_disk("/path")

Package:

python -m spacy package /path/to/exported/model /path/to/packaged/model

Archive as .tar.gz:

# Go to /path/to/packaged/model
python setup.py sdist

Or archive as .whl:

# Go to /path/to/packaged/model
python setup.py bdist_wheel

You can use your own spaCy models.

Upload your custom spaCy model in your dashboard, but first you need to export it and package it as a Python module.

Here is what you should do:

  1. Export your model to disk using the spaCy to_disk("/path") command.
  2. Package your exported model using the spacy package command.
  3. Archive your packaged model either as a .tar.gz archive using python setup.py sdist or as a Python wheel using python setup.py bdist_wheel (both formats are accepted).
  4. Retrieve your archive from the newly created dist folder and upload it in your dashboard.

If your model comes with a custom script, you can send this script to [email protected], together with any relevant instructions necessary to make your model run. If your model must support custom input or output formats, no problem: just let us know so we can adapt the API signature. If we have questions, we will let you know.

If you experience difficulties, do not hesitate to contact us, it will be a pleasure to help!

GPU

Text classification with Bart Large MNLI on GPU

curl "https://api.nlpcloud.io/v1/gpu/bart-large-mnli/classification" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "text":"John Doe is a Go Developer at Google. He has been working there for 10 years and has been awarded employee of the year",
    "labels":["job", "nature", "space"],
    "multi_class": true
  }'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>", gpu=True)
# Returns a json object.
client.classification("""John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.""",
  ["job", "nature", "space"],
  True)
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>', gpu: true)
# Returns a json object.
client.classification("John Doe is a Go Developer at Google.
  He has been working there for 10 years and has been 
  awarded employee of the year.",
  ["job", "nature", "space"],
  true)
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<model_name>", "<token>", true)
// Returns a Classification struct.
client.Classification(`John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.`,
  []string{"job", "nature", "space"},
  true)
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>', gpu = true)
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.classification(`John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.`,
  ["job", "nature", "space"],
  true)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>', true);
# Returns a json object.
$client->classification("John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.",
  array("job", "nature", "space"),
  true);
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

We recommend subscribing to a GPU plan for better performance, especially for real-time applications or computation-intensive models based on Transformers, like summarization, classification, and text generation. We do our best to provide affordable GPU prices.

By default, all models run on CPUs. In order to use a GPU instead, simply add gpu in the endpoint URL, after the API version and before the name of the model.

For example, if you want to use the Bart Large MNLI classification model on a GPU, you should use the following endpoint:

https://api.nlpcloud.io/v1/gpu/bart-large-mnli/classification

See a full example on the right.
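The URL layout can be expressed as a small helper. `endpoint_url` is a hypothetical function for illustration only; the official client libraries take a gpu flag instead:

```python
def endpoint_url(model: str, endpoint: str, gpu: bool = False, version: str = "v1") -> str:
    """Build an NLP Cloud endpoint URL, inserting the optional gpu
    segment between the API version and the model name."""
    gpu_segment = "gpu/" if gpu else ""
    return f"https://api.nlpcloud.io/{version}/{gpu_segment}{model}/{endpoint}"

print(endpoint_url("bart-large-mnli", "classification", gpu=True))
# https://api.nlpcloud.io/v1/gpu/bart-large-mnli/classification
```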

Endpoints

Entities

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/entities" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.entities("John Doe has been working for Microsoft in Seattle since 1999.")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<model_name>", "<token>", false)
// Returns an Entities struct.
client.Entities("John Doe has been working for Microsoft in Seattle since 1999.")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.entities('John Doe has been working for Microsoft in Seattle since 1999.')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->entities("John Doe has been working for Microsoft in Seattle since 1999.");
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using en_core_web_lg for the example):

{
  "entities": [
    {
      "start": 0,
      "end": 8,
      "type": "PERSON",
      "text": "John Doe"
    },
    {
      "start": 30,
      "end": 39,
      "type": "ORG",
      "text": "Microsoft"
    },
    {
      "start": 43,
      "end": 50,
      "type": "GPE",
      "text": "Seattle"
    },
    {
      "start": 57,
      "end": 61,
      "type": "DATE",
      "text": "1999"
    }
  ]
}

Test it on the playground.

This endpoint uses any spaCy model to perform Named Entity Recognition (NER), in many languages. It can be either a spaCy pre-trained model or your own spaCy or transformers-based custom model. Give a block of text to the model and it will try to extract entities from it, such as persons, organizations, countries...

See the spaCy named entity recognition documentation for more details.

Here are all the spaCy pre-trained models you can use for NER in several languages (see the models section for more details):

Each spaCy pre-trained model has a list of supported built-in entities it is able to extract. For example, the list of entities for the en_core_web_lg model can be found here:

If you want to perform more advanced NER without having to annotate/train your own model, you can use GPT-J instead. See this few-shot learning example.

Here is an example using Postman:

NER example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/entities

POST Values

These values must be encoded as JSON.

Key Type Description
text string The sentence you want to analyze. 1000 characters maximum.

Output

This endpoint returns a JSON array of entities. Each entity is an object made up of the following:

Key Type Description
text string The content of the entity
type string The type of entity (PERSON, ORG, etc.)
start integer The position of the 1st character of the entity (starting at 0)
end integer The position of the 1st character after the entity
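The start and end offsets follow Python slicing conventions, so you can recover each entity directly from the input text:

```python
text = "John Doe has been working for Microsoft in Seattle since 1999."

# One entity from the example output above: text[start:end] yields the entity text.
entity = {"start": 30, "end": 39, "type": "ORG", "text": "Microsoft"}

print(text[entity["start"]:entity["end"]])  # Microsoft
```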

Classification

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/classification" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "text":"John Doe is a Go Developer at Google. He has been working there for 10 years and has been awarded employee of the year",
    "labels":["job", "nature", "space"],
    "multi_class": true
  }'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a json object.
client.classification("""John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.""",
  ["job", "nature", "space"],
  True)
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.classification("John Doe is a Go Developer at Google.
  He has been working there for 10 years and has been 
  awarded employee of the year.",
  ["job", "nature", "space"],
  true)
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<model_name>", "<token>", false)
// Returns a Classification struct.
client.Classification(`John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.`,
  []string{"job", "nature", "space"},
  true)
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.classification(`John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.`,
  ["job", "nature", "space"],
  true)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->classification("John Doe is a Go Developer at Google. 
  He has been working there for 10 years and has been 
  awarded employee of the year.",
  array("job", "nature", "space"),
  true);
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using bart-large-mnli for the example):

{
  "labels":["job", "space", "nature"],
  "scores":[0.9258800745010376, 0.1938474327325821, 0.010988450609147549]
}

Test it on the playground.

This endpoint uses Facebook's Bart Large MNLI and Joe Davison's XLM Roberta Large XNLI models to perform classification on a piece of text, in many languages. It can also use your own transformers-based custom model (replace <model_name> with the ID of your model in the URL).

Here are the 2 transformer-based models you can use:

Pass your text along with a list of labels. The model will give a score to each label. The higher the score, the more likely the text is related to this label.

You also need to say whether you want more than one label to apply to your text, by passing the multi_class boolean.

If you want to perform text classification without having to pass a list of labels (meaning that the model will guess the categories from scratch), you can use GPT-J instead. See this few-shot learning example.

Here is an example using Postman:

Classification example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/classification

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text you want to analyze. 10,000 characters maximum.
labels array A list of labels you want to classify your text with
multi_class boolean Optional. Whether multiple labels should be applied to your text, meaning that the model will calculate an independent score for each label. Defaults to true.

Output

This endpoint returns a JSON object containing a list of labels along with a list of scores. Order matters. For example, the second score in the list corresponds to the second label.

Key Type Description
labels array of strings The labels you passed in your request
scores array of floats The scores applied to each label. Each score goes from 0 to 1. The higher the better
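Since order matters, pairing the two lists is a common first step. A short Python sketch using the example output above:

```python
# Example response body from the classification endpoint.
result = {
    "labels": ["job", "space", "nature"],
    "scores": [0.9258800745010376, 0.1938474327325821, 0.010988450609147549],
}

# Pair each label with its score, then pick the most likely label.
scored = dict(zip(result["labels"], result["scores"]))
best = max(scored, key=scored.get)

print(best)  # job
```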

Sentiment Analysis

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/sentiment" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST -d '{"text":"NLP Cloud proposes an amazing service!"}'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a json object.
client.sentiment("NLP Cloud proposes an amazing service!")
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.sentiment("NLP Cloud proposes an amazing service!")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<model_name>", "<token>", false)
// Returns a Sentiment struct.
client.Sentiment("NLP Cloud proposes an amazing service!")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.sentiment('NLP Cloud proposes an amazing service!')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->sentiment("NLP Cloud proposes an amazing service!");
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using distilbert-base-uncased-finetuned-sst-2-english for the example):

{
  "scored_labels":[
    {
      "label":"POSITIVE",
      "score":0.9996881484985352
    }
  ]
}

Test it on the playground.

This endpoint uses either Distilbert Base Uncased Finetuned SST 2, Théophile Blard's TF Allociné, Sagorsarker's Codeswitch SpaEng Sentiment Analysis Lince, Daigo's Bert Base Japanese Sentiment, or Oliver Guhr's German Sentiment Bert for sentiment analysis in English, French, Spanish, German, or Japanese.

The endpoint can also use Prosus AI's Finbert for financial sentiment analysis.

It can also use your own transformers-based custom model (replace <model_name> with the ID of your model in the URL).

Here are the 6 transformer-based models you can use:

Pass your text and let the model apply a POSITIVE or NEGATIVE label, with a score. The higher the score, the more confident the model is in the label.

Here is an example using Postman:

Sentiment analysis example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/sentiment

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text you want to analyze. 512 tokens maximum.

Output

This endpoint returns a JSON object containing a list of labels called scored_labels.

Key Type Description
scored_labels array of objects The returned scored labels. There can be one or two scored labels.

Each scored label is an object made up of the following elements:

Key Type Description
label string POSITIVE or NEGATIVE
score float The score applied to the label. It goes from 0 to 1. The higher the score, the more confident the model is in the sentiment.
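When two scored labels come back, you can keep the dominant one with max(). A sketch on a made-up response shaped like the example output above:

```python
# Hypothetical sentiment response with two scored labels.
response = {
    "scored_labels": [
        {"label": "POSITIVE", "score": 0.93},
        {"label": "NEGATIVE", "score": 0.07},
    ]
}

# Keep the label the model scored highest.
top = max(response["scored_labels"], key=lambda sl: sl["score"])
```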

Question Answering

Input:

curl "https://api.nlpcloud.io/v1/roberta-base-squad2/question" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST -d '{
    "context":"French president Emmanuel Macron said the country was at war with an invisible, elusive enemy, and the measures were unprecedented, but circumstances demanded them.",
    "question":"Who is the French president?"
  }'
import nlpcloud

client = nlpcloud.Client("roberta-base-squad2", "<token>")
# Returns a json object.
client.question("""French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.""",
  "Who is the French president?")
require 'nlpcloud'

client = NLPCloud::Client.new('roberta-base-squad2','<token>')
# Returns a json object.
client.question("French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.",
  "Who is the French president?")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("roberta-base-squad2", "<token>", false)
// Returns a Question struct.
client.Question(`French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.`,
  "Who is the French president?")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('roberta-base-squad2','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.question(`French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.`,
  `Who is the French president?`)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('roberta-base-squad2','<token>');
# Returns a json object.
$client->question("French president Emmanuel Macron said the country was at war
  with an invisible, elusive enemy, and the measures were unprecedented,
  but circumstances demanded them.",
  "Who is the French president?")
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output:

{
  "answer":"Emmanuel Macron",
  "score":0.9595934152603149,
  "start":17,
  "end":32
}

Test it on the playground.

This endpoint uses Deepset's Roberta Base Squad 2 model to answer questions based on a context, in many languages. It can also use your own transformers-based custom model (replace roberta-base-squad2 with the ID of your model in the URL).

Pass your context and your question, and the model will return the answer along with a score (the higher the score, the more accurate the answer is) and the position of the answer in the context.

Here is an example using Postman:

Question answering example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/roberta-base-squad2/question

POST Values

These values must be encoded as JSON.

Key Type Description
context string The block of text that the model will use in order to find an answer to your question. 100,000 characters maximum.
question string The question you want to ask

Output

This endpoint returns a JSON object containing the following elements:

Key Type Description
answer string The answer to your question
score float The accuracy of the answer. It goes from 0 to 1. The higher the score, the more accurate the answer is.
start integer Position of the starting character of the response in your context.
end integer Position of the ending character of the response in your context.
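The start and end positions index directly into the context string, so you can verify or highlight the answer by slicing. A sketch using the example request and response above:

```python
# Context and response taken from the example above.
context = ("French president Emmanuel Macron said the country was at war "
           "with an invisible, elusive enemy, and the measures were "
           "unprecedented, but circumstances demanded them.")
response = {"answer": "Emmanuel Macron", "score": 0.96, "start": 17, "end": 32}

# The [start:end) slice of the context is exactly the returned answer.
span = context[response["start"]:response["end"]]
```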

Summarization

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/summarization" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST -d '{"text":"One month after the United States began what has become a 
  troubled rollout of a national COVID vaccination campaign, the effort is finally 
  gathering real steam. Close to a million doses -- over 951,000, to be more exact -- 
  made their way into the arms of Americans in the past 24 hours, the U.S. Centers 
  for Disease Control and Prevention reported Wednesday. That s the largest number 
  of shots given in one day since the rollout began and a big jump from the 
  previous day, when just under 340,000 doses were given, CBS News reported. 
  That number is likely to jump quickly after the federal government on Tuesday 
  gave states the OK to vaccinate anyone over 65 and said it would release all 
  the doses of vaccine it has available for distribution. Meanwhile, a number 
  of states have now opened mass vaccination sites in an effort to get larger 
  numbers of people inoculated, CBS News reported."}'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a json object.
client.summarization("""One month after the United States began what has become a 
  troubled rollout of a national COVID vaccination campaign, the effort is finally 
  gathering real steam. Close to a million doses -- over 951,000, to be more exact -- 
  made their way into the arms of Americans in the past 24 hours, the U.S. Centers 
  for Disease Control and Prevention reported Wednesday. That s the largest number 
  of shots given in one day since the rollout began and a big jump from the 
  previous day, when just under 340,000 doses were given, CBS News reported. 
  That number is likely to jump quickly after the federal government on Tuesday 
  gave states the OK to vaccinate anyone over 65 and said it would release all 
  the doses of vaccine it has available for distribution. Meanwhile, a number 
  of states have now opened mass vaccination sites in an effort to get larger 
  numbers of people inoculated, CBS News reported.""")
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.summarization("One month after the United States began what has become a 
  troubled rollout of a national COVID vaccination campaign, the effort is finally 
  gathering real steam. Close to a million doses -- over 951,000, to be more exact -- 
  made their way into the arms of Americans in the past 24 hours, the U.S. Centers 
  for Disease Control and Prevention reported Wednesday. That s the largest number 
  of shots given in one day since the rollout began and a big jump from the 
  previous day, when just under 340,000 doses were given, CBS News reported. 
  That number is likely to jump quickly after the federal government on Tuesday 
  gave states the OK to vaccinate anyone over 65 and said it would release all 
  the doses of vaccine it has available for distribution. Meanwhile, a number 
  of states have now opened mass vaccination sites in an effort to get larger 
  numbers of people inoculated, CBS News reported.")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<model_name>", "<token>", false)
// Returns a Summarization struct.
client.Summarization(`One month after the United States began what has become a 
  troubled rollout of a national COVID vaccination campaign, the effort is finally 
  gathering real steam. Close to a million doses -- over 951,000, to be more exact -- 
  made their way into the arms of Americans in the past 24 hours, the U.S. Centers 
  for Disease Control and Prevention reported Wednesday. That s the largest number 
  of shots given in one day since the rollout began and a big jump from the 
  previous day, when just under 340,000 doses were given, CBS News reported. 
  That number is likely to jump quickly after the federal government on Tuesday 
  gave states the OK to vaccinate anyone over 65 and said it would release all 
  the doses of vaccine it has available for distribution. Meanwhile, a number 
  of states have now opened mass vaccination sites in an effort to get larger 
  numbers of people inoculated, CBS News reported.`)
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.summarization(`One month after the United States began what has become a 
  troubled rollout of a national COVID vaccination campaign, the effort is finally 
  gathering real steam. Close to a million doses -- over 951,000, to be more exact -- 
  made their way into the arms of Americans in the past 24 hours, the U.S. Centers 
  for Disease Control and Prevention reported Wednesday. That s the largest number 
  of shots given in one day since the rollout began and a big jump from the 
  previous day, when just under 340,000 doses were given, CBS News reported. 
  That number is likely to jump quickly after the federal government on Tuesday 
  gave states the OK to vaccinate anyone over 65 and said it would release all 
  the doses of vaccine it has available for distribution. Meanwhile, a number 
  of states have now opened mass vaccination sites in an effort to get larger 
  numbers of people inoculated, CBS News reported.`)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->summarization("One month after the United States began what has become a 
  troubled rollout of a national COVID vaccination campaign, the effort is finally 
  gathering real steam. Close to a million doses -- over 951,000, to be more exact -- 
  made their way into the arms of Americans in the past 24 hours, the U.S. Centers 
  for Disease Control and Prevention reported Wednesday. That s the largest number 
  of shots given in one day since the rollout began and a big jump from the 
  previous day, when just under 340,000 doses were given, CBS News reported. 
  That number is likely to jump quickly after the federal government on Tuesday 
  gave states the OK to vaccinate anyone over 65 and said it would release all 
  the doses of vaccine it has available for distribution. Meanwhile, a number 
  of states have now opened mass vaccination sites in an effort to get larger 
  numbers of people inoculated, CBS News reported.");
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using bart-large-cnn for the example):

{
  "summary_text": "Over 951,000 doses were given in the past 24 hours. 
  That's the largest number of shots given in one day since the rollout began. 
  That number is likely to jump quickly after the federal government 
  gave states the OK to vaccinate anyone over 65. A number of states have 
  now opened mass vaccination sites."
}

Test it on the playground.

This endpoint uses either Facebook's Bart Large CNN model or Google's Pegasus XSUM for text summarization in many languages, or Michau's T5 Base EN Generate Headline for headline generation in English. These are "abstractive" summarizations: the summary may reuse sentences from the input text, but new sentences might also be generated. You can also use your own transformers-based custom model (replace <model_name> with the ID of your model in the URL).

Pass your block of text, and the model will return a summary.

Here are the 3 transformer-based models you can use:

Here is an example using Postman:

Summarization example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/summarization

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text that you want to summarize. 1024 tokens maximum.

Output

This endpoint returns a JSON object containing the following elements:

Key Type Description
summary_text string The summary of your text
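Because input is capped at 1024 tokens, longer documents need to be split before summarization. Exact token counts depend on the model's tokenizer, so the word-based budget below is a rough, hypothetical heuristic, not the API's own counting:

```python
def chunk_text(text, max_words=700):
    """Split text into word-based chunks that should stay under the token
    limit (roughly assuming 1 word is at least 1 token; adjust as needed)."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Each chunk can then be sent to the summarization endpoint separately,
# and the partial summaries concatenated.
```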

Generation

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/generation" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST -d '{
    "text":"GPT-J is a powerful NLP model",
    "min_length":10,
    "max_length":50
}'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a JSON object.
client.generation("GPT-J is a powerful NLP model", min_length=10, max_length=50)
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.generation('GPT-J is a powerful NLP model', min_length: 10, max_length: 50)
// Not released yet
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.generation('GPT-J is a powerful NLP model', 10, 50)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->generation("GPT-J is a powerful NLP model", 10, 50, null, null, null, null, null, null, null, null);
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using gpt-j for the example):

{
  "generated_text":"GPT-J is a powerful NLP model for text generation. 
  This is the open-source version of GPT-3 by OpenAI. It is the most 
  advanced NLP model created as of today.",
  "nb_generated_tokens": 33
}

Test it on the playground.

This endpoint uses either EleutherAI's GPT-J or EleutherAI's GPT-Neo 2.7B model to generate a block of text, in many languages (GPT-J, with 6 billion parameters, is the most advanced model, so this is the one we recommend). Start a sentence and let the model generate the rest for you. It can also use your own transformers-based custom model (replace <model_name> with the ID of your model in the URL).

The 2 models available are:

Pass your block of text, and the model will return a generated text. You can pass many optional arguments like min_length, max_length, end_sequence, and more. See the comprehensive list of optional arguments below. If you are looking for additional arguments for more fine-tuning, please don't hesitate to ask us!

You can achieve almost any NLP use case thanks to GPT-J: paraphrasing, chatbots/conversational AI, code generation, grammar and spelling correction, intent classification, text generation out of keywords, keywords and keyphrases extraction... and more! Read our article about few-shot learning to know more.
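Few-shot learning boils down to prepending a handful of solved examples to your input before sending it to the generation endpoint. The examples and format below are made up for illustration; see the few-shot learning article for real patterns:

```python
# Hypothetical few-shot prompt for intent classification with GPT-J.
examples = [
    ("I want to cancel my subscription", "cancellation"),
    ("How much does the premium plan cost?", "pricing"),
]
new_sentence = "Please delete my account"

# Solved examples first, then the unsolved one, ending right where the
# model should continue.
prompt = "\n".join(f"Sentence: {s}\nIntent: {i}" for s, i in examples)
prompt += f"\nSentence: {new_sentence}\nIntent:"
```

Sending this prompt with a low temperature and end_sequence set to a newline should make the model complete just the missing intent.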

It is also possible to train/fine-tune your own GPT-J model if few-shot learning is not enough.

For advanced text generation tuning, you can play with many parameters like num_beams, temperature, repetition_penalty, etc. They are sometimes a good way to produce more original and fluent content. See the full list of parameters below. If you are not sure what these parameters do, please read this very good article from Hugging Face.

Here is an example using Postman:

Text generation example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/generation

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text that starts the generated text. 1200 tokens maximum (please contact us if you need more input tokens).
min_length int Optional. The minimum number of tokens that the generated text should contain. The size of the generated text should not exceed 256 tokens on a CPU plan and 1024 tokens on a GPU plan (please contact us if you need more generated tokens). If length_no_input is false, the size of the generated text is the difference between min_length and the length of your input text. If length_no_input is true, the size of the generated text is simply min_length. Defaults to 10.
max_length int Optional. The maximum number of tokens that the generated text should contain. The size of the generated text should not exceed 256 tokens on a CPU plan and 1024 tokens on a GPU plan (please contact us if you need more generated tokens). If length_no_input is false, the size of the generated text is the difference between max_length and the length of your input text. If length_no_input is true, the size of the generated text is simply max_length. Defaults to 50.
length_no_input bool Optional. Whether min_length and max_length should not include the length of the input text. If false, min_length and max_length include the length of the input text. If true, min_length and max_length don't include the length of the input text. Defaults to false.
end_sequence string Optional. A specific token that should end the generated sequence. For example, it could be . or \n or ### or any other string of fewer than 10 characters.
remove_input bool Optional. Whether you want to remove the input text from the result. Defaults to false.
do_sample bool Optional. Whether or not to use sampling; greedy decoding is used otherwise. Defaults to true.
num_beams int Optional. Number of beams for beam search. 1 means no beam search. If num_beams > 1, the size of the input text should not exceed 40 tokens on GPU (please contact us if you need a bigger input length with num_beams > 1). Defaults to 1.
early_stopping bool Optional. Whether to stop the beam search when at least num_beams sentences are finished per batch or not. Defaults to false.
no_repeat_ngram_size int Optional. If set to int > 0, all ngrams of that size can only occur once. Defaults to 0.
num_return_sequences int Optional. The number of independently computed returned sequences for each element in the batch. Defaults to 1.
top_k int Optional. The number of highest probability vocabulary tokens to keep for top-k-filtering. Maximum 1000 tokens. The lower this value, the less likely GPT-J is going to generate off-topic text. Defaults to 0.
top_p float Optional. If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation. The higher this value, the less deterministic the result will be. It's recommended to play with top_p if you want to produce original content for applications that require accurate results, while you should use temperature if you want to generate funnier results. You should not use both at the same time. Should be between 0 and 1. Defaults to 0.7.
temperature float Optional. The value used to modulate the next token probabilities. The higher this value, the less deterministic the result will be. For example, if temperature=0 the output will always be the same, while if temperature=1 each new request will produce very different results. It's recommended to play with top_p if you want to produce original content for applications that require accurate results, while you should use temperature if you want to generate funnier results. You should not use both at the same time. Should be between 0 and 1. Defaults to 1.
repetition_penalty float Optional. The parameter for repetition penalty. It prevents the same word from being repeated too many times. 1.0 means no penalty. Defaults to 1.0.
length_penalty float Optional. Exponential penalty to the length. 1.0 means no penalty. Set to values < 1.0 in order to encourage the model to generate shorter sequences, or to a value > 1.0 in order to encourage the model to produce longer sequences. Defaults to 1.0.
bad_words list of strings Optional. List of tokens that are not allowed to be generated. Defaults to null.
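All of these optional keys go in the same JSON body as text; anything you omit falls back to its default. A sketch of a raw request body (the parameter values here are arbitrary, for illustration only):

```python
import json

# Build the POST body for the generation endpoint with a few of the
# optional parameters set.
payload = {
    "text": "GPT-J is a powerful NLP model",
    "min_length": 10,
    "max_length": 50,
    "length_no_input": True,   # lengths apply to the generated part only
    "end_sequence": "\n",      # stop at the first newline
    "top_p": 0.9,
}
body = json.dumps(payload)
```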

Output

This endpoint returns a JSON object containing the following elements:

Key Type Description
generated_text string The generated text
nb_generated_tokens int The number of tokens generated by the model

Translation

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/translation" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999."}'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a json object.
client.translation("John Doe has been working for Microsoft in Seattle since 1999.")
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.translation("John Doe has been working for Microsoft in Seattle since 1999.")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<model_name>", "<token>", false)
// Returns a Translation struct.
client.Translation("John Doe has been working for Microsoft in Seattle since 1999.")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.translation(`John Doe has been working for Microsoft in Seattle since 1999.`)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->translation("John Doe has been working for Microsoft in Seattle since 1999.")
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using the opus-mt-en-fr (English to French) model for the example):

{
  "translation_text": "John Doe travaille pour Microsoft à Seattle depuis 1999."
}

Test it on the playground.

This endpoint uses Helsinki NLP's Opus MT models to translate text in several languages thanks to deep learning. Pass your block of text, and the model will return a translation. It can also use your own transformers-based custom model (replace the model name with the ID of your model in the URL).

Do not hesitate to use translation if you need to apply other models, like classification or sentiment analysis, to non-English languages: just translate your text first before sending it to another model.
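Chaining the two endpoints is just two client calls. A sketch (not an official helper) that assumes two nlpcloud.Client instances, one for an Opus MT model and one for a sentiment model, each returning the JSON objects documented here:

```python
def sentiment_in_any_language(translation_client, sentiment_client, text):
    """Translate text to English first, then run sentiment analysis on
    the translated text."""
    translated = translation_client.translation(text)["translation_text"]
    return sentiment_client.sentiment(translated)
```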

Here are all the Helsinki NLP Opus MT pre-trained models you can use:

We are planning to add many more translation models in the future, depending on customer requests. So if your use case is not listed above, please let us know and we will add it promptly (it should take about 1 day).

Here is an example of English to French translation with the opus-mt-en-fr model, using Postman:

Translation example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/translation

POST Values

These values must be encoded as JSON.

Key Type Description
text string The sentence that you want to translate. 1000 characters maximum.

Output

This endpoint returns a JSON object containing the following elements:

Key Type Description
translation_text string The translation of your text

Language Detection

Input:

curl "https://api.nlpcloud.io/v1/python-langdetect/langdetection" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST -d '{"text":"John Doe has been working for Microsoft in Seattle since 1999. Et il parle aussi un peu français."}'
import nlpcloud

client = nlpcloud.Client("python-langdetect", "<token>")
# Returns a json object.
client.langdetection("John Doe has been working for Microsoft in Seattle since 1999. Et il parle aussi un peu français.")
require 'nlpcloud'

client = NLPCloud::Client.new('python-langdetect','<token>')
# Returns a json object.
client.langdetection("John Doe has been working for Microsoft in Seattle since 1999. Et il parle aussi un peu français.")
// Not implemented yet.
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('python-langdetect','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.langdetection(`John Doe has been working for Microsoft in Seattle since 1999. Et il parle aussi un peu français.`)
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('python-langdetect','<token>');
# Returns a json object.
$client->langdetection("John Doe has been working for Microsoft in Seattle since 1999. Et il parle aussi un peu français.")
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output:

{
  "languages": [
    {
      "en": 0.7142834369645996
    },
    {
      "fr": 0.28571521669868466
    }
  ]
}

Test it on the playground.

This endpoint uses Python's LangDetect library to detect languages in a text. It returns an array with all the languages detected in the text and their likelihood. The results are sorted by likelihood, so the first language in the array is the most likely. Languages are identified by their two-character ISO 639-1 codes.

This endpoint does not use deep learning under the hood, so the response time is extremely fast.

Here is an example of language detection using Postman:

Language detection example with Postman

Put your JSON data in Body > raw. Note that if your text contains double quotes (") you will need to escape them (using \") in order for your JSON to be properly decoded. This is not needed when using a client library.

HTTP Request

POST https://api.nlpcloud.io/v1/python-langdetect/langdetection

POST Values

These values must be encoded as JSON.

Key Type Description
text string The block of text containing one or more languages you want to detect. 100,000 characters maximum.

Output

This endpoint returns a JSON object containing an array called languages. Each element of the array contains a detected language and its likelihood. The languages are sorted with the most likely first:

Key Type Description
languages array of objects. Each object has a string as key and float as value The list of detected languages (in 2 characters ISO format) with their likelihood
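Since the array is sorted by likelihood, the first element is the best guess, and each element is a single-key object. A sketch of extracting the top language from the example response above:

```python
# Response shaped like the example output above.
response = {
    "languages": [
        {"en": 0.7142834369645996},
        {"fr": 0.28571521669868466},
    ]
}

# The first object holds the most likely language; unpack its only pair.
(code, likelihood), = response["languages"][0].items()
```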

Dependencies

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/dependencies" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google"}'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a json object.
client.dependencies("John Doe is a Go Developer at Google")
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.dependencies("John Doe is a Go Developer at Google")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<model_name>", "<token>", false)
// Returns a Dependencies struct.
client.Dependencies("John Doe is a Go Developer at Google")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.dependencies('John Doe is a Go Developer at Google')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->dependencies("John Doe is a Go Developer at Google")
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using en_core_web_lg for the example):

{
  "words": [
    {
      "text": "John",
      "tag": "NNP"
    },
    {
      "text": "Doe",
      "tag": "NNP"
    },
    {
      "text": "is",
      "tag": "VBZ"
    },
    {
      "text": "a",
      "tag": "DT"
    },
    {
      "text": "Go",
      "tag": "NNP"
    },
    {
      "text": "Developer",
      "tag": "NN"
    },
    {
      "text": "at",
      "tag": "IN"
    },
    {
      "text": "Google",
      "tag": "NNP"
    }
  ],
  "arcs": [
    {
      "start": 0,
      "end": 1,
      "label": "compound",
      "text": "John",
      "dir": "left"
    },
    {
      "start": 1,
      "end": 2,
      "label": "nsubj",
      "text": "Doe",
      "dir": "left"
    },
    {
      "start": 3,
      "end": 5,
      "label": "det",
      "text": "a",
      "dir": "left"
    },
    {
      "start": 4,
      "end": 5,
      "label": "compound",
      "text": "Go",
      "dir": "left"
    },
    {
      "start": 2,
      "end": 5,
      "label": "attr",
      "text": "Developer",
      "dir": "right"
    },
    {
      "start": 5,
      "end": 6,
      "label": "prep",
      "text": "at",
      "dir": "right"
    },
    {
      "start": 6,
      "end": 7,
      "label": "pobj",
      "text": "Google",
      "dir": "right"
    }
  ]
}

This endpoint uses any spaCy model (either a spaCy pre-trained model or your own custom spaCy model) to perform Part-of-Speech (POS) tagging in many languages, and returns the dependencies (arcs) extracted from the passed-in text.

See the spaCy dependency parsing documentation for more details.

Here are all the spaCy models you can use in multiple languages (see the models section for more details):

Each spaCy pre-trained model has a list of supported built-in part-of-speech tags and dependency labels. For example, the list of tags and dependency labels for the en_core_web_lg model can be found here:

For more details about what these abbreviations mean, see spaCy's glossary.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/dependencies

POST Values

These values must be encoded as JSON.

Key Type Description
text string The sentence you want to analyze. 1000 characters maximum.

Output

This endpoint returns 2 objects: words and arcs.

words contains an array of the following elements:

Key Type Description
text string The content of the word
tag string The part of speech tag for the word (https://spacy.io/api/annotation#pos-tagging)

arcs contains an array of the following elements:

Key Type Description
text string The content of the word
label string The syntactic dependency connecting child to head (https://spacy.io/api/annotation#dependency-parsing)
start integer Position of the word if direction of the arc is left. Position of the head if direction of the arc is right.
end integer Position of the head if direction of the arc is left. Position of the word if direction of the arc is right.
dir string Direction of the dependency arc (left or right)
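To make the start/end/dir convention concrete, here is a small Python sketch that turns each arc into an explicit (head index, child index) pair, using two arcs from the example output above:

```python
def head_child(arc):
    """Return (head_index, child_index) for an arc, following the
    start/end/dir convention documented above: for a left arc, start
    is the word (child) and end is the head; for a right arc, start
    is the head and end is the word (child)."""
    if arc["dir"] == "left":
        return arc["end"], arc["start"]
    return arc["start"], arc["end"]

# Two arcs taken from the en_core_web_lg example output above.
arcs = [
    {"start": 0, "end": 1, "label": "compound", "text": "John", "dir": "left"},
    {"start": 5, "end": 6, "label": "prep", "text": "at", "dir": "right"},
]
for arc in arcs:
    head, child = head_child(arc)
    print(f"{arc['text']} ({arc['label']}): head={head}, child={child}")
# -> John (compound): head=1, child=0
# -> at (prep): head=5, child=6
```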

Sentence Dependencies

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/sentence-dependencies" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"text":"John Doe is a Go Developer at Google. Before that, he worked at Microsoft."}'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a json object.
client.sentence_dependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.sentence_dependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
import "github.com/nlpcloud/nlpcloud-go"

client := nlpcloud.NewClient("<model_name>", "<token>", false)
// Returns a SentenceDependencies struct.
client.SentenceDependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.")
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.sentenceDependencies('John Doe is a Go Developer at Google. Before that, he worked at Microsoft.')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->sentenceDependencies("John Doe is a Go Developer at Google. Before that, he worked at Microsoft.");
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using en_core_web_lg for the example):

{
  "sentence_dependencies": [
    {
      "sentence": "John Doe is a Go Developer at Google.",
      "dependencies": {
        "words": [
          {
            "text": "John",
            "tag": "NNP"
          },
          {
            "text": "Doe",
            "tag": "NNP"
          },
          {
            "text": "is",
            "tag": "VBZ"
          },
          {
            "text": "a",
            "tag": "DT"
          },
          {
            "text": "Go",
            "tag": "NNP"
          },
          {
            "text": "Developer",
            "tag": "NN"
          },
          {
            "text": "at",
            "tag": "IN"
          },
          {
            "text": "Google",
            "tag": "NNP"
          },
          {
            "text": ".",
            "tag": "."
          }
        ],
        "arcs": [
          {
            "start": 0,
            "end": 1,
            "label": "compound",
            "text": "John",
            "dir": "left"
          },
          {
            "start": 1,
            "end": 2,
            "label": "nsubj",
            "text": "Doe",
            "dir": "left"
          },
          {
            "start": 3,
            "end": 5,
            "label": "det",
            "text": "a",
            "dir": "left"
          },
          {
            "start": 4,
            "end": 5,
            "label": "compound",
            "text": "Go",
            "dir": "left"
          },
          {
            "start": 2,
            "end": 5,
            "label": "attr",
            "text": "Developer",
            "dir": "right"
          },
          {
            "start": 5,
            "end": 6,
            "label": "prep",
            "text": "at",
            "dir": "right"
          },
          {
            "start": 6,
            "end": 7,
            "label": "pobj",
            "text": "Google",
            "dir": "right"
          },
          {
            "start": 2,
            "end": 8,
            "label": "punct",
            "text": ".",
            "dir": "right"
          }
        ]
      }
    },
    {
      "sentence": "Before that, he worked at Microsoft.",
      "dependencies": {
        "words": [
          {
            "text": "Before",
            "tag": "IN"
          },
          {
            "text": "that",
            "tag": "DT"
          },
          {
            "text": ",",
            "tag": ","
          },
          {
            "text": "he",
            "tag": "PRP"
          },
          {
            "text": "worked",
            "tag": "VBD"
          },
          {
            "text": "at",
            "tag": "IN"
          },
          {
            "text": "Microsoft",
            "tag": "NNP"
          },
          {
            "text": ".",
            "tag": "."
          }
        ],
        "arcs": [
          {
            "start": 9,
            "end": 13,
            "label": "prep",
            "text": "Before",
            "dir": "left"
          },
          {
            "start": 9,
            "end": 10,
            "label": "pobj",
            "text": "that",
            "dir": "right"
          },
          {
            "start": 11,
            "end": 13,
            "label": "punct",
            "text": ",",
            "dir": "left"
          },
          {
            "start": 12,
            "end": 13,
            "label": "nsubj",
            "text": "he",
            "dir": "left"
          },
          {
            "start": 13,
            "end": 14,
            "label": "prep",
            "text": "at",
            "dir": "right"
          },
          {
            "start": 14,
            "end": 15,
            "label": "pobj",
            "text": "Microsoft",
            "dir": "right"
          },
          {
            "start": 13,
            "end": 16,
            "label": "punct",
            "text": ".",
            "dir": "right"
          }
        ]
      }
    }
  ]
}

This endpoint uses a spaCy model (either a spaCy pre-trained model or your own custom spaCy model) to perform Part-of-Speech (POS) tagging in many languages, and returns the dependencies (arcs) extracted from each sentence of the passed-in text.

See the spaCy dependency parsing documentation for more details.

Here are all the spaCy models you can use in multiple languages (see the models section for more details):

Each spaCy pre-trained model has a list of supported built-in part-of-speech tags and dependency labels. For example, the list of tags and dependency labels for the en_core_web_lg model can be found here:

For more details about what these abbreviations mean, see spaCy's glossary.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/sentence-dependencies

POST Values

These values must be encoded as JSON.

Parameter Type Description
text string The sentences containing parts of speech to extract. 1000 characters maximum.

Output

This endpoint returns a sentence_dependencies object containing an array of sentence dependencies objects. Each sentence dependency object contains the following:

Key Type Description
sentence string The sentence being analyzed
dependencies object An object containing the words and arcs

words contains an array of the following elements:

Key Type Description
text string The content of the word
tag string The part of speech tag for the word (https://spacy.io/api/annotation#pos-tagging)

arcs contains an array of the following elements:

Key Type Description
text string The content of the word
label string The syntactic dependency connecting child to head (https://spacy.io/api/annotation#dependency-parsing)
start integer Position of the word if direction of the arc is left. Position of the head if direction of the arc is right.
end integer Position of the head if direction of the arc is left. Position of the word if direction of the arc is right.
dir string Direction of the dependency arc (left or right)

Tokens

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/tokens" \
  -H "Authorization: Token <token>" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"text":"John is a Go Developer at Google."}'
import nlpcloud

client = nlpcloud.Client("<model_name>", "<token>")
# Returns a json object.
client.tokens("John is a Go Developer at Google.")
require 'nlpcloud'

client = NLPCloud::Client.new('<model_name>','<token>')
# Returns a json object.
client.tokens("John is a Go Developer at Google.")
//  Not available yet.
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('<model_name>','<token>')
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.tokens('John is a Go Developer at Google.')
require 'vendor/autoload.php';

use NLPCloud\NLPCloud;

$client = new \NLPCloud\NLPCloud('<model_name>','<token>');
# Returns a json object.
$client->tokens("John is a Go Developer at Google.");
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

Output (using en_core_web_lg for the example):

{
  "tokens": [
    {
      "start": 0,
      "end": 4,
      "index": 1,
      "text": "John",
      "lemma": "John",
      "ws_after": true
    },
    {
      "start": 5,
      "end": 7,
      "index": 2,
      "text": "is",
      "lemma": "be",
      "ws_after": true
    },
    {
      "start": 8,
      "end": 9,
      "index": 3,
      "text": "a",
      "lemma": "a",
      "ws_after": true
    },
    {
      "start": 10,
      "end": 12,
      "index": 4,
      "text": "Go",
      "lemma": "Go",
      "ws_after": true
    },
    {
      "start": 13,
      "end": 22,
      "index": 5,
      "text": "Developer",
      "lemma": "developer",
      "ws_after": true
    },
    {
      "start": 23,
      "end": 25,
      "index": 6,
      "text": "at",
      "lemma": "at",
      "ws_after": true
    },
    {
      "start": 26,
      "end": 32,
      "index": 7,
      "text": "Google",
      "lemma": "Google",
      "ws_after": false
    },
    {
      "start": 32,
      "end": 33,
      "index": 8,
      "text": ".",
      "lemma": ".",
      "ws_after": false
    }
  ]
}

This endpoint uses a spaCy model (either a spaCy pre-trained model or your own custom spaCy model) to tokenize and lemmatize the passed-in text, in many languages.

See the spaCy tokenization and lemmatization documentation for more details.

Here are all the spaCy models you can use in many languages (see the models section for more details):

It returns a list of tokens and their corresponding lemmas. Each token is an object made up of several elements. See below for the details.

HTTP Request

POST https://api.nlpcloud.io/v1/<model_name>/tokens

POST Values

These values must be encoded as JSON.

Parameter Type Description
text string The sentence containing the tokens to extract. 1000 characters maximum.

Output

This endpoint returns a tokens object containing an array of token objects. Each token object contains the following:

Key Type Description
text string The content of the extracted token.
lemma string The corresponding lemma of the extracted token.
start int The position of the 1st character of the token (starting at 0)
end int The position of the 1st character after the token
index int The position of the token in the sentence (starting at 1)
ws_after boolean Says whether there is a whitespace after the token, or not
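The text, start, end, and ws_after fields are sufficient to reconstruct the original text. A quick sanity check in Python, using the tokens from the example output above:

```python
# Tokens from the example output above, reduced to the fields
# needed for reconstruction.
tokens = [
    {"text": "John", "ws_after": True},
    {"text": "is", "ws_after": True},
    {"text": "a", "ws_after": True},
    {"text": "Go", "ws_after": True},
    {"text": "Developer", "ws_after": True},
    {"text": "at", "ws_after": True},
    {"text": "Google", "ws_after": False},
    {"text": ".", "ws_after": False},
]

# Append a space after a token only when ws_after is true.
reconstructed = "".join(t["text"] + (" " if t["ws_after"] else "") for t in tokens)
print(reconstructed)  # -> John is a Go Developer at Google.
```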

Library Versions

Input:

curl "https://api.nlpcloud.io/v1/<model_name>/versions"
# Returns a json object.
client.lib_versions()
# Returns a json object.
client.lib_versions()
// Returns a LibVersion struct.
client.LibVersions()
// Returns an Axios promise with the results.
// In case of success, results are contained in `response.data`. 
// In case of failure, you can retrieve the status code in `err.response.status` 
// and the error message in `err.response.data.detail`.
client.libVersions()
# Returns a json object.
$client->libVersions();
// This is a client built by the community:
// https://github.com/DaveCS1/NLPCloud.io-Simple-CSharpSamples

This endpoint returns the versions of the libraries used under the hood with the model.

Output:

// Example (using bart-large-mnli for the example):
{
  "pytorch": "1.7.1",
  "transformers": "4.3.2"
}

HTTP Request

GET https://api.nlpcloud.io/v1/<model_name>/versions

Fine-tuning

It is possible to train/fine-tune your own models on NLP Cloud and have them available in production right away. This is the best way to get the most advanced results from NLP!

The idea is that you should select the task you want to achieve, and then upload your own dataset so we can use it to fine-tune the model. It all happens in your dashboard. You can also upload a validation dataset so we can measure the impact of the fine-tuning on the model accuracy, but this is optional.

If you select "GPT-J for any task", we will fine-tune the GPT-J model for you. Otherwise, we will automatically select the non-GPT-J model with the best performance and accuracy (BERT, DistilBERT, BART, RoBERTa...). If you want to fine-tune a specific non-GPT-J model, please contact us before starting the fine-tuning to let us know.

Datasets should be in text format for GPT-J and CSV format for other tasks (see below for more details about how to build your dataset). There is no limit regarding the size of the dataset.

If you are unsure about which data you should use for your fine-tuning, please contact us so we can advise!

Here are the models you can fine-tune on NLP Cloud (if you want to fine-tune a model that is not in the list please contact us):

GPT-J for any task

You can fine-tune GPT-J for text generation and any NLP task based on text generation (paraphrase, summarization, classification, sentiment analysis, chatbots, code generation, etc.).

Your dataset should be a simple text file (.txt). It doesn't need to follow any specific formatting, except that you should add <|endoftext|> at the end of each example.

In each example, the text you want the model to generate should not exceed 1024 tokens, and the input should not exceed 1200 tokens. This is because, on our default GPUs, you won't be able to go above these limits when using your model in production. If you want to remove these limitations, please let us know and we will deploy your fine-tuned model on higher-end GPUs.

The size of your dataset depends on your use case, but the good news is that fine-tuning GPT-J requires relatively few examples (compared to traditional NLP fine-tuning). Here are a couple of guidelines, depending on your use case (these are minimums; if you can provide more examples, it's even better!):

If you are unsure about the format or the size of your dataset, please contact us so we can help!

Don't forget that few-shot learning is also a very good way to get more advanced results from GPT-J, without even fine-tuning the model. Combining fine-tuning and few-shot learning is the best way to get a GPT-J model perfectly tailored to your needs.

Here are examples of how you could format your dataset for various use cases (these are only suggestions, of course). Basically, you can apply the same techniques you would use for few-shot learning. Note that the trailing ### token is not compulsory, but we recommend adding it at the end of all your examples so the model learns to add it to every response. You can then conveniently use end_sequence="###" in your production requests to make sure the model does not generate more text than wanted. Most of the time, after fine-tuning, GPT-J does not generate more text than necessary, but it still occasionally happens, even when <|endoftext|> is properly added at the end of your examples, so this parameter lets you force GPT-J to stop once your answer is generated.
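As a sketch, a labeled dataset in this format can be assembled programmatically; the [Message]/[Sentiment] field names below are just one illustrative choice (a sentiment analysis layout), not a requirement:

```python
def build_dataset(examples, path):
    """Write a GPT-J fine-tuning dataset file: each example is
    terminated by the optional ### token and the <|endoftext|> marker."""
    with open(path, "w", encoding="utf-8") as f:
        for message, sentiment in examples:
            f.write(f"[Message]: {message}\n")
            f.write(f"[Sentiment]: {sentiment}\n")
            f.write("###\n<|endoftext|>\n")

examples = [
    ("Support has been terrible for 2 weeks...", "Negative"),
    ("I love your API, it is simple and so fast!", "Positive"),
]
build_dataset(examples, "dataset.txt")
```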

GPT-J Dataset for Short Story Generation

Let's say you want to teach GPT-J how to generate short stories about specific topics. You could build a dataset like the following (many more examples would be needed of course):

love: I went out yesterday with my girlfriend, we spent an amazing moment.
<|endoftext|>
adventure: We stayed one week in the jungle without anything to eat, it was tough...
<|endoftext|>
love: I fell in love with NLP Cloud. My life has changed since I met them!
<|endoftext|>

GPT-J Dataset for Sentiment Analysis

A fine-tuning dataset for sentiment analysis with GPT-J could look like this:

[Message]: Support has been terrible for 2 weeks...
[Sentiment]: Negative
###
<|endoftext|>
[Message]: I love your API, it is simple and so fast!
[Sentiment]: Positive
###
<|endoftext|>
[Message]: GPT-J has been released 2 months ago.
[Sentiment]: Neutral
###
<|endoftext|>

GPT-J Dataset for NER (Entity Extraction)

[Sentence]: My name is Julien and I work for NLP Cloud as a Chief Technical Officer.
[Position]: Chief Technical Officer
[Company]: NLP Cloud
###
<|endoftext|>
[Sentence]: Hi, I am a marketing assistant at Microsoft.
[Position]: marketing assistant
[Company]: Microsoft
###
<|endoftext|>
[Sentence]: John was the CEO of AquaFun until 2020.
[Position]: CEO
[Company]: AquaFun
###
<|endoftext|>

GPT-J Dataset for Text Classification

[Sentence]: I love skiing, rugby, and boxing. These are great for the body and the mind.
[Category]: Sport
###
<|endoftext|>
[Sentence]: In order to cook a pizza you need flour, tomatoes, ham, and cheese.
[Category]: Food
###
<|endoftext|>
[Sentence]: The Go programming language is a statically typed language, perfect for concurrent programming.
[Category]: Programming
###
<|endoftext|>

GPT-J Dataset for Question Answering

[Context]: NLP Cloud was founded in 2021 when the team realized there was no easy way to reliably leverage NLP in production.
[Question]: When was NLP Cloud founded?
[Answer]: 2021
###
<|endoftext|>
[Context]: NLP Cloud developed their API by mid-2020 and they added many pre-trained open-source models since then
[Question]: What did NLP Cloud develop?
[Answer]: API
###
<|endoftext|>
[Context]: The main challenge with GPT-J is memory consumption. Using a GPU plan is recommended.
[Question]: Which plan is recommended for GPT-J?
[Answer]: a GPU plan
###
<|endoftext|>

GPT-J Dataset for Code Generation

[Question]: Fetch the companies that have less than five people in it.
[Answer]: SELECT COMPANY, COUNT(EMPLOYEE_ID) FROM Employee GROUP BY COMPANY HAVING COUNT(EMPLOYEE_ID) < 5;
###
<|endoftext|>
[Question]: Show all companies along with the number of employees in each department
[Answer]: SELECT COMPANY, COUNT(COMPANY) FROM Employee GROUP BY COMPANY;
###
<|endoftext|>
[Question]: Show the last record of the Employee table
[Answer]: SELECT * FROM Employee ORDER BY LAST_NAME DESC LIMIT 1;
###
<|endoftext|>

GPT-J Dataset for Paraphrasing

[Original]: Algeria recalled its ambassador to Paris on Saturday and closed its airspace to French military planes a day later after the French president made comments about the northern Africa country.
[Paraphrase]: Last Saturday, the Algerian government recalled its ambassador and stopped accepting French military airplanes in its airspace. It happened one day after the French president made comments about Algeria.
###
<|endoftext|>
[Original]: President Macron was quoted as saying the former French colony was ruled by a "political-military system" with an official history that was based not on truth, but on hatred of France.
[Paraphrase]: Emmanuel Macron said that the former colony was lying and angry at France. He also said that the country was ruled by a "political-military system".
###
<|endoftext|>
[Original]: The diplomatic spat came days after France cut the number of visas it issues for citizens of Algeria and other North African countries.
[Paraphrase]: Diplomatic issues started appearing when France decided to stop granting visas to Algerian people and other North African people.
###
<|endoftext|>

GPT-J Dataset for Chatbot / Conversational AI

The trick here is that a discussion should be split into several examples (one per AI response):

This is a discussion between a [human] and a [robot]. The [robot] is very nice and empathetic.

[human]: Hello nice to meet you.
[robot]: Nice to meet you too.
###
<|endoftext|>
This is a discussion between a [human] and a [robot]. The [robot] is very nice and empathetic.

[human]: Hello nice to meet you.
[robot]: Nice to meet you too.
[human]: How is it going today?
[robot]: Not so bad, thank you! How about you?
###
<|endoftext|>
This is a discussion between a [human] and a [robot]. The [robot] is very nice and empathetic.

[human]: Hello nice to meet you.
[robot]: Nice to meet you too.
[human]: How is it going today?
[robot]: Not so bad, thank you! How about you?
[human]: I am ok, but I am a bit sad...
[robot]: Oh? Why that?
###
<|endoftext|>
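The splitting rule above (one training example per AI response, each containing the whole conversation so far) can be generated mechanically from a dialogue. A minimal sketch:

```python
PROMPT = ("This is a discussion between a [human] and a [robot]. "
          "The [robot] is very nice and empathetic.")

def conversation_to_examples(turns):
    """turns: list of (human_line, robot_line) pairs, in order.
    Returns one training example per robot response, each containing
    the full dialogue up to and including that response."""
    examples = []
    history = []
    for human, robot in turns:
        history.append(f"[human]: {human}")
        history.append(f"[robot]: {robot}")
        examples.append(PROMPT + "\n\n" + "\n".join(history) + "\n###\n<|endoftext|>")
    return examples

turns = [
    ("Hello nice to meet you.", "Nice to meet you too."),
    ("How is it going today?", "Not so bad, thank you! How about you?"),
]
print(len(conversation_to_examples(turns)))  # -> 2
```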

GPT-J Dataset for Product and Ad Descriptions

[Keywords]: shoes, women, $59
[Sentence]: Beautiful shoes for women at the price of $59.
###
<|endoftext|>
[Keywords]: trousers, men, $69
[Sentence]: Modern trousers for men, for $69 only.
###
<|endoftext|>
[Keywords]: gloves, winter, $19
[Sentence]: Amazingly hot gloves for cold winters, at $19.
###
<|endoftext|>

GPT-J Dataset for Knowledge Feeding

You might want to simply pass new knowledge to the model, without necessarily fine-tuning it for a specific task. For example, you can feed the model internal contracts, recent news, or technical knowledge specific to your industry. It's very simple: just give it pure text. For example, here we want GPT-J to become a Go programming expert, so we feed it Go-related knowledge.

Channels are the pipes that connect concurrent goroutines. You can send values into channels from one goroutine and receive those values into another goroutine.
<|endoftext|>
Send a value into a channel using the channel <- syntax. Here we send "ping" to the messages channel we made above, from a new goroutine.
<|endoftext|>
The <-channel syntax receives a value from the channel. Here we’ll receive the "ping" message we sent above and print it out.
<|endoftext|>

Text classification

You can fine-tune a model for text classification. We will automatically select the best model for you (BERT, DistilBERT, BART, RoBERTa...). If you want to fine-tune a specific model, please contact us before starting the fine-tuning to let us know.

Your comma-separated CSV dataset should contain 2 columns: the text and its class.

Each row is a new example you want to teach the model. For example, you could build a dataset like the following (many more examples would be needed of course):

text class
I love skiing, rugby, and boxing. These are great for the body and the mind. sport
In order to cook a pizza you need flour, tomatoes, ham, and cheese. food
The Go programming language is a statically typed language, perfect for concurrent programming. programming

Size of your dataset: we recommend at least 150 examples per class. For example, if you want to create 6 categories, you will need at least 900 examples.
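Because the dataset is comma-separated and the text itself often contains commas, it is safest to build the file with a CSV writer that quotes fields properly. A sketch using the classification examples above:

```python
import csv

# Two (text, class) examples from the table above; note the commas
# inside the text fields, which must be quoted in the CSV output.
rows = [
    ("I love skiing, rugby, and boxing. These are great for the body and the mind.", "sport"),
    ("In order to cook a pizza you need flour, tomatoes, ham, and cheese.", "food"),
]

with open("classification.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)  # quotes fields containing commas automatically
    writer.writerow(["text", "class"])
    writer.writerows(rows)
```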

Summarization

You can fine-tune a model for text summarization. We will automatically select the best model for you (BERT, DistilBERT, BART, RoBERTa...). If you want to fine-tune a specific model, please contact us before starting the fine-tuning to let us know.

Your comma-separated CSV dataset should contain 2 columns: the text and its summary.

Each row is a new example you want to teach the model. For example, you could build a dataset like the following (many more examples would be needed of course):

text summary
One month after the United States began what has become a troubled rollout of a national COVID vaccination campaign, the effort is finally gathering real steam. Close to a million doses -- over 951,000, to be more exact -- made their way into the arms of Americans in the past 24 hours, the U.S. Centers for Disease Control and Prevention reported Wednesday. That's the largest number of shots given in one day since the rollout began and a big jump from the previous day, when just under 340,000 doses were given, CBS News reported. That number is likely to jump quickly after the federal government on Tuesday gave states the OK to vaccinate anyone over 65 and said it would release all the doses of vaccine it has available for distribution. Meanwhile, a number of states have now opened mass vaccination sites in an effort to get larger numbers of people inoculated, CBS News reported. Over 951,000 doses were given in the past 24 hours. That's the largest number of shots given in one day since the rollout began. That number is likely to jump quickly after the federal government gave states the OK to vaccinate anyone over 65. A number of states have now opened mass vaccination sites.
The community is large enough that, instead of assuming everyone knows what is expected of them, our Code of Conduct serves as an agreement, setting explicit expectations for our behavior in both online and offline interactions. If we don’t live up to the agreement, people can point that out and we can correct our behavior. In this post we want to provide two updates: first, an update about how we approach enforcement of the Code of Conduct, and second, an update to the Gopher Values themselves. We want everyone to feel welcome here. What happens when members of our community make others feel unwelcome? Those behaviors can be reported to the Project Steward, who works with a committee from Google’s Open Source Programs Office to determine what to do about each report. Since the May 2018 revision to the Code of Conduct, community members have submitted more than 300 conduct reports, an average between one and two a week. A typical outcome is to meet with the person whose conduct was reported and help them understand how to take responsibility for and correct their actions moving forward. The Gopher community is large enough that people can point out bad behavior. Since the May 2018 revision to the Code of Conduct, community members have submitted more than 300 conduct reports. A typical outcome is to meet with the person whose conduct was reported and help them understand how to take responsibility for and correct their actions.

Size of your dataset: we recommend at least 800 examples.

Question Answering

You can fine-tune a model for question answering. We will automatically select the best model for you (BERT, DistilBERT, BART, RoBERTa...). If you want to fine-tune a specific model, please contact us before starting the fine-tuning to let us know.

Your comma-separated CSV dataset should contain 4 columns: the context, the question, the answer, and the index of the first character of the answer (answer_start_index).

Each row is a new example you want to teach the model. For example, you could build a dataset like the following (many more examples would be needed of course):

context question answer answer_start_index
French president Emmanuel Macron said the country was at war with an invisible, elusive enemy, and the measures were unprecedented, but circumstances demanded them. Who is the French president? Emmanuel Macron 17
John would really like to work for Google but he is not sure which position would suit him best... Where would John like to work? Google 35

Size of your dataset: we recommend at least 800 examples.
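The answer_start_index column is simply the 0-based character offset of the answer within the context, so it can be computed rather than counted by hand. A small helper, checked against the two rows above:

```python
def answer_start_index(context, answer):
    """Return the 0-based character offset of the answer in the context,
    or raise if the answer is not a literal substring of the context."""
    index = context.find(answer)
    if index == -1:
        raise ValueError("answer must appear verbatim in the context")
    return index

context = ("French president Emmanuel Macron said the country was at war with an "
           "invisible, elusive enemy, and the measures were unprecedented, but "
           "circumstances demanded them.")
print(answer_start_index(context, "Emmanuel Macron"))  # -> 17
```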

Sentiment Analysis

You can fine-tune a model for sentiment analysis. We will automatically select the best model for you (BERT, DistilBERT, BART, RoBERTa...). If you want to fine-tune a specific model, please contact us before starting the fine-tuning to let us know.

Your comma-separated CSV dataset should contain 2 columns: the text and its sentiment.

Each row is a new example you want to teach the model. For example, you could build a dataset like the following (many more examples would be needed of course):

text sentiment
I just love this movie! positive
I hate this guy... negative
NLP sometimes looks like magic! positive

Size of your dataset: we recommend at least 500 examples.

Sensitive Applications

No data sent to our API is stored on our servers, but sometimes this is not enough.

Here are 3 advanced solutions we propose for sensitive applications.

Specific Region

For legal reasons you might want to make sure that the data you send is processed in a specific region of the world. It can be a specific continent (e.g. North America, Europe, Asia,...), or a specific country (e.g. US, France, Germany, ...).

If that is the case, please contact us at [email protected].

Specific Cloud Provider

You might want to avoid a specific cloud provider, or proactively choose a cloud provider (e.g. AWS, GCP, OVH, Scaleway...).

If that is the case, please contact us at [email protected].

On-Premise

If you cannot afford to send any data to NLP Cloud for confidentiality reasons (e.g. medical applications, financial applications...) you can deploy our models on your own in-house infrastructure.

If that is the case, please contact us at [email protected].

Rate Limiting

Rate limiting depends on the plan you subscribed to.

For example, on the free plan, you can send up to 3 requests per minute. If you reach the limit, the API will return a 429 HTTP error.
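A client should treat a 429 as a signal to back off and retry rather than fail outright. A minimal sketch of the pattern, using a generic retry helper and a simulated endpoint rather than any specific HTTP library:

```python
import time

def call_with_retry(request_fn, max_retries=3, base_delay=1.0):
    """Call request_fn(); on a 429 rate-limit status, wait with
    exponential backoff and retry, up to max_retries times."""
    for attempt in range(max_retries + 1):
        status, body = request_fn()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))
    return status, body

# Simulated endpoint: rate-limited on the first two calls, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    return (429, None) if calls["n"] <= 2 else (200, {"ok": True})

print(call_with_retry(fake_request, base_delay=0.01))  # -> (200, {'ok': True})
```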

Errors

The NLP Cloud API uses the following HTTP error codes:

Code Meaning
400 Bad Request -- Your request is invalid.
401 Unauthorized -- Your API token is wrong.
402 Payment Required -- You are trying to access a resource that is only accessible after payment.
403 Forbidden -- You do not have the sufficient rights to access the resource. Please make sure you subscribed to the proper plan that grants you access to this resource.
404 Not Found -- The specified resource could not be found.
405 Method Not Allowed -- You tried to access a resource with an invalid method.
406 Not Acceptable -- You requested a format that isn't json.
413 Request Entity Too Large -- The piece of text that you are sending is too large. Please see the maximum sizes in the documentation.
422 Unprocessable Entity -- Your request is not properly formatted. Happens for example if your JSON payload is not correctly formatted, or if you omit the "Content-Type: application/json" header.
429 Too Many Requests -- You made too many requests in a short while, please slow down.
500 Internal Server Error -- Sorry, we had a problem with our server. Please try again later.
503 Service Unavailable -- Sorry, we are temporarily offline for maintenance. Please try again later.

If you have any problem, do not hesitate to contact us: [email protected].