NLU as a Service gRPC API

The Nuance NLU (Natural Language Understanding) service turns text into meaning, extracting the underlying meaning of what your users say or write in a form that an application can understand. Powered by two Nuance engines, Natural Language Engine (NLE) and Nuance Text Processing Engine (NTpE), NLU as a Service provides a semantic interpretation of the user’s input.

NLU works with a data pack for your language and locale, and a semantic language model customized for your environment. It can also use interpretation aids such as additional language models, dictionaries, and wordsets to improve understanding in specific environments or businesses.

The gRPC protocol provided by NLU allows a client application to request interpretation services in all the programming languages supported by gRPC.

gRPC is an open source RPC (remote procedure call) framework used to create services. It uses HTTP/2 for transport and protocol buffers to define the structure of messages and services. NLU supports Protocol Buffers version 3, also known as proto3.

Version: v1

This release supports two versions of the gRPC protocol: v1 and v1beta1.

You must use either the v1 or v1beta1 protocol, but not both. You may not combine v1 and v1beta1 syntax in one application.

Upgrading to v1

To upgrade to the v1 protocol from v1beta1, you need to regenerate your programming-language stub files from the new proto files, then make small adjustments to your client application.

First regenerate your client stubs from the new proto files, as described in gRPC setup.

  1. Download the gRPC proto files here. We recommend you make a new directory for the v1 files.
  2. Use gRPC tools to generate the client stubs from the proto file.
  3. Notice the new client stub files.

Then adjust your client application for the changes made to the protocol in v1. See gRPC API for details of each item.

NLU essentials

Natural language understanding (NLU) is one of the components of a rich conversational voice experience for your end users. NLUaaS uses two engines: Natural Language Engine (NLE) and Nuance Text Processing Engine (NTpE). These engines are hosted by Nuance and are accessible from a single gRPC interface.

NLE

NLE derives the meaning of text using technology based on artificial intelligence (AI) and machine learning.

Using a Nuance data pack and a semantic model created in Mix.nlu, NLE accepts input from the user. This input can be text written by the user or the result of speech transcribed into text by automatic speech recognition (ASR).

NLE interprets the input against the model and returns a semantic interpretation. Your client application can use this result to drive the next human-machine turn.

Intents and entities

NLE's interpretation result consists of one or more hypotheses of the meaning of the user input. Each hypothesis contains intents and entities, along with NLE's confidence in the hypothesis.

An intent is the overall meaning of the user input in a form an application can understand, for example PAY_BILL, PLACE_ORDER, BOOK_FLIGHT, or GET_INFO. See Interpretation results: Intents for some examples.

Entities (also known as concepts) define the meaning of individual words within the input. They represent categories of things that are important to the intent. For example, the PLACE_ORDER intent might have entities such as PRODUCT, FLAVOR, and QTY. Each entity contains a set of values, so the FLAVOR entity could have values such as Chocolate, Strawberry, Blueberry, Vanilla, and so on.

At runtime, NLE interprets the sentence I’d like to order a dozen blueberry pies as the PLACE_ORDER intent, with entities such as QTY (a dozen), FLAVOR (blueberry), and PRODUCT (pies). See Interpretation results for complete JSON examples.

List entities have specific values, while other types of entity have values defined in a grammar file and/or regular expression. See Interpretation results: Entities for examples.

Extending the model

For more flexibility, you can extend your semantic model with a wordset containing additional terms. External wordset files and compiled wordsets are not currently supported in Nuance-hosted NLUaaS. To use wordsets in this environment, use inline wordsets.

NTpE

NTpE is Nuance's normalization and tokenization (or lexical analysis) engine. It applies transformation rules and formats output for display or for further processing by NLE.

Prerequisites from Mix

Before developing your gRPC application, you need a Mix project that provides an NLU model as well as authorization credentials.

  1. Create a Mix project and model: see Mix.nlu workflow to:

    • Create a Mix project.

    • Create, train, and build a model in the project.

    • Create and deploy an application configuration for the project.

  2. Learn how to reference the semantic model in your application. You may only reference models created in your Mix project. See Mix.dashboard URN, and the example after this list.

  3. Generate a "secret" and client ID of your Mix project: see Mix.dashboard Obtain authentication for services. Later you will use these credentials to request an authorization token to run your application.

  4. Learn the URL to call the NLU service: see Mix.dashboard Accessing a runtime service.
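
For reference, here is a minimal sketch of building a model URN in Python, using placeholder values taken from this guide's samples (the context tag bakery and the language eng-USA); see Mix.dashboard URN for the authoritative format.

# Build a model URN in the form used throughout this guide.
# The context tag and language below are placeholders from the samples.
context_tag = "bakery"    # context tag of your deployed application configuration
language = "eng-USA"      # language of your semantic model
model_urn = f"urn:nuance-mix:tag:model/{context_tag}/mix.nlu?=language={language}"
print(model_urn)
# urn:nuance-mix:tag:model/bakery/mix.nlu?=language=eng-USA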

gRPC setup

Download proto files

interpretation-common.proto
multi-intent-interpretation.proto
result.proto
runtime.proto
single-intent-interpretation.proto

Install gRPC for programming language

$ python3 -m venv env
$ source env/bin/activate
$ pip install grpcio
$ pip install grpcio-tools
$ mkdir -p google/api
$ curl -o google/api/annotations.proto \
    https://raw.githubusercontent.com/googleapis/googleapis/master/google/api/annotations.proto
$ curl -o google/api/http.proto \
    https://raw.githubusercontent.com/googleapis/googleapis/master/google/api/http.proto
$ go get google.golang.org/grpc
$ go get google.golang.org/grpc/credentials
$ go get google.golang.org/grpc/metadata
$ go get github.com/akamensky/argparse
Java version 11.0.2 or greater is required.

Generate client stubs from proto files

$ python -m grpc_tools.protoc --proto_path=./ --python_out=./ \
    --grpc_python_out=./ nuance/nlu/v1/runtime.proto
$ python -m grpc_tools.protoc --proto_path=./ --python_out=./ \
    nuance/nlu/v1/result.proto
$ python -m grpc_tools.protoc --proto_path=./ --python_out=./ \
    nuance/nlu/v1/interpretation-common.proto
$ python -m grpc_tools.protoc --proto_path=./ --python_out=./ \
    nuance/nlu/v1/single-intent-interpretation.proto
$ python -m grpc_tools.protoc --proto_path=./ --python_out=./ \
    nuance/nlu/v1/multi-intent-interpretation.proto

$ ls -1 nuance/nlu/v1/*.py
interpretation-common_pb2.py
multi-intent-interpretation_pb2.py
result_pb2.py  
runtime_pb2_grpc.py  
runtime_pb2.py  
single-intent-interpretation_pb2.py

$ ls /src/nuance/nlu/v1/*.go 
interpretation-common_pb2.go
multi-intent-interpretation_pb2.go
result_pb2.go
runtime_pb2_grpc.go  
runtime_pb2.go
single-intent-interpretation_pb2.go
$ protoc -I ./src/nuance/nlu/v1 \
    ./src/nuance/nlu/v1/interpretation-common.proto --java_out=./src/main/java
$ protoc -I ./src/nuance/nlu/v1 \
    ./src/nuance/nlu/v1/multi-intent-interpretation.proto --java_out=./src/main/java
$ protoc -I ./src/nuance/nlu/v1 \
    ./src/nuance/nlu/v1/result.proto --java_out=./src/main/java
$ protoc -I ./src/nuance/nlu/v1 \
    ./src/nuance/nlu/v1/runtime.proto \
    --java_out=./src/main/java --grpc-java_out=./src/main/java
$ protoc -I ./src/nuance/nlu/v1 \
    ./src/nuance/nlu/v1/single-intent-interpretation.proto \
    --java_out=./src/main/java

$ ls -1 src/main/java/com/nuance/grpc/nlu/v1/
AudioRange.java
AudioRangeOrBuilder.java
EntityNode.java
EntityNodeOrBuilder.java
EnumInterpretationInputLoggingMode.java
EnumInterpretationResultType.java
EnumOperator.java
EnumOrigin.java
EnumResourceType.java
IntentNode.java
... 

The basic steps in using the NLU gRPC API are:

  1. Download the five gRPC proto files here. These files specify the generic functions and classes for requesting and receiving interpretation from an NLU engine.

    • runtime.proto
    • result.proto
    • interpretation-common.proto
    • single-intent-interpretation.proto
    • multi-intent-interpretation.proto

  2. Install gRPC for the programming language of your choice, including C++, Java, Python, Go, Ruby, C#, Node.js, and others. See gRPC Documentation for a complete list and instructions on using gRPC with each language.

  3. Generate client stub files in your programming language from the proto files using gRPC protoc. Depending on your programming language, the stubs may consist of one file or multiple files per proto file.

    These stub files contain the methods and fields from the proto files as implemented in your programming language. You will consult the stubs together with the proto files as you write your client application.

  4. Write your client application, referencing the functions or classes in the client stub files. See Client app development for details.

Client app development

The gRPC protocol for NLU lets you create a client application for requesting and receiving semantic interpretation from input text. This section describes how to implement the basic functionality of NLU in the context of a Python, Go, and Java application. For the complete applications, see Sample applications.

The essential tasks are shown in the following high-level sequence flow:

Sequence flow

Step 1: Generate token

The run-nlu-client.sh script requests the token then runs the application

#!/bin/bash

CLIENT_ID="appID%3ANMDPTRIAL_your_name_nuance_com_20190919T190532565840"
SECRET="5JEAu0YSAjV97oV3BWy2PRofy6V8FGmywiUbc0UfkGE"
export TOKEN="`curl -s -u "$CLIENT_ID:$SECRET" "https://auth.crt.nuance.com/oauth2/token" \
-d 'grant_type=client_credentials' -d 'scope=tts nlu asr' \
| python -c 'import sys, json; print(json.load(sys.stdin)["access_token"])'`"

./nlu_client.py --serverUrl nlu.api.nuance.com:443 --secure --token $TOKEN \
--modelUrn "urn:nuance-mix:tag:model/bakery/mix.nlu?=language=eng-USA" \
--textInput "$1"

The Go application sets the client ID and secret in a config file, config.json

{
    "client_id": "appID:<Provide Your Mix App Id>",
    "client_secret": "<Provide Your Mix Client Secret>",
    "token_url": "https://auth.crt.nuance.com/oauth2/token"
}

The Java application sets the client ID and secret in a config file, config.json

{
    "client_id": "appID:<Provide Your Mix App Id>",
    "client_secret": "<Provide Your Mix Client Secret>",
    "token_url": "https://auth.crt.nuance.com/oauth2/token"
}

Nuance Mix uses the OAuth 2.0 protocol for authentication. Your client application must provide an access token to access the NLU runtime service. The token expires after a short period of time, so it must be regenerated frequently.

Your client application uses the client ID and secret from the Mix Dashboard (see Prerequisites from Mix) to generate an authentication token from the Mix Authentication Service, available at the following URL:

auth.crt.nuance.com/oauth2/token

You can generate the token in one of several ways, for example in a script that requests it with curl and passes it to the app on the command line (as in the Python sample), or inside the application itself using the credentials from a config file (as in the Go and Java samples).
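
As a minimal sketch of the first approach in Python, assuming the requests package is installed and reusing the placeholder credentials from run-nlu-client.sh:

import requests

# Placeholder credentials from the Mix Dashboard (see Prerequisites from Mix).
CLIENT_ID = "appID%3ANMDPTRIAL_your_name_nuance_com_20190919T190532565840"
SECRET = "5JEAu0YSAjV97oV3BWy2PRofy6V8FGmywiUbc0UfkGE"
TOKEN_URL = "https://auth.crt.nuance.com/oauth2/token"

# Request an access token with the client_credentials grant,
# just as run-nlu-client.sh does with curl.
resp = requests.post(
    TOKEN_URL,
    auth=(CLIENT_ID, SECRET),
    data={"grant_type": "client_credentials", "scope": "nlu"},
)
resp.raise_for_status()
token = resp.json()["access_token"]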

Step 2: Authenticate and connect

The Python app uses the token as it creates the secure connection to the NLU service

def create_channel(args):
    call_credentials = None
    channel = None

    if args.token:
        log.debug("Adding CallCredentials with token %s" % args.token)
        call_credentials = grpc.access_token_call_credentials(args.token)

    if args.secure:
        log.debug("Creating secure gRPC channel")
        root_certificates = None
        certificate_chain = None
        private_key = None
        if args.rootCerts:
            log.debug("Adding root certs")
            root_certificates = open(args.rootCerts, 'rb').read()
        if args.certChain:
            log.debug("Adding cert chain")
            certificate_chain = open(args.certChain, 'rb').read()
        if args.privateKey:
            log.debug("Adding private key")
            private_key = open(args.privateKey, 'rb').read()

        channel_credentials = grpc.ssl_channel_credentials(root_certificates=root_certificates, private_key=private_key, certificate_chain=certificate_chain)
        if call_credentials is not None:
            channel_credentials = grpc.composite_channel_credentials(channel_credentials, call_credentials)
        channel = grpc.secure_channel(args.serverUrl, credentials=channel_credentials)
    else:
        log.debug("Creating insecure gRPC channel")
        channel = grpc.insecure_channel(args.serverUrl)

    return channel

The Go app collects the service URL (server) and authentication credentials (configFile) in nlu_client.go

func main() {

    // collect arguments
    parser := argparse.NewParser("nlu_client", "Use Nuance Mix NLU to add Intelligence to your app")
    server := parser.String("s", "server", &argparse.Options{
        Default: "nlu.api.nuance.com:443",
        Help:    "NLU server URL host:port",
    })
    modelUrn := parser.String("m", "modelUrn", &argparse.Options{
        Default: "",
        Help:    "NLU model URN with the following schema: urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=<language> (e.g. urn:nuance-mix:tag:model/A2_C16/mix.nlu?=language=eng-USA)",
    })
    textInput := parser.String("i", "textInput", &argparse.Options{
        Default: "",
        Help:    "Text to perform interpretation on",
    })
    configFile := parser.String("c", "configFile", &argparse.Options{
        Default: "config.json",
        Help:    "config file containing client credentials (client_id and client_secret)",
    })
. . .
    // Import the user's Mix credentials
    config, err := NewConfig(*configFile)
    if err != nil {
        log.Fatalf("Error importing user credentials: %v", err)
        os.Exit(1)
    }
    // Authenticate the user's credentials
    auth := NewAuthenticator(*config)
    token, err := auth.Authenticate()
    if err != nil {
        log.Fatalf("Error authenticating to Mix: %v", err)
        os.Exit(1)
    }

Then calls authenticate.go to generate and validate the token using the values from config.json

/* file: authenticate.go */
package main

import (
    "encoding/json"
    "errors"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "net/url"
    "os"
    "strings"
    "time"
)

const (
    TokenCache  = "token.cache"
    TokenMaxAge = 59 // minutes
    GrantType   = "client_credentials"
    Scope       = "nlu"
)

type Token struct {
    AccessToken string `json:"access_token"`
    ExpiresIn   int    `json:"expires_in"`
    Scope       string `json:"scope"`
    TokenType   string `json:"token_type"`
}

func (t *Token) String(pretty bool) string {
    var str []byte
    var err error

    if pretty {
        str, err = json.MarshalIndent(t, "", "  ")
    } else {
        str, err = json.Marshal(t)
    }

    if err != nil {
        log.Printf("Error marshalling token to json: %s", err)
    }

    return string(str)
}

type Authenticator struct {
    config Config
    token  *Token
}

func (a *Authenticator) generateToken() (*Token, error) {
    a.token = nil

    body := strings.NewReader(fmt.Sprintf("grant_type=%s&scope=%s", GrantType, Scope))
    req, err := http.NewRequest("POST", a.config.TokenURL, body)
    if err != nil {
        return nil, err
    }

    req.SetBasicAuth(url.QueryEscape(a.config.ClientID), url.QueryEscape(a.config.ClientSecret))
    req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        return nil, errors.New(resp.Status)
    }

    bodyBytes, _ := ioutil.ReadAll(resp.Body)
    t := &Token{}
    err = json.Unmarshal(bodyBytes, t)
    if err != nil {
        return nil, err
    }

    a.token = t
    return a.token, nil
}

func (a *Authenticator) isTokenValid() bool {

    // Is token cached?
    info, err := os.Stat(TokenCache)
    if err != nil {
        return false
    }

    // Can token be read from file?
    source, err := ioutil.ReadFile(TokenCache)
    if err != nil {
        return false
    }

    // Are contents of token valid?
    t := &Token{}
    err = json.Unmarshal(source, t)
    if err != nil || len(t.AccessToken) == 0 {
        return false
    }

    // Has token expired?
    lapsed := time.Since(info.ModTime())
    if lapsed > (TokenMaxAge * time.Minute) {
        return false
    }

    // All tests passed
    a.token = t
    return true
}

func (a *Authenticator) cacheToken() {
    outputJSON, err := json.MarshalIndent(a.token, "", "  ")
    if err != nil {
        log.Printf("Failed to cache token: %v", err)
        return
    }

    err = ioutil.WriteFile(TokenCache, outputJSON, 0644)
    if err != nil {
        log.Printf("Failed to cache token: %v", err)
    }

    return
}

func (a *Authenticator) Authenticate() (*Token, error) {
    if a.isTokenValid() {
        return a.token, nil
    }

    if _, err := a.generateToken(); err != nil {
        return nil, err
    }

    a.cacheToken()
    return a.token, nil
}

func NewAuthenticator(config Config) *Authenticator {
    a := &Authenticator{
        config: config,
    }
    return a
}

The Java app collects the service URL (SERVER) and authentication credentials (CONFIG_FILE) in NluClient.java

public class NluClient {

    public class Defaults {
        static final String SERVER = "nlu.api.nuance.com:443";
        static final String CONFIG_FILE = "config.json";
    }

Then calls Authenticator.java to generate and validate the token using the values from config.json

/* file: Authenticator.java */
package xaas.sample.nlu.java.client;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Base64;

import javax.net.ssl.HttpsURLConnection;

import com.google.gson.Gson;
import com.google.gson.annotations.SerializedName;
import com.google.gson.GsonBuilder;

import xaas.sample.nlu.java.client.Config.Configuration;

public class Authenticator {

    static final String GRANT_TYPE = "client_credentials";
    static final String SCOPE = "nlu";
    static final String TOKEN_CACHE = "token.cache";
    static final long TOKEN_MAX_AGE = 3540000; //in ms == 59 minutes;

    Configuration config;
    Token token;

    public Authenticator(Configuration config) {
        this.config = config;
    }

    private Token generateToken() throws Exception {
        token = null;

        String auth = URLEncoder.encode(config.getClientID(), "UTF-8") + ":" + config.getClientSecret();
        String authentication = Base64.getEncoder().encodeToString(auth.getBytes());

        String content = String.format("grant_type=%s&scope=%s", GRANT_TYPE, SCOPE);

        URL url = new URL(config.getTokenURL());

        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);

        connection.setRequestProperty("Authorization", "Basic " + authentication);
        connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        connection.setRequestProperty("Accept", "application/json");

        PrintStream os = new PrintStream(connection.getOutputStream());
        os.print(content);
        os.close();

        Gson gson = new Gson();

        BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));

        // Parse the configuration parameters...
        token = gson.fromJson(reader, Token.class);
        return token;
    }

    private boolean isTokenValid() {
        File f = new File(TOKEN_CACHE);
        if(!f.exists() || f.isDirectory() || !f.canRead()) {
            return false;
        }

        Gson gson = new Gson();
        try {
            BufferedReader reader = new BufferedReader(new FileReader(TOKEN_CACHE));
            Token t = gson.fromJson(reader, Token.class);
            if (t.accessToken == null || t.accessToken.isEmpty()) {
                return false;
            }

            if ((System.currentTimeMillis() - f.lastModified()) > TOKEN_MAX_AGE) {
                return false;
            }

            token = t;
        } catch (Exception e) {
            return false;
        }
        return true;
    }

    private void cacheToken() {
        // Create a new Gson object
        Gson gson = new Gson();

        try {
            String jsonString = gson.toJson(token);
            FileWriter fileWriter = new FileWriter(TOKEN_CACHE);
            fileWriter.write(jsonString);
            fileWriter.close();
        } catch (Exception e) {
            // Ignore...
        }
    }

    public Token Authenticate() throws Exception {
        if (isTokenValid()) {
            return token;
        }

        if (generateToken() != null) {
            cacheToken();
        }

        return token;
    }

    @Override
    public String toString() {
        return super.toString();
    }
}

You create a secure gRPC channel and authenticate your application to the NLU service by providing the URL of the hosted NLU service and an access token.

In all these examples, the URL of the NLU service is passed to the application as an argument: --serverUrl in the Python sample, and -s/--server in the Go and Java samples.

There are several ways to generate and use the token that authenticates your application to the NLU service. The code samples show two methods: the Python sample receives a token generated by a script and passed on the command line, while the Go and Java samples generate (and cache) the token themselves from the credentials in config.json.
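
For the common case (a secure channel carrying only the access token, with no client certificates), the Python connection logic condenses to the following sketch, adapted from the create_channel function shown above.

import grpc

# Secure channel with the access token attached as call credentials.
# 'token' is the access token generated in Step 1.
call_creds = grpc.access_token_call_credentials(token)
channel_creds = grpc.composite_channel_credentials(
    grpc.ssl_channel_credentials(), call_creds)
channel = grpc.secure_channel("nlu.api.nuance.com:443", channel_creds)
stub = RuntimeStub(channel)   # generated client stub, imported in Step 3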

Step 3: Import functions

Import functions from stubs

from nuance.nlu.v1.runtime_pb2 import *
from nuance.nlu.v1.runtime_pb2_grpc import *
from nuance.nlu.v1.result_pb2 import *
import (
    ...
    pb "./v1"
)    
import io.grpc.*;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
import io.grpc.ForwardingClientCallListener.SimpleForwardingClientCallListener;
import io.grpc.Metadata;
import com.nuance.rpc.nlu.v1.*;

In your client application, import all functions from the NLU client stubs that you generated in gRPC setup.

Do not edit these stub files.

Step 4: Set parameters

Set interpretation parameters

# Single intent, plain text logging
params = InterpretationParameters(
    interpretation_result_type=EnumInterpretationResultType.SINGLE_INTENT,
    interpretation_input_logging_mode=EnumInterpretationInputLoggingMode.PLAINTEXT)
# Reference the model 
model = ResourceReference(
    type=EnumResourceType.SEMANTIC_MODEL,
    uri=args.modelUrn)
# Describe the text to perform interpretation on
input = InterpretationInput(
    text=args.textInput)
func Interpret(ctx context.Context, client pb.NluClient, modelUrn string, textInput string) {

    // Single intent, plain text logging
    params := &pb.InterpretationParameters{
        InterpretationResultType:       pb.EnumInterpretationResultType_SINGLE_INTENT,
        InterpretationInputLoggingMode: pb.EnumInterpretationInputLoggingMode_PLAINTEXT,
    }
    // Reference the model via the app config
    model := &pb.ResourceReference{
        Type: pb.EnumResourceType_SEMANTIC_MODEL,
        Uri:  modelUrn,
    }

    // Describe the text to perform interpretation on
    input := &pb.InterpretationInput{
        InputUnion: &pb.InterpretationInput_Text{Text: textInput},
    }
    public void interpret(String modelUrn, String textInput) {

        InterpretRequest req = InterpretRequest.newBuilder()
            .setParameters(InterpretationParameters.newBuilder()
                .setInterpretationResultType(EnumInterpretationResultType.SINGLE_INTENT)
                .setInterpretationInputLoggingMode(EnumInterpretationInputLoggingMode.PLAINTEXT)
            )
            .setModel(ResourceReference.newBuilder()
                .setType(EnumResourceType.SEMANTIC_MODEL)
                .setUri(modelUrn)
            )
            .setInput(InterpretationInput.newBuilder().setText(textInput))
            .build();

The application includes InterpretationParameters that define the type of interpretation you want. Consult your generated stubs for the precise parameter names (see Field names in proto and stub files). The samples set two of these parameters: interpretation_result_type, which requests a single-intent or multi-intent result, and interpretation_input_logging_mode, which controls how the input is logged (the samples use plain text).

For details about interpretation parameters, see InterpretRequest.
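
In the Python stubs, these pieces are then combined into a single request message; the following lines mirror the construct_interpret_request function in the sample app later in this guide.

# Combine the parameters, model reference, and input into one request.
interpret_req = InterpretRequest(
    parameters=params,
    model=model,
    input=input)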

Step 5: Request interpretation

Request interpretation

def main():
    args = parse_args()
    log_level = logging.DEBUG
    logging.basicConfig(
        format='%(lineno)d %(asctime)s %(levelname)-5s: %(message)s', level=log_level)
    with create_channel(args) as channel:
        stub = RuntimeStub(channel)
        response = stub.Interpret(construct_interpret_request(args))
        print(MessageToJson(response))
    print("Done")
    public void interpret(String modelUrn, String textInput) {

        InterpretRequest req = InterpretRequest.newBuilder()
            .setParameters(InterpretationParameters.newBuilder()
                .setInterpretationResultType(EnumInterpretationResultType.SINGLE_INTENT)
                .setInterpretationInputLoggingMode(EnumInterpretationInputLoggingMode.PLAINTEXT)
            )
            .setModel(ResourceReference.newBuilder()
                .setType(EnumResourceType.SEMANTIC_MODEL)
                .setUri(modelUrn)
            )
            .setInput(InterpretationInput.newBuilder().setText(textInput))
            .build();
func Interpret(ctx context.Context, client pb.NluClient, modelUrn string, textInput string) *pb.InterpretResult {

    // Single intent, plain text logging
    params := &pb.InterpretationParameters{
        InterpretationResultType:       pb.EnumInterpretationResultType_SINGLE_INTENT,
        InterpretationInputLoggingMode: pb.EnumInterpretationInputLoggingMode_PLAINTEXT,
    }
    // Reference the model via the app config
    model := &pb.ResourceReference{
        Type: pb.EnumResourceType_SEMANTIC_MODEL,
        Uri:  modelUrn,
    }

    // Describe the text to perform interpretation on
    input := &pb.InterpretationInput{
        InputUnion: &pb.InterpretationInput_Text{Text: textInput},
    }

    req := &pb.InterpretRequest{
        Parameters: params,
        Model:      model,
        Input:      input,
    }

    resp, err := client.Interpret(ctx, req)
    if err != nil {
        log.Printf("Interpretation failed: %s", err)
        return nil
    }

    if resp.Status.Code != 200 {
        log.Printf("Interpretation failed: %s", resp.Status.Message)
        return resp.Result
    }

    return resp.Result
}

To request an interpretation, this client application specifies the following: the interpretation parameters (a single-intent result with plain text logging), a resource reference to the semantic model identified by its URN, and the input text to interpret.

Step 6: Call client stub

Call main client stub

with create_channel(args) as channel:
    stub = RuntimeStub(channel)
    response = stub.Interpret(construct_interpret_request(args))
    print(MessageToJson(response))
print("Done")
    client := pb.NewRuntimeClient(conn)
    ctx, cancel := CreateChannelContext(&token.AccessToken)
    defer cancel()
    Interpret(ctx, client, *modelUrn, *textInput)
    public NluClient(RuntimeGrpc.RuntimeBlockingStub conn) {
        this.conn = conn;
    }

The app must include the location of the NLU instance, the authentication token, and the text to interpret. See Authenticate and connect.

Using this information, the app calls a client stub function or class. This stub is based on the main service name and is defined in the generated client files: in Python it is named RuntimeStub, in Go it is RuntimeClient, and in Java it is RuntimeBlockingStub.

Step 7: Process results

Receive results

def process_result(response):
    print(MessageToJson(response))
        InterpretResponse resp = conn.interpret(req);
        if (resp == null) {
            System.out.println("Interpretation failed. No response returned.");
            return;
        }

        if (resp.getStatus().getCode() != 200) {
            System.out.println(String.format("Interpretation failed: %s", resp.getStatus().getMessage()));
            return;
        }

        System.out.println(String.format("Interpretation: %s", resp.getResult().toString()));
    }
func ProcessResult(result *pb.InterpretResult) {
    out, _ := json.MarshalIndent(*result, "", "  ")
    log.Printf("Interpretatation: %s", string(out))
}

Finally the app returns the results received from the NLU engine. These applications format the interpretation result as a JSON object, similar to the Try panel in Mix.nlu.

For details about the structure of the result, see InterpretResult.
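
As a minimal sketch, assuming a single-intent request and the field names visible in the JSON results shown later in this guide, a Python client could extract the top hypothesis like this:

# 'resp' is the InterpretResponse returned by stub.Interpret(...).
# Field names follow the proto messages (result, interpretations,
# single_intent_interpretation); this assumes a SINGLE_INTENT request.
def top_intent(resp):
    result = resp.result
    if not result.interpretations:
        return None, 0.0
    best = result.interpretations[0].single_intent_interpretation
    return best.intent, best.confidence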

Sample applications

This section contains sample NLU client applications.

Sample Python app

This basic Python app, nlu_client.py, requests and receives interpretation

import argparse
import sys
import logging
import os
import grpc
import wave
from time import sleep

from google.protobuf.json_format import MessageToJson

from nuance.nlu.v1.runtime_pb2 import *
from nuance.nlu.v1.runtime_pb2_grpc import *
from nuance.nlu.v1.result_pb2 import *

log = logging.getLogger(__name__)

def parse_args():
    parser = argparse.ArgumentParser(
        prog="nlu_client.py",
        usage="%(prog)s [-options]",
        add_help=False,
        formatter_class=lambda prog: argparse.HelpFormatter(
            prog, max_help_position=45, width=100)
    )

    options = parser.add_argument_group("options")
    options.add_argument("-h", "--help", action="help",
                         help="Show this help message and exit")
    options.add_argument("--nmaid", nargs="?", help=argparse.SUPPRESS)
    options.add_argument("--token", nargs="?", help=argparse.SUPPRESS)
    options.add_argument("-s", "--serverUrl", metavar="url", nargs="?",
                         help="NLU server URL, default=localhost:8080", default='localhost:8080')
    options.add_argument('--modelUrn', nargs="?", 
                         help="NLU Model URN")
    options.add_argument("--secure", action="store_true",
                         help="Connect to the server using a secure gRPC channel.")
    options.add_argument("--rootCerts",  metavar="file", nargs="?",
                         help="Root certificates when using secure channel.")
    options.add_argument("--privateKey",  metavar="file", nargs="?",
                         help="Certificate private key when using secure channel.")
    options.add_argument("--certChain",  metavar="file", nargs="?",
                         help="Certificate chain when using secure channel.")
    options.add_argument("--textInput", metavar="file", nargs="?",
                         help="Text to perform interpretation on")
    return parser.parse_args()

def create_channel(args):
    call_credentials = None
    channel = None

    if args.token:
        log.debug("Adding CallCredentials with token %s" % args.token)
        call_credentials = grpc.access_token_call_credentials(args.token)

    if args.secure:
        log.debug("Creating secure gRPC channel")
        root_certificates = None
        certificate_chain = None
        private_key = None
        if args.rootCerts:
            log.debug("Adding root certs")
            root_certificates = open(args.rootCerts, 'rb').read()
        if args.certChain:
            log.debug("Adding cert chain")
            certificate_chain = open(args.certChain, 'rb').read()
        if args.privateKey:
            log.debug("Adding private key")
            private_key = open(args.privateKey, 'rb').read()

        channel_credentials = grpc.ssl_channel_credentials(root_certificates=root_certificates, private_key=private_key, certificate_chain=certificate_chain)
        if call_credentials is not None:
            channel_credentials = grpc.composite_channel_credentials(channel_credentials, call_credentials)
        channel = grpc.secure_channel(args.serverUrl, credentials=channel_credentials)
    else:
        log.debug("Creating insecure gRPC channel")
        channel = grpc.insecure_channel(args.serverUrl)

    return channel

def construct_interpret_request(args):
    # Single intent, plain text logging
    params = InterpretationParameters(
        interpretation_result_type=EnumInterpretationResultType.SINGLE_INTENT,
        interpretation_input_logging_mode=EnumInterpretationInputLoggingMode.PLAINTEXT)
    # Reference the model via the app config
    model = ResourceReference(
        type=EnumResourceType.SEMANTIC_MODEL,
        uri=args.modelUrn)
    # Describe the text to perform interpretation on
    input = InterpretationInput(
        text=args.textInput)
    # Build the request
    interpret_req = InterpretRequest(
        parameters=params,
        model=model,
        input=input)
    return interpret_req

def main():
    args = parse_args()
    log_level = logging.DEBUG
    logging.basicConfig(
        format='%(lineno)d %(asctime)s %(levelname)-5s: %(message)s', level=log_level)
    with create_channel(args) as channel:
        stub = RuntimeStub(channel)
        response = stub.Interpret(construct_interpret_request(args))
        print(MessageToJson(response))
    print("Done")

if __name__ == '__main__':
  main()

This is the Python app used in the examples. It performs these tasks: parses the command-line arguments, creates a secure gRPC channel (attaching the access token), builds an InterpretRequest from the interpretation parameters, model URN, and input text, calls the Runtime service's Interpret method, and prints the response as JSON.

Running the Python app

This is the script, run-nlu-client.sh, that generates a token and calls the application.

#!/bin/bash
 
CLIENT_ID="appID%3ANMDPTRIAL_your_name_nuance_com_20190919T190532565840"
SECRET="5JEAu0YSAjV97oV3BWy2PRofy6V8FGmywiUbc0UfkGE"
export TOKEN="`curl -s -u "$CLIENT_ID:$SECRET" "https://auth.crt.nuance.com/oauth2/token" \
-d 'grant_type=client_credentials' -d 'scope=tts nlu asr' \
| python -c 'import sys, json; print(json.load(sys.stdin)["access_token"])'`"
 
./nlu_client.py --serverUrl nlu.api.nuance.com:443 --secure --token $TOKEN \
--modelUrn "urn:nuance-mix:tag:model/bakery/mix.nlu?=language=eng-USA" \
--textInput "$1"

Run this script, passing it the text to interpret. The NLU engine returns the results.

$ ./run-nlu-client.sh "I'd like to order a strawberry latte"
{
  "status": {
    "code": 200,
    "message": "OK"
  },
  "result": {
    "literal": "I'd like to order a strawberry latte",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "PLACE_ORDER",
          "confidence": 0.9961925148963928,
          "origin": "STATISTICAL",
          "entities": {
            "PRODUCT": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 31,
                    "endIndex": 36
                  },
                  "confidence": 0.9177429676055908,
                  "origin": "STATISTICAL",
                  "stringValue": "latte"
                }
              ]
            },
            "FLAVOR": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 20,
                    "endIndex": 30
                  },
                  "confidence": 0.9367110133171082,
                  "origin": "STATISTICAL",
                  "stringValue": "strawberry"
                }
              ]
            }
          }
        }
      }
    ]
  }
}

Sample Go app

This Go app, nlu_client.go, requests and receives interpretation

package main

import (
    "context"
    "crypto/tls"
    "encoding/json"
    "fmt"
    "log"
    "os"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
    "google.golang.org/grpc/metadata"

    pb "./v1"

    "github.com/akamensky/argparse"
)

func CreateChannelContext(token *string) (context.Context, context.CancelFunc) {
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)

    // https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md
    ctx = metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer "+*token)

    return ctx, cancel
}

func Interpret(ctx context.Context, client pb.NluClient, modelUrn string, textInput string) {

    // Single intent, plain text logging
    params := &pb.InterpretationParameters{
        InterpretationResultType:       pb.EnumInterpretationResultType_SINGLE_INTENT,
        InterpretationInputLoggingMode: pb.EnumInterpretationInputLoggingMode_PLAINTEXT,
    }
    // Reference the model via the app config
    model := &pb.ResourceReference{
        Type: pb.EnumResourceType_SEMANTIC_MODEL,
        Uri:  modelUrn,
    }

    // Describe the text to perform interpretation on
    input := &pb.InterpretationInput{
        InputUnion: &pb.InterpretationInput_Text{Text: textInput},
    }

    req := &pb.InterpretRequest{
        Parameters: params,
        Model:      model,
        Input:      input,
    }

    resp, err := client.Interpret(ctx, req)
    if err != nil {
        log.Printf("Interpretation failed: %s", err)
        return
    }

    if resp.Status.Code != 200 {
        log.Printf("Interpretation failed: %s", resp.Status.Message)
        return
    }

    out, _ := json.MarshalIndent(resp.Result, "", "  ")
    log.Printf("Interpretatation: %s", string(out))
}

func main() {

    // collect arguments
    parser := argparse.NewParser("nlu_client", "Use Nuance Mix NLU to add Intelligence to your app")
    server := parser.String("s", "server", &argparse.Options{
        Default: "nlu.api.nuance.com:443",
        Help:    "NLU server URL host:port",
    })
    modelUrn := parser.String("m", "modelUrn", &argparse.Options{
        Default: "",
        Help:    "NLU Model URN with the following schema: urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=<language> (e.g. urn:nuance-mix:tag:model/A2_C16/mix.nlu?=language=eng-USA)",
    })
    textInput := parser.String("i", "textInput", &argparse.Options{
        Default: "",
        Help:    "Text to perform interpretation on",
    })
    configFile := parser.String("c", "configFile", &argparse.Options{
        Default: "config.json",
        Help:    "config file containing client credentials (client_id and client_secret)",
    })
    err := parser.Parse(os.Args)
    if err != nil {
        fmt.Print(parser.Usage(err))
        os.Exit(1)
    }

    // Import the user's Mix credentials
    config, err := NewConfig(*configFile)
    if err != nil {
        log.Fatalf("Error importing user credentials: %v", err)
        os.Exit(1)
    }

    // Authenticate the user's credentials
    auth := NewAuthenticator(*config)
    token, err := auth.Authenticate()
    if err != nil {
        log.Fatalf("Error authenticating to Mix: %v", err)
        os.Exit(1)
    }

    // Connect to the NLU service
    creds := credentials.NewTLS(&tls.Config{})

    conn, err := grpc.Dial(*server, grpc.WithTransportCredentials(creds))
    if err != nil {
        log.Fatalf("fail to dial: %v", err)
    }
    defer conn.Close()

    client := pb.NewRuntimeClient(conn)
    ctx, cancel := CreateChannelContext(&token.AccessToken)
    defer cancel()
    Interpret(ctx, client, *modelUrn, *textInput)
}

This Go application consists of these files: nlu_client.go (the main application, shown here), authenticate.go (token generation and caching, shown in Step 2: Authenticate and connect), a small config reader that provides the NewConfig function (not shown), and the config.json credentials file.

Running the Go app

To run the Go app, first add the credentials from Mix.nlu to the config.json file. For example:

{
    "client_id": "appID:NMDPTRIAL_jane_doe_example_com_20191114T133132096157",
    "client_secret": "db00Ap7bbdfW5EJLYsX0UamuHbMKFAv4nf_61ngRBys",
    "token_url": "https://auth.crt.nuance.com/oauth2/token"
}

Then run the application. The following runs the app with the help option, showing the values you can pass to the application.

$ go run ./src -h
usage: nlu_client [-h|--help] [-s|--server "<value>"] [-m|--modelUrn "<value>"]
                [-i|--textInput "<value>"] [-c|--configFile "<value>"]
 
                Use Nuance Mix NLU to add Intelligence to your app
 
Arguments:
-h  --help        Print help information
-s  --server      NLU server URL host:port. Default:
                    nlu.api.nuance.com:443
-m  --modelUrn    NLU Model URN with the following schema:
                    urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=<language> (e.g.
                    urn:nuance-mix:tag:model/A2_C16/mix.nlu?=language=eng-USA). Default: 
-i  --textInput   Text to perform interpretation on. Default: 
-c  --configFile  config file containing client credentials (client_id and
                    client_secret). Default: config.json

And this runs the app using the default server and config file, returning the interpretation in JSON format.

$ go run ./src \
   -m urn:nuance-mix:tag:model/A52_C1/mix.nlu?=language=eng-USA \
   -i "turn the lights on"
2019/10/31 21:18:09 Interpretation: {
"literal": "turn the lights on",
"interpretations": [
    {
    "InterpretationUnion": {
        "SingleIntentInterpretation": {
        "intent": "TURN_ON",
        "confidence": 0.9999998,
        "origin": 2
        }
      }
    },
    {
    "InterpretationUnion": {
        "SingleIntentInterpretation": {
        "intent": "TURN_OFF",
        "confidence": 1.192e-7,
        "origin": 2
        }
      }
    }
  ]
}

Sample Java app

This Java app, NluClient.java, requests and receives interpretation

/*
 * This Java source file was generated by the Gradle 'init' task.
 */
package xaas.sample.nlu.java.client;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.io.StringWriter;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Base64;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
import javax.net.ssl.HttpsURLConnection;

/* Processing JSON and reading local files. */
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import com.google.gson.stream.JsonReader;
import com.google.gson.TypeAdapterFactory;
import com.googlecode.protobuf.format.JsonFormat;

import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.OptionGroup;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;
import org.apache.commons.cli.MissingOptionException;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

/* Generated from the NLU gRPC proto files. */
import io.grpc.*;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
import io.grpc.ForwardingClientCallListener.SimpleForwardingClientCallListener;
import io.grpc.Metadata;

import com.nuance.rpc.nlu.v1.*;

public class NluClient {

    public class Defaults {
        static final String SERVER = "nlu.api.nuance.com:443";
        static final String CONFIG_FILE = "config.json";
    }

    private final RuntimeGrpc.RuntimeBlockingStub conn;

    public NluClient(RuntimeGrpc.RuntimeBlockingStub conn) {
        this.conn = conn;
    }

    public static ManagedChannel createChannel(String server) {
        ManagedChannel chan = ManagedChannelBuilder.forTarget(server)
        .useTransportSecurity()
        .build();

        return chan;
    }

    public static RuntimeGrpc.RuntimeBlockingStub createConnection(ManagedChannel chan, String accessToken) {
        RuntimeGrpc.RuntimeBlockingStub stub = RuntimeGrpc.newBlockingStub(chan).withCallCredentials(new CallCredentials() {
            @Override
            public void applyRequestMetadata(RequestInfo r, Executor e, MetadataApplier m) {
                e.execute(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            Metadata headers = new Metadata();
                            Metadata.Key<String> clientIdKey =
                                    Metadata.Key.of("Authorization", Metadata.ASCII_STRING_MARSHALLER);
                            headers.put(clientIdKey, accessToken);
                            m.apply(headers);
                        } catch (Throwable ex) {
                            //log the exception
                            ex.printStackTrace(System.out);
                        }
                    }
                });
            }

            @Override
            public void thisUsesUnstableApi() {
            }
        });
        return stub;
    }

    public static void shutdown(ManagedChannel chan) throws InterruptedException {
        chan.shutdown().awaitTermination(2, TimeUnit.SECONDS);
    }

    public void interpret(String modelUrn, String textInput) {

        InterpretRequest req = InterpretRequest.newBuilder()
            .setParameters(InterpretationParameters.newBuilder()
                .setInterpretationResultType(EnumInterpretationResultType.SINGLE_INTENT)
                .setInterpretationInputLoggingMode(EnumInterpretationInputLoggingMode.PLAINTEXT)
            )
            .setModel(ResourceReference.newBuilder()
                .setType(EnumResourceType.SEMANTIC_MODEL)
                .setUri(modelUrn)
            )
            .setInput(InterpretationInput.newBuilder().setText(textInput))
            .build();

        InterpretResponse resp = conn.interpret(req);
        if (resp == null) {
            System.out.println("Interpretation failed. No response returned.");
            return;
        }

        if (resp.getStatus().getCode() != 200) {
            System.out.println(String.format("Interpretation failed: %s", resp.getStatus().getMessage()));
            return;
        }

        System.out.println(String.format("Interpretation: %s", resp.getResult().toString()));
    }

    /**
     * Generate cmd line options.
     *
     * @return the options
     */
    public static Options generateCmdLineOptions() {
        Options options = new Options();

        /** Help option */
        options.addOption( Option.builder("h")
                                .argName("help")
                                .required(false)
                                .longOpt("help")
                                .desc("Print this help information")
                                .build() );

        options.addOption( Option.builder("s")
                                .argName("server")
                                .hasArg()
                                .required(false)
                                .longOpt("server")
                                .desc("NLU server URL host:port. Default: " + Defaults.SERVER)
                                .build() );

        options.addOption( Option.builder("m")
                                .argName("modelUrn")
                                .hasArg()
                                .required(false)
                                .longOpt("modelUrn")
                                .desc("NLU Model URN with the following schema:\n" +
                                    "urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=<language> (e.g. \n" +
                                    "urn:nuance-mix:tag:model/A2_C16/mix.nlu?=language=eng-USA). Default: ")
                                .build() );

        options.addOption( Option.builder("i")
                                .argName("textInput")
                                .hasArg()
                                .required(false)
                                .longOpt("textInput")
                                .desc("Text to perform interpretation on. Default: ")
                                .build() );

        options.addOption( Option.builder("c")
                                .argName("configFile")
                                .hasArg()
                                .required(false)
                                .longOpt("configFile")
                                .desc("config file containing client credentials (client_id and\n" +
                                    "client_secret). Default: " + Defaults.CONFIG_FILE)
                                .build() );
        return options;
    }

    /**
     * Parses the command line.
     *
     * @param args the args
     * @param options the options
     * @return the command line
     * @throws ParseException the parse exception
     */
    public static CommandLine parseCommandLine(String[] args, Options options) throws ParseException {
        CommandLineParser parser = new DefaultParser();     
         return parser.parse(options, args);
    }

    /**
     * Prints the usage.
     *
     * @param options the options
     */
    public static void printUsage(Options options) {
        HelpFormatter formatter = new HelpFormatter();
        formatter.setOptionComparator(null);
        formatter.setWidth(800);

       String path = NluClient.class.getProtectionDomain().getCodeSource().getLocation().getFile();
       File f = new File(path);
       String jar = f.getName();

       formatter.printHelp("java -jar " + jar + " [-h|--help] [-s|--server \"<value>\"] [-m|--modelUrn \"<value>\"]\n" +
                         "                                " + 
                         "[-i|--textInput \"<value>\"] [-c|--configFile \"<value>\"]\n\n" +
                         "                                " + 
                         "Use Nuance Mix NLU to add Intelligence to your app\n\nArguments:\n\n"
                         , options);
   }

    public static void main(String[] args) {
        try {
            // Initialize available options and then parse the command line
            Options options = NluClient.generateCmdLineOptions();
            CommandLine cmd = NluClient.parseCommandLine(args, options);

            // If --help was specified, display usage details and exit, even if other options were provided
            if( cmd.hasOption("help") ) {
                printUsage(options);
                System.exit(0);
            }

            // Parse command-line options
            String configFile = cmd.getOptionValue("configFile", Defaults.CONFIG_FILE);
            String server = cmd.getOptionValue("server", Defaults.SERVER);
            String textInput = cmd.getOptionValue("textInput");
            if (textInput == null || textInput.isEmpty()) {
                throw new MissingOptionException("Missing Required option: textInput");
            }
            String modelUrn = cmd.getOptionValue("modelUrn");
            if (modelUrn == null || modelUrn.isEmpty()) {
                throw new MissingOptionException("Missing Required option: modelUrn");
            }

            // Load credentials from config file
            Config c = new Config(configFile);

            // Authenticate and create a token
            Authenticator a = new Authenticator(c.getConfiguration());
            Token t = a.Authenticate();

            // Create a connection
            ManagedChannel chan = createChannel(server);
            RuntimeGrpc.RuntimeBlockingStub conn = createConnection(chan, String.format("%s %s", t.getTokenType(), t.getAccessToken()));

            // Run the interpretation request
            NluClient client = new NluClient(conn);
            client.interpret(modelUrn, textInput);
            shutdown(chan);
        }
        catch (Exception e) {
            e.printStackTrace(System.out);
        }
    }
}

This Java application consists of these files: NluClient.java (the main application, shown here), Authenticator.java (token generation and caching, shown in Step 2: Authenticate and connect), a Config class that reads the config.json credentials file, and the config.json file itself.

Running the Java app

To run the Java app, first add the credentials from Mix.nlu to the config.json file. For example:

{
    "client_id": "appID:NMDPTRIAL_jane_doe_example_com_20191114T133132096157",
    "client_secret": "db00Ap7bbdfW5EJLYsX0UamuHbMKFAv4nf_61ngRBys",
    "token_url": "https://auth.crt.nuance.com/oauth2/token"
}

Then run the application. The following runs the app with the help option, showing the values you can pass to the application.

$ java -jar build/libs/nlu_client.jar -h
usage: java -jar nlu_client.jar [-h|--help] [-s|--server "<value>"] [-m|--modelUrn "<value>"]
                                [-i|--textInput "<value>"] [-c|--configFile "<value>"]
 
                                Use Nuance Mix NLU to add Intelligence to your app
 
Arguments:
 -h,--help                    Print this help information
 -s,--server <server>         NLU server URL host:port. Default: nlu.api.nuance.com:443
 -m,--modelUrn <modelUrn>     NLU Model URN with the following schema:
                              urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=<language>(e.g.
                              urn:nuance-mix:tag:model/A2_C16/mix.nlu?=language=eng-USA). Default:
 -i,--textInput <textInput>   Text to perform interpretation on. Default:
 -c,--configFile <configFile> Config file containing client credentials (client_id and
                              client_secret). Default: config.json

And this runs the app using the default server and config file, returning the interpretation in JSON format.

$ java -jar build/libs/nlu_client.jar \
   -m urn:nuance-mix:tag:model/ABC1231/mix.nlu?=language=eng-USA \
   -i "i'd like an americano"
Interpretation: literal: "i\'d like an americano"
interpretations {
  single_intent_interpretation {
    intent: "OrderCoffee"
    confidence: 1.0
    origin: GRAMMAR
    entities {
      key: "COFFEE_TYPE"
      value {
        entities {
          text_range {
            start_index: 12
            end_index: 21
          }
          confidence: 1.0
          origin: GRAMMAR
          string_value: "americano"
        }
      }
    }
  }
}

Reference topics

This section provides more information about topics in the gRPC API.

Status messages and codes

gRPC includes error and exception handling facilities. For details, see Error Handling. Use these gRPC features to confirm the success or failure of a request.

NLU also returns a status message confirming the outcome of an InterpretRequest call. The status field in InterpretResponse contains an HTTP status code, a brief description of this status, and, possibly, a longer description.

An HTTP status code of 200 means that NLU successfully interpreted the input. Values in the 400 range indicate an error in the request that your client app sent. Values in the 500 range indicate an internal error with NLUaaS.

Code      Indicates
200       Success.
400       Error: Your client app sent a malformed or unsupported request.
401, 403  Error: Your client has not authenticated properly or permission was denied.
404       Error: A resource, such as a model or word list, does not exist or could not be accessed.
415       Error: Unsupported resource type.
500-511   Error: Internal error.
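
As a minimal sketch of checking both layers in Python (gRPC transport errors and the NLU status field), assuming the stub and request objects from the earlier steps:

import grpc

# 'stub' and 'request' are assumed to come from the earlier steps.
try:
    response = stub.Interpret(request)
except grpc.RpcError as e:
    # Transport- or service-level gRPC error (expired token, unreachable host, and so on).
    print("gRPC error: %s %s" % (e.code(), e.details()))
else:
    if response.status.code != 200:
        print("Interpretation failed: %d %s" % (response.status.code, response.status.message))
    else:
        print("Interpretation succeeded")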

Wordsets

A wordset is a collection of words and short phrases that extends the vocabulary by providing additional values for entities in a model. For example, a wordset might provide the names in a user’s contact list or local place names. Like models, wordsets are declared in the request's resources (see InterpretRequest - resources).

To use a wordset in an interpretation, the following conditions must be met: each entity named in the wordset must be defined as a dynamic entity in the semantic model, and the wordset must be declared in the request along with the model it extends.

The wordset is defined in JSON format as one or more objects, with each object being an array value. Each array is named after a dynamic entity defined within a model to which words can be added at runtime.

For example, you might have an entity, CONTACTS, containing personal names, or CITY, with place names used by the application. The wordset adds to the existing terms in the entity, but applies only to the current session; the terms in the wordset are not added permanently to the entity. All entities must be defined in the model, which is loaded and activated along with the wordset.

This wordset adds terms to the CITY entity

{
  "CITY" : [
    {"canonical" : "La Jolla", "literal" : "La Jolla" },
    {"canonical" : "Beaulieu", "literal" : "Beaulieu" },
    {"canonical" : "Worcester", "literal" : "Worcester" },
    {"canonical" : "Abington Pigotts", "literal" : "Abington Pigotts" },
    {"canonical" : "Steeple Morden", "literal" : "Steeple Morden" }
  ]
}

The wordset includes additional values for one or more entities. The syntax is:

{
   "entity" : [
      { "canonical": "value",
      "literal": "written form",
      "spoken": ["spoken form 1", "spoken form n"]
      },
      { "literal": "written form",
      "canonical": "value",
      "spoken":"spoken form 1", "spoken form n"] },
   ...
   ],
   "entity" : [ ... ]
}

Syntax
Element Type Description
entity String A dynamic entity defined in a model, containing a set of values. The name is case-sensitive. Consult the model for entity names.
canonical String (Optional) The value of the entity returned by NLU interpretation.
literal String The written form of the value as returned by ASR recognition.
spoken Array (Optional) One or more spoken forms of the value, used by ASR. Ignored for NLU.
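
As a sketch, the wordset JSON is serialized to a string and passed in the inline_wordset field of an InterpretationResource in the request. This Python example assumes that params, model, and input are already built as shown in the gRPC API examples.

import json

# Sketch: pass the CITY wordset shown above as an inline resource.
# InterpretationResource and InterpretRequest come from the generated stub files;
# params, model, and input are assumed to be built as in the other examples.
city_wordset = {
    "CITY": [
        {"canonical": "La Jolla", "literal": "La Jolla"},
        {"canonical": "Beaulieu", "literal": "Beaulieu"}
    ]
}

wordset_resource = InterpretationResource(
    inline_wordset=json.dumps(city_wordset))

interpret_req = InterpretRequest(
    parameters=params,
    model=model,
    resources=[wordset_resource],
    input=input)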

Interpretation results: Intents

This input exactly matches a training sentence for the ASK_JOB intent (note "origin": GRAMMAR). An alternative intent is proposed but with much lower confidence.

"result": {
  "literal": "Do you have any openings for a pastry chef ?",
  "interpretations": [
    {
      "singleIntentInterpretation": {
        "intent": "ASK_JOB",
        "confidence": 1.0,
        "origin": "GRAMMAR"
      }
    },
    {
      "singleIntentInterpretation": {
        "intent": "PLACE_ORDER",
        "confidence": 0.00010213560017291456,
        "origin": "STATISTICAL"
      }
    }
  ]
}

This input is similar to the PLACE_ORDER training sentences (note "origin": STATISTICAL).

"result": {
  "literal": "I'd like to make an order please",
  "interpretations": [
    {
      "singleIntentInterpretation": {
        "intent": "PLACE_ORDER",
        "confidence": 0.9779196381568909,
        "origin": "STATISTICAL"
      }
    }
  ]
}

This result returns the intent PLACE_ORDER along with several entities. See Interpretation results: Entities for more results with entities.

  "result": {
    "literal": "I'd like to order a blueberry pie",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "PLACE_ORDER",
          "confidence": 0.9913266897201538,
          "origin": "STATISTICAL",
          "entities": {
            "FLAVOR": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 20,
                    "endIndex": 29
                  },
                  "confidence": 0.8997141718864441,
                  "origin": "STATISTICAL",
                  "stringValue": "blueberry"
                }
              ]
            },
            "PRODUCT": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 30,
                    "endIndex": 33
                  },
                  "confidence": 0.8770073652267456,
                  "origin": "STATISTICAL",
                  "stringValue": "pie"
                }
              ]
            }
          }
        }
      }
    ]
  }

NLU returns two alternative intents for this input: GET_INFO or PLACE_ORDER, both with medium confidence.

"result": {
  "literal": "Can I see a price list and place an order",
  "interpretations": [
    {
      "singleIntentInterpretation": {
        "intent": "GET_INFO",
        "confidence": 0.563047468662262,
        "origin": "STATISTICAL"
      }
    },
    {
      "singleIntentInterpretation": {
        "intent": "PLACE_ORDER",
        "confidence": 0.40654945373535156,
        "origin": "STATISTICAL"
      }
    }
  ]
}

Multi-intent interpretation currently returns information similar to single-intent interpretation.

"result": {
  "literal": "Can I see a price list and place an order",
  "interpretations": [
    {
      "multiIntentIntepretation": {
        "root": {
          "intent": {
            "name": "GET_INFO",
            "textRange": {
              "endIndex": 41
            },
            "confidence": 0.563047468662262,
            "origin": "STATISTICAL"
          }
        }
      }
    },
    {
      "multiIntentIntepretation": {
        "root": {
          "intent": {
            "name": "PLACE_ORDER",
            "textRange": {
              "endIndex": 41
            },
            "confidence": 0.40654945373535156,
            "origin": "STATISTICAL"
          }
        }
      }
    }
  ]
}

The results returned by NLU include one or more candidate intents that identify the underlying meaning of the user's input. (They can also include entities and values, described in Interpretation results: Entities.) You may request either single-intent or multi-intent interpretation with InterpretationParameters - interpretation_result_type: SINGLE_INTENT or MULTI_INTENT.

Single-intent interpretation means that NLU returns one intent for the user's input: the intent that best describes the user's underlying meaning. NLU may return several candidate intents, but they are listed as alternatives rather than as complementary intents.

Multi-intent interpretation requires that your semantic model support this type of interpretation. Currently you cannot create these models in Mix, so the feature is not fully supported, but you may still request multi-intent interpretation without error. Like single-intent results, multi-intent results contain one best candidate for the user's input, optionally with alternatives.

True multi-intent results show all the intents contained within the user's input. These results will be available in an upcoming release.
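
As a rough sketch, a Python client can read the best single-intent hypothesis by taking the first interpretation in the result; in the examples above, candidates appear in descending order of confidence.

# Sketch: read the best single-intent hypothesis from an InterpretResponse.
# Assumes single-intent interpretation was requested.
def best_intent(response):
    interpretations = response.result.interpretations
    if not interpretations:
        return None, 0.0
    top = interpretations[0].single_intent_interpretation
    return top.intent, top.confidence

# For "I'd like to make an order please" this would return
# ("PLACE_ORDER", 0.977...) based on the example above.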

Interpretation results: Entities

List: The FLAVOR and PRODUCT entities identify what the user wants to order.

  "result": {
    "literal": "I'd like to order a butterscotch cake",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "PLACE_ORDER",
          "confidence": 0.9917341470718384,
          "origin": "STATISTICAL",
          "entities": {
            "FLAVOR": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 20,
                    "endIndex": 32
                  },
                  "confidence": 0.9559149146080017,
                  "origin": "STATISTICAL",
                  "stringValue": "caramel"
                }
              ]
            },
            "PRODUCT": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 33,
                    "endIndex": 37
                  },
                  "confidence": 0.9386003613471985,
                  "origin": "STATISTICAL",
                  "stringValue": "cake"
                }
              ]
            }
          }
        }
      }
    ]
  }

Freeform: The MESSAGE entity matches anything prefixed with "Call someone" or "Ask someone" or "Send this message to someone." An additional list entity, NAMES, captures the "someone."

  "result": {
    "literal": "Ask Jenny When should we arrive",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "ASK_RANDOM",
          "confidence": 1.0,
          "origin": "GRAMMAR",
          "entities": {
            "NAMES": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 4,
                    "endIndex": 9
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "stringValue": "Jenny"
                }
              ]
            },
            "MESSAGE": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 10,
                    "endIndex": 31
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "stringValue": "when should we arrive"
                }
              ]
            }
          }
        }
      }
    ]
  }

  "result": {
    "literal": "Send this message to Chris Can you pick me up from the 5:40 train",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "ASK_RANDOM",
          "confidence": 1.0,
          "origin": "GRAMMAR",
          "entities": {
            "MESSAGE": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 27,
                    "endIndex": 65
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "stringValue": "can you pick me up from the 5:40 train"
                }
              ]
            },
            "NAMES": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 21,
                    "endIndex": 26
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "stringValue": "Chris"
                }
              ]
            }
          }
        }
      }
    ]
  }

Relationship: DATE is an isA entity that wraps nuance_CALENDARX.

  "result": {
    "literal": "I want to pay my Visa bill on February 28",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "PAY_BILL",
          "confidence": 1.0,
          "origin": "GRAMMAR",
          "entities": {
            "DATE": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 30,
                    "endIndex": 41
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "entities": {
                    "nuance_CALENDARX": {
                      "entities": [
                        {
                          "textRange": {
                            "startIndex": 30,
                            "endIndex": 41
                          },
                          "confidence": 1.0,
                          "origin": "GRAMMAR",
                          "structValue": {
                            "nuance_CALENDAR": {
                              "nuance_DATE": {
                                "nuance_DATE_ABS": {
                                  "nuance_MONTH": 2.0,
                                  "nuance_DAY": 28.0
                                }
                              }
                            }
                          }
                        }
                      ]
                    }
                  }
                }
              ]
            },
            "BILL_TYPE": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 17,
                    "endIndex": 21
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "stringValue": "Visa"
                }
              ]
            }
          }
        }
      }
    ]
  }

nuance_CALENDARX is a hasA entity containing several date and time entities.

  "result": {
    "literal": "I want to pay my AMEX bill on February 28",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "PAY_BILL",
          "confidence": 1.0,
          "origin": "GRAMMAR",
          "entities": {
            "nuance_CALENDARX": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 30,
                    "endIndex": 41
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "structValue": {
                    "nuance_CALENDAR": {
                      "nuance_DATE": {
                        "nuance_DATE_ABS": {
                          "nuance_MONTH": 2.0,
                          "nuance_DAY": 28.0
                        }
                      }
                    }
                  }
                }
              ]
            },
            "BILL_TYPE": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 17,
                    "endIndex": 21
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "stringValue": "American Express"
                }
              ]
            }
          }
        }
      }
    ]
  }

nuance_AMOUNT is another hasA entity containing multiple entities.

  "result": {
    "literal": "i'd like to pay six hundred and twenty five dollars on my hydro bill",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "PAY_BILL",
          "confidence": 1.0,
          "origin": "GRAMMAR",
          "entities": {
            "BILL_TYPE": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 58,
                    "endIndex": 63
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "stringValue": "Hydro"
                }
              ]
            },
            "nuance_AMOUNT": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 16,
                    "endIndex": 51
                  },
                  "confidence": 1.0,
                  "origin": "GRAMMAR",
                  "structValue": {
                    "nuance_UNIT": "USD",
                    "nuance_NUMBER": 625.0
                  }
                }
              ]
            }
          }
        }
      }
    ]
  }

NLU results may also include individual words and phrases in the user's input, mapped to entities and values. The results differ depending on the type of entity that is interpreted. Entities are created and annotated in Mix.nlu or Mix.dialog, where the entity types include List, Freeform, and Relationship (isA and/or hasA).

List entity

A list entity is an entity with named values. For example, FLAVOR is a list entity that might contain values such as caramel, blueberry, strawberry, chocolate, and so on. Or BILL_TYPE might contain values such as Visa, American Express, Telephone, and Hydro.

Each value has one or more user literals, or ways that users might express this canonical value. For example, the FLAVOR entity matches sentences containing flavor words such as blueberry, strawberry, or butterscotch.

And the BILL_TYPE entity matches sentences that mention bill types such as Visa, American Express, Telephone, or Hydro.

Freeform entity

A freeform entity has values that are only vaguely defined. Freeform entities can match lengthy user input, but they do not give precise information about the contents of the input. For example, MESSAGE is a freeform entity that occurs after keywords in the user input such as:

"Call Fred to say..."
"Ask Jenny..."
"Send a message to Chris."

Relationship entity

Relationship entities are isA and/or hasA entities. An isA entity wraps another entity (for example, DATE wraps nuance_CALENDARX), while a hasA entity contains other entities (for example, nuance_CALENDARX contains several date and time entities). See the Relationship examples above.

Defaults

The proto files provide the following default values for InterpretRequest sent to NLU. Mandatory fields are indicated in the table.

Fields in InterpretRequest Default value
InterpretRequest  
    parameters  
        interpretation_result_type SINGLE_INTENT
        interpretation_input_logging_mode PLAINTEXT
        post_processing_script_parameters Blank
        max_interpretations 0: Use the configured setting in the NLU instance.
    model  
        type UNDEFINED_RESOURCE_TYPE: NLU will use the resource provided by the content server.
        uri Mandatory
        request_timeout_ms 0: Use the configured setting in the NLU instance.
        headers Blank
    resources  
        inline_wordset Blank
    client_data Blank
    user_id Blank
    input Mandatory
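
A minimal request can therefore set only the two mandatory fields and rely on the defaults for everything else. A Python sketch follows; the model URN is a placeholder.

# Sketch: a minimal InterpretRequest that relies on the defaults above.
# Only the mandatory model URI and input are set; the resource type,
# result type, logging mode, and timeouts all use their default values.
interpret_req = InterpretRequest(
    model=ResourceReference(
        uri="urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=eng-USA"),
    input=InterpretationInput(text="i'd like an americano"))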

gRPC API

NLU as a Service provides protocol buffer (.proto) files that define the NLU service for gRPC. These files contain the building blocks of your NLU applications.

Once you have transformed the proto files into functions and classes in your programming language using gRPC tools, you can call these functions from your client application to start interpreting plain text or results from ASR as a Service.

See Client app development for samples using Python, Go, and Java. For other languages, consult the gRPC and Protocol Buffers documentation.

Field names in proto and stub files

In this section, the names of the fields are shown as they appear in the proto files. To see how they are generated in your programming language, consult your generated files. For example:

Proto file Python Go Java
start_index start_index StartIndex startIndex or getStartIndex
audio_range audio_range AudioRange audioRange or getAudioRange

For details, see the Protocol Buffers documentation.

Proto file structure

The proto files define an RPC service with an Interpret method that takes an InterpretRequest and returns an InterpretResponse. Details about each component are referenced by name within the proto file.

This is the structure of InterpretRequest:

Proto files: request

And this shows InterpretResponse:

Proto files: response

Runtime

Runtime interpretation service. Use the Interpret method to request an interpretation.

Name Request Type Response Type Description
Interpret InterpretRequest InterpretResponse Requests an interpretation of the input and returns the result.
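
As a sketch in Python, a client creates a channel, instantiates the generated Runtime stub, and calls Interpret. The module names and server address below are assumptions, and call credentials are omitted; see Client app development for complete, authenticated samples.

import grpc

# Assumed module names from gRPC code generation; your generated file
# names may differ. Authorization is omitted for brevity.
from nlu_pb2 import (InterpretRequest, InterpretationInput,
                     ResourceReference, EnumResourceType)
from nlu_pb2_grpc import RuntimeStub

channel = grpc.secure_channel("<nlu-server>:443", grpc.ssl_channel_credentials())
stub = RuntimeStub(channel)

interpret_req = InterpretRequest(
    model=ResourceReference(
        type=EnumResourceType.SEMANTIC_MODEL,
        uri="urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=eng-USA"),
    input=InterpretationInput(text="i'd like an americano"))

response = stub.Interpret(interpret_req)
print(response.result)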

InterpretRequest

InterpretRequest example

interpret_req = InterpretRequest(
    parameters=params,
    model=model,
    input=input)
req := &pb.InterpretRequest{
    Parameters: params,
    Model:      model,
    Input:      input,
}
InterpretRequest req = InterpretRequest.newBuilder()

The input to interpret, with parameters, model, extra resources, and client tags to customize the interpretation. Included in Runtime service.

Field Type Description
parameters InterpretationParameters Optional parameters for the interpretation.
model ResourceReference Required semantic model to perform the interpretation.
resources InterpretationResource Repeated. Optional resources to customize the interpretation.
client_data string,string Optional key-value pairs to log.
user_id string Optional. Identifies a particular user within an application.
input InterpretationInput Required input to interpret.

This message includes:

InterpretRequest
    parameters InterpretationParameters
        interpretation_result_type (EnumInterpretationResultType)
        interpretation_input_logging_mode (EnumInterpretationInputLoggingMode)
        post_processing_script_parameters
        max_interpretations
    model (ResourceReference)
    resources (InterpretationResource)
    client_data
    user_id
    input (InterpretationInput)

InterpretationParameters

InterpretationParameters example

params = InterpretationParameters(
    interpretation_result_type=EnumInterpretationResultType.SINGLE_INTENT,
    interpretation_input_logging_mode=EnumInterpretationInputLoggingMode.PLAINTEXT)
params := &pb.InterpretationParameters{
    InterpretationResultType:       pb.EnumInterpretationResultType_SINGLE_INTENT,
    InterpretationInputLoggingMode: pb.EnumInterpretationInputLoggingMode_PLAINTEXT,
}
InterpretRequest req = InterpretRequest.newBuilder()
    .setParameters(InterpretationParameters.newBuilder()
        .setInterpretationResultType(EnumInterpretationResultType.SINGLE_INTENT)
        .setInterpretationInputLoggingMode(EnumInterpretationInputLoggingMode.PLAINTEXT)
    )

Optional parameters controlling the interpretation. Included in InterpretRequest.

Field Type Description
interpretation_result_type EnumInterpretationResultType Format of interpretation result. Default is SINGLE_INTENT.
interpretation_input_logging_mode EnumInterpretationInputLoggingMode Format for input in the diagnostic logs. Default is PLAINTEXT.
post_processing_script_parameters string,string Parameters to pass to custom post-processing ECMA scripts in the model.
max_interpretations uint32 Maximum interpretations for the result. Default is 0 for the NLU server's configured setting.

This message includes:

InterpretRequest
  InterpretationParameters
    interpretation_result_type (EnumInterpretationResultType)
    interpretation_input_logging_mode (EnumInterpretationInputLoggingMode)
    post_processing_script_parameters
    max_interpretations
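
For example, a sketch that caps the number of candidate interpretations and suppresses the input literal in the diagnostic logs (Python, using the generated classes):

# Sketch: request at most three candidate interpretations and suppress
# the input literal in the diagnostic logs.
params = InterpretationParameters(
    interpretation_result_type=EnumInterpretationResultType.SINGLE_INTENT,
    interpretation_input_logging_mode=EnumInterpretationInputLoggingMode.SUPPRESSED,
    max_interpretations=3)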

EnumInterpretationResultType

Format of the interpretation result. Included in InterpretationParameters.

Name Number Description
UNKNOWN 0 Default. Same as SINGLE_INTENT.
SINGLE_INTENT 1 Always return a single-intent interpretation.
MULTI_INTENT 2 Always return multi-intent interpretation.

EnumInterpretationInputLoggingMode

Format for input in the diagnostic logs. Included in InterpretationParameters.

Name Number Description
PLAINTEXT 0 Default. Log the literal text of the input.
SUPPRESSED 9 Input is replaced with "value suppressed."

ResourceReference

ResourceReference example

options.add_argument('--modelUrn', nargs="?", 
                     help="NLU Model URN")
. . .
# Reference the model via the app config
model = ResourceReference(
    type=EnumResourceType.SEMANTIC_MODEL,
    uri=args.modelUrn)
modelUrn := parser.String("m", "modelUrn", &argparse.Options{
    Default: "",
    Help:    "NLU Model URN with the following schema: urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=<language> (e.g. urn:nuance-mix:tag:model/A2_C16/mix.nlu?=language=eng-USA)",
    })
. . .
model := &pb.ResourceReference{
    Type: pb.EnumResourceType_SEMANTIC_MODEL,
    Uri:  modelUrn,
}
options.addOption( Option.builder("m")
                         .argName("modelUrn")
                         .hasArg()
                         .required(false)
                         .longOpt("modelUrn")
                         .desc("NLU Model URN with the following schema:\n" +
                         "urn:nuance-mix:tag:model/<context_tag>/mix.nlu?=language=<language> (e.g. \n" +
                         "urn:nuance-mix:tag:model/A2_C16/mix.nlu?=language=eng-USA). Default: ")
                         .build() );
. . .
InterpretRequest req = InterpretRequest.newBuilder()
    .setModel(ResourceReference.newBuilder()
        .setType(EnumResourceType.SEMANTIC_MODEL)
        .setUri(modelUrn)
    )

Parameters to fetch an external resource. Included in InterpretRequest and InterpretationResource.

Field Type Description
type EnumResourceType Resource type.
uri string Location or name of the resource.
request_timeout_ms uint32 Time, in ms, to wait for a response from the hosting server. Default is 0 for the NLU server's configured setting.
headers string,string Optional map of headers to transmit to the server hosting the resource. May include max_age, max_stale, max_fresh, cookies.

This message includes:

InterpretRequest | InterpretationResource
  ResourceReference
    type (EnumResourceType)
    uri
    request_timeout_ms
    headers

EnumResourceType

Specifies a semantic model or wordset. Included in ResourceReference. Use the default, UNDEFINED_RESOURCE_TYPE, to determine the type from the content-type header returned by the resource's server.

Name Number Description
UNDEFINED_RESOURCE_TYPE 0 Default. Use the content-type header from the resource's server to determine the type.
SEMANTIC_MODEL 1 A semantic model from Mix.nlu.
WORDSET 2 Currently unsupported. Use InterpretationResource – inline_wordset instead.

InterpretationResource

A resource to customize the interpretation. Included in InterpretRequest.

External wordset files and compiled wordsets are not currently supported in Nuance-hosted NLUaaS. To use wordsets in this environment, use inline wordsets.

Field Type Description
external_reference ResourceReference External resource.
inline_wordset string Inline wordset, in JSON. See Wordsets.

This message includes:

InterpretRequest
  InterpretationResource
    external_reference (ResourceReference)
    inline_wordset

InterpretationInput

InterpretationInput example

def parse_args():
    options.add_argument("--textInput", metavar="file", nargs="?",
        help="Text to perform interpretation on")
. . .
input = InterpretationInput(
    text=args.textInput)
func main() {
    textInput := parser.String("i", "textInput", &argparse.Options{
        Default: "",
        Help:    "Text to perform interpretation on",
    })   
. . .
input := &pb.InterpretationInput{
    InputUnion: &pb.InterpretationInput_Text{Text: textInput},
    }
options.addOption(Option.builder("i")
    .argName("textInput")
    .hasArg()
    .required(false)
    .longOpt("textInput")
    .desc("Text to perform interpretation on. Default: ")
    .build() );
. . .
InterpretRequest req = InterpretRequest.newBuilder() 
    .setInput(InterpretationInput.newBuilder().setText(textInput))

Input to interpret. Included in InterpretRequest. Use either text or the result from ASR as a Service.

Field Type Description
text string Text input.
asr_result google.protobuf.Any Result from ASR as a Service.

This message includes:

InterpretRequest
  InterpretationInput
    text
    or
    asr_result
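
When interpreting a recognition result from ASR as a Service instead of text, the result message is wrapped in a google.protobuf.Any. In this Python sketch, asr_result_msg is assumed to be a recognition result message already obtained from your ASR client; its exact type depends on the ASR proto files.

from google.protobuf import any_pb2

# Sketch: wrap an ASR as a Service result in a google.protobuf.Any and
# use it as the interpretation input. "asr_result_msg" is an assumed
# recognition result message obtained from the ASR client.
packed_result = any_pb2.Any()
packed_result.Pack(asr_result_msg)

input = InterpretationInput(asr_result=packed_result)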

InterpretResponse

The interpretation result. Included in Runtime service.

Field Type Description
status Status Whether the request was successful. The 200 code means success; other values indicate an error.
result InterpretResult The result of the interpretation.

This message includes:

InterpretResponse
  status (Status)
  result (InterpretResult)
    literal
    interpretations (Interpretation)

Status

A Status message indicates whether the request was successful or reports errors that occurred during the request. Included in InterpretResponse.

Field Type Description
code uint32 HTTP status code. The 200 code means success; other values indicate an error.
message string Brief description of the status.
details string Longer description if available.

InterpretResult

For examples, see Interpretation results: Intents and Interpretation results: Entities.

Result of interpretation. Contains the input literal and one or more interpretations. Included in InterpretResponse.

Field Type Description
literal string Input used for interpretation. For text input, this is always the raw input text. For ASR as a Service results, a concatenation of the audio tokens, separated by spaces. See InterpretationInput.
interpretations Interpretation Repeated. Candidate interpretations of the input.

This message includes:

InterpretResponse
  InterpretResult
    literal
    interpretations (Interpretation)

Interpretation

Candidate interpretation of the input. Included in InterpretResult.

The interpret request specifies the type of interpretation: either single-intent or multi-intent (see InterpretRequest - InterpretationParameters - interpretation_result_type). Multi-intent interpretation requires a semantic model that is enabled for multi-intent (not currently supported in Mix.nlu). See Interpretation results for details and more examples.

Field Type Description
single_intent_interpretation SingleIntentInterpretation The result contains one intent.
multi_intent_interpretation MultiIntentInterpretation The result contains multiple intents. This choice requires a multi-intent semantic model, which is not currently supported in Nuance-hosted NLUaaS.

This message includes:

InterpretResult
  Interpretation
    single_intent_interpretation (SingleIntentInterpretation)
    or
    multi_intent_interpretation (MultiIntentInterpretation)

SingleIntentInterpretation

Single-intent interpretation returns the most likely intent, PLACE_ORDER, and an alternative, PAY_BILL, with a much lower confidence score.

  "result": {
    "literal": "I'd like to place an order",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "PLACE_ORDER",
          "confidence": 0.9941431283950806,
          "origin": "STATISTICAL"
        }
      },
      {
        "singleIntentInterpretation": {
          "intent": "PAY_BILL",
          "confidence": 0.0019808802753686905,
          "origin": "STATISTICAL"
        }
      }
    ]
  }
}

Single-intent interpretation results. Included in Interpretation. These results include one or more alternative intents, complete with entities if they occur in the text. Each intent is shown with a confidence score and whether the match was done from a grammar file or an SSM (statistical) file.

Field Type Description
intent string Intent name as specified in the semantic model.
confidence float Confidence score (between 0.0 and 1.0 inclusive). The higher the score, the likelier the detected intent is correct.
origin EnumOrigin How the intent was detected.
entities string,SingleIntentEntityList Map of entity names to lists of entities: key, entity list.

This message includes:

InterpretResponse
  InterpretResult
    SingleIntentInterpretation
      intent
      confidence
      origin (EnumOrigin)
      entities (key,SingleIntentEntityList)
        entity (SingleIntentEntity)

EnumOrigin

Origin of an intent or entity. Included in SingleIntentInterpretation, SingleIntentEntity, IntentNode, and EntityNode.

Name Number Description
UNKNOWN 0
GRAMMAR 1 Determined from an exact match with a grammar file in the model.
STATISTICAL 2 Determined statistically from the SSM file in the model.

SingleIntentEntityList

List of entities. Included in SingleIntentInterpretation.

Field Type Description
entities SingleIntentEntity Repeated. An entity match in the intent, for single-intent interpretation.

SingleIntentEntity

Single intent, PLACE_ORDER, with the entities FLAVOR: strawberry and PRODUCT: cheesecake.

  "result": {
    "literal": "I want to order a strawberry cheesecake",
    "interpretations": [
      {
        "singleIntentInterpretation": {
          "intent": "PLACE_ORDER",
          "confidence": 0.9987062215805054,
          "origin": "STATISTICAL",
          "entities": {
            "FLAVOR": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 18,
                    "endIndex": 28
                  },
                  "confidence": 0.9648909568786621,
                  "origin": "STATISTICAL",
                  "stringValue": "strawberry"
                }
              ]
            },
            "PRODUCT": {
              "entities": [
                {
                  "textRange": {
                    "startIndex": 29,
                    "endIndex": 39
                  },
                  "confidence": 0.9585300087928772,
                  "origin": "STATISTICAL",
                  "stringValue": "cheesecake"
                }
              ]
            }
          }
        }
      }
    ]
  }
}

Entity in the intent. Included in SingleIntentEntityList.

Field Type Description
text_range TextRange Range of literal text for which this entity applies.
confidence float Confidence score between 0.0 and 1.0 inclusive. The higher the score, the likelier the entity detection is correct.
origin EnumOrigin How the entity was detected.
entities string, SingleIntentEntityList For hierarchical entities, the child entities of the entity: key, entity list.
string_value string The canonical value as a string.
struct_value google.protobuf.Struct The entity value as an object. This object may be directly converted to a JSON representation.
literal string The input literal associated with this entity.
audio_range AudioRange Range of audio input this entity applies to. Available only when interpreting a recognition result from ASR as a Service.

This message includes:

InterpretResponse
  InterpretResult
    SingleIntentInterpretation
      SingleIntentEntityList
        SingleIntentEntity
          text_range (TextRange)
          confidence
          origin (EnumOrigin)
          entities (key, SingleIntentEntityList)
          string_value
          struct_value
          audio_range (AudioRange)
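
As a sketch, a client can walk the entities map of a single-intent result, using text_range to slice the input literal (start_index inclusive, end_index exclusive) and string_value or struct_value for the canonical value. Hierarchical entities are visited recursively through their child entities.

from google.protobuf.json_format import MessageToDict

# Sketch: print every entity in a single-intent interpretation, including
# the child entities of hierarchical entities such as DATE/nuance_CALENDARX.
def print_entities(entities_map, literal, depth=0):
    for name, entity_list in entities_map.items():
        for entity in entity_list.entities:
            surface = literal[entity.text_range.start_index:
                              entity.text_range.end_index]
            if entity.string_value:
                value = entity.string_value
            else:
                # struct_value converts directly to a JSON-style dict
                value = MessageToDict(entity.struct_value)
            print("  " * depth + "%s: %r -> %s" % (name, surface, value))
            if entity.entities:
                print_entities(entity.entities, literal, depth + 1)

def show_entities(result):
    top = result.interpretations[0].single_intent_interpretation
    print_entities(top.entities, result.literal)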

TextRange

Range of text in the input literal. Included in SingleIntentEntity, OperatorNode, IntentNode, and EntityNode.

Field Type Description
start_index uint32 Inclusive, 0-based character position.
end_index uint32 Exclusive, 0-based character position.

AudioRange

Range of time in the input audio. Included in SingleIntentEntity, OperatorNode, IntentNode, and EntityNode. Available only when interpreting a recognition result from ASR as a Service.

Field Type Description
start_time_ms uint32 Inclusive start time in milliseconds.
end_time_ms uint32 Exclusive end time in milliseconds.

MultiIntentInterpretation

Multi-intent interpretation against a standard semantic model returns just one intent for each candidate in the "root" object.

  "result": {
    "literal": "Price list and place order",
    "interpretations": [
      {
        "multiIntentIntepretation": {
          "root": {
            "intent": {
              "name": "GET_INFO",
              "textRange": {
                "endIndex": 26
              },
              "confidence": 0.8738054037094116,
              "origin": "STATISTICAL"
            }
          }
        }
      },
      {
        "multiIntentIntepretation": {
          "root": {
            "intent": {
              "name": "PLACE_ORDER",
              "textRange": {
                "endIndex": 26
              },
              "confidence": 0.08993412554264069,
              "origin": "STATISTICAL"
            }
          }
        }
      }
    ]
  }

Multi-intent interpretation. Contains a tree of nodes representing the detected operators, intents, and entities and their associations. Included in Interpretation.

Multi-intent interpretation may be requested without error, but it is not currently supported as it requires a multi-intent semantic model, not yet available in Mix. When requesting multi-intent interpretation against a single-intent model, the results contain the same information as a single-intent interpretation, but formatted slightly differently: the root of the multi-intent interpretation contains the intent.

Field Type Description
root InterpretationNode Root node of the interpretation tree. Can be either OperatorNode or IntentNode.

This message includes:

InterpretResponse
  InterpretResult
    MultiIntentInterpretation
      root (InterpretationNode)

InterpretationNode

Node in the interpretation tree. Included in MultiIntentInterpretation.

Field Type Description
operator OperatorNode The relationship of the intents or entities.
intent IntentNode The intents detected in the user input.
entity EntityNode The entities in the intent.

This message includes:

InterpretResponse
  InterpretResult
    MultiIntentInterpretation
      InterpretationNode
        operator (OperatorNode)
        intent (IntentNode)
        entity (EntityNode)

OperatorNode

Logical operator node. Included in InterpretationNode.

Field Type Description
operator EnumOperator Type of operator.
text_range TextRange Range of the literal text this operator applies to.
children InterpretationNode Repeated. Child nodes for this operator. An operator node always has children.
literal string The input literal associated with this operator.
audio_range AudioRange Range of audio input this operator applies to. Available only when interpreting a recognition result from ASR as a Service.

This message includes:

InterpretResponse
  InterpretResult
    MultiIntentInterpretation
      InterpretationNode
        OperatorNode
          operator (EnumOperator)
          text_range (TextRange)
          children (InterpretationNode)
          audio_range (AudioRange)

EnumOperator

Logical operator type, AND, OR, or NOT. Included in OperatorNode.

Name Number Description
AND 0 The following item is an additional intent or entity.
OR 1 The following item is an alternative intent or entity.
NOT 2 The following item is not detected.

IntentNode

Node representing an intent. Included in InterpretationNode.

Field Type Description
name string Intent name as specified in the semantic model.
text_range TextRange Range of literal text this intent applies to.
confidence float Confidence score between 0.0 and 1.0 inclusive. The higher the score, the likelier the detected intent is correct.
origin EnumOrigin How the intent was detected.
children InterpretationNode Repeated. Child nodes for this intent. An intent node has zero or more child nodes.
literal string The input literal associated with this intent.
audio_range AudioRange Range of audio input this intent applies to. Available only when interpreting a recognition result from ASR as a Service.

This message includes:

InterpretResponse
  InterpretResult
    MultiIntentInterpretation
      InterpretationNode
        IntentNode
          name
          text_range (TextRange)
          confidence
          origin (EnumOrigin)
          children (InterpretationNode)
          audio_range (AudioRange)

EntityNode

Node representing an entity. Included in InterpretationNode.

Field Type Description
name string Entity name as specified in the semantic model.
text_range TextRange Range of literal text this entity applies to.
confidence float Confidence score between 0.0 and 1.0 inclusive. The higher the score, the likelier the detected entity is correct.
origin EnumOrigin How the entity was detected.
children InterpretationNode Repeated. Child nodes for this entity. A hierarchical entity node can have child entity and operator nodes. Entity nodes currently never have intent nodes as children.
string_value string The value of the entity as specified in the semantic model.
struct_value google.protobuf.Struct Structured data, ready to convert to a JSON representation.
literal string The input literal associated with this entity.
audio_range AudioRange Range of audio input this entity applies to. Available only when interpreting a recognition result from ASR as a Service.

This message includes:

InterpretResponse
  InterpretResult
    MultiIntentInterpretation
      InterpretationNode
        EntityNode
          name
          text_range (TextRange)
          confidence
          origin (EnumOrigin)
          children (InterpretationNode)
          string_value
          struct_value
          audio_range (AudioRange)
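
Putting the node types together, this Python sketch walks the interpretation tree of a multi-intent result. HasField is used to find which of the three node types is set on each InterpretationNode; field and enum names are as defined in the proto file, and EnumOperator is assumed to be in scope from the generated module.

# Sketch: recursively walk the InterpretationNode tree of a
# MultiIntentInterpretation, starting from its root.
def walk(node, depth=0):
    indent = "  " * depth
    if node.HasField("operator"):
        print(indent + "Operator: " + EnumOperator.Name(node.operator.operator))
        children = node.operator.children
    elif node.HasField("intent"):
        print(indent + "Intent: %s (%.2f)" % (node.intent.name,
                                              node.intent.confidence))
        children = node.intent.children
    else:
        print(indent + "Entity: %s = %s" % (node.entity.name,
                                            node.entity.string_value))
        children = node.entity.children
    for child in children:
        walk(child, depth + 1)

# Usage: walk(interpretation.multi_intent_interpretation.root)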

Scalar value types

The data types in the proto files map to equivalent types in the generated client stub files.

Proto Notes C++ Java Python
double double double float
float float float float
int32 Uses variable-length encoding. Inefficient for encoding negative numbers. If your field is likely to have negative values, use sint32 instead. int32 int int
int64 Uses variable-length encoding. Inefficient for encoding negative numbers. If your field is likely to have negative values, use sint64 instead. int64 long int/long
uint32 Uses variable-length encoding. uint32 int int/long
uint64 Uses variable-length encoding. uint64 long int/long
sint32 Uses variable-length encoding. Signed int value. These encode negative numbers more efficiently than regular int32s. int32 int int
sint64 Uses variable-length encoding. Signed int value. These encode negative numbers more efficiently than regular int64s. int64 long int/long
fixed32 Always four bytes. More efficient than uint32 if values are often greater than 2^28. uint32 int int
fixed64 Always eight bytes. More efficient than uint64 if values are often greater than 2^56. uint64 long int/long
sfixed32 Always four bytes. int32 int int
sfixed64 Always eight bytes. int64 long int/long
bool bool boolean boolean
string A string must always contain UTF-8 encoded or 7-bit ASCII text. string String str/unicode
bytes May contain any arbitrary sequence of bytes. string ByteString str

Change log

2020-06-24

2020-04-30

2020-03-30

2020-02-19

2020-01-22

2019-12-18

2019-12-11

2019-12-02

2019-11-25

2019-11-15

Below are changes made to the NLUaaS API documentation since the initial Beta release: