Develop and Deploy an NLP Model (2023)


You can use the FastAI (v2) deep learning library to develop the NLP model that implements this solution.

FastAI uses transfer learning, an approach that is relatively recent (circa 2019) in NLP practice. Originating in image recognition modeling, it lets you work with small datasets by leveraging large pretrained models, then adding or adjusting the final layers for a specific classification task. Before the advent of deep neural networks and transfer learning, classic NLP development involved tokenization and statistical analysis of word occurrences. So, before jumping into transfer learning, review the sample Jupyter Notebook, which presents some classic statistical analysis. This analysis provides sanity checks on the dataset and adheres to the modern principle that the more you check and recheck your data, the better.
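A minimal sketch of that classic statistical sanity check, using only the standard library. The toy reports and the two-column layout (text, status) are assumptions for illustration, not the solution's actual dataset:

```python
from collections import Counter

# Toy stand-in for the status-report dataset; the (text, status) layout
# is an assumption for illustration.
reports = [
    ("vendor delayed the milestone again", "Red"),
    ("milestone blocked, escalation needed", "Red"),
    ("minor slippage but recovery plan in place", "Amber"),
    ("all tasks on track this week", "Green"),
]

# Sanity check 1: proportion of reports per status label
label_counts = Counter(status for _, status in reports)
total = sum(label_counts.values())
proportions = {status: n / total for status, n in label_counts.items()}

# Sanity check 2: most frequent words in Red-status reports
red_words = Counter(
    word
    for text, status in reports
    if status == "Red"
    for word in text.lower().split()
)

print(proportions)
print(red_words.most_common(2))
```

On a real dataset, a skewed label distribution or an unexpected top-word list is an early warning that the model's training data needs another look.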

You must develop this model before the rest of the solution can work. The model in this solution is an NLP classifier built with the FastAI deep learning library.

Get started using the following samples available on GitHub:

  • A Jupyter Notebook in the GitHub repository that demonstrates the model development process using the sample data.
  • A dataset of project and task status reports used in this solution.


Consider providing more datasets in the project and task management domain to improve model training and prediction accuracy.

The following are the steps to develop and deploy this model:

  • Create the final model binary file (export.pkl) using the Jupyter Notebook.
  • Upload it to the Oracle Cloud Infrastructure Data Science (ODS) model catalog.
  • Deploy it to an ODS model deployment running on Oracle Cloud Infrastructure Compute.
  • Expose it as a REST API for the Oracle Functions application to call.

For your convenience, we provide a Python script (in the scripts/ directory) that demonstrates how to use the OCI Python SDK to create and deploy a model to ODS.

Develop a Model

You will now learn about the model-building processes using the example Jupyter notebook. Follow these steps:

  1. Set up Jupyter notebook

    Set up the Jupyter notebook in a conda environment defined in the environment YAML file for this solution.
  2. Prepare dataset

    Use the sample dataset provided in a CSV file. Review and analyze the data to enable you to create a good machine learning model. Include additional data in the project or task management domain if required.
  3. Process dataset

    Process your dataset using the framework provided which makes it easier for you to load and process the data. This includes tokenization, splitting the training and validation data, and numericalization.
  4. Perform statistical analysis

    Analyze the proportion of reports with each status label: Red, Amber, and Green. For example, look for the most frequently occurring words in Red-status reports. This analysis suggests a simple baseline classifier built on word frequencies, such as Naive Bayes.
  5. Create NLP model using FastAI

    Use FastAI to create a language model based on the vocabulary of the dataset. Then use a transfer learning technique to integrate the language model with the core model (AWD_LSTM).
  6. Test and validate your model

    Use some sample data to test your model.
  7. Create a model binary file

    Export the model to a pickle file (pkl) and upload it to the OCI Data Science platform.
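The tokenization and numericalization in step 3, which FastAI's data loaders perform for you, can be sketched in plain Python (the two-line corpus is a made-up illustration; the train/validation split is omitted for brevity):

```python
from collections import Counter

corpus = [
    "tasks on track",
    "tasks delayed again",
]

# Tokenization: split each report into tokens
tokenized = [text.split() for text in corpus]

# Vocabulary: most frequent tokens first, with ID 0 reserved for "unknown"
counts = Counter(tok for toks in tokenized for tok in toks)
vocab = {tok: i + 1 for i, (tok, _) in enumerate(counts.most_common())}

# Numericalization: replace each token with its vocabulary ID
numericalized = [[vocab.get(tok, 0) for tok in toks] for toks in tokenized]
print(numericalized)
```

FastAI adds refinements on top of this idea, such as subword handling and special tokens, but the principle is the same: the model only ever sees integer IDs drawn from the dataset's vocabulary.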

After you have the model binary file, you must package it together with the model runtime file and the custom logic Python script.
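The packaging step amounts to bundling the artifacts into one ZIP file for upload. A minimal sketch; the custom logic script name (score.py) is an assumption, since the actual filename is not given here:

```python
import zipfile

# export.pkl and runtime.yaml come from the notebook; "score.py" is an
# assumed name for the custom logic script.
ARTIFACTS = ["export.pkl", "runtime.yaml", "score.py"]

def package_artifacts(zip_path="model_artifact.zip", files=ARTIFACTS):
    """Bundle the model artifacts into one compressed ZIP for upload."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in files:
            zf.write(name)
    return zip_path
```

The deployment script described below uploads a ZIP of exactly this shape to the Model Catalog.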

Upload and Deploy the Model

You can now upload the packaged model artifact to the Oracle Cloud Infrastructure Data Science (ODS) platform and expose it as a REST API.

Use the model deployment Python script to learn how to use the OCI Python SDK to create and deploy the model to ODS. Additionally, use the included example custom logic Python script to load serialized model objects into memory and define an inference endpoint, predict().

The model deployment Python script performs the following steps to deploy the model:

  1. Retrieves the existing model from Model Catalog and Model Deployment.
  2. Deactivates the existing model in Model Deployment.
  3. Deletes the existing model artifact from Model Catalog.
  4. Uploads the artifacts (export.pkl, runtime.yaml, and the custom logic script) as a compressed ZIP file to the Model Catalog.
  5. Creates or updates Model Deployment.

Modify the following Python script and provide the correct parameter values for your environment:

compartmentID="<YOUR OCI COMPARTMENT OCID>"
projectID="<YOUR ODS PROJECT OCID>"
modelDisplayName="Risk Predictor"
modelDescription="Risk Predictor Sample"
modelDeploymentDisplayName="Risk Predictor"
modelDeploymentDescription="Risk Predictor Sample"
modelDeploymentInstanceCount=1
modelDeploymentLBBandwidth=10 # mbps
modelDeploymentPreditLogID="<YOUR ODS DEPLOYMENT PREDICT LOG OCID>"
modelDeploymentPreditLogGroupID="<YOUR ODS DEPLOYMENT PREDICT LOG GROUP OCID>"
modelDeploymentAccessLogID="<YOUR ODS DEPLOYMENT ACCESS LOG OCID>"
modelDeploymentAccessLogGroupID="<YOUR ODS DEPLOYMENT ACCESS LOG GROUP OCID>"
modelDeploymentInstanceShapeName="<OCI VM SHAPE, FOR EXAMPLE: VM.Standard2.1>"

Alternatively, you can update the parameters from the OCI Console.

Follow the instructions in About Model Catalog in the Oracle Cloud Infrastructure Data Science documentation linked in the Explore section.


The maximum model artifact file size is 2 GB, and the upload limit through the OCI Console is 100 MB. In most cases the NLP model binary file will exceed the Console limit, so upload it programmatically instead.

You can choose the Virtual Machine (VM) shape when you deploy the model. See the Oracle Cloud Infrastructure Data Science Models and VM Shapes documentation linked in the Explore section to find the supported shapes.

Create an ODS Project

Follow these instructions to create a project and populate the ODS Project ID:

  1. In the OCI Console, open the compartment you used to create the ODS Project.
  2. In the navigation menu, under Analytics & AI, click Data Science.
  3. Click Create project to create a Data Science project.
  4. After the project is created, copy the ODS Project OCID, and paste it into the Python script parameter projectID.

Create a Log Group and Log

For Model Deployment Predict and Access Logs, you must first create a log group and then create a log shared by both events.

Follow these steps:

  1. In the OCI Console, confirm that you are viewing the compartment you used to create the ODS Project.
  2. In the navigation menu, under Observability & Management, click Log Groups.
  3. Click Create Log Group to create a log group for the model deployment. The predict and access log can use the same log group.
  4. After the log group is created, copy the log group OCID, and paste it in the Python script parameter under modelDeploymentPreditLogGroupID and modelDeploymentAccessLogGroupID.
  5. While you are still in the Logging service screen, click Logs.
  6. Click Create custom log to create a custom log and select the log group you created earlier.
  7. After the log is created, copy the log OCID, and paste it in the Python script parameter under modelDeploymentPreditLogID and modelDeploymentAccessLogID.

Develop Functions

You can now develop and deploy your functions to complete the implementation of this solution.

You implement multiple API calls to backend applications and perform data aggregation, filtering, and manipulation.

OCI Health Checks does the following:

  • Calls Oracle Functions at the end of each day.
  • Gets the task progress using the Oracle Project Management REST API.
  • Identifies the project manager using the Oracle Cloud HCM REST API.
  • Retrieves the OSN task wall conversation and the text comments posted by the project team member.

Oracle Functions then posts the prediction about the project status to the project manager's OSN wall.

For the purposes of this example, we have used Java as the language and the Apache HttpClient library to connect to the REST Service. This example uses the Apache library because it's easy to use and implement. Alternatively, you can use the new HTTP client that Java 11 provides. You also use the OCI Java SDK to retrieve the resource principal and the secret stored in Oracle Cloud Infrastructure Vault.

The sample Functions Java code executes the following steps:

  1. Retrieve the credential/secret from Oracle Cloud Infrastructure Vault.
  2. Retrieve the project details from Oracle Project Management.
  3. Retrieve the project manager person ID from HCM.
  4. Retrieve the username of the project manager in HCM.
  5. Retrieve the project manager wall details in OSN.
  6. Retrieve all tasks for the project using the project ID.
  7. For each project and task:
    1. Retrieve the social object ID using the task external ID.
    2. Retrieve the latest comment in OSN.
    3. Send the comment to the Oracle Cloud Infrastructure Data Science platform to predict the sentiment.
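The final step of the loop, sending a comment to the model deployment's predict endpoint, can be sketched as follows (in Python for brevity; the sample Functions code is Java). The endpoint URL is a placeholder, and real calls must also carry an OCI request signature, which is omitted here:

```python
import json
import urllib.request

def build_predict_request(endpoint, comment):
    """Build the POST request carrying one OSN task comment to /predict."""
    body = json.dumps({"data": comment}).encode("utf-8")
    return urllib.request.Request(
        url=endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder endpoint; a real model deployment URL comes from the OCI Console.
req = build_predict_request(
    "https://modeldeployment.example.oraclecloud.com/predict",
    "Vendor milestone slipped again this sprint.",
)
```

Sending the request (for example, with urllib.request.urlopen) returns the Red/Amber/Green prediction that the function then posts to the project manager's OSN wall.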


Oracle recommends that you avoid frameworks that instantiate many in-memory objects when calling REST APIs. These objects are discarded on each call and may slow down the function's execution.

Store Credentials in Oracle Vault

Store the Fusion Applications Suite user credentials in Oracle Cloud Infrastructure Vault.

To create a secret, follow the instructions linked OCI Vault documentation in the Explore section.

Create an App

Run the following command to create an Oracle Functions application.

fn create app RiskPredictor --annotation oracle.com/oci/subnetIds='["<subnet-ocid>"]'

Configure Environment Parameters

You can define certain parameters in Oracle Functions OCI environment and then reference those parameters from your code.

For this use case, you must set the parameters used in this sample code. After creating the app, execute the following commands to create the configuration parameters, then modify the values with the correct values for your configuration and environment:


You can provide multiple project IDs in Oracle Project Management in the Project ID list, separated by commas.
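Inside the function, that comma-separated parameter needs splitting before use. A small sketch; the configuration parameter name PROJECT_IDS is an assumption, and Oracle Functions surfaces app configuration to the code as environment variables:

```python
import os

def parse_project_ids(raw):
    """Split a comma-separated Project ID list, tolerating stray whitespace."""
    return [pid.strip() for pid in raw.split(",") if pid.strip()]

# PROJECT_IDS is an assumed parameter name; the fallback value is a dummy.
project_ids = parse_project_ids(os.environ.get("PROJECT_IDS", "300000111, 300000222"))
```

Stripping whitespace here avoids subtle lookup failures when the configured list contains spaces after the commas.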
