Serverless AI Chat with RAG using LangChain.js


⭐ If you like this sample, star it on GitHub — it helps a lot!

Overview • Get started • Run the sample • Resources • FAQ • Troubleshooting

Animation showing the chat app in action

This sample is originally forked from Azure-Samples/serverless-chat-langchainjs and has been modified to integrate with the Microsoft Purview API. This integration showcases how Purview can be used to audit and secure AI prompts and responses. Most of the deployment instructions remain the same as in the original repository. However, the Purview integration requires additional steps that must be completed before deployment, as explained in the Purview API Integration section below.

This sample shows how to build a serverless AI chat experience with Retrieval-Augmented Generation using LangChain.js and Azure. The application is hosted on Azure Static Web Apps and Azure Functions, with Azure Cosmos DB for NoSQL as the vector database. You can use it as a starting point for building more complex AI applications.

Tip

You can test this application locally without any cost using Ollama. Follow the instructions in the Local Development section to get started.

Overview

Building AI applications can be complex and time-consuming, but using LangChain.js and Azure serverless technologies greatly simplifies the process. This application is a chatbot that uses a set of enterprise documents to generate responses to user queries.

We provide sample data so this sample is ready to try, but feel free to replace it with your own. We use a fictitious company called Contoso Real Estate, and the experience allows its customers to ask support questions about the usage of its products. The sample data includes a set of documents that describe its terms of service, privacy policy, and a support guide.

Application architecture

This application is made from multiple components:

  • A web app made with a single chat web component built with Lit and hosted on Azure Static Web Apps. The code is located in the packages/webapp folder.

  • A serverless API built with Azure Functions and using LangChain.js to ingest the documents and generate responses to the user chat queries. The code is located in the packages/api folder.

  • A database to store chat sessions, the text extracted from the documents, and the vectors generated by LangChain.js, using Azure Cosmos DB for NoSQL.

  • A file storage to store the source documents, using Azure Blob Storage.

We use the HTTP protocol for AI chat apps to communicate between the web app and the API.
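
To make that contract concrete, here is a minimal TypeScript sketch of the request and response shapes exchanged over the protocol. The field names follow the AI Chat Protocol but are shown here only as a reference; check the protocol documentation and the code in packages/api and packages/webapp for the authoritative contract.

type AIChatRole = 'system' | 'user' | 'assistant';

interface AIChatMessage {
  role: AIChatRole;
  content: string;
}

interface AIChatCompletionRequest {
  messages: AIChatMessage[];          // conversation history, latest user prompt last
  context?: Record<string, unknown>;  // optional app-specific options
  sessionState?: unknown;             // opaque session state echoed back by the server
}

interface AIChatCompletion {
  message: AIChatMessage;             // the assistant's reply
  context?: Record<string, unknown>;
  sessionState?: unknown;
}

// Illustrative client call; the actual route and streaming handling are defined in the project code.
async function sendChat(request: AIChatCompletionRequest): Promise<AIChatCompletion> {
  const response = await fetch('/api/chats', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(request),
  });
  if (!response.ok) throw new Error(`Chat request failed: ${response.status}`);
  return (await response.json()) as AIChatCompletion;
}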

Features

  • Serverless Architecture: Utilizes Azure Functions and Azure Static Web Apps for a fully serverless deployment.
  • Retrieval-Augmented Generation (RAG): Combines the power of Azure Cosmos DB and LangChain.js to provide relevant and accurate responses.
  • Chat Sessions History: Maintains a personal chat history for each user, allowing them to revisit previous conversations.
  • Scalable and Cost-Effective: Leverages Azure's serverless offerings to provide a scalable and cost-effective solution.
  • Local Development: Supports local development using Ollama for testing without any cloud costs.

Getting started

There are multiple ways to get started with this project.

The quickest way is to use GitHub Codespaces, which provides a preconfigured environment for you. Alternatively, you can set up your local environment following the instructions below.

Important

If you want to run this sample entirely locally using Ollama, you have to follow the instructions in the local environment section.

Use your local environment

You need to install the following tools to work on your local machine:

  • Node.js LTS
  • Azure Developer CLI
  • Git
  • PowerShell 7+ (for Windows users only)
    • Important: Ensure you can run pwsh.exe from a PowerShell command. If this fails, you likely need to upgrade PowerShell.
    • Instead of PowerShell, you can also use Git Bash or WSL to run the Azure Developer CLI commands.
  • Azure Functions Core Tools (should be installed automatically with NPM, only install manually if the API fails to start)

Then you can get the project code:

  1. Fork the project to create your own copy of this repository.
  2. On your forked repository, select the Code button, then the Local tab, and copy the URL of your forked repository.
    Screenshot showing how to copy the repository URL
  3. Open a terminal and run this command to clone the repo: git clone <your-repo-url>

Use GitHub Codespaces

You can run this project directly in your browser by using GitHub Codespaces, which will open a web-based VS Code:

Open in GitHub Codespaces

Use a VSCode dev container

A similar option to Codespaces is VS Code Dev Containers, which will open the project in your local VS Code instance using the Dev Containers extension.

You will also need to have Docker installed on your machine to run the container.

Open in Dev Containers

Run the sample

There are multiple ways to run this sample: locally using Ollama or Azure OpenAI models, or by deploying it to Azure.

Deploy the sample to Azure

Azure prerequisites

Cost estimation

See the cost estimation details for running this sample on Azure.

Purview API Integration

As part of the Purview API integration, the app must first authenticate with Microsoft Entra ID, and then acquire a Purview Graph token. This token enables Purview policies to be enforced for both the user and the application. Based on the applicable policy, the app will invoke the appropriate APIs.

The sections below explain the manual steps required to set up the Entra app registrations needed to obtain the token. These app registration details will later be used during deployment to configure the sample.
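
For reference, here is a minimal sketch of what acquiring that Graph token can look like in the API, assuming an on-behalf-of exchange of the user's token (sent by the SPA) using @azure/msal-node. The actual implementation lives in packages/api/src/purview-wrapper.ts and may differ; the environment variable names below are illustrative.

import { ConfidentialClientApplication } from '@azure/msal-node';

// Illustrative environment variable names; use the values from the app registrations described below.
const cca = new ConfidentialClientApplication({
  auth: {
    clientId: process.env.BACKEND_CLIENT_ID!,         // backend-node-api application (client) ID
    clientSecret: process.env.BACKEND_CLIENT_SECRET!, // client secret created for backend-node-api
    authority: `https://login.microsoftonline.com/${process.env.AZURE_TENANT_ID}`,
  },
});

// Exchange the user's access token for the backend API for a Microsoft Graph
// token that carries the Purview delegated permissions granted to the app.
async function getPurviewGraphToken(userAccessToken: string): Promise<string> {
  const result = await cca.acquireTokenOnBehalfOf({
    oboAssertion: userAccessToken,
    scopes: ['https://graph.microsoft.com/.default'],
  });
  if (!result?.accessToken) {
    throw new Error('Failed to acquire a Graph token for Purview');
  }
  return result.accessToken;
}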

Register the backend app (backend-node-api)

  1. Navigate to the Microsoft Entra admin center and select the Microsoft Entra ID service.

  2. Select the App Registrations blade on the left, then select New registration.

  3. In the Register an application page that appears, enter your application's registration information:

    1. In the Name section, enter a meaningful application name that will be displayed to users of the app, for example backend-node-api.
    2. Under Supported account types, select Accounts in any organizational directory (Any Microsoft Entra ID tenant - Multitenant)
    3. Select Register to create the application.
  4. In the Overview blade, find and note the Application (client) ID. You will use this value later when deploying this sample with the azd command.

  5. In the app's registration screen, select the Expose an API blade on the left to open the page where you can publish the permission as an API for which client applications can obtain access tokens. The first thing we need to do is declare the unique resource URI that clients will use to obtain access tokens for this API. To declare a resource URI (Application ID URI), follow these steps:

    1. Select Set next to the Application ID URI to generate a URI that is unique for this app.
    2. For this sample, accept the proposed Application ID URI (api://{clientId}) by selecting Save.

      ℹ️ Read more about Application ID URI at Validation differences by supported account types (signInAudience).

  6. Publish delegated permissions. All APIs must publish a minimum of one scope, also called a Delegated Permission, for client apps to successfully obtain an access token for a user. To publish a scope, follow these steps:
    1. Select the Add a scope button to open the Add a scope screen, then enter the values as indicated below:
      1. For Scope name, use access_as_user.
      2. For Who can consent?, select Admins and users.
      3. For Admin consent display name, type access_as_user.
      4. For Admin consent description, type a description, for example Allows the app to get LLM response.
      5. For User consent display name, type scopeName.
      6. For User consent description, type a description, for example Allows the app to get LLM response.
      7. Keep State as Enabled.
      8. Select the Add scope button at the bottom to save this scope.

    ℹ️ Follow the principle of least privilege when publishing permissions for a web API.

  7. In the app's registration screen, select the API permissions blade on the left to open the page where we add access to the APIs that your application needs:

    1. Select the Add a permission button and then select Microsoft Graph.
    2. Choose the following delegated permissions:
      1. Content.Process.User
      2. ContentActivity.Write
      3. ProtectionScopes.Compute.User
      4. SensitivityLabel.Read
    3. Grant admin consent for all of the above permissions.
  8. From the Certificates & secrets page, in the Client secrets section, choose New client secret:

    • Type a key description (for instance app secret).
    • Select a key duration of either In 1 year, In 2 years, or Never Expires.
    • When you press the Add button, the key value will be displayed; copy and save the value in a safe location.
    • You'll need this key later during deployment through the azd up command. This key value will not be displayed again, nor will it be retrievable by any other means, so record it as soon as it is visible in the Azure portal.

Register the client app (front-end-javascript-spa)

  1. Navigate to the Microsoft Entra admin center and select the Microsoft Entra ID service.
  2. Select the App Registrations blade on the left, then select New registration.
  3. In the Register an application page that appears, enter your application's registration information:
    1. In the Name section, enter a meaningful application name that will be displayed to users of the app, for example msal-javascript-spa.
    2. Under Supported account types, select Accounts in this organizational directory only
    3. Select Register to create the application.
  4. In the Overview blade, find and note the Application (client) ID. You will use this value later in your app's configuration file(s).
  5. In the app's registration screen, select the Authentication blade to the left.
  6. If you don't have a platform added, select Add a platform and select the Single-page application option.
    1. In the Redirect URI section enter the following redirect URIs:
      1. http://localhost:8000
    2. Click Save to save your changes.
  7. Since this app signs in users, we will now proceed to select delegated permissions, which are required by apps that sign in users.
    1. In the app's registration screen, select the API permissions blade in the left to open the page where we add access to the APIs that your application needs:
    2. Select the Add a permission button and then:
    3. Ensure that the My APIs tab is selected.
    4. In the list of APIs, select the API backend-node-api.
    5. In the Delegated permissions section, select access_as_user in the list. Use the search box if necessary.
    6. Select the Add permissions button at the bottom.

Configure the client app (front-end-javascript-spa) to use your app registration

Open the project in your IDE (like Visual Studio Code) to configure the code.

In the steps below, "ClientID" is the same as "Application ID" or "AppId".

  1. Navigate to the packages/webapp folder and create a new file called .env, then insert the information below.
  2. Replace "<CLIENT_ID>" with the application (client) ID of the front-end app registration.
  3. Replace "<API_ID>" with the application (client) ID of the back-end app registration.
VITE_AZURE_AD_CLIENT_ID="<CLIENT_ID>"
VITE_AZURE_AD_AUTHORITY_HOST="https://login.microsoftonline.com/organizations"
VITE_BACKEND_API_SCOPE="api://<API_ID>/access_as_user"
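
A minimal sketch of how the webapp can consume these values with @azure/msal-browser (illustrative only; the actual wiring lives in packages/webapp and may use a different flow):

import { PublicClientApplication } from '@azure/msal-browser';

// Vite exposes the .env values above on import.meta.env at build time.
const msalInstance = new PublicClientApplication({
  auth: {
    clientId: import.meta.env.VITE_AZURE_AD_CLIENT_ID,
    authority: import.meta.env.VITE_AZURE_AD_AUTHORITY_HOST,
    redirectUri: window.location.origin, // must match the SPA redirect URI registered earlier
  },
});

// Sign the user in, then request a token for the backend API scope (access_as_user).
async function getApiToken(): Promise<string> {
  await msalInstance.initialize();
  await msalInstance.loginPopup({ scopes: [import.meta.env.VITE_BACKEND_API_SCOPE] });
  const account = msalInstance.getAllAccounts()[0];
  const result = await msalInstance.acquireTokenSilent({
    scopes: [import.meta.env.VITE_BACKEND_API_SCOPE],
    account,
  });
  return result.accessToken;
}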

Deploy the sample

  1. Open a terminal and navigate to the root of the project.
  2. Authenticate with Azure by running azd auth login.
  3. Run azd up to deploy the application to Azure. This will provision Azure resources, deploy this sample, and build the search index based on the files found in the ./data folder.
    • You will be prompted to select a base location for the resources. If you're unsure of which location to choose, select eastus2.
    • By default, the OpenAI resource will be deployed to eastus2. You can set a different location with azd env set AZURE_OPENAI_RESOURCE_GROUP_LOCATION <location>. Currently only a short list of locations is accepted. That location list is based on the OpenAI model availability table and may become outdated as availability changes.
    • You will be prompted to insert the app ID of the backend app registration, followed by the secret that you created in the app registration step.

The deployment process will take a few minutes. Once it's done, you'll see the URL of the web app in the terminal.

Screenshot of the azd up command result

Note on Redirect URI Error

Note: When you run the application for the first time, you may encounter a Redirect URI error. This happens because the web app URL is not yet registered as a redirect URI in the front-end app registration.
To resolve this:

  1. Copy the web app URL displayed in the terminal after deployment (e.g., https://<your-webapp-name>.azurestaticapps.net).
  2. Navigate to the Microsoft Entra admin center (https://entra.microsoft.com).
  3. Select your front-end app registration.
  4. Go to the Authentication blade and update the Redirect URI under the Single-page application (SPA) section with the web app URL.
  5. Save the changes and retry accessing the application.

You can now open the web app in your browser and start chatting with the bot.

How the Sample Works from a Purview API Integration Perspective

There are three primary steps your app must take to process prompts and responses using Microsoft Purview:

  1. Compute protection scopes
  2. Compute rights for the labels assigned to the user
  3. Process content

1. Compute Protection Scopes

The first step is to compute the protection scope state for the user by calling the Compute protection scopes API. This should be done shortly after the user authenticates.

  • Docs: Compute protection scopes
  • Code: packages/api/src/purview-wrapper.ts, function invokeProtectionScopeApi
  • Usage: The API response is processed in chat-post.ts, where the policy details are cached for further use.
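
A sketch of how this caching can look in chat-post.ts. The invokeProtectionScopeApi signature and the shape of the cached state below are assumptions; refer to packages/api/src/purview-wrapper.ts for the actual implementation.

// Assumed shape of the cached protection scope state.
interface ProtectionScopeState {
  etag: string;      // reused later in the If-None-Match header of Process content calls
  scopes: unknown[]; // policy scopes that apply to this user and application
}

// Assumed signature of the wrapper; see purview-wrapper.ts for the real one.
declare function invokeProtectionScopeApi(graphToken: string): Promise<ProtectionScopeState>;

const protectionScopeCache = new Map<string, ProtectionScopeState>();

async function getProtectionScopes(userId: string, graphToken: string): Promise<ProtectionScopeState> {
  const cached = protectionScopeCache.get(userId);
  if (cached) return cached;

  // Compute the protection scopes shortly after the user authenticates,
  // then reuse the cached result for subsequent requests.
  const state = await invokeProtectionScopeApi(graphToken);
  protectionScopeCache.set(userId, state);
  return state;
}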

2. Compute Rights for Labels

The next step is to determine the access rights for the user by evaluating which sensitivity labels they are allowed to access.

  • Docs: Usage rights (Graph API)
  • Code: packages/api/src/purview-wrapper.ts, function invokeUserRightsForLables
  • Usage: The API response is processed and cached in chat-post.ts based on the authenticated user.

ℹ️ Label ID information is pre-populated during document uploads via documents-post.ts. For this to happen, you will have to run the project with npm start and then use the format below to upload each of the three files separately:

curl -X POST http://localhost:7071/api/documents \
  -F "file=@data/privacy-policy.pdf" \
  -F "labelId=<LABEL_ID>>" \
  -F "labelName=<LABEL_NAME>"

You can compute the label ID information of a file through the MIP SDK.
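
Once the label rights and the label metadata stored at upload time are available, one way they can be applied is to keep only the retrieved chunks whose sensitivity label the user may access. This is an illustrative sketch; the actual logic lives in chat-post.ts and may differ.

// Assumed shape of a retrieved chunk carrying the label metadata attached at upload time.
interface RetrievedChunk {
  content: string;
  metadata: { labelId?: string; labelName?: string };
}

function filterByLabelRights(chunks: RetrievedChunk[], allowedLabelIds: Set<string>): RetrievedChunk[] {
  // Keep unlabeled chunks and chunks whose label the user is allowed to access;
  // a stricter policy could drop unlabeled content as well.
  return chunks.filter((chunk) => !chunk.metadata.labelId || allowedLabelIds.has(chunk.metadata.labelId));
}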


3. Process Content

Finally, the app calls the Process content API using the cached protection scope state.

  • For each user activity, the app checks the user's protection scope and calls the API accordingly.

  • Include the cached ETag from the protection scopes call in the If-None-Match header.

  • The API response provides a decision (allow, restrict, etc.) for handling the interaction.

  • Docs: Process content

  • Code: packages/api/src/purview-wrapper.ts, function invokeProcessContentApi

  • Usage: The API response is handled in chat-post.ts. If the decision is restrict access, the request is blocked from further processing.
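
A sketch of how this gating can look in chat-post.ts. The invokeProcessContentApi signature and the decision values below are assumptions; see packages/api/src/purview-wrapper.ts and chat-post.ts for the actual implementation.

// Assumed signature of the wrapper; the cached ETag from the protection scopes
// call is passed along so it can be sent in the If-None-Match header.
declare function invokeProcessContentApi(
  graphToken: string,
  content: string,
  etag: string,
): Promise<{ decision: string }>;

// Returns true if the prompt (or response) may continue through the pipeline.
async function isContentAllowed(graphToken: string, content: string, etag: string): Promise<boolean> {
  const result = await invokeProcessContentApi(graphToken, content, etag);
  // 'restrictAccess' is an illustrative value; the sample blocks the request
  // when the decision is to restrict access.
  return result.decision !== 'restrictAccess';
}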


Enhance security

When deploying the sample in an enterprise context, you may want to enforce tighter security restrictions to protect your data and resources. See the enhance security guide for more information.

Clean up

To clean up all the Azure resources created by this sample:

  1. Run azd down --purge
  2. When asked if you are sure you want to continue, enter y

The resource group and all the resources will be deleted.

Run the sample locally with Ollama

If you have a machine with enough resources, you can run this sample entirely locally without using any cloud resources. To do that, you first have to install Ollama and then run the following commands to download the models on your machine:

ollama pull llama3.1:latest
ollama pull nomic-embed-text:latest

Note

The llama3.1 model will download a few gigabytes of data, so it can take some time depending on your internet connection.

After that you have to install the NPM dependencies:

npm install

Then you can start the application by running the following command, which will start the web app and the API locally:

npm start

Then, in a new terminal (while the app is still running), run the following command to upload the PDF documents from the /data folder to the API:

npm run upload:docs

This only has to be done once, unless you want to add more documents.

You can now open the URL http://localhost:8000 in your browser to start chatting with the bot.

Note

While local models usually work well enough to answer the questions, they may not always perfectly follow the advanced formatting instructions for citations and follow-up questions. This is expected, and a limitation of using smaller local models.

Run the sample locally with Azure OpenAI models

First you need to provision the Azure resources needed to run the sample. Follow the instructions in the Deploy the sample to Azure section to deploy the sample to Azure, then you'll be able to run the sample locally using the deployed Azure resources.

Once your deployment is complete, you should see a .env file in the packages/api folder. This file contains the environment variables needed to run the application using Azure resources.

To run the sample, you can then use the same commands as for the Ollama setup. This will start the web app and the API locally:

npm start

Open the URL http://localhost:8000 in your browser to start chatting with the bot.

Note that the documents are uploaded automatically when deploying the sample to Azure with azd up.

Tip

You can switch back to using Ollama models by simply deleting the packages/api/.env file and starting the application again. To regenerate the .env file, you can run azd env get-values > packages/api/.env.

Resources

Here are some resources to learn more about the technologies used in this sample:

You can also find more Azure AI samples here.

FAQ

You can find answers to frequently asked questions in the FAQ.

Troubleshooting

If you have any issue when running or deploying this sample, please check the troubleshooting guide. If you can't find a solution to your problem, please open an issue in this repository.

Guidance

For more detailed guidance on how to use this sample, please refer to the tutorial.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.
