This project demonstrates the integration between a React frontend and a .NET API backend to create a real-time chat application using Azure OpenAI's Realtime API. The application utilizes WebRTC for voice communication and showcases streaming responses from Azure OpenAI models.
## Project Structure

The project consists of two main parts:
- **React Frontend** (`azure-openai-demo`) - A modern React application that handles the user interface and WebRTC communication
  - Provides settings configuration, message display, and recording controls
- **.NET API Backend** (`realtime-api-dotnet`) - Serves as a proxy to the Azure OpenAI service
  - Manages session creation and WebRTC connection establishment
  - Handles authentication with Azure OpenAI using API keys
## Features

- Real-time voice transcription using the Whisper model
- Real-time streaming responses from Azure OpenAI GPT models
- WebRTC audio communication
- Configurable voice selection
- Low-latency communication
- Session management
- Detailed logging for debugging and monitoring
## Prerequisites

- Node.js and npm for the React frontend
- .NET 10+ for the backend API
- Azure subscription with access to Azure OpenAI service
- Azure OpenAI resource with GPT-4o models deployed
## Setup

### Backend (.NET API)

- Navigate to the `realtime-api-dotnet` directory
- Update the `appsettings.json` file with your Azure OpenAI configuration:

  ```json
  "AzureOpenAI": {
    "ResourceName": "<your-resource-name>",
    "DeploymentName": "gpt-4o-realtime-preview",
    "ApiKey": "<your-api-key>",
    "ApiVersion": "2025-04-01-preview"
  }
  ```

- Run the API:

  ```bash
  dotnet run
  ```

  The API will be available at http://localhost:5126 by default.
### Frontend (React)

- Navigate to the `azure-openai-demo` directory
- Install dependencies:

  ```bash
  npm install
  ```

- Start the development server:

  ```bash
  npm start
  ```

  The application will be available at http://localhost:3000.
## Usage

- Open the web application at http://localhost:3000
- Configure your Azure OpenAI settings if needed
- Click "Start Conversation" to begin
- Speak into your microphone to interact with the AI assistant
- View real-time transcription and streaming responses
- Monitor the logs panel for detailed information about the connection and messages
## Architecture

The application uses the following architecture (a frontend-side sketch follows this list):

- The React frontend establishes a WebRTC connection to Azure OpenAI through the .NET API
- The .NET API creates a session with Azure OpenAI and handles authentication
- Audio is streamed from the browser to Azure OpenAI's real-time service
- Transcription and AI responses are streamed back to the frontend
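A minimal TypeScript sketch of the frontend side of this flow is shown below. It is an illustration only, not the project's actual code: the endpoint paths (`/api/azureopenai/session` and `/api/azureopenai/webrtc`), the response shapes, and the data channel name are assumptions; the real contract is whatever `AzureOpenAIController` and `ApiService.js` define.

```typescript
// Sketch: connect the browser to the realtime service via the .NET proxy.
// Endpoint paths, payloads, and the data channel name are hypothetical.
async function startConversation(): Promise<RTCPeerConnection> {
  // 1. Ask the .NET proxy to create a session (the proxy holds the API key).
  const sessionResponse = await fetch("http://localhost:5126/api/azureopenai/session", {
    method: "POST",
  });
  const session = await sessionResponse.json(); // assumed session/negotiation details
  console.log("session created:", session);

  // 2. Set up the local peer connection and attach the microphone track.
  const pc = new RTCPeerConnection();
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));

  // 3. Play audio from the model as it arrives.
  pc.ontrack = (event) => {
    const audio = new Audio();
    audio.srcObject = event.streams[0];
    void audio.play();
  };

  // 4. A data channel carries JSON events such as transcripts and streamed text.
  const events = pc.createDataChannel("realtime-events");
  events.onmessage = (message) => console.log("server event:", JSON.parse(message.data));

  // 5. Exchange SDP through the proxy so credentials stay server-side.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const answerResponse = await fetch("http://localhost:5126/api/azureopenai/webrtc", {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp ?? "",
  });
  await pc.setRemoteDescription({ type: "answer", sdp: await answerResponse.text() });

  return pc;
}
```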
## Key Components

Frontend (`azure-openai-demo`):

- Settings: Configures Azure OpenAI service parameters
- ChatWindow: Displays conversation messages and streaming responses (a sketch of how streamed events could be folded into messages follows this list)
- Controls: Manages WebRTC connection and recording
- Logs: Shows detailed log information for debugging

Backend (`realtime-api-dotnet`):

- AzureOpenAIController: Handles session creation and WebRTC connection
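The sketch below shows one way the ChatWindow state could accumulate streamed output. The event type names (`transcription.completed`, `response.delta`) and payload fields are assumptions for illustration; the actual events are whatever the .NET proxy forwards from the realtime service.

```typescript
// Hypothetical reducer for chat state; event names and fields are assumed.
interface ChatMessage {
  role: "user" | "assistant";
  text: string;
}

export function applyRealtimeEvent(
  messages: ChatMessage[],
  event: { type: string; text?: string; delta?: string }
): ChatMessage[] {
  switch (event.type) {
    case "transcription.completed":
      // A finished voice transcription becomes a user message.
      return [...messages, { role: "user", text: event.text ?? "" }];
    case "response.delta": {
      // Streaming model output appends to the latest assistant message.
      const last = messages[messages.length - 1];
      if (last?.role === "assistant") {
        return [...messages.slice(0, -1), { ...last, text: last.text + (event.delta ?? "") }];
      }
      return [...messages, { role: "assistant", text: event.delta ?? "" }];
    }
    default:
      return messages; // other events (logs, session updates) are handled elsewhere
  }
}
```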
## Development

To modify or extend the application:

- The React components in `azure-openai-demo/src/components` manage different aspects of the UI
- `ApiService.js` in `azure-openai-demo/src/services` handles communication with the .NET API (a sketch of such a wrapper follows this list)
- The .NET API's `AzureOpenAIController.cs` manages communication with Azure OpenAI
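A minimal TypeScript sketch of an ApiService-style wrapper is shown below. The method name, route, and payload shape are hypothetical; the real ones live in `azure-openai-demo/src/services/ApiService.js` and the routes exposed by `AzureOpenAIController`.

```typescript
// Hypothetical service-layer helper for calling the .NET proxy.
const API_BASE_URL = "http://localhost:5126"; // default API address from the setup section

export interface SessionSettings {
  voice?: string;      // e.g. the configurable voice selection
  deployment?: string; // e.g. "gpt-4o-realtime-preview"
}

export async function createSession(settings: SessionSettings = {}): Promise<unknown> {
  const response = await fetch(`${API_BASE_URL}/api/azureopenai/session`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(settings),
  });
  if (!response.ok) {
    throw new Error(`Session creation failed: HTTP ${response.status}`);
  }
  // The controller is expected to return session details without exposing the API key.
  return response.json();
}
```

Keeping the base URL and routes in one service module means the components only deal with plain functions, and the proxy contract can change without touching the UI.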
This project is for demonstration and learning purposes.