A full-stack platform for secure video uploads and AI-powered sentiment analysis, featuring robust API management and usage quotas.
Decode human emotion and sentiment from video, audio, and text—at scale, in real time, and with research-grade accuracy.
NeuroSense is a next-generation multimodal AI framework that fuses video, audio, and text to recognize emotions and sentiments in human communication. Designed for research, real-world deployment, and SaaS applications, NeuroSense combines the power of deep learning, cloud scalability, and a modern web interface.
- 🎥 Video Frame Analysis — Extracts facial and contextual cues using ResNet3D.
- 🎙️ Audio Feature Extraction — Captures vocal emotion with Mel spectrograms and CNNs.
- 📝 Text Embeddings with BERT — Understands semantic sentiment from transcripts.
- 🔗 Multimodal Fusion — Late fusion of 128D features from each modality for robust affect detection.
- 📊 Dual-Head Classification — Simultaneous prediction of 7 emotion classes and 3 sentiment classes.
- 🧪 Model Training & Evaluation — Efficient PyTorch pipeline with TensorBoard logging.
- ☁️ Scalable Cloud Deployment — AWS SageMaker for training, S3 for data, and real-time inference endpoints.
- 🔐 Authentication & API Keys — Auth.js and secure key management for SaaS users.
- 📈 Usage Quota Tracking — Monitor and limit API usage per user.
- 🌐 Modern Frontend — Next.js, Tailwind CSS, and T3 Stack for a seamless user experience.
- 🖼️ Rich Visualizations — Confusion matrices, training curves, and interactive analytics.
```
Video Frames ──[ResNet3D]──┐
                           │
Text ─────────[BERT]───────┼──► [Fusion Layer] ──┬──► [Emotion Classifier]   ──► 7 Emotions
                           │                     │
Audio ────────[CNN+Mel]────┘                     └──► [Sentiment Classifier] ──► 3 Sentiments
```
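On the frontend, the dual-head output above maps naturally onto a typed response. A minimal TypeScript sketch — the specific class names and field names are illustrative assumptions, not the project's actual API contract:

```typescript
// Hypothetical shape of one analyzed segment. The seven emotion and
// three sentiment labels mirror the dual-head architecture above;
// the exact label set and field names are assumptions.
type Emotion =
  | "anger" | "disgust" | "fear" | "joy"
  | "neutral" | "sadness" | "surprise";
type Sentiment = "negative" | "neutral" | "positive";

interface SegmentAnalysis {
  startTime: number;                     // seconds into the video
  endTime: number;
  emotions: Record<Emotion, number>;     // softmax scores, sum ≈ 1
  sentiments: Record<Sentiment, number>; // softmax scores, sum ≈ 1
}

// Pick the highest-scoring label from either classification head.
function dominantLabel<K extends string>(scores: Record<K, number>): K {
  return (Object.keys(scores) as K[]).reduce((best, k) =>
    scores[k] > scores[best] ? k : best,
  );
}
```

A helper like `dominantLabel` is what an emotion-timeline view would call per segment to choose which label to render.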
- JWT-based session management with NextAuth
- Credential authentication with bcrypt hashing
- Role-based API key system with crypto-safe secret generation
- Secure S3 presigned URL generation for uploads
- AWS SageMaker integration for ML analysis
- Supported formats: MP4, MOV, AVI
- Type-safe endpoints with Zod validation
- Usage quotas with monthly resets
- Interactive API documentation with TS/cURL examples
- Animated UI components with real-time feedback
- Analysis visualization with emotion timelines
- Quota monitoring and API key management
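The "crypto-safe secret generation" idea can be sketched with Node's built-in `crypto` module: issue a random secret once, store only its hash. The `ns_` prefix, 32-byte length, and SHA-256 choice here are illustrative assumptions, not the project's actual scheme:

```typescript
import { randomBytes, createHash } from "node:crypto";

// Generate an API key and the hash to persist. Only the hash is stored
// server-side, so a database leak does not expose usable keys.
function generateApiKey(): { apiKey: string; storedHash: string } {
  const apiKey = "ns_" + randomBytes(32).toString("base64url");
  const storedHash = createHash("sha256").update(apiKey).digest("hex");
  return { apiKey, storedHash };
}

// On each request, hash the presented key and compare with the stored hash.
function verifyApiKey(presented: string, storedHash: string): boolean {
  return createHash("sha256").update(presented).digest("hex") === storedHash;
}
```

Prefixing keys (`ns_`) is a common convention that makes leaked secrets easy to identify in logs and secret scanners.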
**Core**
- Next.js 13 (App Router)
- TypeScript
- Prisma (PostgreSQL)
- NextAuth.js
**AI/Cloud**
- AWS S3 (Video Storage)
- AWS SageMaker (ML Inference)
- AWS SDK v3
**Utilities**
- Zod (Schema Validation)
- bcryptjs (Password Hashing)
- react-icons (UI Icons)
- Node.js 18+
- PostgreSQL
- AWS account with S3/SageMaker access
```bash
git clone https://github.com/yourusername/neurosense-frontend.git
cd neurosense-frontend
bun install
```
- Create a `.env` file:
```env
DATABASE_URL="postgresql://..."
AWS_ACCESS_KEY_ID="..."
AWS_SECRET_ACCESS_KEY="..."
AWS_INFERENCE_BUCKET="..."
AWS_ENDPOINT_NAME="..."
```
- Initialize the database:

```bash
bunx prisma migrate dev
```
- Start the development server:

```bash
bun run dev
```
Request a presigned upload URL:

```bash
curl -X POST "{BASE_URL}/api/upload-url" \
  -H "Authorization: Bearer {API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"fileType": ".mp4"}'
```

Upload the video to the returned presigned URL:

```bash
curl -X PUT "{PRESIGNED_URL}" \
  -H "Content-Type: video/mp4" \
  --data-binary @video.mp4
```

Trigger sentiment inference on the uploaded file:

```bash
curl -X POST "{BASE_URL}/api/sentiment-inference" \
  -H "Authorization: Bearer {API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"key": "inference/uuid.mp4"}'
```
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m 'feat: add amazing feature'`
- Push the branch: `git push origin feature/amazing-feature`
- Open a Pull Request