Datathon Project: Parental Monitoring - Superr

NLP-Based Search Query Monitoring

Role: NLP Engineer & Backend Developer
Industry: Technology
Duration: 3 months

Stage 1: Understanding the Problem & Market Research

  • Kids increasingly use search engines to explore the internet, often encountering inappropriate content.

  • Existing parental control tools either were too restrictive or lacked intelligent categorization.

  • Goal: Create a smart monitoring system that flags questionable searches while allowing safe exploration.

Key Research Findings:

  • 62% of parents worry about their children encountering inappropriate content online.

  • Most filtering tools rely on pre-set lists, so they miss context in evolving internet searches.

Stage 2: Developing the AI-Powered Categorization System

  • Used NLP models to classify each search query into three categories: Safe, Questionable, and NSFW.

  • Implemented a real-time search monitoring engine that provides instant alerts for flagged content (a minimal sketch of this flow follows the feature list below).

  • Designed a parent dashboard for monitoring search activity.

Key Features:
✔️ Real-time NLP-Based Categorization – Queries analyzed & classified instantly.
✔️ Parental Alerts & Dashboard – Sends notifications when flagged searches occur.
✔️ Customizable Filtering Settings – Allows parents to fine-tune monitoring preferences.
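
The flow behind these features can be sketched in a few lines. This is a hedged illustration rather than the production code: the category labels mirror the ones described above, but the alert threshold, the QueryResult dataclass, and the notify callback are assumptions made for the example.

```python
# Minimal sketch of the categorization-to-alert flow; threshold, dataclass,
# and notify callback are illustrative assumptions, not Superr's actual code.
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    SAFE = "safe"
    QUESTIONABLE = "questionable"
    NSFW = "nsfw"


@dataclass
class QueryResult:
    query: str
    category: Category
    confidence: float


# Hypothetical parent preferences: which categories trigger an alert and the
# minimum model confidence required before a notification is sent.
ALERT_CATEGORIES = {Category.QUESTIONABLE, Category.NSFW}
ALERT_CONFIDENCE_THRESHOLD = 0.80


def should_alert(result: QueryResult) -> bool:
    """Decide whether a classified query should raise a parental alert."""
    return (
        result.category in ALERT_CATEGORIES
        and result.confidence >= ALERT_CONFIDENCE_THRESHOLD
    )


def handle_query(result: QueryResult, notify) -> None:
    """Send a dashboard notification only for flagged, high-confidence queries."""
    if should_alert(result):
        notify(
            f"Flagged search: '{result.query}' "
            f"({result.category.value}, confidence {result.confidence:.2f})"
        )
```

A Questionable or NSFW result above the threshold notifies the dashboard; Safe results pass through silently, which is how the "allow safe exploration" goal is preserved.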

Stage 3: Building & Testing the AI Model

  • Implemented a BERT-based text classification model for highly accurate query categorization (a minimal fine-tuning sketch follows this list).

  • Trained the model on a dataset of 100,000+ search queries labeled as safe, questionable, or explicit.

  • Integrated the model with a Django backend API to handle real-time search processing.
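
A minimal fine-tuning sketch for such a three-way classifier is shown below, assuming the Hugging Face Transformers Trainer was used; the inline example rows, label order, and hyperparameters are illustrative stand-ins, not the project's actual training setup.

```python
# Hedged sketch: fine-tune bert-base-uncased for 3-way query classification.
# The tiny inline dataset stands in for the real 100,000+ labeled queries.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["safe", "questionable", "explicit"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
)

# Illustrative training rows: query text plus an integer label index.
train_data = Dataset.from_dict({
    "text": ["science fair ideas", "how to buy vapes underage", "minecraft builds"],
    "label": [0, 1, 0],
})


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)


train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="query-classifier",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_data,
)
trainer.train()

# Persist model and tokenizer so a separate API process can load them.
trainer.save_model("query-classifier")
tokenizer.save_pretrained("query-classifier")
```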

🛠️ Tech Stack:

  • Backend: Django, FastAPI (a minimal FastAPI serving sketch follows this list)

  • NLP Model: BERT (Transformers)

  • Database: PostgreSQL

  • Frontend: React.js (Parent Dashboard)
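
The stack lists both Django and FastAPI; one plausible split, assumed here, is a small FastAPI service that serves the classifier while the Django backend and React dashboard call it. The endpoint path, request fields, and model directory in the sketch below are assumptions for illustration.

```python
# Hedged sketch of a real-time classification endpoint (run with: uvicorn main:app).
# The "query-classifier" directory is assumed to hold the fine-tuned model and
# tokenizer saved in the training sketch above.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("text-classification", model="query-classifier")


class QueryIn(BaseModel):
    query: str


@app.post("/classify")
def classify(body: QueryIn) -> dict:
    """Return the predicted category and confidence for one search query."""
    prediction = classifier(body.query)[0]  # e.g. {"label": "safe", "score": 0.98}
    return {
        "query": body.query,
        "category": prediction["label"],
        "confidence": prediction["score"],
    }
```

A POST to /classify with a body like {"query": "minecraft builds"} returns the category and confidence, which the alert logic and dashboard can consume.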

Stage 4: User Testing & Iterations

  • Conducted mock tests with 200+ sample search queries to evaluate model accuracy.

  • Reduced the false-positive rate by 30%, improving classification reliability.

  • Designed an explainability layer so parents could understand why a search was flagged (a minimal sketch follows this list).
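
The project's exact explanation method isn't documented here; one simple approach that fits the description, sketched below as a hedged illustration, is word-level occlusion: re-score the query with each word removed and surface the words whose removal most lowers the flagged score.

```python
# Hedged sketch of an occlusion-style explanation. `score_fn` is an assumed
# callable that returns the flagged-category probability for a query string.
def explain_flagged_query(query: str, score_fn) -> list[tuple[str, float]]:
    """Rank words by how much removing them lowers the flagged score."""
    words = query.split()
    base_score = score_fn(query)
    contributions = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        drop = base_score - score_fn(reduced)  # large drop => word drove the flag
        contributions.append((word, drop))
    return sorted(contributions, key=lambda pair: pair[1], reverse=True)
```

The top-ranked words can then be shown to parents alongside the alert as the "reason" a query was flagged.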

📊 Impact:
  • Classified search queries with 92% accuracy (see the metric sketch below).
  • Reduced exposure to harmful content by 70% in test environments.
  • Provided parents with real-time insights & control over their child’s internet activity.
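
A hedged sketch of how figures like the 92% accuracy and the false-positive reduction might be measured, assuming scikit-learn and treating safe queries predicted as anything else as false positives.

```python
# Illustrative metric computation over a labeled test set; the label strings
# and the definition of a false positive are assumptions, not project specifics.
from sklearn.metrics import accuracy_score


def evaluate(y_true: list[str], y_pred: list[str]) -> dict:
    """Overall accuracy plus the rate at which safe queries get flagged."""
    accuracy = accuracy_score(y_true, y_pred)
    safe_total = sum(1 for t in y_true if t == "safe")
    false_positives = sum(
        1 for t, p in zip(y_true, y_pred) if t == "safe" and p != "safe"
    )
    fpr = false_positives / safe_total if safe_total else 0.0
    return {"accuracy": accuracy, "false_positive_rate": fpr}
```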

Stage 5: Hackathon Presentation & Implementation in Superr.app

  • Presented the project at the hackathon demo day, receiving positive feedback for its practical impact.

  • Adopted by Superr.app as part of their AI-driven parental monitoring system, helping parents filter and monitor children’s online searches.

  • Proposed a browser extension & mobile app version for better accessibility.

  • Open-sourced part of the search classification API for future development.

📎 Outcome:
  • Superr.app implemented Parental AI, making it a real-world product in online child safety.
  • Won recognition at the hackathon and received interest from parental control software firms.

Reflections

This project challenged me to blend AI, NLP, and real-world problem-solving. It reinforced the importance of ethical AI development and building models that add real value while respecting privacy concerns.
