
Imagine a website that doesn’t just present information, but interacts with you on a personal level. A web page that sees you smile and celebrates with you, or senses your calm and offers a moment of peace. This isn’t science fiction; it’s the power of emotion recognition in the browser.
In this tutorial, we’ll build a simple yet powerful web application that uses your webcam to detect your emotions and changes the displayed content in real-time. By the end, you’ll have a web page that plays an upbeat video when you’re happy and a calming one when you’re neutral.
User Requirement
Build a web application that uses a device’s camera to recognize a user’s emotion and dynamically changes the displayed content in response. For example, if the app detects a ‘happy’ expression, it should show an upbeat video. If it detects a ‘neutral’ expression, it should display a calming video.
Prerequisites
Before we begin, make sure you have the following:
- A basic understanding of HTML, CSS, and JavaScript.
- A modern web browser (like Chrome or Firefox) that supports the necessary APIs.
- A text editor (such as Visual Studio Code, Sublime Text, or Atom).
- A working webcam.
Step 1: The Foundation – Setting Up Your HTML
Every web app needs a skeleton. Let’s create an index.html file. This file will hold a video element for our webcam feed and a container where the magic happens – our dynamic content.
Create a file named index.html and add the following code:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Emotion Reactive App</title>
  <style>
    body {
      font-family: sans-serif;
      display: flex;
      justify-content: center;
      align-items: center;
      flex-direction: column;
      height: 100vh;
      margin: 0;
      background-color: #f0f0f0;
    }
    #video-container {
      position: relative;
      border: 3px solid #333;
      border-radius: 10px;
      overflow: hidden;
    }
    canvas {
      position: absolute;
      top: 0;
      left: 0;
    }
    #content-container {
      margin-top: 20px;
      padding: 20px;
      background-color: white;
      border-radius: 10px;
      box-shadow: 0 4px 8px rgba(0,0,0,0.1);
      width: 720px;
      text-align: center;
    }
  </style>
</head>
<body>
  <h1>Show Me How You Feel!</h1>
  <div id="video-container">
    <video id="video" width="720" height="560" autoplay muted></video>
  </div>
  <div id="content-container">
    <p>Loading AI model...</p>
  </div>
  <!-- We'll add our JavaScript here -->
  <script defer src="face-api.min.js"></script>
  <script defer src="script.js"></script>
</body>
</html>
This code sets up a centered video player and a content box below it. The defer attribute in the script tags ensures our JavaScript runs after the HTML is parsed.
Step 2: The Brains – Integrating the AI Model
We’ll use a fantastic open-source library called face-api.js to handle the heavy lifting of face and emotion detection. First, you’ll need to download the library and its pre-trained models.
- Download face-api.js: Get the face-api.min.js file from the official GitHub repository and place it in your project folder.
- Download the Models: Download the pre-trained models from the weights folder in the same repository. Create a models folder in your project directory and place the downloaded model files inside it; the resulting folder layout is sketched after this list.
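For this tutorial you only need the weights for the tiny face detector, the 68-point landmark model, and the expression model. Assuming those are the files you copied, your project folder should look roughly like this (exact file names can vary between library versions):
your-project/
├── index.html
├── script.js
├── face-api.min.js
└── models/
    ├── tiny_face_detector_model-weights_manifest.json
    ├── tiny_face_detector_model-shard1
    ├── face_landmark_68_model-weights_manifest.json
    ├── face_landmark_68_model-shard1
    ├── face_expression_model-weights_manifest.json
    └── face_expression_model-shard1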
Now, let’s create our script.js file. This is where we will write the logic for our application.
Step 3: The Eye of the App – Accessing the Webcam
Before we can detect emotions, we need to see the user. We’ll use the navigator.mediaDevices.getUserMedia API to access the webcam and stream its feed into the <video> element we created.
Add the following code to your script.js file:
const video = document.getElementById('video');
const contentContainer = document.getElementById('content-container');

// Load the AI models first, then start the webcam
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
  faceapi.nets.faceExpressionNet.loadFromUri('/models')
]).then(startVideo);

async function startVideo() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    video.srcObject = stream;
    contentContainer.innerHTML = '<p>Detecting your emotion...</p>';
  } catch (err) {
    console.error("Error accessing webcam: ", err);
    contentContainer.innerHTML = '<p>Error: Could not access webcam.</p>';
  }
}
When you open index.html in your browser, it will now ask for permission to use your camera. Once you grant it, you should see your face on the screen!
Step 4: Feeling the Vibe – Detecting Emotions
This is where the AI comes into play. We’ll set up a loop that continuously analyzes the video feed, detects a face, and identifies the user’s primary emotion.
Add this code to your script.js file, right after the startVideo function:
video.addEventListener('play', () => {
  // Create a canvas to draw detection results over the video
  const canvas = faceapi.createCanvasFromMedia(video);
  document.getElementById('video-container').append(canvas);
  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);

  setInterval(async () => {
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions();

    if (detections.length > 0) {
      const expressions = detections[0].expressions;
      // Get the emotion with the highest confidence
      const primaryEmotion = Object.keys(expressions).reduce((a, b) => expressions[a] > expressions[b] ? a : b);

      // This is where we'll trigger our content change
      changeContent(primaryEmotion);

      // Optional: Draw detections on the canvas for visual feedback
      const resizedDetections = faceapi.resizeResults(detections, displaySize);
      canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
      faceapi.draw.drawDetections(canvas, resizedDetections);
      faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
    }
  }, 200); // Run detection every 200 milliseconds
});
This code listens for the video to start playing. It then sets up an interval that:
- Detects faces and their expressions using face-api.js.
- Figures out the most likely emotion (e.g., “happy”, “sad”, “neutral”); the expressions object this relies on is sketched after this list.
- Calls a function changeContent with the detected emotion.
- Draws the detection results over the video so you can see what the AI sees.
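To make the “most likely emotion” step concrete, here is what the expressions object attached to each detection looks like. The seven keys are the expression classes face-api.js predicts; the probability values below are made up for illustration:
// Illustrative example of detections[0].expressions for a smiling face
// (values are invented; only the seven keys come from face-api.js).
const expressions = {
  neutral: 0.02,
  happy: 0.95,
  sad: 0.01,
  angry: 0.005,
  fearful: 0.005,
  disgusted: 0.005,
  surprised: 0.005
};

// The reduce call keeps whichever key has the higher probability,
// so here it picks 'happy'.
const primaryEmotion = Object.keys(expressions)
  .reduce((a, b) => (expressions[a] > expressions[b] ? a : b));
console.log(primaryEmotion); // "happy"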
Step 5: The Reaction – Changing Content Dynamically
We have the emotion, now let’s make the app react! The final step is to write the changeContent function. This function will update the content container based on the emotion it receives.
Add the final piece of code to script.js:
let currentEmotion = '';

function changeContent(emotion) {
  // Only update if the emotion has changed
  if (emotion !== currentEmotion) {
    currentEmotion = emotion;
    let content = '';
    console.log("Detected emotion:", emotion); // For debugging

    switch (emotion) {
      case 'happy':
        content = `
          <h3>You look happy! Here's a fun video.</h3>
          <iframe width="560" height="315" src="https://www.youtube.com/embed/ZbZSe6N_BXs?autoplay=1&mute=1" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
        `;
        break;
      case 'neutral':
        content = `
          <h3>Feeling calm? Enjoy this relaxing scene.</h3>
          <iframe width="560" height="315" src="https://www.youtube.com/embed/BHACKCNDMW8?autoplay=1&mute=1" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
        `;
        break;
      case 'sad':
        content = `<p>It's okay to feel sad. Here's a comforting thought: "This too shall pass."</p>`;
        break;
      case 'surprised':
        content = `<p>Wow, you look surprised!</p>`;
        break;
      default:
        content = `<p>Detecting your emotion...</p>`;
    }
    contentContainer.innerHTML = content;
  }
}
This function keeps track of the currentEmotion to avoid constantly reloading content. When a new emotion is detected, it uses a switch statement to select the appropriate HTML content (in this case, YouTube embeds) and updates the content-container.
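If the detector wavers between two expressions, the content can still flip back and forth every few hundred milliseconds. One possible refinement, shown here only as a sketch (changeContentStable and REQUIRED_STREAK are names invented for this example), is to commit to a new emotion only after it has been detected several times in a row:
// A sketch of a sturdier variant: only switch content once the same
// emotion has been detected REQUIRED_STREAK times in a row.
const REQUIRED_STREAK = 5; // at a 200 ms interval, about one second
let candidateEmotion = '';
let streak = 0;

function changeContentStable(emotion) {
  if (emotion === candidateEmotion) {
    streak++;
  } else {
    candidateEmotion = emotion; // new candidate, restart the count
    streak = 1;
  }
  if (streak === REQUIRED_STREAK) {
    changeContent(emotion); // reuses the changeContent function above
  }
}
To use it, call changeContentStable(primaryEmotion) in the detection loop instead of calling changeContent directly.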
The Final Result
That’s it! Open your index.html file in a browser. If you see a failed-model-loading error, it is most likely because browsers block the fetch requests face-api.js makes when a page is opened over the file:// protocol. Serve the project from a local server instead: run python3 -m http.server 8000 in your project folder, then open http://localhost:8000 in your browser.
After loading the models and accessing your camera, try smiling. The content should switch to the “happy” video. Then, adopt a neutral expression, and it should change to the calming video.
You’ve successfully built a web application that bridges the gap between human emotion and digital content.
What’s Next?
This is just the beginning. You can expand on this project in countless ways:
- More Emotions: Add cases for “angry,” “sad,” or “surprised” expressions.
- Different Content: Instead of videos, change the website’s background color, play different music, or display different articles (see the sketch after this list).
- User Feedback: Create an application that adjusts its difficulty or provides encouragement based on a user’s expression of confusion or satisfaction.
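As a starting point for the “Different Content” idea, here is a minimal sketch that tints the page background instead of swapping videos; the emotion-to-color mapping is entirely arbitrary:
// A minimal sketch: map each detected emotion to a background color.
const emotionColors = {
  happy: '#fff9c4',    // warm yellow
  neutral: '#e3f2fd',  // calm blue
  sad: '#eceff1',      // muted grey
  surprised: '#fce4ec' // soft pink
};

function changeBackground(emotion) {
  // Fall back to the page's default color for unmapped emotions.
  document.body.style.backgroundColor = emotionColors[emotion] || '#f0f0f0';
}
You could then call changeBackground(primaryEmotion) alongside changeContent in the detection loop.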
The basic version above can be extended into a version with a more polished UI. Click here to try the demo!
The ability for technology to understand and react to human emotion opens up a new frontier for creating truly interactive and empathetic user experiences. Go ahead and experiment—what will you build?

