The Web that Reads Your Mood: Building an Emotion-Reactive App with AI (Basic Version)

An image showing the emotion reaction web app UI.

Note: The face has been modified with an AI tool for privacy reasons.

Imagine a website that doesn’t just present information, but interacts with you on a personal level. A web page that sees you smile and celebrates with you, or senses your calm and offers a moment of peace. This isn’t science fiction; it’s the power of emotion recognition in the browser.

In this tutorial, we’ll build a simple yet powerful web application that uses your webcam to detect your emotions and changes the displayed content in real-time. By the end, you’ll have a web page that plays an upbeat video when you’re happy and a calming one when you’re neutral.

User Requirement

Build a web application that uses a device’s camera to recognize a user’s emotion and dynamically changes the displayed content in response. For example, if the app detects a ‘happy’ expression, it should show an upbeat video. If it detects a ‘neutral’ expression, it should display a calming video.

Prerequisites

Before we begin, make sure you have the following:

  • A basic understanding of HTML, CSS, and JavaScript.
  • A modern web browser (like Chrome or Firefox) that supports the necessary APIs.
  • A text editor (such as Visual Studio Code, Sublime Text, or Atom).
  • A working webcam.

Step 1: The Foundation – Setting Up Your HTML

Every web app needs a skeleton. Let’s create an index.html file. This file will hold a video element for our webcam feed and a container where the magic happens – our dynamic content.

Create a file named index.html and add the following code:
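A minimal version of that file might look like the sketch below. The element ids (video, video-container, content-container), the file name face-api.min.js, and the inline styles are assumptions; adjust them to match your own project, but keep the ids consistent with your script.js.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>Emotion-Reactive App</title>
  <style>
    body { display: flex; flex-direction: column; align-items: center; font-family: sans-serif; }
    #video-container { position: relative; }
    #content-container { margin-top: 1rem; width: 640px; }
  </style>
</head>
<body>
  <div id="video-container">
    <!-- Webcam feed; muted + autoplay so the stream starts without user interaction -->
    <video id="video" width="640" height="480" autoplay muted></video>
  </div>

  <!-- The dynamic content (videos) will be injected here -->
  <div id="content-container"></div>

  <!-- Load face-api.js first, then our app logic; defer runs both after parsing -->
  <script defer src="face-api.min.js"></script>
  <script defer src="script.js"></script>
</body>
</html>
```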

This code sets up a centered video player and a content box below it. The defer attribute in the script tags ensures our JavaScript runs after the HTML is parsed.

Step 2: The Brains – Integrating the AI Model

We’ll use a fantastic open-source library called face-api.js to handle the heavy lifting of face and emotion detection. First, you’ll need to download the library and its pre-trained models.

  1. Download face-api.js: Get the face-api.min.js file from the official GitHub repository and place it in your project folder.
  2. Download the Models: Download the pre-trained models from the weights folder in the same repository. Create a models folder in your project directory and place the downloaded model files inside it.

Now, let’s create our script.js file. This is where we will write the logic for our application.

Step 3: The Eye of the App – Accessing the Webcam

Before we can detect emotions, we need to see the user. We’ll use the navigator.mediaDevices.getUserMedia API to access the webcam and stream its feed into the <video> element we created.

Add the following code to your script.js file:
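A sketch of this step is below. It assumes face-api.js is loaded globally as faceapi, the pre-trained weights sit in a local models folder, and the video element has the id video (as in the HTML from Step 1); rename these to match your setup.

```javascript
const video = document.getElementById('video');

// Path to the pre-trained weights downloaded in Step 2 (an assumption;
// point this at wherever you placed the model files).
const MODEL_URL = './models';

// Load the face detector and the expression classifier, then start the webcam.
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri(MODEL_URL),
  faceapi.nets.faceExpressionNet.loadFromUri(MODEL_URL),
])
  .then(startVideo)
  .catch((err) => console.error('Failed to load models:', err));

function startVideo() {
  navigator.mediaDevices
    .getUserMedia({ video: true })
    .then((stream) => {
      // Pipe the webcam stream into the <video> element.
      video.srcObject = stream;
    })
    .catch((err) => console.error('Webcam access was denied:', err));
}
```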

When you open index.html in your browser, it will now ask for permission to use your camera. Once you grant it, you should see your face on the screen!

Step 4: Feeling the Vibe – Detecting Emotions

This is where the AI comes into play. We’ll set up a loop that continuously analyzes the video feed, detects a face, and identifies the user’s primary emotion.

Add this code to your script.js file, right after the startVideo function:
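One way to implement this loop is sketched below. The 500 ms interval, the video-container id, and the use of the tiny face detector are assumptions; face-api.js also offers larger, slower detectors if you need more accuracy.

```javascript
video.addEventListener('play', () => {
  // Create a canvas overlay so detection results can be drawn on top of the feed.
  const canvas = faceapi.createCanvasFromMedia(video);
  document.getElementById('video-container').append(canvas);
  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);

  setInterval(async () => {
    // 1. Detect faces and their expressions.
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceExpressions();

    // 4. Redraw the overlay so you can see what the AI sees.
    const resized = faceapi.resizeResults(detections, displaySize);
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
    faceapi.draw.drawDetections(canvas, resized);
    faceapi.draw.drawFaceExpressions(canvas, resized);

    if (detections.length > 0) {
      // 2. Pick the expression with the highest score, e.g. "happy" or "neutral".
      const expressions = detections[0].expressions;
      const topEmotion = Object.keys(expressions).reduce((a, b) =>
        expressions[a] > expressions[b] ? a : b
      );
      // 3. Hand the detected emotion to the content switcher (defined in Step 5).
      changeContent(topEmotion);
    }
  }, 500);
});
```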

This code listens for the video to start playing. It then sets up an interval that:

  1. Detects faces and their expressions using face-api.js.
  2. Figures out the most likely emotion (e.g., “happy”, “sad”, “neutral”).
  3. Calls a function changeContent with the detected emotion.
  4. Draws the detection results over the video so you can see what the AI sees.

Step 5: The Reaction – Changing Content Dynamically

We have the emotion, now let’s make the app react! The final step is to write the changeContent function. This function will update the content container based on the emotion it receives.

Add the final piece of code to script.js:
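A sketch of that function follows. The content-container id matches the HTML from Step 1, and the YouTube video IDs are placeholders you must replace with real embeds of your choosing.

```javascript
let currentEmotion = null;

function changeContent(emotion) {
  // Only touch the DOM when the emotion actually changes, so the embedded
  // video is not reloaded on every detection tick.
  if (emotion === currentEmotion) return;
  currentEmotion = emotion;

  const container = document.getElementById('content-container');
  switch (emotion) {
    case 'happy':
      // Placeholder embed URL; substitute your own upbeat video.
      container.innerHTML =
        '<iframe width="640" height="360" src="https://www.youtube.com/embed/VIDEO_ID_HAPPY" frameborder="0" allowfullscreen></iframe>';
      break;
    case 'neutral':
      // Placeholder embed URL; substitute your own calming video.
      container.innerHTML =
        '<iframe width="640" height="360" src="https://www.youtube.com/embed/VIDEO_ID_NEUTRAL" frameborder="0" allowfullscreen></iframe>';
      break;
    default:
      // Other emotions are ignored in this basic version.
      break;
  }
}
```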

This function keeps track of the currentEmotion to avoid constantly reloading content. When a new emotion is detected, it uses a switch statement to select the appropriate HTML content (in this case, YouTube embeds) and updates the content-container.

The Final Result

That’s it! Open your index.html file in a browser. If you see a failed-model-loading error, the page is probably being served straight from the file system, which blocks the model files from loading. Serve it from a local server instead: run python3 -m http.server 8000 in your project folder, and open http://localhost:8000 in your browser.

After loading the models and accessing your camera, try smiling. The content should switch to the “happy” video. Then, adopt a neutral expression, and it should change to the calming video.

You’ve successfully built a web application that bridges the gap between human emotion and digital content.

What’s Next?

This is just the beginning. You can expand on this project in countless ways:

  • More Emotions: Add cases for “angry,” “sad,” or “surprised” expressions.
  • Different Content: Instead of videos, change the website’s background color, play different music, or display different articles.
  • User Feedback: Create an application that adjusts its difficulty or provides encouragement based on a user’s expression of confusion or satisfaction.

The basic version above can be extended into a version with a better UI. Click Here to try the demo!

The ability for technology to understand and react to human emotion opens up a new frontier for creating truly interactive and empathetic user experiences. Go ahead and experiment—what will you build?
