

Democratizing Webcam Eye Tracking on the Browser

WebGazer.js is an eye tracking library that uses common webcams to infer the eye-gaze locations of web visitors on a page in real time. The eye tracking model it contains self-calibrates by watching web visitors interact with the web page, training a mapping between features of the eye and positions on the screen. WebGazer.js is written entirely in JavaScript and, with only a few lines of code, can be integrated into any website that wants to better understand its visitors and transform its user experience. WebGazer.js runs entirely in the client browser, so no video data needs to be sent to a server, and it runs only if the user consents to giving access to their webcam.

Real time gaze prediction on most major browsers

No special hardware - WebGazer.js uses common webcams

Self-calibration from clicks and cursor movements

Easy to integrate with a few lines of JavaScript

Swappable components for eye detection

Multiple gaze prediction models


To use WebGazer.js you need to add the webgazer.js file as a script in your website:
<!-- WebGazer.js library -->
<script src="webgazer.js" type="text/javascript"></script>

Be aware that during local development you may need to run a simple local web server that supports the HTTPS protocol, since browsers require a secure context to grant webcam access.

Once the script is included, the webgazer object is introduced into the global namespace. webgazer has methods for controlling the operation of WebGazer.js, allowing us to start and stop it, add callbacks, or swap out modules. The two most important methods on webgazer are webgazer.begin() and webgazer.setGazeListener(). webgazer.begin() starts the data collection that enables the predictions, so it's important to call this early on. Once webgazer.begin() has been called, WebGazer.js is ready to start giving predictions. webgazer.setGazeListener() is a convenient way to access these predictions. This method invokes a callback you provide every few milliseconds with the current gaze location of a user. If you don't need constant access to this data stream, you may alternatively call webgazer.getCurrentPrediction(), which will give you a prediction at the moment it is called.

webgazer.setGazeListener(function(data, elapsedTime) {
    if (data == null) {
        return;
    }
    var xprediction = data.x; //these x coordinates are relative to the viewport
    var yprediction = data.y; //these y coordinates are relative to the viewport
    console.log(elapsedTime); //elapsed time is based on time since begin was called
}).begin();

Here is the alternative method of getting predictions, where you request a gaze prediction as needed.

var prediction = webgazer.getCurrentPrediction();
if (prediction) {
    var x = prediction.x;
    var y = prediction.y;
}

Advanced Usage

There are several features that WebGazer.js enables beyond the example shown so far.

Saving Data Between Sessions

WebGazer.js can save and restore the training data between browser sessions by storing it in the browser's localStorage. This occurs automatically when end() is called. If you want each user session to be independent, make sure that you do not call the end() function.
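As a sketch of this pattern, you could trigger the save by calling end() when the visitor leaves the page. Note that installSaveOnExit is a hypothetical helper name introduced here for illustration, not part of the WebGazer.js API; the snippet is guarded so it is inert outside a browser.

```javascript
// Persist the trained model when the visitor leaves the page, so the
// next session starts already calibrated. end() stores the training
// data to localStorage, as described above.
function installSaveOnExit(gazer) {
  function save() {
    gazer.end();
  }
  if (typeof window !== 'undefined') {
    window.addEventListener('beforeunload', save);
  }
  return save; // returned so it can also be invoked manually
}
```

You would call installSaveOnExit(webgazer) once, after webgazer.begin().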


Changing the Regression and Tracker Modules in Use

At the heart of WebGazer.js are the tracker and regression modules. The tracker module controls how eyes are detected and the regression module determines how the regression model is learned and how predictions are made based on the eye patches extracted from the tracker module. These modules can be swapped in and out at any time. We hope that this will make it easy to extend and adapt WebGazer.js and welcome any developers that want to contribute.

WebGazer.js requires the bounding box that includes the pixels from the webcam video feed that correspond to the detected eyes of the user. Currently we include three external libraries that implement different Computer Vision algorithms to detect the face and eyes.

webgazer.setTracker("clmtrackr"); //set a tracker module
webgazer.addTrackerModule("newTracker", NewTrackerConstructor); //add a new tracker module

Several external tracker modules come with WebGazer.js by default. Let us know if you want to introduce your own facial feature detection library.

webgazer.setRegression("ridge"); //set a regression module
webgazer.addRegressionModule("newReg", NewRegConstructor); //add a new regression module

Here are all the regression modules that come by default with WebGazer.js. Let us know if you would like to introduce different modules - just keep in mind that they should be able to produce predictions very fast.

  • ridge - a simple ridge regression model mapping pixels from the detected eyes to locations on the screen.
  • weightedRidge - a weighted ridge regression model in which the newest user interactions contribute more to the model.
  • threadedRidge - a faster implementation of ridge regression that uses threads.
  • linear - a basic linear regression model mapping pixels from the detected eyes to locations on the screen.
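For intuition, here is a minimal, self-contained sketch of the ridge regression idea these modules are built on: solving w = (X'X + lambda*I)^-1 X'y to map eye-feature vectors to one screen coordinate. This is illustrative only, not WebGazer.js's actual implementation.

```javascript
// Fit ridge regression weights for one output dimension (e.g. screen x).
// X: array of feature vectors, y: array of target coordinates,
// lambda: regularization strength.
function ridgeFit(X, y, lambda) {
  var d = X[0].length;
  var A = []; // will hold X'X + lambda*I
  var b = []; // will hold X'y
  for (var i = 0; i < d; i++) {
    A.push(new Array(d).fill(0));
    b.push(0);
  }
  for (var n = 0; n < X.length; n++) {
    for (var i = 0; i < d; i++) {
      b[i] += X[n][i] * y[n];
      for (var j = 0; j < d; j++) {
        A[i][j] += X[n][i] * X[n][j];
      }
    }
  }
  for (var i = 0; i < d; i++) A[i][i] += lambda;
  return solve(A, b); // the weight vector w
}

// Gauss-Jordan elimination with partial pivoting for the small dense system.
function solve(A, b) {
  var n = A.length;
  var M = A.map(function(row, i) { return row.concat([b[i]]); });
  for (var col = 0; col < n; col++) {
    var piv = col;
    for (var r = col + 1; r < n; r++) {
      if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
    }
    var tmp = M[col]; M[col] = M[piv]; M[piv] = tmp;
    for (var r = 0; r < n; r++) {
      if (r === col) continue;
      var f = M[r][col] / M[col][col];
      for (var c = col; c <= n; c++) M[r][c] -= f * M[col][c];
    }
  }
  return M.map(function(row, i) { return row[n] / M[i][i]; });
}

// Predict one coordinate from a feature vector as the dot product with w.
function ridgePredict(w, x) {
  return x.reduce(function(sum, xi, i) { return sum + xi * w[i]; }, 0);
}
```

The weightedRidge variant follows the same algebra but multiplies each sample's contribution by a recency weight before accumulating X'X and X'y.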

Pause and Resume

It may be necessary to pause the data collection and predictions of WebGazer.js for performance reasons.

webgazer.pause(); //WebGazer.js is now paused, no data will be collected and the gaze callback will not be executed
webgazer.resume(); //data collection resumes, gaze callback will be called again
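One common pattern, shown here as a sketch rather than library code, is to pause collection while the page is hidden and resume when it becomes visible again, using the pause() and resume() calls above. bindVisibilityPause is a name introduced here for illustration.

```javascript
// Pause gaze collection when the tab is hidden and resume when it is
// visible again, to avoid wasting CPU on a page nobody is looking at.
function bindVisibilityPause(gazer, doc) {
  function onChange() {
    if (doc.hidden) {
      gazer.pause();
    } else {
      gazer.resume();
    }
  }
  doc.addEventListener('visibilitychange', onChange);
  return onChange; // returned so it can also be invoked manually
}
```

In a browser you would call bindVisibilityPause(webgazer, document) once after webgazer.begin().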

Util and Params

We provide some useful functions and objects in webgazer.util. The webgazer.params object also contains some useful parameters to tweak, controlling video fidelity (a trade-off between speed and accuracy) and the sample rate for mouse movements.

var prediction = webgazer.util.bound(webgazer.getCurrentPrediction());
prediction.x; //now always in the bounds of the viewport
prediction.y; //now always in the bounds of the viewport
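For intuition, bounding a prediction amounts to clamping its coordinates into the viewport rectangle. Here is a sketch of that idea; clampToViewport is a hypothetical name for illustration, not the library's own code.

```javascript
// Clamp a gaze prediction into [0, width] x [0, height] so downstream
// code never receives coordinates outside the visible page.
function clampToViewport(prediction, width, height) {
  return {
    x: Math.min(Math.max(prediction.x, 0), width),
    y: Math.min(Math.max(prediction.y, 0), height)
  };
}
```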


WebGazer.js uses the getUserMedia/Stream API to get access to the webcam. The following browsers are currently supported.

  • Google Chrome 47+
  • Microsoft Edge 13+
  • Mozilla Firefox 44+
  • Opera 36+

Download Instructions


Create from Source

The GitHub repository can be found here.
git clone
cd build


Empty Webpage Demo

WebGazer.js on an Empty Webpage

See how easy it is to integrate WebGazer.js into any webpage. With just a few clicks you will get real-time predictions. All you need to do is click on a few locations on the screen while looking at the cursor. Both clicks and cursor movements make the predictions more accurate. The video and the visualized gaze prediction are only shown in debugging mode.

Collision demo

Ball Collision Game

Move the orange ball with your eyes and create collisions with the blue balls. Train WebGazer.js by clicking in various locations within the screen, while looking at your cursor.


If you use WebGazer.js please cite the following paper:

@inproceedings{papoutsaki2016webgazer,
  author = {Alexandra Papoutsaki and Patsorn Sangkloy and James Laskey and Nediyana Daskalova and Jeff Huang and James Hays},
  title = {WebGazer: Scalable Webcam Eye Tracking Using User Interactions},
  booktitle = {Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI)},
  pages = {3839--3845},
  year = {2016},
}



Who We Are

Alexandra Papoutsaki

PhD Candidate in Computer Science at Brown University.

James Laskey

Software Engineer at Google. Graduated from Brown in 2016.

Aaron Gokaslan

Undergraduate student at Brown University.

Jeff Huang

Assistant Professor of Computer Science at Brown University.

Other Collaborators


Copyright (C) 2016 Brown HCI Group
Licensed under GPLv3.