The EOTT dataset contains data from 51 participants who took part in an eye tracking study. The data include user input data (such as mouse and cursor logs), screen recordings, webcam videos of the participants' faces, eye-gaze locations as predicted by a Tobii Pro X3-120 eye tracker, demographic information, and information about the lighting conditions. Participants completed pointing tasks, including a Fitts' Law study, as well as reading, Web search, and typing tasks. A 9-point calibration task can be used to evaluate the accuracy of webcam eye trackers, including WebGazer. The study was conducted on either a desktop PC or a MacBook Pro laptop, depending on each participant's preference.
We describe the structure of the dataset and provide optional instructions on how to run WebGazer off-line to assess its accuracy against a Tobii Pro X3-120 eye tracker.
The dataset contains 51 folders, one per participant, with curated data. Each folder is named P_X, where X is the participant's ID. Notice that the folder names run from P_1 to P_64, as the original study was conducted with 64 participants; of these, only 51 have valid data, which we include in the dataset. A detailed explanation of the experiment protocol can be found in Chapter 5 of Papoutsaki's dissertation and in the ETRA 2018 paper.
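A minimal sketch for iterating over the participant folders follows; the dataset root path is a placeholder for wherever you unzip the release.

```python
from pathlib import Path

# Placeholder path; point this at your unzipped copy of the dataset.
DATASET_ROOT = Path("WebGazerETRA2018Dataset")

# Folders are named P_X; IDs run from 1 to 64, but only 51 folders exist.
participant_dirs = sorted(DATASET_ROOT.glob("P_*"),
                          key=lambda p: int(p.name.split("_")[1]))

for p_dir in participant_dirs:
    print(p_dir.name)
```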
{"sessionId": "1491423217564_2_/study/dot_test_instructions", "webpage": "/study/dot_test_instructions.htm", "sessionString": "1491423217564_2_/study/dot_test_instructions", "epoch": 1491423557726, "time": 420.26000000000005, "type": "recording start", "event": "video started"}
{"clientX": 724, "clientY": 440, "windowY": 23, "windowX": 0, "windowInnerWidth": 1440, "time": 1273.9850000000001, "sessionId": "1491423217564_2_/study/dot_test_instructions", "webpage": "/study/dot_test_instructions.htm", "epoch": 1491423558580, "windowOuterWidth": 1440, "windowInnerHeight": 679, "pageX": 724, "pageY": 440, "windowOuterHeight": 797, "screenY": 537, "screenX": 724, "type": "mousemove"}
{"right_pupil_validity": 1, "right_gaze_point_on_display_area": [0.23851549625396729, 0.30423176288604736], "left_gaze_origin_validity": 0, "system_time_stamp": 1491423557714414, "right_gaze_origin_in_user_coordinate_system": [-9.197460174560547, -119.45834350585938, 649.9231567382812], "left_gaze_point_in_user_coordinate_system": [-1.0, -1.0, -1.0], "left_gaze_origin_in_user_coordinate_system": [-1.0, -1.0, -1.0], "left_pupil_validity": 0, "right_pupil_diameter": -1.0, "true_time": 1491423557.724913, "left_gaze_origin_in_trackbox_coordinate_system": [-1.0, -1.0, -1.0], "right_gaze_point_in_user_coordinate_system": [-135.97193908691406, 237.99029541015625, 8.616291999816895], "left_pupil_diameter": -1.0, "right_gaze_origin_validity": 1, "left_gaze_point_validity": 0, "right_gaze_point_validity": 1, "left_gaze_point_on_display_area": [-1.0, -1.0], "right_gaze_origin_in_trackbox_coordinate_system": [0.5202165842056274, 0.7625768184661865, 0.49974384903907776], "device_time_stamp": 193918205466}
An explanation of the three coordinate systems is provided in the Tobii Pro SDK documentation.
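The Tobii samples follow the same line-per-object layout as the interaction logs. As a rough sketch (not part of the released software), one way to keep only valid samples and convert the normalized display-area coordinates to pixels, assuming you know the participant's screen resolution:

```python
import json

def load_valid_gaze(tobii_path, screen_w, screen_h):
    """Yield (timestamp, x_px, y_px) for samples whose right eye is valid.

    Averaging the two eyes, or handling partial validity differently,
    is left to the reader; this sketch simply uses the right eye.
    """
    with open(tobii_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            sample = json.loads(line)
            if sample.get("right_gaze_point_validity") != 1:
                continue
            nx, ny = sample["right_gaze_point_on_display_area"]
            yield sample["true_time"], nx * screen_w, ny * screen_h
```

In the Tobii Pro SDK, gaze_point_on_display_area is normalized, with (0, 0) at the top-left corner of the screen and (1, 1) at the bottom-right.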
At the same level as the 51 user folders, you will find a spreadsheet named Participant_Characteristics. Each row corresponds to a unique participant, and the columns capture the following information:
If you are interested in using this dataset in conjunction with WebGazer, this software takes the dataset and creates CSV files for each video, containing per-frame WebGazer and Tobii values in normalized screen coordinates. After extraction, this makes it simple and efficient to analyze the performance of WebGazer in your favorite data science application.
1. Download the dataset: https://webgazer.cs.brown.edu/data/WebGazerETRA2018Dataset_Release20180420.zip
2. Run: python webgazerExtractServer.py
3. Open: http://localhost:8000/webgazerExtractClient.html
4. Watch for outputs in ../FramesDataset/
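Once extraction finishes, the per-video CSVs can be explored with standard tools. Here is a hypothetical sketch of computing the per-frame distance between WebGazer and Tobii estimates; the path and column names are placeholders, so check the headers of the CSVs the extraction actually produces.

```python
import numpy as np
import pandas as pd

# Placeholder path and column names; inspect the extracted CSVs for the real ones.
df = pd.read_csv("../FramesDataset/P_1/some_video.csv")

# Both WebGazer and Tobii values are in normalized screen coordinates,
# so the Euclidean distance below is also in normalized screen units.
err = np.sqrt((df["webgazer_x"] - df["tobii_x"]) ** 2 +
              (df["webgazer_y"] - df["tobii_y"]) ** 2)
print("mean error:", err.mean())
```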
Contains:
- Watch a replay. As it processes, the system can show the interaction events against the screen recording. Note that only laptop participants have screen recording synchronization data, and even then the two are only roughly aligned. Use the text box to try different numbers and find the sync offset; it varies per participant (see the sketch after this list).
- Write out screen recording videos with interactions overlaid. This uses OpenCV and is a little flaky, but should work. It will slow down extraction a lot. There's a switch in the code to turn it on; let us know if it breaks (it hasn't been tested in a while).
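As noted in the list above, the sync offset between the interaction log and the screen recording is only approximate and varies per participant. Below is a hypothetical helper (not part of the released software) for mapping an event's epoch to a time within the recording, using the 'recording start' entry shown earlier and a manually found offset.

```python
def event_video_time(event, recording_start_epoch, sync_offset_ms):
    """Map an interaction event's epoch (ms) to a time within the screen recording.

    recording_start_epoch: epoch of the 'recording start' log entry (ms).
    sync_offset_ms: per-participant offset found by trial and error in the client UI.
    """
    return event["epoch"] - recording_start_epoch + sync_offset_ms
```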
The software is currently set up to run on only the two dot tests and the four typing videos. This can be changed by editing webgazerExtractServer.py; look for 'filter' as a keyword in the comments. Likewise, the software currently processes all participants; again, look for 'filter'.
Contact us at webgazer( at )lists.cs.brown.edu
Copyright (C) 2021 Brown HCI Group
Licensed under GPLv3.