OpenEyes: Eye Gaze in AR, VR, and in the Wild

ECCV 2020 Workshop, Glasgow, Scotland

News and Updates

28th June Due to time limitations and logistical restrictions, we have decided not to host extended abstracts at this year's OpenEyes.
4th June We are happy to announce new keynote speakers for our workshop!
30th May We have decided to extend the workshop paper submission deadline to Monday 15th June.
29th May We are happy to announce new keynote speakers for our workshop!
19th May ECCV is going virtual, and thus so is OpenEyes 2020. We will announce further details as they become available to us. In general, the workshop will be run in the same fashion as the main conference and other ECCV workshops.


With the advent of consumer products, AR and VR are gaining mainstream attention as forms of immersive technology. However, immersive technology is still in its infancy, as both users and developers figure out the right recipe for it to garner mass appeal.

Eye tracking, a technology that measures where an individual is looking and can enable inference of user attention, could be a key driver of mass appeal for the next generation of immersive technologies, provided user awareness and privacy related to eye-tracking features are taken into account. As such, there is growing interest in improving the state of the art in eye tracking technology. In the past three years, investigations into gaze estimation and prediction methods have produced significant improvements in robustness and accuracy by adopting novel deep neural network architectures. These improvements enable innovative applications of the technology, such as zero-shot image classification and generalized human attention and intent estimation.

Open forums of discussion provide opportunities to further improve eye tracking technology, especially in areas such as scale and generalization challenges in the next generation of AR and VR systems. For that reason, Facebook organized the first “Eye Tracking for VR and AR (OpenEDS)” challenge at ICCV 2019, and the independent GAZE committee organized a workshop titled “Gaze Estimation and Prediction in the Wild (GAZE)”.

For 2020, the Facebook and GAZE committees are partnering to host a joint workshop titled “Eye Gaze in VR, AR, and in the Wild” at the biennial ECCV conference. The workshop will host two tracks: the first focuses on gaze estimation and prediction methods, emphasizing accuracy and robustness in natural (in-the-wild) settings; the second focuses on the scale and generalization problems of eye tracking systems operating on AR and VR platforms. The second track also includes the 2020 eye tracking challenge. More information on the OpenEDS 2020 challenge can be found below.

The following topics are of particular interest to the joint workshop:

  • Proposals of novel eye detection and gaze estimation pipelines using deep neural networks that incorporate one or more of the following:
    • Geometric/anatomical constraints into the network in a differentiable manner.
    • Demonstration of robustness to conditions where current methods fail (e.g., illumination, appearance, low resolution).
    • Robust estimation from different data modalities such as RGB, depth, and near IR.
    • Use of additional cues, such as task context, temporal data, eye movement classification.
  • Designing new, accurate metrics to account for rapid eye movements in the real world.
  • Semi-/un-/self-supervised learning, meta-learning, domain adaptation, attention mechanisms and other related machine learning methods for gaze estimation.
  • Methods for temporal gaze estimation and prediction including Bayesian methods.
  • Unsupervised semantic segmentation of eye regions.
  • Active learning frameworks for semantic segmentation of eye images.
  • Generative models for eye image synthesis and gaze estimation.
  • Transfer learning for eye tracking from simulation data to real data.
  • Domain transfer applications for eye tracking.

This workshop will accept submissions of both published and unpublished works. We will also solicit high-quality eye tracking-related papers rejected at ECCV 2020, accompanied by their reviews and a letter of changes that clearly states the changes made to address the previous reviewers' comments. Accepted papers may be featured as spotlight talks and posters.

Call for Contributions

Full Workshop Papers

Submission: We invite authors to submit unpublished papers (14-page ECCV format) to our workshop, to be presented at a poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) at this OpenReview link.

Accepted papers will be published in the official ECCV Workshops proceedings and the Computer Vision Foundation (CVF) Open Access archive.

Note: Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must submit the previous reviewers' comments (named as previous_reviews.pdf) and a letter of changes (named as letter_of_changes.pdf) as part of your supplementary materials to clearly demonstrate the changes made to address the comments made by previous reviewers. Due to potential clashes with the main conference reviewing schedule, we will accept simultaneous submissions to the ECCV main conference and OpenEyes Workshop. Simultaneous submissions are otherwise disallowed.

OpenEDS 2020 Challenges

The OpenEDS 2020 Challenges are hosted by the second track of the OpenEyes Workshop, and consist of two challenges:

  • Track 1: Gaze Prediction Challenge
  • Track 2: Sparse Temporal Semantic Segmentation Challenge

More information on the OpenEDS 2020 challenge can be found at:

Important Dates

Please note that all deadlines are at 11:59 PM (UTC -0), in line with the main conference.

Full Paper Submission Deadline Tuesday, 16th June 2020 (extended from Friday, 5th June and Monday, 15th June 2020)
Notification to Authors (Full Papers) Wednesday, 8th July 2020 (originally Friday, 3rd July 2020)
Camera-Ready Deadline To be confirmed
OpenEDS Challenges Participation Deadline Friday, 31st July 2020
Workshop Date Sunday, 23rd August 2020

Invited Keynote Speakers

Morning Session
Enkelejda Kasneci
University of Tübingen

Enkelejda Kasneci is a Professor of Computer Science at the University of Tübingen, Germany, where she leads the Human-Computer Interaction Lab. As a BOSCH scholar, she received her M.Sc. degree in Computer Science from the University of Stuttgart in 2007. In 2013, she received her PhD in Computer Science from the University of Tübingen. For her PhD research, she was awarded the research prize of the Federation Südwestmetall in 2014. From 2013 to 2015, she was a postdoctoral researcher and a Margarete-von-Wrangell Fellow at the University of Tübingen. Her research revolves around the application of machine learning for intelligent and perceptual human-computer interaction. She served as an academic editor for PLOS ONE and as a TPC member and reviewer for several major conferences and journals.

Mohamed Khamis
University of Glasgow

The interplay between Privacy, Security and Eye Tracking in the Wild: Opportunities and Challenges


Many of today’s smartphones, laptops and head-mounted displays are well equipped for accurate gaze estimation in the wild. These are important developments for security applications. For example, we will soon be able to run gaze-based security methods in the wild, such as authentication in daily scenarios, and analyze gaze behavior during security-critical tasks, like reading a phishing email. On the downside, ubiquitous eye tracking comes with privacy implications. Imagine if a neighbouring train passenger’s smartphone could track your eyes; this could potentially reveal a myriad of private information, including visual interests, personality traits, mental disorders, emotional valence and more. In this talk, I will discuss how the role of eye gaze in security and privacy applications has evolved over the past two decades, and how current developments in ubiquitous eye tracking represent a turning point that gives rise to many opportunities as well as new challenges.


Mohamed Khamis is a Lecturer (Assistant Professor) at the University of Glasgow in the UK. His research is at the intersection of Ubiquitous Computing and User Privacy. He is interested in understanding the privacy implications of ubiquitous technologies, as well as developing novel systems for protecting privacy and security. He has major contributions at the intersection of security and gaze. Mohamed organized workshops at UbiComp 2016, CHI 2018, and CHI 2019. He is a program committee member for CHI 2020, and was general chair for PerDis 2019.

Afternoon Session
Pramod Khargonekar
University of California, Irvine

Pramod Khargonekar received his B.Tech. degree in electrical engineering from the Indian Institute of Technology, Bombay, India, in 1977, and his M.S. degree in mathematics in 1980 and Ph.D. degree in electrical engineering in 1981, both from the University of Florida. He has been on the faculty at the University of Florida, the University of Minnesota, the University of Michigan, and the University of California, Irvine. In June 2016, he assumed his current position as Vice Chancellor for Research and Distinguished Professor of Electrical Engineering and Computer Science at the University of California, Irvine. Khargonekar's current research and teaching interests include systems and control theory, machine learning, and applications to the smart electric grid and neural engineering. He has authored more than 160 refereed journal publications and 200 conference publications, and has supervised 38 doctoral students. He has received numerous awards and has served in academia and industry in many roles, the details of which can be found on his webpage.

Laurent Itti
University of Southern California

Laurent Itti received his M.S. degree in Image Processing from the Ecole Nationale Superieure des Telecommunications (Paris, France) in 1994, and his Ph.D. in Computation and Neural Systems from Caltech (Pasadena, CA) in 2000. He has since then been an Assistant, Associate, and now Full Professor of Computer Science, Psychology, and Neuroscience at USC. Dr. Itti’s research interests are in biologically-inspired computational vision, in particular in the domains of visual attention, scene understanding, control of eye movements, and surprise. This basic research has technological applications to, among others, video compression, target detection, and robotics. Dr. Itti has co-authored over 150 publications in peer-reviewed journals, books and conferences, three patents, and several open-source neuromorphic vision software toolkits.

Qiang Ji
Rensselaer Polytechnic Institute

A Bayesian Framework for Robust and Generalizable Eye Tracking


As a classical eye tracking approach, model-based eye tracking uses a geometric eye model to perform eye tracking; it generalizes well and requires no training data. Model-based eye tracking, however, requires accurate eye feature detection, which is hard to achieve in real world environments. Deep learning based methods have been employed recently to improve eye feature detection accuracy. These methods, however, do not generalize well to novel data. To achieve robust and generalizable eye tracking under different conditions, we propose a Bayesian framework to perform model-based eye tracking without explicit eye feature detection.

Specifically, the proposed method consists of a Bayesian Convolutional Neural Network (BCNN) to capture the probabilistic relationships between eye appearance and its landmarks, and a geometric model to estimate eye gaze from the eye landmarks. Under a Bayesian framework integrating the two modules, given an eye image, the BCNN module outputs the probability distribution of the eye landmarks and their uncertainties, based on which the geometric model performs a Bayesian inference of the eye gaze by marginalizing out the eye landmarks, enabling eye gaze estimation without explicit eye landmark detection. Compared to point-based eye landmark estimation methods, our model not only generalizes better but is also more robust under challenging real-world conditions. In addition, we extend the single-stage model to a multi-stage one, yielding the cascade BCNN. The cascade architecture allows feeding the uncertainty information from the current stage to the next to progressively improve gaze estimation accuracy. Experiments show that, while achieving comparable within-dataset performance, the proposed method outperforms state-of-the-art methods in both its generalization capability across datasets and its robustness on low-quality and challenging datasets.
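The inference described in the abstract amounts to marginalizing the eye landmarks out of the gaze posterior. Using notation assumed here rather than taken from the talk (image $I$, landmarks $\ell$, gaze $g$), the idea can be sketched as:

```latex
% Gaze posterior obtained by marginalizing over eye landmarks:
% the BCNN supplies p(l | I) with uncertainty, and the geometric
% eye model supplies p(g | l).
p(g \mid I) = \int p(g \mid \ell)\, p(\ell \mid I)\, d\ell
```

Because the network outputs a distribution over landmarks rather than point estimates, landmark uncertainty propagates into the gaze estimate instead of being discarded by a hard detection step.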


Qiang Ji received his Ph.D. degree in electrical engineering from the University of Washington. He is currently a Professor with the Department of Electrical, Computer, and Systems Engineering at RPI. From January 2009 to August 2010, he served as a program director at the National Science Foundation, managing NSF's machine learning and computer vision programs. Prior to joining RPI in 2001, he was an assistant professor with the Department of Computer Science, University of Nevada, Reno. He also held research and visiting positions with the Beckman Institute at the University of Illinois at Urbana-Champaign, the Robotics Institute at Carnegie Mellon University, and the US Air Force Research Laboratory. Dr. Ji currently serves as the director of the Intelligent Systems Laboratory (ISL). Prof. Ji is a fellow of the IEEE and the IAPR.


Gaze Estimation and Prediction in the Wild

Hyung Jin Chang
University of Birmingham
Seonwook Park
ETH Zürich
Xucong Zhang
ETH Zürich
Otmar Hilliges
ETH Zürich
Aleš Leonardis
University of Birmingham

Eye Tracking for VR and AR

Robert Cavin
Facebook Reality Labs
Cristina Palmero
Universitat de Barcelona (UB)
Jixu Chen
Alexander Fix
Facebook Reality Labs
Elias Guestrin
Facebook Reality Labs
Oleg Komogortsev
Texas State University

Kapil Krishnakumar
Abhishek Sharma
Facebook Reality Labs
Yiru Shen
Facebook Reality Labs
Tarek Hefny
Facebook Reality Labs
Karsten Behrendt
Sachin S. Talathi
Facebook Reality Labs

Workshop sponsored by:
