OpenEyes: Eye Gaze in AR, VR, and in the Wild

ECCV 2020 Workshop, Glasgow, Scotland


Introduction

With the advent of consumer products, AR and VR are gaining mainstream attention as forms of immersive technology. However, immersive technology is still in its infancy, as both users and developers work out the right recipe for it to garner mass appeal.

Eye tracking, a technology that measures where an individual is looking and can enable inference of user attention, could be a key driver of mass appeal for the next generation of immersive technologies, provided that user awareness and privacy concerns related to eye-tracking features are taken into account. As such, there is growing interest in improving the state of the art in eye-tracking technology. Over the past three years, investigations into gaze estimation and prediction methods have produced significant improvements in robustness and accuracy by adopting increasingly novel deep neural network architectures. These improvements enable innovative applications of the technology, such as zero-shot image classification and generalized human attention and intent estimation.

Open forums of discussion provide opportunities to further improve eye-tracking technology, especially around the scale and generalization challenges of the next generation of AR and VR systems. For that reason, Facebook organized the first challenge, “Eye Tracking for VR and AR (OpenEDS),” at ICCV 2019, and the independent GAZE committee organized a workshop titled “Gaze Estimation and Prediction in the Wild (GAZE).”

For 2020, the Facebook and GAZE committees are partnering to host a joint workshop titled “Eye Gaze in VR, AR, and in the Wild” at the biennial ECCV conference. The workshop will host two tracks: the first focuses on gaze estimation and prediction methods, emphasizing accuracy and robustness in natural (in-the-wild) settings; the second focuses on the scale and generalization problems of eye-tracking systems operating on AR and VR platforms. The second track also includes the 2020 eye-tracking challenge. Stay tuned for more details about the new challenge.

The following topics are of particular interest to the joint workshop:

  • Proposals of novel eye detection and gaze estimation pipelines using deep neural networks that incorporate one or more of the following:
    • Geometric/anatomical constraints on the network in a differentiable manner.
    • Demonstrated robustness to conditions where current methods fail (illumination, appearance, low resolution, etc.).
    • Robust estimation from different data modalities, such as RGB, depth, and near-IR.
    • Use of additional cues, such as task context, temporal data, and eye movement classification.
  • Designing new, accurate metrics to account for rapid eye movements in the real world.
  • Semi-/un-/self-supervised learning, meta-learning, domain adaptation, attention mechanisms and other related machine learning methods for gaze estimation.
  • Methods for temporal gaze estimation and prediction including Bayesian methods.
  • Unsupervised semantic segmentation of eye regions.
  • Active learning frameworks for semantic segmentation of eye images.
  • Generative models for eye image synthesis and gaze estimation.
  • Transfer learning for eye tracking from simulation data to real data.
  • Domain transfer applications for eye tracking.

This workshop will accept submissions of both published and unpublished works. We will also solicit high-quality eye-tracking-related papers rejected at ECCV 2020, accompanied by their reviews and a letter of changes that clearly states the changes made to address the previous reviewers' comments. Accepted papers may be featured as spotlight talks and posters.


Call for Contributions


Full Workshop Papers

Submission: We invite authors to submit unpublished papers (14-page ECCV format) to our workshop, to be presented at a poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) at this OpenReview link.

Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must include the previous reviewers' comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) in your supplementary materials to clearly demonstrate the changes made to address the previous reviewers' comments. Re-submissions will go through the same review procedure as new submissions (see previous paragraph).



Important Dates


Paper Submission Deadline: Friday, 5th June 2020
Notification to Authors: Friday, 3rd July 2020
Camera-Ready Deadline: Friday, 17th July 2020
Workshop Date: Sunday, 23rd August 2020


Invited Keynote Speakers

TBD


Organizers


Track 1 (AM): Gaze Estimation and Prediction in the Wild


Hyung Jin Chang
University of Birmingham
Seonwook Park
ETH Zürich
Xucong Zhang
ETH Zürich
Otmar Hilliges
ETH Zürich
Aleš Leonardis
University of Birmingham


Track 2 (PM): Eye Tracking for VR and AR


Robert Cavin
Facebook Reality Labs
Cristina Palmero
Universitat de Barcelona (UB)
Jixu Chen
Facebook
Alexander Fix
Facebook Reality Labs
Elias Guestrin
Facebook Reality Labs
Oleg Komogortsev
Texas State University


Kapil Krishnakumar
Facebook
Abhishek Sharma
Facebook Reality Labs
Yiru Shen
Facebook Reality Labs
Tarek Hefny
Facebook Reality Labs
Karsten Behrendt
Facebook
Sachin S. Talathi
Facebook Reality Labs


Workshop sponsored by:
