
Eye Tracking for Human Robot Interaction

Abstract

Humans use eye gaze in their daily interaction with other humans.

Humanoid robots, on the other hand, have not yet taken full advantage of this form of implicit communication.

We designed a passive monocular gaze tracking system implemented on the iCub humanoid robot [Metta et al. 2008].

The validation of the system proved that it is a viable low-cost, calibration-free gaze tracking solution for humanoid platforms, with a mean absolute error of about 5 degrees on horizontal angle estimates.

We also demonstrated the applicability of our system to human-robot collaborative tasks, showing that the eye gaze reading ability can enable successful implicit communication between humans and the robot.

Keywords: eye tracking, human robot interaction

Concepts: • Human-centered computing → Human computer interaction; Interaction techniques

1 Introduction

Humans use a number of communication cues in their daily interaction with other humans: primarily speech but also gestures, pointing and gaze [George and Conty 2008].

The main purpose of gaze is to provide visual information to the subject, but at the same time a person's gaze implicitly provides information to an outside observer about what the subject is focusing their attention on.

There are a number of ways in which eye gaze is implicitly used during communication: gaze aversion, mutual gaze, gaze pointing, joint attention, etc.

Humans are very good at reading other people's gaze, but robots are less so.

This ability would be especially important for humanoid robots, which aim to mimic human abilities.

However, most human robot interaction experiments today use head pose as a proxy for real eye gaze because it is easier to extract [Doniec et al. 2006; Sheikhi and Odobez 2014].

But "head gaze" does not provide all the information that eye gaze does [Borji et al. 2014]; therefore, enabling robots to perform eye tracking could significantly improve their abilities and also their acceptance by humans.

Moreover, most human robot interaction experiments focusing on gaze use external eye tracking systems [Broz and Lehmann 2012].

Although this choice guarantees high spatial and temporal precision of the measure, it might in some contexts interfere with natural interactive behavior, inducing participants to be constantly aware of their own gaze motions.

A proof-of-concept gaze tracker was realized by Matsumoto and Zelinsky [2000] and implemented on the HRP2 humanoid [Ido et al. 2006].

More recently Sciutti et al. [2015] implemented a mutual gaze detection system on the iCub which facilitated a teacher/student scenario.

Still, so far no extensive use of eye gaze tracking has been done in human-robot interaction.

2 Approach and Results

We implemented a monocular feature-based passive gaze tracking algorithm on the iCub platform with the goal of facilitating human robot interaction (for details see [Palinko et al. 2015]).

The first step in eye tracking is detecting faces and finding face features.

For this purpose we used King's implementation [King 2009] of Kazemi and Sullivan's [2014] approach for finding features like the corners of the eyes and mouth.

We also used Baltrusaitis et al.'s [2012] implementation of the Constrained Local Models approach for tracking head pose.

Once these measures were found, we proceeded to apply an eye model to the detected center of the pupil, similarly to [Ishikawa et al. 2004].

The model finally provided the estimate of the gaze angle of the subject, see Figure 1a.
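The geometric intuition behind such an eye model can be sketched as follows. This is a simplified spherical-eyeball sketch with hypothetical parameter names, not the actual implementation of [Ishikawa et al. 2004] or [Palinko et al. 2015]: the pupil's offset from the eyeball center, converted from pixels to millimetres via the known iris size, subtends the eye-in-head rotation, and the head pose yaw is added to obtain gaze in the camera frame.

```python
import math

def estimate_gaze_yaw(pupil_x_px, eye_center_x_px, iris_diameter_px,
                      iris_diameter_mm=12.0, eyeball_radius_mm=12.0,
                      head_yaw_deg=0.0):
    """Estimate the horizontal gaze angle in degrees (simplified model).

    The pupil offset from the eyeball center, measured in the image and
    converted from pixels to millimetres using the known iris diameter,
    subtends the eye-in-head rotation on a spherical eyeball. Parameter
    names and default values are illustrative, not taken from the paper.
    """
    mm_per_px = iris_diameter_mm / iris_diameter_px
    offset_mm = (pupil_x_px - eye_center_x_px) * mm_per_px
    # Clamp to the valid domain of asin to survive noisy detections.
    ratio = max(-1.0, min(1.0, offset_mm / eyeball_radius_mm))
    eye_in_head_deg = math.degrees(math.asin(ratio))
    # Add the head yaw from the head pose tracker to express gaze
    # in the camera frame.
    return head_yaw_deg + eye_in_head_deg
```

For example, with an iris imaged at 20 pixels across, a 2-pixel pupil offset corresponds to roughly 5.7 degrees of eye-in-head rotation.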

We then performed a validation experiment in which we found the gaze estimates to be quite acceptable for our setup: the absolute error in the horizontal plane was 5 degrees on average.

The accuracy of our system was limited by the cameras used in the iCub setup.

We employed PointGrey Dragonfly2 cameras at 1024x768 resolution with fixed-focus 4mm lenses, which produce images of the iris 20 pixels in diameter when the subject is at 60cm.

Given that the average diameter of the iris [Thainimit et al. 2013] is similar in size to the average eye radius (12mm) [Bekerman et al. 2014], a one-pixel difference in the middle of the iris corresponds to about 3 degrees of difference in gaze.

Thus our accuracy is greatly influenced by the hardware used. It is foreseeable that the progressive development of cheaper and smaller cameras will allow future robotic platforms to have higher resolution sensors, with a consequent improvement of the accuracy of our system.
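The back-of-the-envelope resolution argument above can be checked numerically, using the figures quoted in the text:

```python
import math

iris_diameter_mm = 12.0   # average iris diameter [Thainimit et al. 2013]
eyeball_radius_mm = 12.0  # average eye radius [Bekerman et al. 2014]
iris_diameter_px = 20.0   # iris size in the image at 60cm with our optics

# Physical size of one pixel on the iris at this distance.
mm_per_px = iris_diameter_mm / iris_diameter_px  # 0.6 mm

# A one-pixel shift of the pupil center rotates the gaze by this angle.
deg_per_px = math.degrees(math.asin(mm_per_px / eyeball_radius_mm))

print(f"{deg_per_px:.1f} degrees per pixel")  # about 3 degrees
```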

In the meantime, the current hardware already enables the iCub robot to estimate gaze and exploit this estimate to manage human-robot collaboration tasks.

We also conducted a proof-of-concept human robot interaction experiment in which subjects were seated opposite the robot and the experimenter, who held toy building blocks in their hands, see Figure 1b.

The subject's role was to ask for the blocks in a specific order, but we did not provide information on how to communicate with the robot.

Participants used a combination of speech, pointing and gaze to achieve the task, but the robot really only reacted to gaze.

More precisely, the robot handed over pieces of toy building blocks when it detected a succession of mutual gaze and gazing at the requested object.

The subjects were not aware of the robot's gaze reading ability, but could still complete the task of building a pillar out of these blocks in less than 30 seconds just by using natural eye behavior.

Hence, the robot succeeded in exploiting naturally occurring human gaze behavior to control its helping actions in a collaborative manner.
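The gaze-triggered handover rule above can be sketched as a tiny state machine. This is an illustrative reconstruction, not the actual iCub controller; the event names ("mutual_gaze", "object_gaze", "elsewhere") are assumptions, not the robot's real perception labels.

```python
def make_handover_detector():
    """Return a callback that fires after mutual gaze then object gaze.

    An illustrative reconstruction of the rule described in the text:
    the robot waits for mutual gaze, then for gaze at the requested
    object, and only then hands the block over.
    """
    state = {"mutual_gaze_seen": False}

    def on_gaze_event(event):
        # event is one of "mutual_gaze", "object_gaze", "elsewhere"
        if event == "mutual_gaze":
            state["mutual_gaze_seen"] = True
            return False
        if event == "object_gaze" and state["mutual_gaze_seen"]:
            state["mutual_gaze_seen"] = False  # re-arm for the next block
            return True  # hand the requested block over
        return False

    return on_gaze_event
```

Gazing at the object alone does not trigger a handover; only the succession of mutual gaze followed by gaze at the object does.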

3 Discussion

Future benefits of a built-in gaze tracker in a humanoid robot can be manifold: it could improve turn-taking, joint attention and, in general, the processing of all the communicative gaze cues typical of human interaction.

Furthermore, the robot could potentially be used for diagnosing early behavioral problems associated with gaze processing, such as Autism Spectrum Disorders, by monitoring subjects' gaze in real time.

The monitoring of gaze and post hoc analysis has proven to be helpful for ASD diagnosis [Mavadati et al. 2014].

Our system would allow for appropriate contingent reaction to gaze by the robot, something that is now achieved only through remote control by the therapist.

4 Future Work

In the near future we plan to improve the quality of the eye tracking system, as well as to conduct more experiments which would prove the usefulness of eye tracking in human robot interaction.

The quality of tracking could be increased by applying more precise tracking of the pupil, face features and head orientation.

The planned experiments would emphasize the benefits of real eye tracking as opposed to using head tracking only.

We would focus on exploring phenomena such as joint attention, mutual gaze and gaze aversion in both dyadic and group scenarios.
