Loki: jobs and internships

If you would like to work with us but none of the offers below matches your qualifications, expertise, or expectations, you can contact Stéphane Huot (stephane.huot@inria.fr) to discuss other opportunities for joining the team.

PhDs

Routine tasks such as controlling a system cursor or moving a virtual camera involve continuous visuo-motor control from the user, to which the system has to respond accurately and with minimal latency. Modern Human-Computer Interfaces use multi-step input pipelines between the user's movements and the system's feedback that are frequently opaque. Each of these steps (sensing, filtering, transforming, predicting, etc.) strongly affects the next, as well as the pipeline's outcome. And yet, many of them are designed in isolation and calibrated by trial and error, using legacy or ad hoc approaches, or limited knowledge of the underlying psychomotor phenomena. This limits user performance and experience in everyday computer use, and hinders the design and adoption of new devices and sensing methods.
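The steps above can be sketched as a toy pipeline. Every constant below (smoothing factor, CPI, display density, gain) is a hypothetical placeholder, not how any real system is calibrated; the point is only that each stage feeds the next:

```python
# Toy input pipeline: sensing -> filtering -> transfer function -> cursor
# update. Constants are illustrative placeholders.

def filter_ema(prev, raw, alpha=0.5):
    """Exponential moving average: a simple low-pass filter on raw counts."""
    return alpha * raw + (1 - alpha) * prev

def transfer(dx_counts, cpi=1000, gain=2.0):
    """Map device counts to pixels with a constant-gain transfer function.
    Assumes a (hypothetical) 96 px/inch display."""
    return (dx_counts / cpi) * 96 * gain

def run_pipeline(raw_counts):
    """Accumulate a 1-D cursor position from a stream of raw counts."""
    x, smoothed = 0.0, 0.0
    for raw in raw_counts:
        smoothed = filter_ema(smoothed, raw)
        x += transfer(smoothed)
    return x
```

Because the filter's output feeds the transfer function, changing the smoothing factor alone already alters the final cursor trajectory, which is the coupling the paragraph above describes.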
see the offer...

Internships

This internship is part of the PerfAnalytics project (ANR PIA-PPR 2020), a collaboration between various Inria teams, universities, and institutes such as INSEP (the French institute for sport and performance), as well as several French sport federations. Controlling time in a video is conventionally done by manipulating a timeline, i.e., moving a cursor between the beginning and end of the video. Direct manipulation techniques such as DIMP and DRAGON identify the trajectories of moving objects by computing the optical flow, and use their positions to seek specific frames in the video. These techniques work well for a static point-of-view (POV) when objects are moving, but not when the POV itself changes (tilt, pan, zoom), as it then becomes challenging to differentiate between a camera movement and a moving object. The goal is to 1) leverage computer vision methods to capture the optical flow of a video and identify camera movements through time, 2) capture the moving objects' trajectories and visualize them, and 3) design interaction techniques leveraging direct manipulation to navigate the video content.
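As a rough illustration of the camera/object separation problem, assuming a dense flow field is already available (e.g., computed by a computer vision library), one naive approach is to treat the per-frame median flow as the camera component; the uniform-flow assumption and function name below are ours, not an established method:

```python
import numpy as np

# Naive sketch: a camera pan/tilt moves (almost) every pixel coherently,
# so the median flow approximates the camera motion and the residual
# highlights independently moving objects. Real footage (zoom, rotation,
# parallax) needs a richer motion model; this only illustrates the idea.

def split_camera_object_motion(flow):
    """flow: (H, W, 2) array of per-pixel displacement vectors.
    Returns (camera_motion, residual_flow)."""
    camera = np.median(flow.reshape(-1, 2), axis=0)  # dominant motion
    residual = flow - camera                          # motion w.r.t. camera
    return camera, residual
```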
see the offer...

We are in a climate emergency. To prevent global temperatures from rising beyond a critical level, we need to drastically lower our carbon footprint. This problem has been heavily documented by the scientific community for decades, culminating in three major reports by the IPCC (Intergovernmental Panel on Climate Change) in 2022 [1]. Concrete means exist to act, like minimizing air travel, but research communities keep organising worldwide events that hardly adapt to this factor [2], [3]. Meeting physically provides advantages over online events, such as networking during the conference but also in other contexts (lunches, social events), and being fully committed to the conference instead of watching online videos. In-person events are essential to foster collaborations between scientists, but they must adapt to minimize their carbon footprint. This internship focuses on designing and implementing an interactive tool to estimate the travel-related carbon footprint of scientific conferences or large conventions (such as SIGGRAPH, CES, or CHI). The goal is to support decision makers in choosing appropriate locations while considering the ecological impact. The tool will visualise the estimated carbon cost of travel to specific destinations using APIs such as Google Flights [4] or Flight Stats [5]. It should visualise data uncertainty effectively to avoid misinterpretation of the data [6], and provide tools for controlling various factors (e.g., flight time, capacity, attractiveness of the venue, ...).
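A minimal sketch of the kind of first-order estimate such a tool could start from: great-circle distance times a per-passenger-km emission factor. The factor below is a rough placeholder, not an authoritative figure; a real tool would draw on flight-API data and present explicit uncertainty ranges rather than point estimates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flight_co2_kg(lat1, lon1, lat2, lon2, factor=0.2):
    """Per-attendee estimate; `factor` (kg CO2e per passenger-km) is a
    placeholder that in practice varies with aircraft, load, and distance."""
    return haversine_km(lat1, lon1, lat2, lon2) * factor
```

Summing such estimates over the expected attendee origins gives a per-candidate-venue total, which is the quantity a decision-support visualisation would compare.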
see the offer...

This internship is part of the PerfAnalytics project (ANR PIA-PPR 2020), a collaboration between various Inria teams, universities, and institutes such as INSEP (the French institute for sport and performance), as well as several French sport federations. Video annotations such as chapters or time marks situating specific events facilitate reviewing significant sequences. Quantitative annotations, such as the number of hits landed by a professional boxer on their opponent, make it possible to perform statistical analyses that summarize the video content. These annotations can be produced automatically, but the process is error-prone. This internship focuses on designing and evaluating interaction techniques that simplify the review of automatic annotations. These techniques will allow users to review a set of annotations and validate or edit them while minimizing the induced cognitive load.
see the offer...

The European Space Agency aims to send rovers to the Moon within the next decade. Besides producing scientific knowledge, these missions will also pave the way for future crewed lunar missions.
The distance between the Moon and the Earth allows operators on Earth to control lunar orbiters, and through them lunar landers and rovers, with round-trip transmission delays of around 6-10 seconds. Such delays are a challenge when attempting to control a system in real time, particularly in uncertain environments such as the lunar surface. There is limited understanding of how users behave under such latencies, and of how best to design control systems that address these issues. In the field of HCI, the effects of latency on user behaviour are usually explored below 200 ms, but some models of performance successfully describe target-acquisition tasks with (fixed) latencies up to 4 seconds. This project will investigate whether these models hold at higher levels of (variable) latency, and how to improve user control in such unfriendly environments.
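As a hedged illustration of the kind of model in question, a Fitts-style movement-time law can be extended so that latency inflates the cost of each unit of task difficulty. The additive/multiplicative form and the constants below are hypothetical placeholders, not the specific published model the offer refers to:

```python
import math

# Illustrative model: movement time grows with the index of difficulty
# (a Fitts-law quantity), and latency steepens that growth. The constants
# a, b, c are made-up placeholders, not fitted values.

def movement_time(distance, width, latency_s, a=0.1, b=0.15, c=1.0):
    """Predicted time (s) to acquire a target of `width` at `distance`,
    under a fixed feedback latency of `latency_s` seconds."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + (b + c * latency_s) * index_of_difficulty
```

The project's question can then be phrased concretely: do models of this family still fit when `latency_s` is several seconds and varies over time?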
see the offer...

Current systems are unable to determine the resolution of a computer mouse from classical events ("MouseEvents") or accessible system properties. The objective of this project is to develop a learning technique that determines this resolution from the raw information sent by the mouse.
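The underlying relation is simple once the physical distance of a movement is known; the hard part, which the project addresses, is inferring that distance without asking the user. A minimal sketch with the distance given (function name ours):

```python
# A mouse's resolution (counts per inch, CPI) relates raw HID counts to
# physical travel: counts = CPI * inches. If the physical distance of a
# movement were known, CPI would follow directly; a learning technique
# must instead estimate it from movement regularities alone.

def estimate_cpi(raw_counts, distance_inches):
    """raw_counts: per-report dx values from the raw HID stream."""
    return abs(sum(raw_counts)) / distance_inches
```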
see the offer...

The discoverability of available interaction inputs (actions that can be used to communicate with the system) and features (commands associated with these inputs) is a critical requirement for interacting efficiently with modern computing systems. Yet, there is surprisingly neither an accepted experimental procedure nor a benchmarking method to efficiently investigate the discoverability of complex interaction methods in interactive systems. This is a major limitation to the introduction of efficient novel interaction inputs in future interactive devices. This internship will consist of developing experimental procedures to evaluate and compare the discoverability of interaction inputs in different computing systems.
see the offer...

The design of Graphical User Interfaces (GUIs) in modern systems has evolved toward interfaces providing the minimal required amount of features and decoration. As a result, many features end up hidden by default or located off-screen, with the often false assumption that users are aware of their presence. This case, where users are confronted with a GUI that does not inform them of all interaction possibilities, is a fundamental HCI problem, extremely frequent on modern touch-based computers. A typical example is Swhidgets: widgets that are, by default, entirely hidden under another interface element or the screen bezels.
This master's internship will consist of understanding and modelling how users discover these hidden controls in Graphical User Interfaces.
see the offer...

This project in Human-Computer Interaction (HCI) aims to assess the expressivity of a prototype HCI framework and to identify its design principles, by developing demonstrators of various interaction techniques.
see the offer...

Command selection is one of the most common activities performed on computer systems, and most Graphical User Interfaces (GUIs) support two methods for selecting commands: either navigating through hierarchies (typically menu bars) and clicking on the target command, or using a dedicated shortcut (typically a keyboard or gestural shortcut), which offers faster command selection but requires users to memorize the shortcut beforehand. While some studies have compared shortcut mechanisms in abstract contexts, user performance with these mechanisms remains unclear, as more realistic tasks introduce various factors likely to influence performance with shortcuts. This project consists of designing and running user experiments comparing the performance of various command shortcut mechanisms in realistic tasks and contexts.
see the offer...

This internship is part of a larger project which aims at designing tools to help transcribe ancient documents. These tools will tightly combine interactive and automatic methods. Indeed, automatic methods such as machine learning are not sufficient: first, because they require a hand-made knowledge database; second, because the user must keep control over the management of ambiguities; and third, because we would like users to gain skills, which is only possible if they play an active role.
We are interested in designing, developing, and evaluating an interactive text selection tool for scanned handwritten documents. Classical selection tools such as freeform lassos and the various magic wands are not well adapted. Our approach is a selection brush with 4 degrees of freedom: x-y position, brightness threshold, and selection size. Our first investigations are promising, but we still have to evaluate the brush in a controlled experiment. Mapping 4 degrees of freedom is complicated with a keyboard + mouse/touchpad setting, so we will investigate a pen + touch setting on a tablet.
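A minimal sketch of how such a brush could combine its four degrees of freedom; the exact selection rule below is illustrative, not our implemented tool:

```python
import numpy as np

# Illustrative 4-DOF brush: a pixel is selected when it falls inside the
# brush disc (x, y, radius = position and size) AND is darker than the
# brightness threshold, i.e., likely ink rather than paper.

def brush_select(gray, x, y, radius, threshold):
    """gray: (H, W) grayscale image with values in [0, 255].
    Returns a boolean selection mask of the same shape."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    return inside & (gray < threshold)
```

With a pen + touch setting, the pen could control the x-y position while touch gestures adjust the radius and threshold, which is one way (among others under study) to distribute the four degrees of freedom.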
see the offer...

“Interaction interferences” are a family of usability problems defined by a sudden and unexpected change in an interface that occurs when the user is about to perform an action, too late for them to interrupt it. They can cause effects ranging from frustration to loss of data. For example, a user is about to tap a hyperlink on their phone, but just before the tap a pop-up window appears above the link; unable to stop the action, the user opens an unwanted and possibly harmful webpage. Although quite frequent, this family of problems currently has no precise characterization or technical solution.
see the offer...

End-to-end latency, measured as the time elapsed between a user action on an input device and the update of visual, auditory, or haptic information, is known to deteriorate user perception and performance. The synchronization between visual and haptic feedback is also known to be important for perception. While tools are now available to measure and determine the origin of latency on visual displays, a lot remains to be done for haptic actuators. In previous work, we designed a latency measurement tool and used it to measure the latency of visual interfaces. Our results showed that, with a minimal application, most of the latency comes from the visual rendering pipeline.
While visual systems are typically designed to run at 60 Hz, haptic systems can run at much higher frequencies, up to 1000 Hz. This means the bottleneck of haptic interfaces might differ from that of visual interfaces. There might also be large differences between different kinds of haptic actuators, some of which are designed to be highly responsive. Moreover, the perception of the temporal parameters of haptic stimulation differs from that of visual stimulation, so we expect latency to be perceived differently in haptic and visual interfaces.
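The frequency argument can be made concrete with back-of-the-envelope numbers: each pipeline stage clocked at a given rate can contribute up to one refresh period of latency, so the same number of stages costs far less in a fast haptic loop than in a visual one:

```python
# Worst-case per-stage latency contribution from a clocked pipeline stage:
# an event arriving just after a tick waits a full period for the next one.

def refresh_period_ms(hz):
    """Refresh period in milliseconds for a stage clocked at `hz`."""
    return 1000.0 / hz
```

A 60 Hz visual stage can thus add up to ~16.7 ms, while a 1000 Hz haptic stage adds at most 1 ms, which is one reason the dominant latency source may lie elsewhere in haptic pipelines.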
see the offer...

This project consists of exploring dynamic models from the motion science and neuroscience literature, in order to validate their applicability to the automatic or semi-automatic design of acceleration functions for mice and touchpads. These results will make it possible to define more effective methods for designing acceleration functions for general-public applications (default functions in OSs, acceleration adapted to a given person or task), advanced applications (gaming, art), or assistance for people with motor impairments.
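For illustration, an acceleration (transfer) function typically maps device speed to a pointing gain that rises between a low and a high plateau; the sigmoid shape and constants below are hypothetical, not any OS's actual curve:

```python
import math

# Illustrative acceleration function: low gain for slow, precise movements,
# high gain for fast, ballistic ones, with a smooth transition. All
# parameters are made-up placeholders.

def gain(speed_mps, g_low=1.0, g_high=8.0, v_mid=0.15, slope=30.0):
    """speed_mps: device speed in m/s; returns a dimensionless CD gain."""
    return g_low + (g_high - g_low) / (1.0 + math.exp(-slope * (speed_mps - v_mid)))
```

The project's question is whether dynamic models of movement can predict, rather than hand-tune, the shape of such curves for a given user or task.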
see the offer...

The total, or "end-to-end", latency of an interactive system comes from many software and hardware sources that may be incompressible in today's state of the art. An alternative and immediately available solution is to predict the user's movements in the near future, in order to display feedback corresponding to the user's actual position rather than to their last captured position. Naturally, the amount of latency that can be compensated for depends on the quality of the prediction.
The objective of this project is to work either on the dynamic adjustment of the prediction time horizon, or on developing new prediction methods specifically for augmented reality (AR).
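The principle can be sketched with the simplest possible predictor, constant-velocity extrapolation from the last two samples; real methods in the literature are considerably more elaborate, and this naive form degrades quickly as the horizon grows:

```python
# Dead-reckoning sketch: estimate the pointer position `horizon` seconds
# ahead of the last sensed sample, assuming constant velocity.

def predict(p_prev, p_last, dt, horizon):
    """p_prev, p_last: (x, y) of the two most recent samples, `dt` s apart.
    Returns the extrapolated (x, y) at `horizon` s past the last sample."""
    vx = (p_last[0] - p_prev[0]) / dt
    vy = (p_last[1] - p_prev[1]) / dt
    return (p_last[0] + vx * horizon, p_last[1] + vy * horizon)
```

Dynamically adjusting `horizon` to the current latency, and to how predictable the movement is, is precisely the kind of trade-off the first branch of the project targets.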
see the offer...