Loki: jobs and internships

If you would like to work with us but none of the offers below matches your qualifications, expertise or expectations, you can contact Stéphane Huot (stephane.huot@inria.fr) to discuss other possible opportunities for joining the team.

Post-docs

The PerfAnalytics project involves several academic partners (Inria, INSEP, Univ. Grenoble Alpes, Univ. Poitiers, Univ. Aix-Marseille, Univ. Eiffel), as well as elite staff and athletes from different Olympic disciplines (Climbing, BMX Race, Gymnastics, Boxing and Wrestling).
The project investigates how computing systems can better assist professional coaches (and athletes) in their performance development. More precisely, its ambition is to capitalize on the high-throughput capability offered by in-situ video analysis. However, because of tedious manual annotation pipelines, the in-situ exploitation of video content by coaches and athletes is currently still limited to qualitative assessment through simple playback visualization.
In this project, we will design innovative workflows, interactions and interfaces to ease in-situ video analysis of athlete performance, in order to facilitate the optimal adjustment of strategy and training programs with respect to the athlete's current performance level. Notably, we will identify which level of human-machine partnership brings the best of both worlds, combining rapid and precise semi-automatic video annotation and analysis with efficient visual representations of performance.
see the offer...

This postdoc is part of a larger project that aims at designing tools to assist the transcription of ancient documents. Automatic methods based on statistical analyses require a knowledge database with ground-truth information. In our context, such a database does not exist. Worse, a new database is required for every document corpus, because the language and handwriting differ from one corpus to another. The knowledge database must therefore be built by hand, which is a tedious task. We propose to design a transcription tool that supports manual transcription of documents in order to build ground-truth knowledge databases for automatic transcription methods. As the knowledge database grows, the tool will also leverage automatic methods to facilitate the manual work. With this approach, not only will the system become more efficient at transcribing documents, but users will also gain expertise in the process.

The tool will let users select words on scanned documents and associate tags with them, such as family name, location, date or occupation. Users will also be able to attach the textual transcription, a corrected transcription, as well as translations into other languages. The work will consist in designing, developing and evaluating an interactive text selection tool and an annotation tool for scanned handwritten documents. Classical selection tools such as free-form lassos and various magic wands are not well suited. Our approach is a selection brush with 4 degrees of freedom: x-y position, brightness threshold and selection size. Our first investigations are promising, but we still have to evaluate the technique in a controlled experiment. Mapping 4 degrees of freedom is difficult with a keyboard + mouse/touchpad setting, so we will investigate a pen + touch setting on a tablet.

Tags have several advantages: they are non-hierarchical, and multiple tags can be attached to a single item. The system will feature different visualization techniques to highlight the structure of the document and the progress of the transcription. Thanks to these techniques, the researcher will be able to identify words using glossaries or machine learning techniques, which will most likely be more efficient with knowledge about the context.
see the offer...
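
For the selection brush described above, here is a minimal sketch in Python of how its four degrees of freedom could drive a pixel selection. It assumes the scan is loaded as a grayscale NumPy array with dark ink on light paper; the function and parameter names are illustrative only, not the project's actual code.

    import numpy as np

    def brush_select(image, x, y, radius, brightness_threshold):
        """Select ink pixels under a circular brush.

        image: 2D array of grayscale values (dark ink on light paper).
        x, y: brush center (first two degrees of freedom).
        radius: brush/selection size (third degree of freedom).
        brightness_threshold: pixels darker than this count as ink
            (fourth degree of freedom).
        Returns a boolean mask of the selected pixels.
        """
        h, w = image.shape
        ys, xs = np.ogrid[:h, :w]
        under_brush = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
        is_ink = image < brightness_threshold
        return under_brush & is_ink

    # Example: accumulate a word selection while the pen is dragged,
    # e.g. selection |= brush_select(scan, pen_x, pen_y, radius, threshold)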

PhDs

This project stems from the observation that today's default interaction design models remain tailored to computers rather than to users, especially when it comes to fine-grained temporal interaction. The PhD aims to identify and characterize the resulting usability issues, to understand their causes on both the human and the system side, and to design, build and evaluate interactive systems that are better adapted to the capabilities of their users.
see the offer...

Internships

Parkinson's disease (PD) accounts for more than 150,000 cases in France and is thus considered the second leading cause of adult motor disability. It is mainly characterized by tremor of the extremities, slowness of movement, muscular stiffness and impaired fine motor skills. Clinicians usually rely on standardized tests to assess the evolution of PD, but these tests do not allow analyzing its many symptoms in isolation.
In the ParkEvolution project, we study the fine motor skills of PD patients in an ecological setting and in a longitudinal way, by analyzing cursor positions and raw computer mouse data recorded during common tasks such as pointing, while patients use their own computers.
see the offer...
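
As an illustration of the kind of low-level analysis involved, here is a minimal Python sketch computing simple kinematic features from timestamped cursor positions recorded during a pointing movement. The sampling format and the chosen features are assumptions for the example, not the project's actual pipeline.

    import numpy as np

    def pointing_features(t, x, y):
        """Compute basic kinematic features from one pointing movement.

        t, x, y: 1D arrays of timestamps (s) and cursor coordinates (px),
        sampled while the cursor moves towards a target.
        """
        dt = np.diff(t)
        dx, dy = np.diff(x), np.diff(y)
        speed = np.hypot(dx, dy) / dt                       # px/s
        path_length = np.hypot(dx, dy).sum()
        straight_line = np.hypot(x[-1] - x[0], y[-1] - y[0])
        return {
            "movement_time": t[-1] - t[0],
            "peak_speed": speed.max(),
            "path_efficiency": straight_line / path_length,  # 1.0 = straight path
        }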

This internship is part of a larger project that aims at designing tools to assist the transcription of ancient documents. This tool will tightly combine interactive and automatic methods. Indeed, automatic methods such as machine learning are not sufficient on their own: first, they require a hand-made knowledge database; second, the user must remain in control of how ambiguities are handled; third, we would like users to develop their skills, which is only possible if they play an active role.
We are interested in designing, developing and evaluating an interactive text selection tool for scanned handwritten documents. Classical selection tools such as free-form lassos and various magic wands are not well suited. Our approach is a selection brush with 4 degrees of freedom: x-y position, brightness threshold and selection size. Our first investigations are promising, but we still have to evaluate the technique in a controlled experiment. Mapping 4 degrees of freedom is difficult with a keyboard + mouse/touchpad setting, so we will investigate a pen + touch setting on a tablet.
see the offer...

“Interaction interferences” are a family of usability problems defined by a sudden and unexpected change in an interface that occurs just as the user is about to perform an action, too late for them to interrupt it. They can cause effects ranging from frustration to data loss. For example, a user is about to tap a hyperlink on their phone, but just before the tap a pop-up window appears above the link; unable to stop the action, the user ends up opening an unwanted and possibly harmful webpage. Although such interferences are quite frequent, there is currently no precise characterization of, nor technical solution to, this family of problems.
see the offer...
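
As a minimal illustration of how such events might be characterized, the sketch below flags a click as a potential interference when the interface changed under the pointer just before it. The rule and the 200 ms reaction-time value are assumptions made for the example, not results or definitions from the project.

    def is_interference(click_time_s, last_layout_change_s, reaction_time_s=0.2):
        """Flag a click as a potential interaction interference.

        Hypothetical rule: if the element under the pointer appeared or moved
        less than a typical human reaction time before the click, the user
        probably could not have adapted their action to the new layout.
        """
        return 0.0 <= click_time_s - last_layout_change_s < reaction_time_s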

End-to-end latency, measured as the time elapsed between a user action on an input device and the update of visual, auditory or haptic feedback, is known to degrade user perception and performance. The synchronization between visual and haptic feedback is also known to be important for perception. While tools are now available to measure and determine the origin of latency on visual displays, a lot remains to be done for haptic actuators. In previous work, we designed a latency measurement tool and used it to measure the latency of visual interfaces. Our results showed that, with a minimal application, most of the latency comes from the visual rendering pipeline.
While visual systems are designed to run at 60 Hz, haptic systems can run at much higher frequencies, up to 1000 Hz. This means the bottleneck of haptic interfaces might differ from that of visual interfaces. There might also be large differences between kinds of haptic actuators, some of them being designed to be highly responsive. Moreover, the perception of the temporal parameters of haptic stimulation differs from that of visual stimulation. We therefore expect differences in how latency is perceived with haptic and visual interfaces.
see the offer...
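
As a rough illustration of one possible measurement approach, the Python sketch below estimates latency as the lag that best aligns the commanded actuator signal with the signal measured externally (for instance by an accelerometer placed on the actuator). It assumes both signals are recorded at a common sampling rate; it is only a sketch of the principle, not the team's measurement tool.

    import numpy as np

    def estimate_latency(command, sensed, sample_rate_hz):
        """Estimate end-to-end latency by cross-correlating the commanded
        haptic signal with the signal measured on the actuator.

        command, sensed: 1D arrays sampled at sample_rate_hz.
        Returns the estimated latency in milliseconds.
        """
        command = command - command.mean()
        sensed = sensed - sensed.mean()
        corr = np.correlate(sensed, command, mode="full")
        lags = np.arange(-len(command) + 1, len(sensed))
        positive = lags >= 0   # the output can only follow the input
        best_lag = lags[positive][np.argmax(corr[positive])]
        return 1000.0 * best_lag / sample_rate_hz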

Command selection is one of the most common activities performed on computer systems, and most Graphical User Interfaces (GUIs) support two methods for selecting commands: either navigating through hierarchies (typically menu bars) and clicking on the target command, or using a dedicated shortcut (typically a keyboard or gestural shortcut), which offers faster command selection but requires users to memorize the shortcut beforehand. While some studies have compared shortcut mechanisms in abstract contexts, user performance with these mechanisms remains unclear, as more realistic tasks introduce various factors likely to influence performance with shortcuts. This project consists in designing and running user experiments comparing the performance of various command shortcut mechanisms in realistic tasks and contexts.
see the offer...

With multi-device ecosystems, our interaction becomes more “mobile” and less attached to a single device. Handling phone calls is a typical example: when the user receives a phone call, all her connected devices notify her so she can answer on the device she prefers to use in this specific situation. Several problems are associated with this naive approach. This project consists in designing, implementing and evaluating novel solutions to propagate notifications across a multi-device environment in a way that is better adapted to users' needs.
see the offer...
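
As one illustration of how the naive broadcast could be replaced by a more selective policy, the Python sketch below routes a notification to the device the user is most likely to attend to. The device attributes, the rule and the time threshold are purely hypothetical, chosen only to make the idea concrete.

    from dataclasses import dataclass
    import time

    @dataclass
    class Device:
        name: str
        last_interaction: float   # timestamp of the last user input on this device
        can_take_calls: bool

    def route_notification(devices, max_idle_s=120):
        """Pick a single device to notify instead of broadcasting to all.

        Hypothetical rule: prefer the call-capable device the user interacted
        with most recently; fall back to notifying every device if none was
        used within the last `max_idle_s` seconds.
        """
        now = time.time()
        candidates = [d for d in devices
                      if d.can_take_calls and now - d.last_interaction <= max_idle_s]
        if not candidates:
            return devices  # naive fallback: notify everything
        return [max(candidates, key=lambda d: d.last_interaction)]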

This project consists in exploring dynamic models from the motion science and neuroscience literature in order to validate their applicability to the automatic or semi-automatic design of acceleration functions for the mouse or touchpad. These results will make it possible to define more effective methods for designing acceleration functions for general-public applications (default functions in OSs, acceleration adapted to a given person or task), advanced applications (gaming, art), or even for assisting people with motor disabilities.
see the offer...
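
To make concrete what an acceleration function is in practice, here is a minimal Python sketch: the pointer displacement on screen is the device displacement scaled by a gain that depends on device speed. The constants and the shape of the gain curve are illustrative, not any OS's actual transfer function.

    def gain(speed_m_per_s, g_min=1.0, g_max=8.0, v_half=0.15):
        """Velocity-dependent gain: low gain for slow, precise movements,
        higher gain for fast, ballistic ones (constants are illustrative)."""
        return g_min + (g_max - g_min) * speed_m_per_s / (speed_m_per_s + v_half)

    def apply_transfer_function(dx_m, dy_m, dt_s):
        """Map a device displacement (meters, over dt_s seconds) to a
        display displacement using the speed-dependent gain."""
        speed = (dx_m ** 2 + dy_m ** 2) ** 0.5 / dt_s
        g = gain(speed)
        return g * dx_m, g * dy_m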

The total, or "end-to-end", latency of an interactive system comes from many software and hardware sources, some of which cannot be reduced given today's state of the art. An alternative, immediately available solution is to predict the user's movements in the near future, in order to display feedback corresponding to the user's actual position rather than to the last captured position. Naturally, the amount of latency that can be compensated for depends on the quality of the prediction.
The objective of this project is to work either on dynamically adjusting the prediction horizon, or on developing new prediction methods specifically for augmented reality (AR).
see the offer...
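
As a baseline illustration of the idea, the Python sketch below extrapolates the displayed position ahead of the last captured sample by the amount of latency to compensate, assuming constant velocity. This is the simplest possible predictor, shown only to fix ideas; it is not the method the project aims to develop.

    def predict_position(samples, compensation_s):
        """Predict where the pointer/hand will be `compensation_s` seconds
        after the last captured sample, assuming constant velocity.

        samples: list of (t, x, y) tuples, oldest first (at least two).
        """
        (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
        dt = t1 - t0
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        return x1 + vx * compensation_s, y1 + vy * compensation_s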