Projects

Chiron


Dates: 2021–2024

Acronym: Chiron

Title: AI-empowered general-purpose assistive robotic system for dexterous object manipulation through embodied teleoperation and shared control

Type: Trilateral Call on Artificial Intelligence

Website: https://chiron.website

  • Technische Universität Darmstadt, Germany
  • Nagoya University, Japan
  • Prof. Liming Chen
  • Dr. Emmanuel Dellandréa
  • Dr. Nicolas Cazin
  • Quentin Gallouedec
  • Rui Yang

Dexterous manipulation of objects is a core task in robotics. Because of the design complexity of robot controllers, robots currently in use are mostly limited to specific tasks within a known environment, even for simple manipulation tasks. Within the CHIRON project, we aim to develop an AI-empowered general-purpose robotic system for dexterous manipulation of complex and unknown objects in rapidly changing, dynamic and unpredictable real-world environments. To achieve these goals, we will use intuitive embodied robotic teleoperation under optimized shared control between the human operator, assisted by an intuitive haptic interface, and the robot controller, empowered with vision and learning skills. The privileged use case for such a system is assistance for bedridden patients or elderly people with limited physical ability in their daily object manipulation tasks, e.g., fetching a bottle of water and pouring it into a glass, through an intuitive, embodied robot that they teleoperate themselves. Such object manipulations would otherwise not be possible for them.
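
To make the shared-control idea concrete, here is a minimal illustrative sketch in Python (the numbers and the blending rule are invented for the example; this is not the project's actual controller): the command executed by the arm is a confidence-weighted blend of the operator's teleoperation input and the autonomous controller's vision-based suggestion.

import numpy as np

def blend_commands(human_cmd, robot_cmd, confidence):
    """Linear shared-control arbitration: alpha = 0 is pure teleoperation,
    alpha = 1 is full autonomy; alpha comes from the perception confidence."""
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * human_cmd + alpha * robot_cmd

# Hypothetical 6-DoF end-effector velocity commands (vx, vy, vz, wx, wy, wz)
human = np.array([0.10, 0.00, -0.05, 0.0, 0.0, 0.0])  # operator input
robot = np.array([0.08, 0.02, -0.06, 0.0, 0.0, 0.1])  # vision-based suggestion
print(blend_commands(human, robot, confidence=0.6))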


Fair Wastes


Dates: 2020–2023

Acronym: Fair Wastes

Title: Development of a robotic sorting platform based on vision and artificial intelligence for improving automatic sorting of wastes

Type: PSPC

  • Excoffier
  • MTB
  • Siléane
  • Prof. Liming Chen
  • Dr. Emmanuel Dellandréa
  • Dr. Nicolas Cazin
  • Rui Yang

The Fair Wastes project aims at developing a robotic sorting platform (robot + vision + AI) suitable for waste-sorting lines, whose performance is currently limited: the sorting pace is too high for human operators, the work is physically taxing, and optical sorting machines are not efficient enough.

Learn-Real


Dates: April 2019 – March 2022

Acronym: Learn-Real

Title: Improving reproducibility in LEARNing physical manipulation skills with simulators using REAListic variations

Type: EU project CHIST-ERA

Website: https://learn-real.eu

  • IIT, Genova, Italy
  • Idiap (EPFL), Martigny, Switzerland
  • Prof. Liming Chen
  • Dr. Emmanuel Dellandréa
  • Dr. Nicolas Cazin
  • Quentin Gallouedec

The acquisition of manipulation skills in robotics involves the combination of object recognition, action-perception coupling and physical interaction with the environment. Several learning strategies have been proposed to acquire such skills. As with humans and other animals, the robot learner needs to be exposed to varied situations. It needs to try and refine the skill many times, and/or to observe several successful attempts by others, in order to adapt and generalize the learned skill to new situations. Such a skill is not acquired in a single training cycle, which motivates the need to compare, share and re-use experiments.

In LEARN-REAL, we propose to learn manipulation skills through simulation of the object, environment and robot, with an innovative toolset comprising: 1) a simulator with realistic rendering of variations, allowing the creation of datasets and the evaluation of algorithms in new situations; 2) a virtual-reality interface to interact with the robots within their virtual environments, in order to teach them object manipulation skills in multiple configurations of the environment; and 3) a web-based infrastructure for principled, reproducible and transparent benchmarking of learning algorithms for object recognition and manipulation by robots.

These features will extend existing software in several ways. Components 1) and 2) will capitalize on the widespread development of realistic simulators for the gaming industry and the associated low-cost virtual-reality interfaces. Component 3) will harness the existing BEAT platform developed at Idiap, which will be extended to object recognition and manipulation by robots, including the handling of data, algorithms and benchmarking results. As a use case, we will study the scenario of vegetable/fruit picking and sorting.
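
As a rough illustration of the "realistic variations" in component 1), here is a minimal Python sketch (the object list and parameter ranges are invented for the example and do not reflect LEARN-REAL's actual tools): each training scene is generated by randomly sampling object identity, pose, lighting and texture.

import random

def sample_scene():
    """Draw one randomized scene configuration for dataset generation."""
    return {
        "object": random.choice(["apple", "pear", "tomato"]),  # picking use case
        "position_xy_m": (random.uniform(-0.3, 0.3),
                          random.uniform(-0.3, 0.3)),          # pose on the table
        "yaw_deg": random.uniform(0.0, 360.0),
        "light_intensity": random.uniform(0.4, 1.2),           # rendering variation
        "texture_id": random.randrange(50),                    # surface appearance
    }

dataset = [sample_scene() for _ in range(1000)]  # 1000 varied training scenes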

Arès


Dates: Oct 2017 – Sep 2020

Acronym: LabCom Arès

Title: Learning and Computer Vision for Intelligent Robots

  • Siléane
  • Prof. Liming Chen
  • Dr. Emmanuel Dellandréa
  • Dr. Maxime Petit
  • Dr. Matthieu Grard

The project aims to develop new learning and computer vision methods for industrial robotics applications. The goal is to build, within 3 years, a functional robotic prototype for flexible and adaptable bin-picking and kitting tasks that 1) guarantees high productivity and 2) quickly learns to handle and manipulate newly introduced objects, as is often required in industrial applications.

Pikaflex


Dates: 2016–2020

Acronym: FUI Pikaflex

Title: Automated and Flexible Picking and Kitting

  • Renault
  • Siléane
  • Prof. Liming Chen
  • Dr. Emmanuel Dellandréa
  • Dr. Maxime Petit
  • Dr. Matthieu Grard
  • Dr. Ying Lu
  • Amaury Depierre

The project aims to tackle computer-vision-based object manipulation with industrial robot arms. The challenge is to design a flexible system that can quickly learn and adapt in order to manipulate newly introduced objects (based on visual similarity), either by upgrading Siléane's existing Kamido software with a classical machine-learning paradigm (Bayesian optimization), or by creating a completely new solution based on deep learning.
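
As an illustration of the Bayesian-optimization route, here is a self-contained Python sketch (the grasp-score objective and the single tuned parameter are invented for the example; this is not Kamido's actual method): a Gaussian process models grasp success as a function of a grasp parameter, and each new trial is chosen to maximize the expected improvement.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def grasp_score(angle):
    """Hypothetical, expensive-to-evaluate trial (one physical grasp)."""
    return np.exp(-(angle - 0.7) ** 2) + 0.05 * np.random.randn()

X = np.array([[0.1], [0.5], [0.9]])            # initial trials (gripper angle, rad)
y = np.array([grasp_score(x[0]) for x in X])
candidates = np.linspace(0.0, 1.5, 200).reshape(-1, 1)

for _ in range(10):                            # 10 optimization rounds
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    z = (mu - y.max()) / np.maximum(sigma, 1e-9)
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])                 # run the chosen trial, keep the result
    y = np.append(y, grasp_score(x_next[0]))

print("Best angle found:", X[np.argmax(y)][0])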

4D Vision

Dates: 2016–2019

Acronym: 4D Vision

Type: Partner University Fund (PUF)

Description: The goal of the 4D Vision project is to investigate several fundamental computer vision problems and applications using RGB-D images and their sequences:

  1. Surface measurement, parameterization, characterization, registration and tracking
  2. Object and scene property (pose, illumination, etc.) estimation
  3. Face analysis and recognition
  4. Human Behavior and Medical Image Analysis
  5. High Order Graphs for 4D data

Partners: Stony Brook University, State University of New York (Prof. Dimitris Samaras and Prof. Xianfeng David Gu), École Centrale de Paris (Prof. Nikos Paragios) and University of Houston (Prof. Ioannis Kakadiaris).


ViSen

Dates: 2012–2016

Acronym: ViSen

Title: Visual Sense

Type: CHIST-ERA (ERA-NET)

Description: The Visual Sense project aims to automatically mine the semantic content of visual data to enable “machine reading” of images. The goal is to predict semantic image representations that can be used to generate more informative sentence-based image annotations, thus facilitating search and browsing of large multi-modal collections. More specifically, the project targets three case studies: image annotation, re-ranking for image search, and automatic image illustration of articles. For this purpose, the project will build on expertise from multiple disciplines, including computer vision, machine learning and natural language processing (NLP).

Partners: University of Surrey (Dr. Krystian Mikolajczyk, Dr. Fei Yan), Institut de Robòtica i Informàtica Industrial (Dr. Francesc Moreno-Noguer) and University of Sheffield (Prof. Robert Gaizauskas, Dr. Paul Clough).


3D Face Analyzer

Dates: 2011–2014

Acronym: 3D Face Analyzer

Type: ANR

Description: The human face conveys a significant amount of information, including head orientation, identity, emotional state, gender, age, ethnic origin and education level, and plays an important role in face-to-face communication between humans. The use of these facial cues during interaction is made possible by the remarkable human ability to recognize and interpret faces and facial behaviors.

This project aims at automatic interpretation of 3D face images, so that contactless human-computer interaction based on a user's typical facial attributes, such as facial expressions, gender, age and ethnic origin, can be developed for improved HCI.

Partners: IRIP lab at Beihang University, China (Prof. Yunhong Wang), North China University of Technology, China (Prof. Yiding Wang), and LIFL lab at University Lille 1 (Prof. Mohamed Daoudi).


PHENIX SSA

Dates: 2005–2006

Acronym: PHENIX SSA

Type: PCRD IST Digital Olympics

Description: This project aims to study various multimedia services that could be delivered to mobile devices for the 2008 Olympic Games in Beijing.

Partners: France Telecom, Philips, China Central Television (CCTV), China Broadcast Networks (CBN), and China Radio International (CRI).

JEMImE

Dates: 2013–2018

Acronym: JEMImE

Title: Serious Game for Children with Autistic Spectrum Disorders Based on Multimodal Emotion Imitation

Type: ANR

Description: Interpersonal communication relies on complex processing of multimodal emotional cues such as facial expression and tone of voice. Unfortunately, children with autistic spectrum disorders (ASD) have difficulty understanding and producing these socio-emotional signals. JEMImE aims at designing new emotion recognition algorithms to help children with ASD learn to mimic facial and vocal emotions and to express the proper emotion depending on the context. Such a tool will be very useful for the children, to learn to produce emotions congruent with what they feel, and for the practitioner, to quantify progress. Scientific and technological breakthroughs in emotion characterization may significantly improve the understanding and evaluation of children's natural productions.

Partners: UPMC-ISIR (Dr. Kévin Bailly, Dr. Mohamed Chétouani), Genious Group, and CoBTek at University of Nice (Dr. Sylvie Serret).


Biofence

Dates: 2013–2017

Acronym: Biofence

Type: ANR

Description: The evaluation and certification of spoofing resistance is one of the major issues for biometric technologies in their present and future deployment. Biometric solutions based on fingerprints are entering our everyday life, while new biometric technologies based on face, iris or vein pattern recognition have emerged and are starting to become part of overall access control solutions.

In this context, BIOFENCE proposes a systematic study of the spoofing resistance of face, iris and vein pattern biometrics, in order to come up with a suitable evaluation methodology and certification criteria. After an overview of existing spoofing techniques for the three biometric modalities, we will evaluate the modalities' resistance against these attacks. The project will also attempt to consider and anticipate all possible attacks, in order to devise appropriate protection solutions and thus strive for the development of “invulnerable” systems. To achieve this goal, new spoofing techniques and methods will be developed for face, iris and vein pattern biometrics. In a second part of the project, countermeasures to these direct attacks will be studied in order to improve the resistance of the three modalities to fakes. In a third part, a methodology for security evaluation and certification will be proposed, and several tests will be performed to evaluate the resistance of the proposed biometric solutions. Finally, particular attention will be paid to the compliance of the developments with ethics and privacy requirements, as well as to the societal impacts of the project.

Partners: Morpho, Gipsa-lab (Prof. Alice Caplier), Institut Mines-Télécom / Télécom SudParis (Prof. Bernadette Dorizzi, Dr. Yaneck Gottesman), CEA-Leti (Assia Tria, Jean-François Mainguet) and CERAPS (Prof. Jean-Jacques Lavenue, Dr. Bruno Villalba, Dr. Gaylord Hamerel).


VideoSense

Dates: 2010–2013

Acronym: VideoSense

Type: ANR

Description: The VideoSense project aims at developing cutting-edge techniques for automatic concept-based video tagging. The concepts of interest to users cover not only the “objective” content of video data, but also its “emotional” content, which determines its potential impact. In this context, the project will explore, develop and experiment with new techniques and tools to index and classify video data, including the independent analysis of its component modalities (spatio-temporal visual content, audio, emotional content, and closed captions for the textual modality) and their combination, reflecting the multimodal nature of video data.
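
As a toy illustration of combining modality scores, here is a minimal Python sketch (the concept, scores and weights are invented for the example; this is not the project's actual fusion scheme): per-modality classifiers each output a confidence for a concept, and a weighted late fusion combines them.

def fuse_scores(scores, weights):
    """Weighted late fusion of per-modality confidences for one concept."""
    total = sum(weights[m] for m in scores)
    return sum(weights[m] * scores[m] for m in scores) / total

# Hypothetical per-modality confidences for the concept "crowd cheering"
scores = {"visual": 0.62, "audio": 0.85, "text": 0.40}
weights = {"visual": 0.5, "audio": 0.3, "text": 0.2}
print(fuse_scores(scores, weights))  # fused confidence: 0.645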

Partners: Eurecom (Prof. Bernard Mérialdo), LIG laboratory (Dr. Georges Quénot, Dr. Gilles Sérasset), LIF laboratory (Dr. Stéphane Ayache) and Ghanni (Dr. Hadi Harb, Dr. Aliaksandr Paradzinets).


FAR3D

Dates: 2008–2012

Acronym: FAR3D

Title: Face Analysis and Recognition using 3D

Type: ANR

Description: This project investigates the possible contribution of an additional dimension, 3D, to face recognition, in order to improve authentication performance while keeping the existing advantages of face recognition from 2D still images: no contact, little cooperation required from the user, and a well-accepted modality.

Partners: USTL Telecom Lille 1 (Prof. Mohamed Daoudi, Dr. Boulbaba Ben Amor), EURECOM (Prof. Jean-Luc Dugelay) and Thales (Joseph Colineau).


OMNIA

Dates: 2008–2013

Acronym: OMNIA

Title: Categorization and retrieval of multimedia documents in different languages

Type: ANR

Description: The OMNIA project aims at content-based categorization and retrieval of captioned images. It also aims to analyze the emotional content of images, together with the analysis of their captions in several languages.

Partners: Xerox Research Centre Europe (Dr. Luca Marchesotti) and LIG laboratory at Université Joseph Fourier, Grenoble I (Prof. Christian Boitet).


MusicDiscover

Dates: 2004–2007

Acronym: MusicDiscover

Type: ACI

Description: The MusicDiscover project aims at content-based indexing of music titles to enable their identification, analysis and retrieval.

Partners: IRCAM (Prof. Xavier Rodet) and ENST (Prof. Gaël Richard).


IV2

Dates: 2004–2006

Acronym: IV2

Title: Identification by Iris and Visage via Video

Type: Technovision

Description: The IV2 project aims to develop an evaluation platform for benchmarking biometric algorithms based on iris and face.

Partners: INT, ENST, EURECOM, INRIA, University of Evry, Thales, Let It Wave and Uratek.


MUSE

Dates: 2001–2003

Acronym: MUSE

Title: Multimedia Search Engine

Type: RNTL

Description: The MUSE project aims to develop a multimedia search engine for the web.

Partners: University of Versailles (Prof. Georges Gardarin), University of Toulon (Prof. Jacques Le Maitre), e-XMLmedia and Editing.


Cyrano

Dates: 2001–2004

Acronym: Cyrano

Type: RNRT

Description: The Cyrano project aims at delivering interactive videos over the Internet.

Partners: France Telecom R&D (Dr. Luigi Lancieri) and INRIA (Dr. Mesaac Makpangou).
