Project proposals

The proposals below are general descriptions of the research currently pursued in our group. Projects of varying complexity can be defined around them, depending on whether you want to do a Master's thesis, an internship, or a small course project. If you are interested, contact Dani.

Recognition of Human Activity using Magnetic Data

In this project, methods for the recognition of simple human activities - gestures - will be developed. Each gesture is composed of 3D position and orientation data generated by a number of magnetic sensors placed on a human hand and arm. (Hidden) Conditional Random Fields (CRF, HCRF) will be used as the basic methodology, trained on a number of training sequences. The trained CRF can then be used for classification of new gestures/activities. The main objective of the project is the development of (H)CRFs. One application is a gesture recognition system for human-robot interaction: a scenario where a robot learns how to execute a specific task by observing a human performing it. If possible, this will be implemented both in a simulator and on a robot. This Master's thesis project can possibly lead to a PhD position in computer vision/robotics.
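
A rough sketch of the sequence-labeling machinery is given below: it trains a linear-chain CRF on frame-wise sensor features using the third-party sklearn-crfsuite package. The sensor-record format and feature names are illustrative assumptions, and the hidden-state (HCRF) variant would require a different toolbox.

    import sklearn_crfsuite

    def frame_features(frame, prev):
        # 'frame' is a hypothetical record with keys 'pos' (x, y, z) and
        # 'quat' (w, x, y, z) coming from the magnetic sensors.
        fx = {'z_height': frame['pos'][2], 'qw': frame['quat'][0]}
        if prev is not None:
            # finite-difference velocity as an additional cue
            fx['dz'] = frame['pos'][2] - prev['pos'][2]
        return fx

    def sequence_features(seq):
        return [frame_features(f, seq[i - 1] if i else None)
                for i, f in enumerate(seq)]

    # train_seqs: recorded sensor sequences; train_labels: per-frame labels
    # such as 'reach', 'grasp', 'retract', aligned with the frames.
    crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1,
                               max_iterations=100)
    # crf.fit([sequence_features(s) for s in train_seqs], train_labels)
    # predicted = crf.predict([sequence_features(s) for s in test_seqs])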

Recognition of Human Activity in Video

Similarly to the above, a (H)CRF approach will be used for gesture classification in video data recorded from a number of different persons. The first task is to extract a number of image cues from the input video frames; one example cue is the principal components of optical flow. The resulting sequence of cue parameters then serves as the input data for the training process, which means that the same theoretical approach can be used as in the above proposal. The main objective of this project is the extraction of visual cues. One possible application is activity recognition for surveillance purposes: a proactive surveillance system that can detect violent behaviour. However, this will not be part of the thesis itself, but something the results of the thesis can be used for in the future. We plan to publish the results of this project in a conference/journal in the area of computational vision. As in the above case, the project can possibly lead to a PhD position in computer vision/robotics.
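
To illustrate one such cue, the sketch below computes dense optical flow with OpenCV and projects each frame's flattened flow field onto the clip's principal components; the Farneback parameters are the common example values, not a tuned setup.

    import numpy as np
    import cv2

    def flow_pca_cues(frames, n_components=8):
        """frames: list of grayscale uint8 images from one video clip.
        Returns one low-dimensional cue vector per frame transition."""
        flows = []
        for prev, nxt in zip(frames[:-1], frames[1:]):
            flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            flows.append(flow.reshape(-1))       # flatten the (H, W, 2) field
        F = np.array(flows)
        F -= F.mean(axis=0)                      # center before PCA
        # rows of Vt are the principal flow directions of this clip
        _, _, Vt = np.linalg.svd(F, full_matrices=False)
        return F @ Vt[:n_components].T           # per-frame cue vectors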

Human-Machine Collaborative Systems

Dividing the task that the operator is executing into several subtasks is one of the key research areas in teleoperated and human-machine collaborative settings. Hence, segmentation and recognition of operator-generated motions are commonly used to provide appropriate assistance during task execution. This assistance is usually provided in a virtual fixture framework, where the level of compliance can be altered online, improving performance both in terms of execution time and overall precision. However, the fixtures are typically inflexible, resulting in degraded performance in cases of unexpected obstacles or incorrect fixture models. In this project, we are interested in the problem of on-line task tracking and propose the use of adaptive virtual fixtures that can cope with the above problems. The operator may remain in each subtask as long as necessary and switch freely between them. Hence, rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. The probability that the user is following a certain trajectory (subtask) can be estimated and used to automatically adjust the compliance, so that an on-line decision on how to fixture the movement can be made.
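
A toy sketch of the idea follows, under the simplifying assumption that each subtask is summarized by a single preferred motion direction; a real system would fit richer models (e.g. an HMM) from demonstrated trajectories.

    import numpy as np

    # Hypothetical subtask models: one preferred motion direction each.
    subtask_dirs = np.array([[1.0, 0.0, 0.0],     # e.g. "approach"
                             [0.0, 0.0, -1.0]])   # e.g. "insert"
    sigma = 0.3                                   # assumed observation noise

    def subtask_posterior(velocity, prior):
        # Gaussian likelihood of the observed motion direction under
        # each subtask model, combined with the previous belief.
        v = velocity / (np.linalg.norm(velocity) + 1e-9)
        lik = np.exp(-np.sum((subtask_dirs - v) ** 2, axis=1)
                     / (2 * sigma ** 2))
        post = prior * lik
        return post / post.sum()

    def fixture_stiffness(post, k_min=50.0, k_max=500.0):
        # Confident in one subtask -> stiff guidance along its fixture;
        # uncertain -> compliant, letting the operator deviate freely.
        return k_min + (k_max - k_min) * post.max()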

Robot Control

In our lab there is a mobile robot with an attached arm (manipulator). A natural way to guide the robot would be to take its "hand" and have it follow you around. Since the end-effector is equipped with a force sensor giving 6 DOF force-torque measurements, this can be achieved. However, the base is quite unstable, and it is therefore important to carefully design the control system to achieve a good user experience. The higher bandwidth of the arm can be utilized to decouple the applied forces from the robot motion, as sketched after the task list below.

Possible tasks for a project would be:

  • Design a coordinated controller for the manipulator/base motion
    • Should the controller try to mimic a human?
  • Analyze the performance of the controller w.r.t. stability, delay, etc.
    • Is the controller robust to changes in parameters?
  • Evaluate the controller parameters w.r.t user preferences
    • Is the "optimal" controller the one that users prefer?
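
As a rough illustration of the force-to-motion mapping such a controller could build on, here is a minimal admittance-control sketch; the virtual mass, damping, and control period are placeholder values.

    import numpy as np

    class Admittance:
        """Map measured end-effector forces to commanded velocities
        through a virtual mass-damper, so the robot yields to the hand."""
        def __init__(self, m=8.0, d=25.0, dt=0.01):
            self.M = np.diag([m, m, m])   # virtual mass [kg]
            self.D = np.diag([d, d, d])   # virtual damping [Ns/m]
            self.dt = dt                  # control period [s]
            self.v = np.zeros(3)          # commanded velocity [m/s]

        def step(self, f_measured):
            # one control cycle: integrate M*dv/dt + D*v = f
            dv = np.linalg.solve(self.M, f_measured - self.D @ self.v)
            self.v = self.v + dv * self.dt
            return self.v

A coordinated scheme could, for instance, low-pass filter the commanded velocity for the sluggish base and let the higher-bandwidth arm track the residual.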

Programming by Demonstration

Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this project, we are interested in developing a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a lot of tedious work. In contrast, Programming by Demonstration is a flexible framework that reduces the complexity of programming robot tasks and allows end-users to demonstrate tasks instead of writing code. We have developed a system capable of learning pick-and-place tasks from manual demonstrations. Each demonstrated task is described by an abstract model capturing a set of simple attributes: what object is moved, where it is moved, and which grasp type was used to move it. The project will continue with the development of visual and action recognition strategies.
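
A minimal sketch of what such an abstract task model could look like as a data structure; the field names and grasp taxonomy are assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class PickAndPlaceDemo:
        obj: str          # what object is moved, e.g. "red_cup"
        source: str       # where it was picked up, e.g. "table"
        target: str       # where it is moved, e.g. "shelf_2"
        grasp_type: str   # grasp used, e.g. "power" or "precision"

    demo = PickAndPlaceDemo(obj="red_cup", source="table",
                            target="shelf_2", grasp_type="power")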

Object Detection in Natural Scenes

Object recognition is one of the major research topics in the field of computer vision. In robotics, there is often a need for a system that can locate certain objects in the environment - a capability we denote object detection. Our current method is especially suitable for detecting objects in natural scenes, as it is able to cope with problems such as complex backgrounds, varying illumination, and object occlusion. The proposed method uses a receptive field representation in which each pixel in the image is represented by a combination of its color and its response to different filters. The co-occurrence of certain filter responses within a specific radius in the image thus serves as the information basis for building the representation of the object. The specific goal of this project is the development of an on-line learning scheme that is effective after just one training example but can still improve its performance with more time and new examples.
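
The sketch below conveys the flavour of such a representation on a grayscale image: two per-pixel cues are quantized and their co-occurrences within a radius are counted. It is a simplified stand-in, not the actual descriptor, which also uses color.

    import numpy as np
    from scipy import ndimage

    def cooccurrence_histogram(image, radius=5, bins=8):
        img = image.astype(float)
        # two per-pixel cues: raw intensity and smoothed Laplacian response
        lap = ndimage.laplace(ndimage.gaussian_filter(img, 2.0))
        q_int = np.clip((img / 256.0 * bins).astype(int), 0, bins - 1)
        q_lap = np.digitize(lap, np.linspace(lap.min(), lap.max(), bins - 1))
        hist = np.zeros((bins, bins, bins, bins))
        h, w = img.shape
        for y in range(radius, h - radius, 4):       # subsampled pixels
            for x in range(radius, w - radius, 4):
                yn, xn = y + radius, x               # one neighbour at 'radius'
                hist[q_int[y, x], q_lap[y, x],
                     q_int[yn, xn], q_lap[yn, xn]] += 1
        return hist / hist.sum()                     # normalized descriptor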

Vision-based Simultaneous Localization and Mapping

One key competence for a fully autonomous mobile robot system is the ability to build a map of the environment from sensor data. Hence, natural landmark detection and the incremental building of consistent maps for SLAM purposes have been a focal point of robotics research for the last several years. For simple 2D scenarios using laser scanners or sonar sensors, the SLAM problem is considered solved. However, for large-scale and complex environments, and especially for full 3D SLAM, the problem is still open. Solving the SLAM problem with vision as the only external sensor is now the goal of much of the effort in the area. Monocular vision is especially interesting, as it offers a highly affordable solution with today's inexpensive webcameras. There are many aspects of the problem and thus a variety of possible projects: feature selection, delayed vs. undelayed approaches, using structure-from-motion techniques, etc.
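
As a taste of a monocular front end, the sketch below detects corners and tracks them between frames with OpenCV; a SLAM back end would then fuse such tracks in a filter or structure-from-motion optimizer.

    import cv2

    def track_features(prev_gray, gray, prev_pts=None):
        # (re)detect corners when the track count runs low
        if prev_pts is None or len(prev_pts) < 50:
            prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                               qualityLevel=0.01,
                                               minDistance=10)
        # pyramidal Lucas-Kanade tracking into the next frame
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                  prev_pts, None)
        good = status.ravel() == 1
        return prev_pts[good], nxt[good]   # matched landmark observations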
