We are happy to announce that two papers from our research group have been accepted for presentation at two different conferences.
CVPR is the premier annual computer vision event, comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides exceptional value for students, academics, and industry researchers.
Paper #2: “Joint Pre-Alignment and Robust Rigid Point Set Registration” by Vladislav Golyanik, Bertram Taetz, and Didier Stricker at ICIP 2016, September 25–28 in Phoenix.
ICIP is the world’s largest and most comprehensive technical conference focused on image and video processing and computer vision. The conference will feature world-class speakers, tutorials, exhibits, and a vision technology showcase.
Our avatar reconstruction pipeline is currently being tested and is helping to develop new therapies for schizophrenia, autism, and social phobias.
The European Community Research and Development Information Service has a news piece on AlterEgo:
Best Paper Award at WACV 2016
For their paper “Occlusion-Aware Video Registration for Highly Non-Rigid Objects”, Bertram Taetz, Gabriele Bleser, Vladislav Golyanik, and Didier Stricker have won the Best Paper Award at the Winter Conference on the Applications of Computer Vision – WACV 2016 on March 9, 2016 in Lake Placid, USA. Congratulations to the authors of the paper!
From 14 to 18 March 2016, our spin-offs will present the results of the department’s research work at CeBIT 2016.
We are looking forward to welcoming you at our booths!
Three papers have been accepted at IEEE Winter Conference on Applications of Computer Vision (WACV 2016):
WACV is one of the most renowned conferences covering all areas of computer vision.
It will be held March 7–9, 2016, at the Crowne Plaza Resort in Lake Placid, NY, USA.
The Augmented Vision department and its partners are presenting three projects at the ICT 2015.
Come and visit:
Location: Off-site Area (Praça do Comércio, Lisbon)
Exhibition date: 18 – 22 October 2015
Project homepage: http://www.easy-imp.eu/
Location: Transform Area
Exhibition date: 20 – 22 October 2015
Project homepage: http://eyesofthings.eu
Location: Innovate Area
Exhibition date: 20 – 22 October 2015
Project homepage: www.euromov.eu/alterego
A paper was accepted as oral presentation at the International Conference on Computer Vision (ICCV) 2015:
ICCV is one of the premier conferences for researchers in the field of computer vision.
The conference will be held from December 11th to 18th in Santiago, Chile.
The goal of the competition is to extract the textual content from document images captured by mobile phones. The images are taken under varying conditions to provide a challenging input (full description of the challenge).
The method is based on “Combining Clustering and Classification Results” (CCC): the background color is first used to detect and dewarp the document. The image is then binarized to extract lines, words, and subwords. These are clustered incrementally across the whole corpus. A 1D LSTM trained on both sharp and blurry gray-scale text lines recognizes the subwords, and clusters of subwords are labeled by majority voting.
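The core of the CCC idea sketched above — incrementally clustering subword descriptors and then labeling each cluster by majority vote over the per-subword recognizer outputs — can be illustrated with a minimal sketch. This is not the authors’ implementation: the 2-D toy descriptors, the distance threshold, and the placeholder “LSTM predictions” are all assumptions made purely for illustration.

```python
# Hedged sketch of incremental clustering + majority-vote labeling (CCC idea).
# Descriptors, threshold, and predictions are toy stand-ins, not the paper's code.
from collections import Counter

def incremental_cluster(descriptors, threshold=1.0):
    """Assign each descriptor to the nearest existing centroid if it lies
    within `threshold`; otherwise start a new cluster."""
    centroids, members = [], []  # members[i] = indices belonging to cluster i
    for idx, d in enumerate(descriptors):
        best, best_dist = None, threshold
        for ci, c in enumerate(centroids):
            dist = sum((a - b) ** 2 for a, b in zip(c, d)) ** 0.5
            if dist < best_dist:
                best, best_dist = ci, dist
        if best is None:
            centroids.append(list(d))
            members.append([idx])
        else:
            members[best].append(idx)
            # update the centroid as the running mean of its members
            n = len(members[best])
            centroids[best] = [c + (x - c) / n for c, x in zip(centroids[best], d)]
    return members

def label_clusters(members, predictions):
    """Label every cluster with the majority vote of its members' predictions."""
    return [Counter(predictions[i] for i in m).most_common(1)[0][0] for m in members]

# Toy example: two tight groups of 2-D "subword descriptors" with noisy labels
descs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9), (0.05, 0.1)]
preds = ["ab", "ab", "cd", "cd", "ax"]  # hypothetical per-subword LSTM outputs
clusters = incremental_cluster(descs, threshold=1.0)
labels = label_clusters(clusters, preds)
print(clusters)  # → [[0, 1, 4], [2, 3]]
print(labels)    # → ['ab', 'cd']
```

The majority vote is what makes the combination robust: a single misrecognized subword (here the noisy "ax") is outvoted by the other members of its cluster.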
Each year, together with the Kreissparkasse Kaiserslautern, an expert jury from the University of Kaiserslautern nominates outstanding doctoral theses in several academic subjects.
We congratulate Dr.-Ing. Petersen on winning this award for Computer Science.
This award was granted for his doctoral thesis that deals with the understanding of manual workflows from video examples. An important application of his research work is the automatic generation of interactive Augmented Reality manuals from the observation of a reference performance.
Abstract of the doctoral thesis:
Workflow knowledge comprises both explicit, verbalizable knowledge and implicit knowledge, which is acquired through practice. Learning a complex workflow therefore benefits from training with a permanent corrective. Augmented Reality manuals that display instructive step-by-step information directly in the user’s field of view provide an intuitive and provably effective learning environment. However, their creation process is rather work-intensive, and current technological approaches lead to insufficient interactivity with the user. In this thesis we present a comprehensive technical approach to algorithmically analyze manual workflows from video examples and use the acquired information to teach explicit and implicit workflow knowledge using Augmented Reality. The technical realization starts with unsupervised segmentation of single work steps and their categorization into a coarse taxonomy. Thereafter, we analyze the single steps for their modalities using a hand and finger tracking approach optimized for this particular application. Using explicit, work-step-specific generalization, we are able to compensate for morphological differences among different users and thus to reduce the need for large amounts of training data. To render this information communicable, i.e., understandable by a different person, we present the further processed data using Augmented Reality as an interactive tutoring system. The resulting system allows for fully or semi-automatic creation of Augmented Reality (AR) manuals from video examples as well as their context-driven presentation in AR. The method is able to extract and to teach procedural, implicit workflow knowledge from given video examples. In an extensive evaluation, we demonstrate the applicability of all proposed technical components for the given task.