EASY-IMP – Promotional Video
We proudly congratulate the start-up teams of ioxp and 3Digify!

Our former colleagues took the 1st (ioxp) and 3rd (3Digify) place in the 4th Südwest-Pitch organized by Startup Westpfalz and SAR.FACTORY.

For more details visit Südwest-Pitch.


Papers accepted at CVPR 2016 and ICIP 2016

We are happy to announce that two papers by our research group were accepted for presentation at two different conferences.

Paper #1: Gravitational Approach for Point Set Registration by Vladislav Golyanik, Sk Aziz Ali, and Didier Stricker at CVPR 2016, June 26th – July 1st in Las Vegas.

CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

Paper #2: Joint Pre-Alignment and Robust Rigid Point Set Registration by Vladislav Golyanik, Bertram Taetz, and Didier Stricker at ICIP 2016, September 25th – 28th in Phoenix.

ICIP is the world’s largest and most comprehensive technical conference focused on image and video processing and computer vision. The conference will feature world-class speakers, tutorials, exhibits, and a vision technology showcase.

AlterEgo is demonstrating how to use avatars and robots to help treat social disorders

Our avatar reconstruction pipeline is currently being tested and is helping to develop new therapies for schizophrenia, autism, and other social disorders.

The European Community Research and Development Information Service (CORDIS) has published a news piece on AlterEgo.



Best Paper Award at WACV 2016

For their paper “Occlusion-Aware Video Registration for Highly Non-Rigid Objects”, Bertram Taetz, Gabriele Bleser, Vladislav Golyanik, and Didier Stricker have won the Best Paper Award at the Winter Conference on the Applications of Computer Vision – WACV 2016 on March 9, 2016 in Lake Placid, USA. Congratulations to the authors of the paper!

Visit Augmented Vision’s spin-offs ioxp and 3Digify at CeBIT 2016

From 14 to 18 March 2016 our spin-offs will be presenting the results of the department’s research work at CeBIT 2016.

Our colleagues from ioxp will be presenting the AR Handbook in hall 6, booth B48.
If you are interested in 3D scanners for everyone, don’t miss 3Digify in hall 6, booth C17.

We are looking forward to welcoming you at our booths!

Three papers accepted at WACV’16

Augmented Vision@ICT 2015 in Lisbon

The Augmented Vision department and its partners are presenting three projects at ICT 2015.
Come and visit:

EASY-IMP
Location: Off-site Area (Praça do Comércio, Lisbon)
Exhibition date: 18 – 22 October 2015
Project homepage: http://www.easy-imp.eu/

Eyes of Things
Location: Transform Area
Booth: T26
Exhibition date: 20 – 22 October 2015
Project homepage: http://eyesofthings.eu

AlterEgo
Location: Innovate Area
Booth: i33
Exhibition date: 20 – 22 October 2015
Project homepage: www.euromov.eu/alterego

ICCV Paper accepted as oral presentation

A paper was accepted as oral presentation at the International Conference on Computer Vision (ICCV) 2015:

Christian Bailer, Bertram Taetz, Didier Stricker
Flow Fields: Dense Correspondence Fields for Highly Accurate Large Displacement Optical Flow Estimation

ICCV is one of the premier conferences for researchers in the field of computer vision.
The conference will be held from December 11th to December 18th in Santiago, Chile.

Winning of smartDoc competition – ICDAR 2015 Conference

We are delighted to announce that we won the smartDoc competition (Challenge 2), part of the competitions held at the ICDAR 2015 conference.

The goal of the competition is to extract the textual content from document images captured by mobile phones. The images are taken under varying conditions to provide challenging input (see the full description of the challenge).

The method is based on “Combining Clustering and Classification Results” (CCC): the background color is first used to detect and dewarp the document. The image is then binarized to extract lines, words, and subwords, which are clustered incrementally across the whole corpus. A 1D LSTM trained on both sharp and blurry gray-scale text lines recognizes the subwords, and each cluster is labeled by majority voting over its members’ predictions.
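The final labeling step of the pipeline above, in which each subword cluster receives the most frequent label among its members’ recognizer outputs, can be sketched as follows. This is a minimal illustration with made-up cluster IDs and predictions, not the authors’ implementation:

```python
from collections import Counter

def label_clusters_by_majority(cluster_ids, predictions):
    """Assign each cluster the most frequent label among its members.

    cluster_ids: one cluster ID per subword image.
    predictions: recognizer outputs (e.g. from a 1D LSTM), aligned
                 with cluster_ids.
    Returns a dict mapping cluster ID -> majority label.
    """
    votes = {}
    for cid, pred in zip(cluster_ids, predictions):
        votes.setdefault(cid, Counter())[pred] += 1
    return {cid: counter.most_common(1)[0][0]
            for cid, counter in votes.items()}

# Hypothetical example: three subword clusters with noisy per-image
# predictions; the vote corrects the misrecognized "tne".
clusters = [0, 0, 0, 1, 1, 2]
preds = ["the", "the", "tne", "cat", "cat", "sat"]
print(label_clusters_by_majority(clusters, preds))
# -> {0: 'the', 1: 'cat', 2: 'sat'}
```

The design intuition is that clustering pools evidence across repeated occurrences of the same subword, so a single blurry image can be outvoted by sharper instances of the same cluster.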