Source code and datasets


Massively Parallel Multiview Stereopsis by Surface Normal Diffusion (Galliani et al., ICCV 2015)

Project page

Hyperspectral Image Super Resolution (Lanaras et al., ICCV 2015)

Project page

K-4PCS Pairwise Registration (Theiler et al., ISPRS Journal of Photogrammetry and Remote Sensing 2014)

Project page

Globally Consistent Point Cloud Registration (Theiler et al., ISPRS Journal of Photogrammetry and Remote Sensing 2015)

Project page

Piecewise rigid scene flow (Vogel et al., IJCV 2015)

Code available

Dataflow code (Vogel et al., GCPR 2013)

Code available

RQE Feature Extraction (Tokarczyk et al., TGRS 2015) 

Code and documentation

Random forest template library - Stefan Walk

Code and documentation (ZIP, 14 KB)

VocMatch: Efficient Multiview Correspondence for Structure from Motion (Havlena et al., ECCV 2014)

Code available

Predicting Matchability (Hartmann et al., CVPR 2014)

Code and documentation

Are Cars Just 3D Boxes? - Jointly Estimating the 3D Shape of Multiple Objects (Zia et al., CVPR 2014) and Towards Scene Understanding with Detailed 3D Object Representations (Zia et al., IJCV 2015)

Contact Zeeshan Zia <zia.zeeshan at outlook dot com> for any questions.
Code and trained models, Evaluation Script and Test set

Explicit Occlusion Modeling for 3D Object Class Representations (Zia et al., CVPR 2013)

Contact Zeeshan Zia <mzia at ethz dot ch> for any questions.

Test set (260 MB, ~7 mins download time), Training set for first layer DPMs (1.5 GB, ~30 mins download time), Code and trained models

Detailed 3D Representations for Object Recognition and Modeling (Zia et al., TPAMI 2013, 3dRR 2011)

Annotations (download link) used in our '3D geometric models for objects' papers:

- Part level annotations on the 3D Object Classes dataset (Savarese et al. ICCV 2007)
- Point correspondences for ultrawide baseline matching in the same dataset

Multi-Target Tracking

Project page with download links (external page maintained by Anton Andriyenko)

Data used in a series of papers on multi-target tracking. The annotations were created by manually placing bounding boxes around pedestrians in key frames and interpolating their trajectories in between.

CVPR 2012 code

CVPR 2011 code

GPU-SURF

Project page with source code (external page hosted by MPII / Christian Wojek)

A GPU implementation of the popular SURF method in C++/CUDA, which achieves real-time performance even on HD images. Includes interest point detection, descriptor extraction, and basic descriptor matching.

Action Snippets

MATLAB code (including Weizmann test data)

The code used for our Action Snippets paper on activity recognition, published in CVPR'08. Included is also some test data to play with. If you use this data, please cite the corresponding paper as source.

Pedestrian Motion Models

Dataset (external page maintained by Stefano Pellegrini)

Data used in a paper on an advanced motion model for tracking, which takes into account interactions between pedestrians, inspired by social force models used for crowd simulation (joint work with Stefano Pellegrini, Andreas Ess, and Luc van Gool). If you use this data, please cite the corresponding paper as source.

Tracking with a Mobile Observer

Project page with download links (external page maintained by Andreas Ess).

Data used in a series of papers (CVPR'08, ICRA'09, PAMI'09) on pedestrian and vehicle tracking with a moving stereo rig, by Andreas Ess, Konrad Schindler, Bastian Leibe and Luc van Gool. Synchronized stereo videos observing busy inner-city streets with large and varying numbers of pedestrians. If you use this data, please cite the above-mentioned papers as source.

Coupled Detection and Tracking

Three pedestrian crossing sequences (91 MByte)

Data used in the ICCV'07 paper Coupled Detection and Trajectory Estimation for Multi-Object Tracking by Bastian Leibe, Konrad Schindler and Luc van Gool. Monocular videos observing pedestrian crossings with large and varying numbers of pedestrians in challenging conditions (natural lighting, occlusions, background changes). If you use this data, please cite the above-mentioned paper as source.

Shape-based Object Detection

4x50 closed shapes (swans, hats, starfish, applelogos)

A database of object categories defined by their shape. Each category has 50 images, which contain no instances of the remaining classes, but sometimes contain multiple instances of the same category. The swan and applelogo categories are extended versions of Vitto Ferrari's ETHZ shape classes. The images were collected from Google image search and Flickr, and contain significant amounts of background clutter. The category templates were drawn by hand. For each image there is:
- XX.jpg (original colour or grayscale image in JPG-format)
- XX_srmseg.tif (an over-segmentation created with the statistical region merging (SRM) method of Nock and Nielsen)
- XX_CLASS.groundtruth (manually annotated ground truth bounding boxes as ASCII text)

Source code for detection by elastic shape matching (Schindler and Suter, Pattern Recognition 2013)

Extended ETHZ shape classes (swans, bottles, mugs, giraffes, applelogos, hats, starfish)

A larger database of shape categories, created by merging the above dataset with the ETHZ shape classes of Vitto Ferrari. This is (almost) a superset of each of the two older databases, but has not yet been used in published experiments by either group. Please refer to the README for details on the differences and how to use the new dataset.

n-View Multibody Structure and Motion

spinningwheels.mat (synthetic test sequence. 5 frames, 4 objects)
boxes.mat (piles of boxes on a table. 10 frames, 2 objects)
lightbulb.mat (textured objects on neutral background. 10 frames, 2-3 objects)
flowershirt.mat (a person moves through a room, camera also moves. 5 frames, 2 objects)
deliveryvan.mat (movie sequence, courtesy of Andrew Zisserman. 11 frames, 1-2 objects)

Each MATLAB workspace contains the three variables K, X, and img.
- K is the (3 x 3) camera calibration matrix.
- X is an (N x 2 x F) array of image points (N = number of tracked points, F = number of frames).
- img is the image sequence, stored as an (m x n x F) array of F frames of size (m x n).

Cameras were calibrated off-line, except for the delivery van, for which an approximate focal length was guessed. If a point is not visible in a given frame, its coordinates are marked with the imaginary unit i (square root of -1). All tracks were produced with the standard implementation of the KLT tracker. In all sequences, intermediate frames between the given ones were dropped after feature tracking.
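The imaginary-i convention for missing observations is easy to handle programmatically. The sketch below (plain NumPy; the function name visible_mask and the tiny synthetic array are illustrative assumptions, not part of the released workspaces) builds the per-frame visibility mask one typically needs before feeding the tracks to a structure-and-motion pipeline. In practice, X would come from loading one of the .mat files, e.g. with scipy.io.loadmat.

```python
import numpy as np

def visible_mask(X):
    """Return a boolean (N, F) mask of track visibility per frame.

    Points not visible in a frame are marked with the imaginary unit i,
    so a point counts as visible iff both coordinates are purely real.
    X has shape (N, 2, F): N tracks, (x, y) coordinates, F frames.
    """
    return np.all(X.imag == 0, axis=1)

# Tiny synthetic stand-in for one of the .mat workspaces:
# two tracks over three frames; track 1 is lost in the last frame.
X = np.zeros((2, 2, 3), dtype=complex)
X[0] = [[10, 12, 14], [20, 21, 22]]   # track 0: visible throughout
X[1] = [[30, 31, 1j], [40, 41, 1j]]   # track 1: invisible in frame 2

mask = visible_mask(X)
print(mask)  # [[ True  True  True]
             #  [ True  True False]]
```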

Two-View Multibody Structure and Motion

desk.mat (3 objects on desk, manual correspondences)
office.mat (3 objects on floor, MSER correspondences)

Each MATLAB workspace contains the four variables X1, X2, img1, and img2.
- X1, X2 are the (N x 2) image coordinates of corresponding points.
- img1, img2 are the two images of size (m x n).
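As an illustration of how such two-view correspondences are typically consumed, here is a sketch of the standard normalized 8-point estimate of the fundamental matrix from arrays shaped like X1 and X2. This is generic textbook code in plain NumPy, not the released implementation, and the name eight_point is chosen here.

```python
import numpy as np

def eight_point(X1, X2):
    """Normalized 8-point estimate of the fundamental matrix F, so that
    x2h^T F x1h ~= 0 for corresponding points. X1, X2 are (N, 2), N >= 8."""
    def normalize(X):
        # Translate the centroid to the origin, scale mean distance to sqrt(2).
        c = X.mean(axis=0)
        s = np.sqrt(2) / np.linalg.norm(X - c, axis=1).mean()
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        Xh = np.column_stack([X, np.ones(len(X))]) @ T.T
        return Xh, T

    x1, T1 = normalize(X1)
    x2, T2 = normalize(X2)
    # One row per correspondence: the epipolar constraint x2^T F x1 = 0,
    # linear in the 9 entries of F (row-major order).
    A = np.column_stack([x2[:, :1] * x1, x2[:, 1:2] * x1, x1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalizing transforms.
    return T2.T @ F @ T1
```

With noiseless correspondences the residuals |x2h^T F x1h| are near machine precision; on real data one would follow this with a robust loop (e.g. RANSAC) and nonlinear refinement.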

Page URL: http://www.prs.igp.ethz.ch/research/Source_code_and_datasets.html
27.06.2017
© 2017 Eidgenössische Technische Hochschule Zürich