The Hopkins 155 dataset, introduced in , was created to provide an extensive benchmark for testing feature-based motion segmentation algorithms. It contains video sequences along with features extracted and tracked across all frames. Ground-truth segmentations are also provided for comparison. For a more comprehensive description of the dataset, please refer to the main Hopkins 155 page.
Each sequence in the Hopkins 155 dataset contains only complete trajectories and no outliers. We provide 16 additional sequences (used in ,  and ) that contain missing data and outliers; these sequences follow the same format as the Hopkins 155 sequences. We refer to the entire dataset (the standard sequences plus the 16 additional ones) as the Hopkins 155+16 dataset. The additional sequences were made publicly available on April 20th, 2010.
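Algorithms tested on Hopkins 155 are typically scored by their misclassification rate: the fraction of trajectories assigned to the wrong group, minimized over all relabelings of the predicted groups (since cluster labels are arbitrary). A minimal sketch of that metric, assuming predicted and ground-truth segmentations are given as integer label lists (the function name and label encoding are illustrative, not part of the dataset's distribution):

```python
from itertools import permutations

def misclassification_rate(pred, truth):
    """Fraction of points assigned to the wrong group, minimized over
    all relabelings of the predicted groups (labels are arbitrary).
    Assumes pred and truth use the same set of integer labels."""
    labels = sorted(set(truth))
    best_errors = len(truth)
    for perm in permutations(labels):
        mapping = dict(zip(labels, perm))  # one candidate relabeling
        errors = sum(1 for p, t in zip(pred, truth) if mapping.get(p, p) != t)
        best_errors = min(best_errors, errors)
    return best_errors / len(truth)

# Example: two motions; under the best relabeling, 1 of 5 points is wrong.
print(misclassification_rate([0, 0, 1, 1, 1], [1, 1, 0, 0, 1]))  # → 0.2
```

Exhaustive search over permutations is fine for the 2- and 3-motion sequences in this benchmark; with many groups one would solve the label assignment with the Hungarian algorithm instead.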
We are also making available the "Hands" sequence (which first appeared in ), a dataset used for testing Non-Rigid Structure from Motion algorithms. Download (registration required)
This package is a subset of the Hopkins 155 dataset and contains only the sequences used in . Download (registration required)
Existing dynamic texture databases are not well suited for testing joint segmentation and categorization algorithms, because most of their video sequences contain only a single texture and seldom any background. We have annotated 117 videos at the pixel level from the 3 largest classes of the Dyntex database: waves, flags, and fountains. This .mat file contains the pixel-wise annotations. For more details, please refer to Joint Segmentation and Categorization of Dynamic Textures. Download (registration required)