The multi-modal/multi-view datasets were created in cooperation between the University of Surrey and Double Negative within the EU FP7 IMPART project. The source of the datasets should be acknowledged in all publications in which they are used by referencing the following paper and this website:

H. Kim and A. Hilton, "Influence of Colour and Feature Geometry on Multi-modal 3D Point Clouds Data Registration", Proc. 3DV, 2014.

To access the full datasets, please read the license agreement and, if you agree, email Dr. Hansung Kim with the following information: your name and affiliation, and the name and email of your supervisor (if you are a student).

Multi-modal data footage and 3D reconstructions for various indoor/outdoor scenes
- LIDAR scans
- Video sequences
- Digital snapshots and reconstructed 3D models
- Spherical camera scans and reconstructed 3D models
- Xtion RGBD video sequences and reconstructed 3D models

Multi-view video sequences for dynamic actions in the same environments (indoor/outdoor)
- Fixed multiple HD camera sequences (360/120 set-up)
- Free-moving principal HD camera
- Nodal cameras
- GoPro 2.7K cameras
- Stereo sequence from a GoPro HD camera pair
- Xtion RGBD video sequence
- Intrinsic and extrinsic calibration sequences (calibration results are partially provided)
- Performers: 6 male and 1 female
- Actions: basic single actions, sequential actions, interactive actions, Camsetup

Multi-view facial expression capture
- Fixed five-HD-camera sequences
- One Xtion RGBD video sequence
- Intrinsic and extrinsic calibration sequences (calibration results are not provided)
- Performers: 7 male and 3 female
- Facial expressions: neutral, anger, fear, happiness, sadness, surprise

Related datasets