The Multi-FoV synthetic datasets consist of two simulated scenes: a vehicle moving through a city, and a flying robot hovering in a confined room. Each scene was rendered with three different optics (perspective, fisheye, and catadioptric) while keeping the same sensor, so the image resolution is constant across all variants. The datasets were generated with Blender using a custom omnidirectional camera model, which we release as an open-source patch for Blender. The camera calibrations, ground-truth trajectories, and depth maps are included in the archives.

For any question about these datasets, please send an e-mail to rebecq (at) ifi (dot) uzh (dot) ch.

Reference:
Z. Zhang, H. Rebecq, C. Forster, D. Scaramuzza, "Benefit of Large Field-of-View Cameras for Visual Odometry," IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016.
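Since the archives ship ground-truth trajectories as plain text, a small parser is often the first thing users write. The sketch below is a hypothetical example, not the official loader: it assumes each line holds a frame id, a translation, and a unit quaternion (`id tx ty tz qx qy qz qw`); the actual column layout may differ, so check the file headers in the archives before relying on it.

```python
from io import StringIO

def load_trajectory(f):
    """Parse whitespace-separated pose lines into a list of
    (frame_id, position, quaternion) tuples, skipping comment lines.

    Assumed (hypothetical) line format: id tx ty tz qx qy qz qw
    """
    poses = []
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and header comments
        parts = line.split()
        frame_id = int(parts[0])
        tx, ty, tz, qx, qy, qz, qw = map(float, parts[1:8])
        poses.append((frame_id, (tx, ty, tz), (qx, qy, qz, qw)))
    return poses

# Example with inline data (identity rotation at position (1, 2, 3)):
sample = StringIO("# id tx ty tz qx qy qz qw\n0 1.0 2.0 3.0 0.0 0.0 0.0 1.0\n")
print(load_trajectory(sample))
```

In practice one would open the trajectory file from the extracted archive instead of a `StringIO`; the parsing logic stays the same.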