Unimelb Corridor Synthetic dataset
Debaditya Acharya, Kourosh Khoshelham, Stephan Winter
DOI: 10.26188/5dd8b8085b191
https://melbourne.figshare.com/articles/dataset/UnimelbCorridorSynthetic_zip/10930457

This dataset is supplementary material related to the generation of synthetic images of a corridor at the University of Melbourne, Australia, from a building information model (BIM). The dataset was generated to test the ability of deep learning algorithms to learn the task of indoor localisation from synthetic images when tested on real images.

=============================================================================

The following naming convention is used for the datasets. The numbers in brackets are the number of images in each dataset.

REAL DATA

Real ---------------------> Real images (949 images)
Gradmag-Real -------> Gradient magnitude of the real images (949 images)

SYNTHETIC DATA

Syn-Car ----------------> Cartoonish images (2500 images)
Syn-pho-real ----------> Synthetic photo-realistic images (2500 images)
Syn-pho-real-tex -----> Synthetic photo-realistic textured images (2500 images)
Syn-Edge --------------> Edge-rendered images (2500 images)
Gradmag-Syn-Car ---> Gradient magnitude of the cartoonish images (2500 images)

=============================================================================

Each folder contains the images and their respective ground-truth poses in the format [ImageName X Y Z w p q r], i.e. the camera position followed by the orientation quaternion (a parsing sketch is given at the end of this description).

To generate the synthetic datasets, we defined a trajectory in the 3D indoor model; the points of this trajectory serve as the ground-truth poses of the synthetic images. The height of the trajectory was kept in the range of 1.5–1.8 m above the floor, the usual height at which a camera is held in the hand, and the length of the trajectory was approximately 30 m. Artificial point light sources were placed to illuminate the corridor (except for the edge-rendered images). A virtual camera was moved along the trajectory to render four different sets of synthetic images in Blender*. The intrinsic parameters of the virtual camera were kept identical to those of the real camera (VGA resolution, focal length of 3.5 mm, no distortion modelled). Images were rendered along the trajectory at 0.05 m intervals and with ±10° tilt.
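As an illustration of this setup, the following is a minimal Blender Python (bpy) sketch of rendering along a straight trajectory while logging poses in the format above. It is not the script used to produce the dataset: the trajectory, file names, sensor width and the axis of the ±10° tilt are assumptions; only the VGA resolution, the 3.5 mm focal length and the 0.05 m step come from the description.

    import math
    import bpy

    scene = bpy.context.scene

    # Camera intrinsics matching the real camera: VGA resolution, 3.5 mm focal length.
    scene.render.resolution_x = 640
    scene.render.resolution_y = 480
    cam_data = bpy.data.cameras.new("SynCam")
    cam_data.lens = 3.5          # focal length in mm
    cam_data.sensor_width = 4.8  # hypothetical sensor width (not stated in the description)
    cam = bpy.data.objects.new("SynCam", cam_data)
    scene.collection.objects.link(cam)
    scene.camera = cam

    # Hypothetical straight 30 m trajectory at 1.65 m height, sampled every 0.05 m.
    step, length = 0.05, 30.0
    pose_lines = []
    for i in range(int(length / step) + 1):
        for tilt in (-10.0, 10.0):  # the axis of the +/-10 degree tilt is an assumption
            cam.location = (0.0, i * step, 1.65)
            # Look along +Y (90 degrees about X), with the tilt applied as a yaw.
            cam.rotation_euler = (math.radians(90.0), 0.0, math.radians(tilt))
            name = "img_%05d_%+03d.png" % (i, int(tilt))
            scene.render.filepath = "//render/" + name
            bpy.ops.render.render(write_still=True)
            # Log the ground-truth pose as [ImageName X Y Z w p q r].
            q = cam.rotation_euler.to_quaternion()
            pose_lines.append("%s %.3f %.3f %.3f %.6f %.6f %.6f %.6f"
                              % (name, *cam.location, q.w, q.x, q.y, q.z))

    with open(bpy.path.abspath("//render/poses.txt"), "w") as f:
        f.write("\n".join(pose_lines))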
The main difference between the cartoonish images (Syn-Car) and the photo-realistic images (Syn-pho-real) is the rendering model. Photo-realistic rendering is a physics-based model that traces the paths of light rays through the scene, similar to the real world, whereas cartoonish rendering traces the light paths only approximately. The photo-realistic textured images (Syn-pho-real-tex) were rendered after adding repeating synthetic textures, such as brick, carpet and wooden ceiling, to the 3D indoor model. The realism of photo-realistic rendering comes at the cost of rendering time; however, the rendering times of the photo-realistic datasets were reduced considerably with the help of a GPU. Note that the naming convention used for the datasets (e.g. cartoonish) follows Blender terminology.

An additional dataset (Gradmag-Syn-Car) was derived from the cartoonish images by computing the gradient magnitude of each image and suppressing weak edges below a threshold (see the sketch at the end of this description). The edge-rendered images (Syn-Edge) were generated by rendering only the edges of the 3D indoor model, without taking the lighting conditions into account. This dataset is similar to Gradmag-Syn-Car but does not contain the effects of scene illumination, such as reflections and shadows.

*Blender is an open-source 3D computer graphics software package used in video games, animated films, simulation and visual art. For more information, please visit http://www.blender.org

Please cite the following papers if you use the dataset:

1) Acharya, D., Khoshelham, K. and Winter, S., 2019. BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images. ISPRS Journal of Photogrammetry and Remote Sensing, 150: 245-258.

2) Acharya, D., Singha Roy, S., Khoshelham, K. and Winter, S., 2019. Modelling uncertainty of single image indoor localisation using a 3D model and deep learning. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, IV-2/W5: 247-254.

Keywords: Indoor localisation; Deep learning; camera pose regression; synthetic images; BIM; 3D building model; Computer Vision; Computer Graphics; Knowledge Representation and Machine Learning; Geospatial Information Systems; Infrastructure Engineering and Asset Management
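=============================================================================

For convenience, here is a minimal sketch of parsing a ground-truth pose file in the [ImageName X Y Z w p q r] format described above. The file name poses.txt is hypothetical; the pose files inside the folders may be named differently.

    # Minimal sketch: read ground-truth poses in the [ImageName X Y Z w p q r] format.
    # The file name "poses.txt" is hypothetical.
    def load_poses(path="poses.txt"):
        poses = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) != 8:
                    continue  # skip blank or malformed lines
                name = fields[0]
                x, y, z = map(float, fields[1:4])     # camera position
                w, p, q, r = map(float, fields[4:8])  # orientation quaternion
                poses[name] = ((x, y, z), (w, p, q, r))
        return poses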
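Similarly, the gradient-magnitude computation behind Gradmag-Real and Gradmag-Syn-Car could be sketched as follows. The Sobel operator and the threshold value are assumptions; the description states only that the gradient magnitude was computed and that weak edges below a threshold were suppressed.

    import cv2
    import numpy as np

    def gradmag(image_path, threshold=30):
        """Gradient magnitude of an image with weak edges suppressed (threshold assumed)."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient (Sobel assumed)
        gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
        mag = np.hypot(gx, gy)
        mag = (mag / mag.max() * 255).astype(np.uint8)  # normalise to 8-bit range
        mag[mag < threshold] = 0                        # suppress weak edges
        return mag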