Modeling Spatial Uncertainty for the iPad Pro Depth Sensor
Depth sensors, once found exclusively in research laboratories, are quickly becoming ubiquitous in the mass market. After Apple’s introduction of the iPad Pro 2020 with an integrated light detection and ranging (LIDAR) sensor, even tablets and smartphones are now capable of obtaining accurate 3-D information about their environments. This, in turn, extends the reach of applications from technical fields, such as SLAM, object tracking, and object classification, which can now be downloaded onto millions of hand-held devices with a couple of taps. This motivates an analysis of the capabilities, strengths, and weaknesses of these depth streams. In this paper, we present a study of the spatial uncertainties of the iPad Pro 2021 depth sensor. First, we describe the hardware used by the device and provide an overview of the machine learning algorithm that fuses information from the LIDAR sensor with color data to produce a depth image. Then, we analyze the accuracy and precision of the measured depth values, paying attention to the resulting temporal and spatial correlations. Another important topic of discussion is the set of tradeoffs involved in the extrapolations that the depth system performs, such as how curvatures change at different distances. To establish a reference baseline, we also compare the obtained results to those of another widely known time-of-flight sensor, the Microsoft Kinect 2.
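For orientation, the fused depth stream discussed above is exposed to applications through Apple's ARKit API. The following is a minimal Swift sketch of how such a stream can be read on a LiDAR-equipped device; it is an illustration of the public API only, not the measurement pipeline used in this paper, and the class name `DepthReader` is a hypothetical example.

```swift
import ARKit

// Hypothetical helper that subscribes to the fused depth stream via ARKit.
final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // sceneDepth is only available on LiDAR-equipped devices,
        // such as the iPad Pro 2020/2021.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else {
            return
        }
        config.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(config)
    }

    // Called once per camera frame with the fused (LIDAR + color) depth image.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        let depthMap: CVPixelBuffer = depth.depthMap   // Float32 depth in meters
        let confidence = depth.confidenceMap           // per-pixel ARConfidenceLevel
        // Depth samples could be accumulated here for an uncertainty analysis.
        _ = (depthMap, confidence)
    }
}
```

Note that ARKit also offers a `.smoothedSceneDepth` frame semantic, which applies additional temporal filtering; which variant an application reads affects the temporal correlations of the kind analyzed in this paper.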