TY - GEN
T1 - HiMODE: A Hybrid Monocular Omnidirectional Depth Estimation Model
T2 - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022
AU - Junayed, Masum Shah
AU - Sadeghzadeh, Arezoo
AU - Islam, Md Baharul
AU - Wong, Lai Kuan
AU - Aydin, Tarkan
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Monocular omnidirectional depth estimation is receiving considerable research attention due to its broad applications in sensing 360° surroundings. Existing approaches in this field suffer from limitations in recovering small object details and data lost during ground-truth depth map acquisition. In this paper, a novel monocular omnidirectional depth estimation model, namely HiMODE, is proposed based on a hybrid CNN+Transformer (encoder-decoder) architecture whose modules are efficiently designed to mitigate distortion and computational cost without performance degradation. Firstly, we design a feature pyramid network based on the HNet block to extract high-resolution features near the edges. The performance is further improved by self- and cross-attention layers and spatial/temporal patches in the Transformer encoder and decoder, respectively. In addition, a spatial residual block is employed to reduce the number of parameters. By jointly passing the deep features extracted from an input image at each backbone block, along with the raw depth maps predicted by the Transformer encoder-decoder, through a context adjustment layer, our model can produce depth maps with better visual quality than the ground truth. Comprehensive ablation studies demonstrate the significance of each individual module. Extensive experiments conducted on three datasets, Stanford3D, Matterport3D, and SunCG, demonstrate that HiMODE achieves state-of-the-art performance for 360° monocular depth estimation. The complete project code and supplementary materials are available at https://github.com/himode5008/HiMODE.
AB - Monocular omnidirectional depth estimation is receiving considerable research attention due to its broad applications in sensing 360° surroundings. Existing approaches in this field suffer from limitations in recovering small object details and data lost during ground-truth depth map acquisition. In this paper, a novel monocular omnidirectional depth estimation model, namely HiMODE, is proposed based on a hybrid CNN+Transformer (encoder-decoder) architecture whose modules are efficiently designed to mitigate distortion and computational cost without performance degradation. Firstly, we design a feature pyramid network based on the HNet block to extract high-resolution features near the edges. The performance is further improved by self- and cross-attention layers and spatial/temporal patches in the Transformer encoder and decoder, respectively. In addition, a spatial residual block is employed to reduce the number of parameters. By jointly passing the deep features extracted from an input image at each backbone block, along with the raw depth maps predicted by the Transformer encoder-decoder, through a context adjustment layer, our model can produce depth maps with better visual quality than the ground truth. Comprehensive ablation studies demonstrate the significance of each individual module. Extensive experiments conducted on three datasets, Stanford3D, Matterport3D, and SunCG, demonstrate that HiMODE achieves state-of-the-art performance for 360° monocular depth estimation. The complete project code and supplementary materials are available at https://github.com/himode5008/HiMODE.
UR - http://www.scopus.com/inward/record.url?scp=85137787798&partnerID=8YFLogxK
U2 - 10.1109/CVPRW56347.2022.00569
DO - 10.1109/CVPRW56347.2022.00569
M3 - Conference contribution
AN - SCOPUS:85137787798
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 5208
EP - 5217
BT - Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022
PB - IEEE Computer Society
Y2 - 19 June 2022 through 20 June 2022
ER -