Global, Local and Intrinsic based Dense Embedding NETwork

for Multi-category Attributes Prediction

CVPR 2022


Attaching attributes (such as color, shape, state, action) to object categories is an important computer vision problem. Attribute prediction has seen exciting recent progress and is often formulated as a multi-label classification problem. Yet significant challenges remain in:
1) predicting a large number of attributes over multiple object categories,
2) modeling category-dependence of attributes,
3) methodically capturing both global and local scene context, and
4) robustly predicting attributes of objects with low pixel-count.
To address these issues, we propose a novel multi-category attribute prediction deep architecture named GlideNet, which contains three distinct feature extractors. A global feature extractor recognizes what objects are present in a scene, whereas a local one focuses on the area surrounding the object of interest. Meanwhile, an intrinsic feature extractor uses an extension of standard convolution, dubbed Informed Convolution, that leverages the object's binary mask to retrieve features of objects with low pixel counts. GlideNet then combines the dense embeddings using gating mechanisms driven by the binary masks and its self-learned category embedding. Collectively, the Global-Local-Intrinsic blocks comprehend the scene's global context while attending to the characteristics of the local object of interest. Via the category embedding, the architecture adapts the feature composition to the object's category. Finally, an interpreter predicts the attributes from the combined features; the length of its output is determined by the category, which removes irrelevant attributes. GlideNet achieves compelling results on two recent and challenging datasets for large-scale attribute prediction -- VAW and CAR -- obtaining, for instance, more than a 5% gain over the state of the art in the mean recall (mR) metric. GlideNet's advantages are especially apparent when predicting attributes of objects with low pixel counts, as well as attributes that demand global context understanding. Finally, we show that GlideNet excels in training-starved real-world scenarios.
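To give a feel for the mask-aware convolution idea, the following is a minimal NumPy sketch of a convolution restricted to a binary object mask, with the response renormalized by the fraction of valid pixels under the kernel. This is an illustrative simplification, not the paper's exact Informed Convolution; the function name and renormalization scheme are assumptions for exposition.

```python
import numpy as np

def informed_conv2d(feat, mask, kernel):
    """Mask-aware 2D convolution sketch (illustrative, not GlideNet's exact
    operator). Only pixels inside the binary object mask contribute, and the
    response is rescaled by the fraction of valid pixels under the kernel,
    so objects with a low pixel count are not diluted by background."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    f = np.pad(feat, ((ph, ph), (pw, pw)))  # zero-pad to keep output size
    m = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros(feat.shape, dtype=float)
    for i in range(feat.shape[0]):
        for j in range(feat.shape[1]):
            wf = f[i:i + kh, j:j + kw]
            wm = m[i:i + kh, j:j + kw]
            valid = wm.sum()
            if valid > 0:
                # weight only masked pixels, rescale by kernel-size / valid-count
                out[i, j] = (wf * wm * kernel).sum() * (kh * kw / valid)
            # else: no object pixels under the kernel -> response stays 0
    return out
```

With a constant feature value inside the mask and arbitrary background values, the output equals that constant wherever the kernel overlaps the mask, showing that background pixels are fully ignored.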


A supplementary document containing architecture-specific details can be found here.

Code and building blocks of GlideNet can be found here.

CAR Dataset can be accessed through the API here.

GlideNet Structure
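The fusion step described in the abstract can be sketched as a category-conditioned gating of the three dense embeddings. The snippet below is a hypothetical simplification under assumed names (`W_g`, `W_l`, `W_i` are illustrative per-branch projection weights); GlideNet's actual gating also involves the binary masks and differs in detail.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_global, f_local, f_intrinsic, cat_embed, W_g, W_l, W_i):
    """Hypothetical sketch of category-conditioned fusion: the self-learned
    category embedding produces a per-dimension gate in (0, 1) for each
    branch, weighting the global, local, and intrinsic embeddings before
    they are combined. Weight names and the sum-combination are assumptions."""
    g_global = sigmoid(W_g @ cat_embed)   # gate for the global branch
    g_local = sigmoid(W_l @ cat_embed)    # gate for the local branch
    g_intr = sigmoid(W_i @ cat_embed)     # gate for the intrinsic branch
    return g_global * f_global + g_local * f_local + g_intr * f_intrinsic
```

Because each gate lies in (0, 1), the fused embedding is a soft, category-dependent mixture of the three branches rather than a fixed concatenation.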


Related Publications

  1. K. Metwaly, A. Kim, E. Branson, and V. Monga, “GlideNet: Global, Local and Intrinsic based Dense Embedding NETwork for Multi-category Attributes Prediction”, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. [arXiv]

  2. K. Metwaly, A. Kim, E. Branson, and V. Monga, “CAR - Cityscapes Attributes Recognition: A Multi-category Attributes Dataset for Autonomous Vehicles”, arXiv, 2022. [arXiv]

Selected References

  1. Khoi Pham, Kushal Kafle, Zhe Lin, Zhihong Ding, Scott Cohen, Quan Tran, and Abhinav Shrivastava, "Learning to predict visual attributes in the wild," in Proc. IEEE Conf. Comp. Vis. Patt. Recog., 2021, pp. 13018–13028.

  2. Nikolaos Sarafianos, Xiang Xu, and Ioannis A. Kakadiaris, "Deep imbalanced attribute classification using visual attention aggregation," in Proc. Europ. Conf. Comp. Vis., 2018, pp. 680–697.

  3. Huaizu Jiang, Ishan Misra, Marcus Rohrbach, Erik Learned-Miller, and Xinlei Chen, "In defense of grid features for visual question answering," in Proc. IEEE Conf. Comp. Vis. Patt. Recog., 2020, pp. 10267–10276.

  4. Thibaut Durand, Nazanin Mehrasa, and Greg Mori, "Learning a deep ConvNet for multi-label classification with partial labels," in Proc. IEEE Conf. Comp. Vis. Patt. Recog., 2019, pp. 647–657.


104 Electrical Engineering East,
University Park, PA 16802, USA

Lab Phone: