Multi-Label Deep Visual-Semantic Embedding with Visual Transformer

dc.contributor: 葉梅珍 (zh_TW)
dc.contributor: Yeh, Mei-Chen (en_US)
dc.contributor.author: 來毓庭 (zh_TW)
dc.contributor.author: Lai, Yu-Ting (en_US)
dc.date.accessioned: 2022-06-08T02:43:30Z
dc.date.available: 9999-12-31
dc.date.available: 2022-06-08T02:43:30Z
dc.date.issued: 2021
dc.description.abstract (zh_TW, translated): Multi-label image classification is a challenging task: the goal is to simultaneously locate objects of different sizes and assign each one the correct label. However, the common practice of extracting features from the whole image can dilute the information of smaller objects or turn it into noise, making recognition difficult. Prior work has shown that attention mechanisms and label relations can respectively strengthen feature extraction and capture label co-occurrence, yielding more robust information for multi-label classification. In this work, we use a Transformer architecture to attend visual region features to a global feature while jointly modeling co-occurrence among labels; the resulting weighted features are then used to generate a dynamic semantic classifier, which predicts labels within the semantic space. Experiments show that our model achieves strong performance.
dc.description.abstract (en_US): Multi-label classification is a challenging task, since we must identify many kinds of objects at different scales. Using only the global features of an image may discard information about small objects; moreover, many studies have shown that an attention mechanism improves feature extraction and that label relations reveal label co-occurrence, both of which benefit multi-label classification. In this work, we extract attended features from an image with a Transformer while simultaneously considering label co-occurrence. We then use the attended features to generate a classifier applied in the semantic space to predict the labels. Experiments validate the proposed method.
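The abstract describes the pipeline only at a high level. As a rough illustration, the following PyTorch sketch shows one plausible wiring of the components it names (Transformer-attended region features, a semantic label space built from label embeddings, and a dynamically generated classifier). Every class name, dimension, and design detail below is an assumption for illustration, not the thesis's actual implementation.

# Illustrative sketch only; module names, dimensions, and wiring are
# assumptions, not the thesis's actual code.
import torch
import torch.nn as nn

class VisualSemanticTransformer(nn.Module):
    def __init__(self, num_labels, region_dim=2048, embed_dim=300, depth=2):
        super().__init__()
        # Project CNN region features into the Transformer width.
        self.proj = nn.Linear(region_dim, embed_dim)
        # Learnable global token that attends over all region features.
        self.global_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Label embeddings define the semantic space (they could be
        # initialized from pretrained word vectors); label co-occurrence
        # could be injected by refining them with a further attention layer.
        self.label_embed = nn.Parameter(torch.randn(num_labels, embed_dim))
        # Generates a dynamic classifier weight from the attended feature.
        self.classifier_gen = nn.Linear(embed_dim, embed_dim)

    def forward(self, regions):                  # regions: (B, R, region_dim)
        x = self.proj(regions)
        g = self.global_token.expand(x.size(0), -1, -1)
        x = self.encoder(torch.cat([g, x], dim=1))
        attended = x[:, 0]                       # attended global feature
        w = self.classifier_gen(attended)        # dynamic classifier, (B, D)
        # Score each label by similarity in the semantic space.
        return w @ self.label_embed.t()          # logits: (B, num_labels)

# Usage sketch: 36 region features per image, 80 candidate labels.
# logits = VisualSemanticTransformer(num_labels=80)(torch.randn(2, 36, 2048))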
dc.description.sponsorship: 資訊工程學系 (Department of Computer Science and Information Engineering) (zh_TW)
dc.identifier: 60847029S-40527
dc.identifier.uri: https://etds.lib.ntnu.edu.tw/thesis/detail/7bb0f9321ecc32df3057b4e5f01722d4/
dc.identifier.uri: http://rportal.lib.ntnu.edu.tw/handle/20.500.12235/117318
dc.language: 中文 (Chinese)
dc.subject: 多標籤分類 (multi-label classification) (zh_TW)
dc.subject: 視覺語義嵌入模型 (visual-semantic embedding model) (zh_TW)
dc.subject: 關注機制 (attention mechanism) (zh_TW)
dc.subject: multi-label classification (en_US)
dc.subject: visual-semantic embedding (en_US)
dc.subject: Transformer (en_US)
dc.title: 利用視覺Transformer之多標籤深度視覺語義嵌入模型 (zh_TW)
dc.title: Multi-Label Deep Visual-Semantic Embedding with Visual Transformer (en_US)
dc.type: 學術論文 (academic thesis)
