
MT-ORL: Multi-Task Occlusion Relationship Learning


About this item

Full title

MT-ORL: Multi-Task Occlusion Relationship Learning

Publisher

Ithaca: Cornell University Library, arXiv.org

Journal title

arXiv.org, 2021-08

Language

English


More information

Scope and Contents

Contents

Retrieving the occlusion relationships among objects in a single image is challenging due to the sparsity of boundaries in the image. We observe two key issues in existing works: first, the lack of an architecture that can exploit the limited amount of coupling in the decoder stage between the two subtasks, namely occlusion boundary extraction and occlusion orientation prediction; and second, the improper representation of occlusion orientation. In this paper, we propose a novel architecture called the Occlusion-shared and Path-separated Network (OPNet), which solves the first issue by exploiting rich occlusion cues in shared high-level features and structured spatial information in task-specific low-level features. We then design a simple but effective orthogonal occlusion representation (OOR) to tackle the second issue. Our method surpasses the state-of-the-art methods by 6.1%/8.3% Boundary-AP and 6.5%/10% Orientation-AP on the standard PIOD/BSDS ownership datasets. Code is available at https://github.com/fengpanhe/MT-ORL....
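The abstract does not define the orthogonal occlusion representation (OOR) in detail. As an illustration only, the sketch below shows one common way to encode an orientation angle with two orthogonal components rather than regressing the raw angle, which avoids the wrap-around discontinuity at +/- pi; the function and variable names are hypothetical and are not taken from the MT-ORL code.

```python
import numpy as np

# Hypothetical sketch: encode an occlusion-orientation angle as its projections
# onto two orthogonal axes (cos, sin) instead of predicting the angle directly.
# This is a generic technique, not necessarily the exact OOR used in the paper.

def encode_orientation(theta):
    """Encode an angle (radians) as two orthogonal components."""
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

def decode_orientation(components):
    """Recover the angle from the two orthogonal components."""
    return np.arctan2(components[..., 1], components[..., 0])

# A raw-angle target jumps at the +/- pi wrap-around, so nearly identical
# orientations can look far apart to a regression loss; the two-component
# encoding stays continuous there.
theta_a, theta_b = np.pi - 0.01, -np.pi + 0.01          # nearly the same orientation
print(abs(theta_a - theta_b))                           # large raw-angle gap (~6.26)
print(np.linalg.norm(encode_orientation(theta_a)
                     - encode_orientation(theta_b)))    # small encoded gap (~0.02)
print(decode_orientation(encode_orientation(theta_a)))  # round-trips to ~3.13 rad
```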


Identifiers

Primary Identifiers

Record Identifier

TN_cdi_proquest_journals_2560959092

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2560959092

Other Identifiers

E-ISSN

2331-8422
