Sketch, ground, and refine: top-down dense video captioning
Date
2021
Authors
Deng, C.
Chen, S.
Chen, D.
He, Y.
Wu, Q.
Type
Conference paper
Citation
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 234-243
Statement of Responsibility
Chaorui Deng, Shizhe Chen, Da Chen, Yuan He, Qi Wu
Conference Name
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19-25 Jun 2021 (virtual)
Abstract
The dense video captioning task aims to detect and describe a sequence of events in a video for detailed and coherent storytelling. Previous works mainly adopt a "detect-then-describe" framework, which first detects event proposals in the video and then generates descriptions for the detected events. However, the definitions of events are diverse: an event could be as simple as a single action or as complex as a set of events, depending on the semantic context. Therefore, directly detecting events from video information alone is ill-defined and hurts the coherence and accuracy of the generated dense captions. In this work, we reverse the predominant "detect-then-describe" fashion, proposing a top-down way to first generate a paragraph from a global view and then ground each event description to a video segment for detailed refinement. It is formulated as a Sketch, Ground, and Refine (SGR) process. The sketch stage first generates a coarse-grained multi-sentence paragraph describing the whole video, where each sentence is treated as an event and gets localised in the grounding stage. In the refining stage, we improve captioning quality via refinement-enhanced training and dual-path cross attention on both the coarse-grained event captions and the aligned event segments. The updated event caption can further adjust its segment boundaries. Our SGR model outperforms state-of-the-art methods on the ActivityNet Captioning benchmark under both traditional and story-oriented dense caption evaluations. Code will be released at github.com/bearcatt/SGR.
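
For a concrete picture of the three stages, below is a minimal, hypothetical PyTorch sketch of the top-down flow the abstract describes. The module choices (a transformer decoder layer for sketching, a bilinear scorer with soft temporal attention for grounding, and two cross-attention paths for refining), as well as all names and shapes, are assumptions made for illustration only; they are not the authors' implementation, which is to be released at github.com/bearcatt/SGR.

```python
import torch
import torch.nn as nn

class SGRPipeline(nn.Module):
    """Hypothetical sketch of the Sketch-Ground-Refine flow in the abstract.
    All module choices, names, and shapes are illustrative assumptions,
    not the authors' released code (github.com/bearcatt/SGR)."""

    def __init__(self, d_model=512, vocab_size=10000, nhead=8):
        super().__init__()
        # Sketch: decode one coarse state per event sentence from the whole video.
        self.sketcher = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        # Ground: bilinear frame-vs-sentence relevance for soft localisation.
        self.grounder = nn.Bilinear(d_model, d_model, 1)
        # Refine: dual-path cross attention (captions <-> aligned segments).
        self.cap_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.seg_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.word_head = nn.Linear(d_model, vocab_size)  # illustrative output head

    def forward(self, frames, event_queries):
        # frames: (B, T, d) video features; event_queries: (B, E, d).
        B, T, d = frames.shape
        E = event_queries.shape[1]
        # 1) Sketch: coarse per-event sentence states from a global view.
        coarse = self.sketcher(event_queries, frames)                    # (B, E, d)
        # 2) Ground: score every frame against every sentence, then
        #    normalise over time to get a soft segment per event.
        f = frames.unsqueeze(1).expand(B, E, T, d).reshape(-1, d)
        c = coarse.unsqueeze(2).expand(B, E, T, d).reshape(-1, d)
        seg_weights = self.grounder(f, c).reshape(B, E, T).softmax(-1)   # (B, E, T)
        seg_feats = seg_weights @ frames                                 # (B, E, d)
        # 3) Refine: each path attends to the other, then captions are re-decoded.
        refined_cap, _ = self.cap_attn(coarse, seg_feats, seg_feats)
        refined_seg, _ = self.seg_attn(seg_feats, coarse, coarse)
        logits = self.word_head(refined_cap + refined_seg)               # (B, E, V)
        return logits, seg_weights

# Toy usage with random features (stand-ins for real video features).
model = SGRPipeline()
frames = torch.randn(2, 64, 512)     # 2 videos, 64 frames each
queries = torch.randn(2, 4, 512)     # 4 learned event queries per video
logits, segments = model(frames, queries)
print(logits.shape, segments.shape)  # (2, 4, 10000) and (2, 4, 64)
```

In the paper, the sketch stage emits full sentences and, as the abstract notes, the updated event captions can further adjust their segment boundaries; this sketch collapses each sentence to a single state and uses one refinement pass to keep the example self-contained.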
Rights
© 2021 IEEE.