Query-aware sparse coding for web multi-video summarization

Information Sciences 478 (2019) 152–166

Contents lists available at ScienceDirect
Information Sciences
journal homepage: www.elsevier.com/locate/ins

Query-aware sparse coding for web multi-video summarization

Zhong Ji a, Yaru Ma a, Yanwei Pang a,∗, Xuelong Li b

a School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
b Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China

Article history: Received 28 March 2018; Revised 20 September 2018; Accepted 23 September 2018; Available online 8 November 2018

Keywords: Video summarization; Sparse coding; Query-aware; Multi-video

Abstract: Given the explosive growth of online videos, it is becoming increasingly important to relieve the tedious work of browsing and managing the video content of interest. Video summarization aims to provide such a technique by transforming one or multiple videos into a compact one. However, conventional multi-video summarization methods often fail to produce satisfying results because they ignore the user's search intent. To this end, this paper proposes a novel query-aware approach that formulates multi-video summarization in a sparse coding framework, where the web images retrieved for a query are taken as important preference information revealing the query intent. To present the summarization in a user-friendly way, this paper also develops an event-keyframe presentation structure that groups keyframes by the specific events related to the query, using an unsupervised multi-graph fusion method. Moreover, we release a new public dataset named MVS1K, which contains about 1,000 videos from 10 queries, together with their video tags, manual annotations, and associated web images. Extensive experiments on the MVS1K and TVSum datasets demonstrate that our approaches produce competitive objective and subjective results.

© 2018 Published by Elsevier Inc.
1. Introduction

The rapid growth of video data has come to occupy the vast majority of network traffic. For example, YouTube, one of the primary online video sharing websites, served over 300 h of video uploads per minute as of April 2018. This massive amount of video has increased the demand for efficient ways to browse and manage desired video content [17,24,29,30,37]. However, given an event query, search engines usually return thousands or even more videos, which are quite noisy, redundant, and even irrelevant. This makes it difficult for users to grasp the focus of the whole event, forcing them to spend considerable time and effort exploring the main content of the returned videos. Multi-Video Summarization (MVS) is one of the effective ways to tackle this problem. It extracts the essential information of multi-video frames as keyframes to produce a condensed and informative version. That is to say, its goal is to generate a single summary that describes a large number of retrieved videos. In this way, it empowers users to quickly browse and comprehend a large amount of video content. One key challenge of MVS is to accurately capture the user's search intent, that is, to generate a query-aware summarization. Consequently, a surge of efforts has been carried out along this thread. These efforts can be divided into three categories: searching-based approaches [1,13,45], learning-based approaches [16,29,30,37], and fusion-based approaches [14,20,34].

∗ Corresponding author.
E-mail addresses: jizhong@tju.edu.cn (Z. Ji), myr2015@tju.edu.cn (Y. Ma), pyw@tju.edu.cn (Y. Pang), xuelong_li@opt.ac.cn (X. Li).
https://doi.org/10.1016/j.ins.2018.09.050
0020-0255/© 2018 Published by Elsevier Inc.

Fig. 1. The MVS pipeline of the proposed QUASC and MGF approaches.
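As a rough illustration of the searching-based category, candidate frames can simply be ranked by their visual similarity to the web images returned for the query. The sketch below assumes pre-extracted feature vectors; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def search_based_keyframes(frames, web_images, k=5):
    """Rank candidate frames (N, d) by their maximum cosine similarity
    to the searched web images (M, d); return the top-k frame indices."""
    f = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    w = web_images / np.linalg.norm(web_images, axis=1, keepdims=True)
    sim = f @ w.T                 # (N, M) cosine similarity matrix
    scores = sim.max(axis=1)      # each frame's best match to any image
    return np.argsort(-scores)[:k]
```

As the paper notes, such ranking alone tends to select redundant keyframes: near-duplicate frames from different videos all score highly.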
Specifically, searching-based approaches prefer to select as keyframes those video frames with high similarity to the searched web images [1,13,45]. The idea behind this is that the web images returned by search engines generally reflect the search intent for a specific query, so the generated MVS is query-aware. However, this type of approach tends to produce redundant keyframes in a summarization, since some frames in multiple videos always have high mutual similarity. Learning-based approaches select keyframes by building a learning model [16,29,30,37]. For example, Besiris et al. [2] apply a multiple instance learning model to localize tags into video shots and select the query-aware keyframes in accordance with the tags. It achieves satisfactory performance on a query-video dataset. However, there is an obstacle to scaling such N-way discrete classifiers beyond a limited number of discrete query categories [20]. Recently, there has been considerable interest in fusing the ideas of the above two types of approaches to overcome their respective drawbacks. Some pioneering fusion-based approaches formulate the MVS problem as a graph model [14], a concept learning model [34], and a multi-task learning model [20], respectively. On the other hand, the sparse coding technique is effective and widely used in Single Video Summarization (SVS) [6,21]. It formulates the keyframe selection problem as a coefficient selection one, which guarantees the general properties of SVS, such as conciseness and representativeness. However, directly applying sparse coding to MVS is inappropriate, because multiple videos contain plenty of content that is irrelevant or only weakly relevant to the query; the summarization would then contain noisy or unimportant keyframes, which weakens conciseness and representativeness. A natural idea is to take advantage of the searched web images to emphasize the important content within the sparse coding framework.
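The coefficient-selection view of sparse-coding summarization can be made concrete with a small sketch: reconstruct all frames from the frames themselves, X ≈ XA, under an L2,1 penalty that zeroes out entire rows of A, then read each frame's importance off its row norm. This is a generic proximal-gradient toy under assumed conventions, not any cited method's exact solver.

```python
import numpy as np

def l21_keyframe_scores(X, lam=0.05, iters=300):
    """X is (d, N): N candidate frames with d-dim features.
    Minimize 0.5 * ||X - X A||_F^2 + lam * ||A||_{2,1} by proximal
    gradient; the prox of the L2,1 norm is row-wise soft thresholding.
    Returns the row norms of A, one importance score per frame."""
    N = X.shape[1]
    A = np.zeros((N, N))
    step = 1.0 / np.linalg.norm(X.T @ X, 2)    # 1 / Lipschitz constant
    for _ in range(iters):
        G = X.T @ (X @ A - X)                  # gradient of the fit term
        B = A - step * G
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        A = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12)) * B
    return np.linalg.norm(A, axis=1)
```

Frames whose rows survive the thresholding act as dictionary atoms; selecting the highest-scoring ones yields a concise and representative single-video summary.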
However, this remains an unsolved and challenging problem. To deal with this challenge, we present a QUery-Aware Sparse Coding (QUASC) method that generates query-dependent MVS by fusing the ideas of the sparse coding technique and the searching-based MVS approach. Moreover, to present the summarization in a friendly manner, we also develop a novel Event-Keyframe Presentation (EKP) structure with a novel Multi-Graph Fusion (MGF) approach to present keyframes in groups of specific events related to the query. The MVS framework of the proposed QUASC and MGF is illustrated in Fig. 1. It is worthwhile to highlight several aspects of the proposed methods:
(1) A novel QUery-Aware Sparse Coding (QUASC) method for multi-video summarization is proposed. It formulates multi-video summarization in a sparse coding framework, where the web images searched by the query are taken as important preference information to reveal the query intent.
(2) A user-friendly summarization presentation structure is developed, which presents the keyframes in groups of specific events related to the query.
(3) A new public dataset named MVS1K is released.1 It contains about 1,000 videos from 10 queries, together with their video tags, manual annotations, and associated web images. To the best of our knowledge, it is the largest public multi-video summarization dataset. Both our data and code will be made available.
The rest of the paper is organized as follows. Previous work on video summarization and sparse coding-based video summarization methods is discussed in the following section. The proposed QUASC method is introduced in Section 3. Section 4 describes the proposed keyframe presentation method in detail, followed by a description of the MVS1K dataset in Section 5. Section 6 concludes the paper.

2. Related work
2.1. Video summarization

Video summarization has received much attention in recent years due to the urgent demand to digest a long video or a considerable number of short videos for users' efficient browsing and understanding. Although great progress has been made, creating a relevant and compelling summary of many videos of arbitrary length with a small number of keyframes or clips is still a challenging task. Generally, a good summarization should satisfy three properties: (1) conciseness, (2) representativeness, and (3) informativeness. In particular, conciseness, also called minimum information redundancy, means that there should be little duplicate or similar content in a video summarization. It guarantees that the video summary is not only easy to browse but also has reduced storage requirements. Representativeness, also known as maximum information coverage, means that the summarization should represent the video content as fully as possible, which is conducive to an overall understanding of the video. Informativeness is the criterion of important-information preference: the most important and relevant information is preferred in the summarization. According to the number of videos to be summarized, there are Single-Video Summarization (SVS) and Multi-Video Summarization (MVS). Although they share similar goals, MVS differs from SVS in the following aspects. (1) MVS should be query-aware to accurately reflect the user's search intent, whereas SVS does not need to consider the user's intent. (2) The multiple videos have mutual influence on each other since their contents concern the same query. (3) The final summarization presentation in MVS is harder than that in SVS, since there is no chronological order across multiple videos. SVS has a relatively long research history, and a detailed review can be found in [28,36]. In the following, we introduce the recent work on MVS in detail.
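The conciseness and representativeness criteria above can each be made concrete with a simple similarity-based measure over frame features. These particular formulas are illustrative proxies, not metrics defined in the paper.

```python
import numpy as np

def redundancy(summary):
    """Mean pairwise cosine similarity among the keyframes (S, d).
    Lower is better: a concise summary contains few near-duplicates."""
    n = len(summary)
    if n < 2:
        return 0.0
    s = summary / np.linalg.norm(summary, axis=1, keepdims=True)
    sim = s @ s.T
    return float((sim.sum() - np.trace(sim)) / (n * (n - 1)))

def coverage(frames, summary):
    """Mean, over all frames (N, d), of the best cosine similarity to
    any keyframe. Higher is better: a representative summary covers
    the full video content."""
    f = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    s = summary / np.linalg.norm(summary, axis=1, keepdims=True)
    return float((f @ s.T).max(axis=1).mean())
```

Informativeness, by contrast, depends on the query, which is exactly what the web-image preference information in QUASC is designed to capture.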
Recently, many studies have turned their attention to MVS. For example, Lu and Grauman [22] propose a saliency-based approach that trains a linear regression model to predict an importance score for each frame in egocentric videos. Motivated by the observation that important visual concepts tend to appear repeatedly across videos of the same topic, Chu et al. [4] propose a Maximal Biclique Finding (MBF) algorithm that is optimized to find sparsely co-occurring patterns across videos collected using a topic keyword. Nie et al. [29] propose a novel MVS method for handheld videos. They first design a weakly supervised video saliency model to select frames with semantically important regions as keyframes, and then develop a probabilistic model to fit the keyframes into the MVS by jointly optimizing multiple attributes of aesthetics, coherence, and stability. Besides visual information, Li and Merialdo [18] also exploit acoustic information in the videos to assist the construction of the MVS, borrowing the idea of Maximal Marginal Relevance from the text summarization domain. However, these approaches neglect the user's search intent, which may not be adequate to satisfy their requirements. Consequently, several studies pursue query-associated methods to cater to the search intent. One promising trend is fusing the ideas of searching-based and learning-based approaches. For example, Kim et al. [14] address the problem of jointly summarizing large sets of Flickr images and YouTube videos, where the video summarization is achieved by diversity ranking on the similarity graphs between images and candidate video frames. The reconstruction of storyline graphs is formulated as the inference of sparse time-varying directed graphs from a set of photo streams with the assistance of videos.
Observing that images related to the query can serve as a proxy for important visual concepts of the main topic, the CAA method [34] uses title-based image search results to find the visually important keyframes as the video summarization. Specifically, it learns canonical visual concepts shared between videos and images by finding a joint-factorial representation of the two data sets. Motivated by the idea of zero-shot learning [8,12], Liu et al. [20] adopt large-scale click-through video and image data to learn a visual-semantic embedding model that bridges the visual information and the textual query. Thus, it can predict the relevance between unseen textual and visual information, so that only frames related to the query are chosen as keyframes.

1 http://tinyurl.com/jizhong-dataset.

Fig. 2. The diagram of the QUASC approach.

2.2. Sparse coding approaches in video summarization

Sparse coding has been widely used in data representation [15,19,35,41]. Several methods formulate single video summarization as a sparse coding problem; that is, they build a learning model based on sparse coding to obtain the video summarization [6,21,23,25,26]. It satisfies the properties of representativeness and conciseness. For example, Cong et al. [6] propose a summarization method for consumer videos that uses an L2,1 norm to regularize the coefficient matrix. Liu et al. [21] adopt a method similar to [6] to generate summarizations for user-generated videos. To overcome the weaknesses of the L1 norm and L2,1 norm, Mei et al. use the L2,0 norm [25] and the L0 norm [26] in the sparse coding framework to generate video summarizations, respectively. All the above sparse coding-based methods focus on single video summarization, in which the keyframes are taken as the base vectors of the dictionary.
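The row-sparsity effect of the L2,1 norm discussed above is easy to see numerically: among matrices of equal Frobenius norm, the L2,1 norm is smallest when the energy is concentrated in few rows, so penalizing it drives the coefficient rows of unselected frames to exactly zero. A minimal illustration:

```python
import numpy as np

def l21_norm(A):
    """L2,1 norm: the sum of the L2 norms of the rows of A."""
    return float(np.linalg.norm(A, axis=1).sum())

# Both matrices have Frobenius norm 5, but the row-sparse one
# (all energy in a single row) has the smaller L2,1 norm.
row_sparse = np.array([[3., 4.], [0., 0.]])   # L2,1 = 5
spread_out = np.array([[3., 0.], [0., 4.]])   # L2,1 = 7
```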
In addition, they give little consideration to the criterion of informativeness, i.e., that the most important and relevant information should be preferentially chosen for the summarization. Recently, sparse coding has also been used for MVS. Panda et al. [30] propose a new approach for summarizing multiple tour videos by adding an interestingness regularizer and a diversity regularizer to a sparse coding framework. Different from them, QUASC focuses on query-based multi-video summarization and introduces web images searched from the Internet into the sparse coding model to put more emphasis on the important content, so that the criterion of informativeness is guaranteed. Therefore, in terms of both data source (single video vs. multiple videos) and learning model, QUASC is quite different from existing sparse coding-based approaches.

3. The proposed QUASC method

This section presents the proposed QUASC method, in which both the candidate keyframes and the searched web images are employed to reconstruct the semantic topic space in a sparse coding framework. In this way, each candidate keyframe is assigned an importance score denoting its contribution to the semantic topic space. The summarization can then be generated by selecting the candidate keyframes with higher importance scores. The diagram is depicted in Fig. 2. Let X = [x_1, . . . , x_i, . . . , x_N] ∈ R^d
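One toy reading of this idea, under assumed conventions (the exact QUASC objective is defined later in the paper): use the candidate frames as the dictionary but reconstruct the searched web images as well as the frames, so a frame that also helps explain the query's web images earns a larger importance score.

```python
import numpy as np

def query_aware_scores(frames, web_images, lam=0.05, iters=300):
    """frames: (N, d) candidate keyframes; web_images: (M, d) images
    searched for the query. Reconstruct both from the frames with a
    row-sparse (L2,1-regularized) coefficient matrix A; the row norms
    of A serve as query-aware importance scores. Illustrative sketch."""
    X = frames.T                                  # (d, N) dictionary
    Y = np.hstack([frames.T, web_images.T])       # (d, N + M) targets
    N = X.shape[1]
    A = np.zeros((N, Y.shape[1]))
    step = 1.0 / np.linalg.norm(X.T @ X, 2)
    for _ in range(iters):
        G = X.T @ (X @ A - Y)                     # gradient of the fit term
        B = A - step * G
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        A = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12)) * B
    return np.linalg.norm(A, axis=1)
```

A frame aligned with the query's web images accumulates reconstruction weight across many target columns, so it outranks an equally representative but query-irrelevant frame.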
