𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Using object and trajectory analysis to facilitate indexing and retrieval of video

✍ Scribed by Carlos Lopez; Yi-Ping Phoebe Chen


Book ID
104039169
Publisher
Elsevier Science
Year
2006
Tongue
English
Weight
206 KB
Volume
19
Category
Article
ISSN
0950-7051

No coin nor oath required. For personal study only.

✦ Synopsis


This paper aims to show that, by using low-level feature extraction together with motion and object identification and tracking methods, features can be extracted and indexed for efficient and effective retrieval of video, such as an awards ceremony video. Video scene/shot analysis and key frame extraction are used as a foundation to identify objects in video and to find spatial relationships within it. Combining low-level features such as colour, texture, and abstract object identification leads into higher-level real-object identification, tracking, and scene detection. The main focus is on using a video style that differs from the heavily used sports and news genres. Working with different video styles can open the door to methods that encompass all video types, rather than specialized methods for each specific style of video.
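The synopsis mentions shot analysis and key frame extraction as the foundation of the indexing pipeline. As a rough illustration only (not the authors' actual method), a common low-level approach is to detect hard cuts by comparing colour histograms of consecutive frames and then pick one key frame per shot; the threshold value and the middle-frame heuristic below are assumptions for the sketch:

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Normalized per-channel colour histogram of an H x W x 3 uint8 frame."""
    hist = np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def detect_shot_boundaries(frames, threshold=0.4):
    """Indices where the L1 histogram distance between consecutive
    frames exceeds the threshold (a likely hard cut)."""
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(frames))
            if np.abs(hists[i] - hists[i - 1]).sum() > threshold]

def key_frames(frames, boundaries):
    """Pick the middle frame of each shot as its key frame (one simple heuristic)."""
    starts = [0] + boundaries
    ends = boundaries + [len(frames)]
    return [(s + e - 1) // 2 for s, e in zip(starts, ends)]

# Synthetic clip: 6 dark frames followed by 6 bright frames (one hard cut).
dark = [np.full((8, 8, 3), 10, dtype=np.uint8) for _ in range(6)]
bright = [np.full((8, 8, 3), 240, dtype=np.uint8) for _ in range(6)]
frames = dark + bright

cuts = detect_shot_boundaries(frames)
print(cuts)                      # [6]
print(key_frames(frames, cuts))  # [2, 8]
```

The key frames selected this way can then feed the higher-level object identification and tracking stages the synopsis describes.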


πŸ“œ SIMILAR VOLUMES


Retrieval of landscape images and automa
✍ Masayuki Mukunoki; Michihiko Minoh; Katsuo Ikeda πŸ“‚ Article πŸ“… 1999 πŸ› John Wiley and Sons 🌐 English βš– 680 KB

This paper proposes a method of retrieving landscape images in which an index is first generated automatically and then, to enable retrieval even when the index includes errors, a pixel-based object labeling technique for the landscape images is employed and similar images are retrieved by using obj

A generic approach to semantic video ind
✍ Dae-Jin Kim; Hichem Frigui; Aleksey Fadeev πŸ“‚ Article πŸ“… 2008 πŸ› John Wiley and Sons 🌐 English βš– 688 KB

We present a novel method for fusing the results of multiple semantic video indexing algorithms that use different types of feature descriptors and different classification methods. This method, called Context-Dependent Fusion (CDF), is motivated by the fact that the relative performanc