There has been fast growth of multimedia sharing and annotation applications on the Web, which generate a great amount of annotations for multimedia resources. However, indexing these annotations to support searching for media fragments, instead of whole multimedia resources, is still not satisfactory. Many people may have had the experience of going through a very long video just to find the one or two minutes of useful fragments.
Linked data describes a set of methods for publishing structured data in a machine-readable format using semantic Web technologies. So the basic idea of linked data is to encourage people to put their data online in a generalised format, so that everybody, including machine agents, can take a look at it.
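To make this concrete, here is a minimal sketch using Python and the rdflib library; the video URI, the person URI and the title are invented for illustration, and schema.org plus Dublin Core are just one possible vocabulary choice. It shows a video described as RDF triples and serialised in a format any machine agent can consume:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Hypothetical video resource and a schema.org-style vocabulary
SCHEMA = Namespace("http://schema.org/")
video = URIRef("http://example.org/videos/42")

g = Graph()
g.bind("schema", SCHEMA)
g.bind("dcterms", DCTERMS)

# Describe the whole video with a few machine-readable statements
g.add((video, RDF.type, SCHEMA.VideoObject))
g.add((video, DCTERMS.title, Literal("Linked Data and Media Fragments")))
g.add((video, DCTERMS.creator, URIRef("http://example.org/people/alice")))

# Serialise as Turtle so both people and machine agents can read it
# (returns a string in rdflib 6+)
print(g.serialize(format="turtle"))
```

Once the data is out there in this generalised form, anyone can link their own statements to the same URIs.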
As I said, current video search results are not satisfactory, because many annotations still apply at the level of the WHOLE multimedia resource. In most applications, descriptions, tags and comments only annotate the whole multimedia resource. In addition, they are not connected with media fragments, and there is no efficient mechanism, apart from traditional search engines, to interlink media fragments and annotations across different repositories. That is why we need linked data to break down these barriers.
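As a sketch of what that could look like (again the URIs and the annotation text are made up, and the W3C Web Annotation vocabulary is only one possible choice), an annotation can target just a fragment of the video by using a Media Fragments URI, here the temporal fragment #t=60,120 for seconds 60 to 120, so a search could surface those two useful minutes instead of the whole file:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# W3C Web Annotation (Open Annotation) vocabulary
OA = Namespace("http://www.w3.org/ns/oa#")

# Media Fragments URI: the #t=60,120 suffix identifies seconds 60-120 of the video
fragment = URIRef("http://example.org/videos/42#t=60,120")
annotation = URIRef("http://example.org/annotations/1")

g = Graph()
g.bind("oa", OA)

g.add((annotation, RDF.type, OA.Annotation))
g.add((annotation, OA.hasTarget, fragment))  # targets the fragment, not the whole video
g.add((annotation, OA.hasBody, Literal("Goal scored by the home team")))

print(g.serialize(format="turtle"))
```

Because both the fragment URI and the annotation are plain linked data, another repository can point at exactly the same fragment, which is the interlinking that whole-resource tags cannot give us.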