IBM has announced Watson-powered cognitive services for its Cloud Video technology.
The service is intended to unlock data-rich insights from video content and audiences in order to deliver differentiated, personalized viewing experiences to consumers.
Over 80 percent of the world's data is unstructured and difficult to process.
Hence, even though digital video offers a vast amount of content, much of it remains untapped and unanalyzed.
Cognitive technology is the next step for mining and analyzing such complex data.
This will help companies compile data on viewership and the audience reaction related to it.
The services will be accessible through the IBM Cloud.
Main features include:
Live Event Analysis: Merges Watson APIs with IBM Cloud Video streaming solutions to track near-real-time audience reaction to live events based on social media feeds.
Video Scene Detection: Will automatically segment videos into meaningful scenes for efficient delivery of targeted content.
Audience Insights: Will integrate IBM Cloud Video solutions with the IBM Media Insights Platform, which uses Watson APIs to recognize audience preferences, including social media reactions.
These services are among the latest examples of IBM applying Watson to its Cloud Video platform since the formation of its Cloud Video unit in January 2016. The IBM Cloud Video unit brings together innovations from IBM’s R&D labs with the cloud video platform capabilities of Clearleap and Ustream.
IBM will combine the Watson Speech to Text and AlchemyLanguage APIs with IBM Cloud Video technology to recognize consumer feedback patterns during an event.
The service will process the natural language in the streaming video alongside analysis of social media feeds to sum up audience sentiment for a particular live event.
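The idea of summing up audience sentiment from a social feed can be illustrated with a minimal sketch. The lexicon, post data, and function names below are hypothetical stand-ins: a real deployment would use trained models such as Watson's language APIs rather than keyword matching.

```python
from collections import Counter

# Toy sentiment lexicon -- a stand-in for a trained sentiment model.
POSITIVE = {"great", "amazing", "love", "brilliant", "exciting"}
NEGATIVE = {"boring", "bad", "awful", "hate", "disappointing"}

def score_post(text):
    """Return +1, -1, or 0 for a single social media post."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    return (pos > neg) - (neg > pos)

def summarize_sentiment(posts):
    """Aggregate per-post scores into an overall audience sentiment."""
    tally = Counter(score_post(p) for p in posts)
    net = tally[1] - tally[-1]
    total = len(posts)
    return {"positive": tally[1], "negative": tally[-1],
            "neutral": tally[0],
            "net_ratio": net / total if total else 0.0}

# Hypothetical feed captured during a live event.
feed = [
    "This keynote is amazing, love the demo!",
    "Honestly kind of boring so far.",
    "Stream quality is fine.",
]
print(summarize_sentiment(feed))
# {'positive': 1, 'negative': 1, 'neutral': 1, 'net_ratio': 0.0}
```

Run continuously over a windowed feed, a summary like this is what lets sentiment be reported while the event is still in progress.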
The service is currently in a demonstration phase with clients. It claims to give companies a chance to gauge and understand audience reaction before a speaker has even left the stage.
Also, IBM launched a new service intended to enhance understanding of video content.
Though segmenting videos based on simple visual cues is possible with current technologies, subtler shifts cannot be classified this way.
The pilot project from IBM differs by deploying experimental cognitive capabilities, technology that interprets semantics and patterns in language and images, to identify higher-level concepts related to an event. Hence, the service can automatically segment videos into meaningful portions.
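For contrast, the simple visual-cue baseline the article mentions can be sketched in a few lines: mark a scene boundary wherever consecutive frames differ sharply. The frames below are synthetic brightness histograms, and the threshold is an illustrative assumption, not IBM's method.

```python
# Simple visual-cue segmentation: flag a cut when the difference
# between consecutive frame signatures exceeds a threshold.

def frame_distance(h1, h2):
    """L1 distance between two normalized frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_scene_cuts(frames, threshold=0.5):
    """Return frame indices where a hard visual cut likely occurs."""
    return [i for i in range(1, len(frames))
            if frame_distance(frames[i - 1], frames[i]) > threshold]

# Synthetic footage: a dark scene, then a hard cut to a bright scene.
dark = [0.7, 0.2, 0.1]
bright = [0.1, 0.2, 0.7]
frames = [dark, dark, dark, bright, bright]
print(detect_scene_cuts(frames))  # [3]
```

A detector like this catches hard cuts but, as the article notes, misses subtler semantic shifts, such as a speaker changing topic within the same camera shot, which is the gap the cognitive approach targets.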
The release adds that a leading content provider sees the service as a potential way to improve categorization of videos, indexing of specific chapters and searches for relevant content.
The services may evolve into a metadata phase that enables highly specific content pairings for viewers down to the segment level, increasing engagement and time spent.
Also, the cognitive technologies of IBM and its Cloud Video platform will be combined to identify audience preferences and sentiment.
The IBM Media Insights Platform will be added to IBM Cloud Video's existing Catalog, Subscriber Manager and Logistics Manager products.
This will detail consumer viewing habits, focusing on shows or networks watched, devices used, and audience specific interests.
The service will deploy the Media Insights Platform to analyze viewing behaviors and social media streams, identify complex patterns, improve content pairings, and find new viewers interested in existing content.
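One basic pattern behind such content pairing is co-viewership: shows frequently watched by the same people are candidates to pair. The viewing logs and show names below are invented for illustration; a simple co-occurrence count stands in for the far richer analysis the Media Insights Platform would perform.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical viewing logs: viewer -> set of shows watched.
logs = {
    "viewer1": {"NewsHour", "TechTalk"},
    "viewer2": {"NewsHour", "TechTalk", "Cooking"},
    "viewer3": {"Cooking", "Gardening"},
}

def co_occurrence(logs):
    """Count how often each pair of shows shares a viewer."""
    counts = defaultdict(int)
    for shows in logs.values():
        for a, b in combinations(sorted(shows), 2):
            counts[(a, b)] += 1
    return counts

def recommend(show, logs):
    """Shows ranked by how often they are watched alongside `show`."""
    counts = co_occurrence(logs)
    paired = {(a if b == show else b): n
              for (a, b), n in counts.items() if show in (a, b)}
    return sorted(paired, key=paired.get, reverse=True)

print(recommend("NewsHour", logs))  # ['TechTalk', 'Cooking']
```

The same co-occurrence table also answers the inverse question, which viewers to target for an existing show, by looking up who watches its most frequent partners.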
The Media Insights Platform uses Watson APIs such as Speech to Text, AlchemyLanguage, Tone Analyzer and Personality Insights.
IBM reported a 3 percent revenue decline to $20.2 billion in 2Q16, while cloud revenue grew 30 percent to $3.4 billion.
IBM also revealed during 2Q16 that it will focus on enhancing its traditional IT solutions to support strategies based on analytics, Watson and hybrid cloud.