Gracenote Enhances AI Content Discovery with New Protocol

Gracenote has launched the Video Model Context Protocol (MCP) Server, a new tool designed to enhance AI-driven content discovery for streaming platforms. The server connects LLMs to Gracenote’s real-time knowledge base, improving the accuracy and relevance of entertainment information.

As CTV platforms and streaming apps strive to enhance user experience in content search and discovery, many are exploring the integration of AI-powered features that utilize Large Language Models (LLMs) for conversational queries. However, LLMs come with their own set of challenges. Today, Nielsen’s metadata unit, Gracenote, has launched a new solution aimed at improving the accuracy and relevance of entertainment information, thereby enhancing LLM-generated responses. This innovation is designed to help streaming services deliver better results for complex or highly specific content searches.

The new offering, known as the Gracenote Video Model Context Protocol (MCP) Server, connects LLMs to Gracenote’s continuously updated knowledge base. This connection allows for real-time validation, correction, and enrichment of entertainment-related queries. With the Gracenote Video MCP Server, TV platforms can:

  • Answer specific queries, such as “Show me the episodes of Brooklyn Nine-Nine in which Jake references Die Hard.”
  • Recommend programming based on user preferences.
  • Drive tune-in based on various parameters, like “Where can I watch the Dodgers game tonight?”
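The queries above are natural-language requests that an LLM would translate into structured tool calls. MCP is built on JSON-RPC 2.0, so a minimal sketch of what such a request could look like on the wire follows; the tool name `search_programs` and its arguments are hypothetical illustrations, not a published Gracenote API:

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(message)

# Hypothetical query: episodes of Brooklyn Nine-Nine referencing Die Hard.
request = build_tool_call(
    1,
    "search_programs",  # hypothetical tool name
    {"series": "Brooklyn Nine-Nine", "dialogue_mention": "Die Hard"},
)
print(request)
```

The LLM never executes anything itself; it emits a request like this, and the host application forwards it to the MCP server and feeds the result back into the conversation.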

While the focus is on streaming services, this product is not limited to CTV platforms and streaming apps. Tyler Bell, Gracenote’s Senior Vice President of Product, shared with StreamTV Insider that the MCP Server was designed to cater to a diverse range of video providers interested in leveraging LLMs for content search and discovery. This includes MVPDs that manage their own technology stacks and consumer-facing TV applications, as well as middleware and recommendation engine providers.

Bell emphasized that these various customers face common challenges that the MCP Server can help resolve, such as:

  • Normalizing and harmonizing backend data.
  • Recommending and promoting content to users.
  • Providing intelligent support for voice and free-text queries.

In addition to the rise of AI and LLMs, there has been growing interest in AI agents: LLM-based applications that connect to external tools and logic. According to Bell, the MCP Server serves as a component of an AI agent that Gracenote customers can build to recommend shows, manipulate data, or promote content based on viewing histories. “An MCP Server customer can tailor the Model Context of the agent to fit their specific needs,” Bell explained. “This is more of a toolbox than a one-size-fits-all solution.”

The MCP Server can connect to any LLM chosen by a Gracenote customer for content search and discovery, including third-party LLMs like ChatGPT, DeepSeek, Claude, or Gemini, as well as LLMs operated locally on a customer’s infrastructure. By utilizing the MCP Server, TV platform LLMs can provide answers based on their own training data, supplemented by Gracenote’s information. These responses can then be validated, normalized, and enriched with additional context, such as Gracenote IDs.

Importantly, the MCP Server is not exclusive, allowing customers to integrate their own information alongside Gracenote’s data. This means the LLM can analyze and synthesize both sets of data. Platforms can benefit from the ability to search across a wide array of entertainment data while also restricting an AI agent to return information only from a specific catalog, using either Gracenote’s availability data or their own content catalog. “The end result is that our customers can create a service that searches the breadth of global entertainment data and filters by availability or entitlement,” Bell noted.
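The "filter by availability or entitlement" step Bell describes can be sketched as a post-search pass that keeps only titles the user can actually watch. The record shape and service names below are hypothetical stand-ins for whatever availability data a platform plugs in:

```python
# Hypothetical records an MCP search might return: id, title, and the
# streaming services where each title is currently available.
RESULTS = [
    {"id": "GN-0001", "title": "Brooklyn Nine-Nine", "services": {"streamA", "streamB"}},
    {"id": "GN-0002", "title": "Die Hard", "services": {"streamC"}},
]

def filter_by_entitlement(results: list[dict], entitled: set[str]) -> list[dict]:
    """Keep only titles available on at least one service the user can access."""
    return [r for r in results if r["services"] & entitled]

for r in filter_by_entitlement(RESULTS, {"streamA"}):
    print(r["title"])  # → Brooklyn Nine-Nine
```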

So, why is Gracenote data essential for LLM-based content discovery? Gracenote highlighted a significant issue with LLM-generated responses: they can sometimes “hallucinate,” producing answers that sound plausible but lack validation against real-world data. Bell explained, “The core issue with LLM hallucinations is their non-deterministic nature. They are trained on data but do not function as databases themselves.” He likened LLM-generated answers to a sophisticated form of autocomplete: they may resemble correct answers but are not necessarily factual. The MCP Server addresses this problem by supplying LLMs with factual information, enabling them to correct, validate, and enrich their responses, similar to how one might verify an answer against an encyclopedia.
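The verification step described above can be sketched as checking each candidate answer against a trusted set before returning it to the user. The in-memory set here is a toy stand-in for Gracenote’s knowledge base, and the titles are illustrative:

```python
# Toy stand-in for the authoritative knowledge base (titles verified to exist).
KNOWN_TITLES = {"Brooklyn Nine-Nine", "Die Hard", "The Good Place"}

def ground_answers(candidates: list[str]) -> tuple[list[str], list[str]]:
    """Split LLM-suggested titles into verified answers and likely hallucinations."""
    verified = [t for t in candidates if t in KNOWN_TITLES]
    rejected = [t for t in candidates if t not in KNOWN_TITLES]
    return verified, rejected

verified, rejected = ground_answers(["Die Hard", "Die Hardest: Brooklyn Edition"])
print(verified)  # → ['Die Hard']
print(rejected)  # → ['Die Hardest: Brooklyn Edition']
```

The key design point is that the LLM proposes and the knowledge base disposes: plausible-sounding titles that cannot be matched against verified data are dropped or corrected rather than shown to the viewer.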

Additionally, the MCP Server helps overcome another limitation of LLMs: their lack of access to current or real-time data, which can lead to outdated responses. The MCP Server is continuously updated with the latest human-verified Gracenote entertainment data. “This allows LLMs to access recent information, such as availability schedules and new releases that post-date their training data,” Bell explained.

When it comes to how an AI agent would respond to a query using the Gracenote MCP, Bell clarified that the agent would either use its own training data to find the right answer and then ground that response in Gracenote’s knowledge base, or it could search Gracenote first and then leverage its own data. An AI agent might also combine both approaches and harmonize the answers. “One of the great advantages of MCP and LLMs is that they offer multiple pathways to achieve outcomes that have traditionally been challenging,” Bell remarked.

It’s important to note that Gracenote’s MCP Server does not train LLMs on video and entertainment data. Bell explained that LLM training affects the model’s internal structure, while the MCP Server is designed to inform and enrich the model’s output without directly altering its neurons. “Specifically, the MCP tells the LLM what tools it has access to and how to utilize them,” he added.
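In MCP terms, “telling the LLM what tools it has access to” happens through the protocol’s `tools/list` response, where each tool advertises a name, a description, and a JSON Schema for its inputs. A sketch of what such a result could contain; the shape follows the MCP specification, but the tool itself is hypothetical:

```python
import json

# Result shape per the MCP tools/list method; the tool is a hypothetical example.
tools_list_result = {
    "tools": [
        {
            "name": "search_programs",
            "description": "Search verified entertainment metadata by title, cast, or episode details.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "series": {"type": "string"},
                    "dialogue_mention": {"type": "string"},
                },
                "required": ["series"],
            },
        }
    ]
}

print(json.dumps(tools_list_result, indent=2))
```

Because the tool catalog is delivered at runtime rather than baked in at training time, the model’s weights are untouched; it simply learns, within the conversation, which calls it may make.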

Another significant benefit is that it simplifies access to Gracenote data. “Customers can create an agent using code or a third-party dashboard and connect the LLM to the MCP Server with just a few clicks,” Bell stated. “Onboarding takes only a few minutes, while integration complexity will vary based on the customer’s infrastructure.”

The launch of this product comes just ahead of the IBC show in Amsterdam this month, at a time when consumers are becoming increasingly comfortable interacting with AI tools through voice prompts. “As the quality of results improves, customer satisfaction increases,” Bell noted, which is crucial for all streamers operating in a competitive landscape known for its churn issues. “While we may not see consumers engaging in conversational dialogues with their TVs anytime soon, LLMs excel at utilizing their extensive training data to provide satisfying answers to increasingly complex user queries,” Bell added.

Disclaimer: This article has been auto-generated from a syndicated RSS feed and has not been edited by Vitrina staff. It is provided solely for informational purposes on a non-commercial basis.
