Build or Buy: Navigating AI Adoption in Media
The current wave of interest in artificial intelligence (AI) has many media companies looking for ways to adopt the technology and build products for their users and subscribers. AI can help with some of the media industry’s biggest challenges, and there is no shortage of applications that media companies can take advantage of right now.
The big question is whether it’s better for these companies to quickly build new tools on top of the existing platforms that are generating buzz, or to pay to develop their own solutions that live behind their firewalls.
Building on existing platforms
ChatGPT and its parent company, OpenAI, have drawn more attention than any other generative AI tool this year. ChatGPT has captured the interest of the general public through its ease of use, quick replies, and ability to create new content of relatively high quality. It is so easy and effective that it has raised concerns about AI taking over work once done by people: student essays, public speeches, Hollywood screenwriting, and ad copy, among other things.
With a tool like this available, it’s no surprise that media companies would want to build their own solutions on top of the existing infrastructure. But while the availability and efficacy of ChatGPT are certainly appealing, they obscure the dangers for media companies of building on a publicly hosted, third-party AI service.
The problems with ChatGPT and similar platforms come down to ownership and security. The terms of these free platforms typically give the provider broad rights over the data, queries, and outputs, which presents an issue for companies looking to build on top of them. One of the most popular uses of generative AI is to input data and ask the application to generate a result in a certain format. For instance, a reporter may put in notes from an interview and ask the application to generate an article. Once someone submits data to a public AI platform, the provider can generally retain that data and use it.
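As a rough illustration, that workflow is often a single API call. The snippet below is a sketch using OpenAI’s official Python client; the model name is a placeholder, and the important detail is that the raw notes travel to the provider’s servers as part of the request.

```python
# Illustrative only: drafting an article from interview notes via a hosted
# generative AI API. Model name is a placeholder; the key detail is that
# the raw notes leave your infrastructure as part of this request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

notes = "Interview notes: ..."  # unpublished, potentially sensitive material

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a news desk writing assistant."},
        {"role": "user", "content": f"Draft a 500-word article from these notes:\n{notes}"},
    ],
)
print(response.choices[0].message.content)
```

Nothing about the call itself is the problem; the problem is where the notes end up and what the provider’s terms allow once they get there.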
This creates a serious security issue. The data may be used for ongoing model training, but more importantly, it’s no longer private. Any sensitive data that goes into a publicly accessible AI platform is at risk. We’ve already seen how this can lead to trouble, as it did with an accidental data leak by Samsung.
Queries can be retained by the platforms as well, which raises issues similar to those around input data. Because these models learn statistical associations between words and how information is used together, AI systems will begin to draw conclusions from the queries they see. Queries pertaining to sensitive information can again lead to leaks or to sensitive information going unprotected.
Finally, there’s the issue of who owns the outputs. There is little question about who owns an article produced by a newspaper employee. But if media companies use generative AI to create content in the form of articles, video voiceover, animation, or images, there are thorny questions about who actually owns that content. These are questions media companies haven’t had to deal with to date, and likely don’t want to deal with going forward.
Building proprietary solutions
One reason many companies don’t explore their own proprietary AI tools is cost: the existing public tools are often far cheaper to adopt. That is true, but the downsides of building on public platforms should lead some to explore paying for their own proprietary tools.
Building a proprietary AI solution lets media companies take advantage of the same generative AI capabilities, but behind a firewall that keeps all information in-house and secure.
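Concretely, “behind the firewall” can mean serving an open-weights model on internal infrastructure and pointing the same client code at it. Below is a minimal sketch, assuming an in-house deployment that exposes an OpenAI-compatible endpoint (as inference servers like vLLM do); the URL, token, and model name are placeholders.

```python
# Same call pattern, pointed at an in-house deployment instead of a public
# service. Assumes an open-weights model served inside the firewall behind
# an OpenAI-compatible endpoint; URL, token, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example.com/v1",  # traffic stays on the network
    api_key="internal-token",                       # placeholder credential
)

response = client.chat.completions.create(
    model="in-house-newsroom-model",  # placeholder for a self-hosted model
    messages=[{"role": "user", "content": "Summarize today's city council notes."}],
)
print(response.choices[0].message.content)
```

The application code barely changes; what changes is that prompts, data, and outputs never leave the company’s network.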
The kinds of tools available for proprietary use may not garner the headlines of ChatGPT, but they are close in capability and constantly improving. A close approximation of ChatGPT’s capabilities, matched with complete control over the data and outputs, should tip the scales.
ChatGPT has other limitations, including its training data cutoff of September 2021. User conversations may inform future training runs, but the model’s core knowledge does not include events or information from the past two years. This affects every output the AI generates, especially content related to current events.
Media companies, especially those in news media, produce new content daily, and that plays to the strength of in-house solutions. Their AI systems need to remain current, which means continually updating the models, or the content they draw on, with each day’s output. That’s something that can be done with proprietary technology, but not with a public, third-party tool.
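One common pattern, sketched below under assumed names (the endpoint, token, and model identifiers are placeholders), is a nightly job that embeds each day’s published articles into an internal search index, so generation can draw on fresh, verified copy without retraining the model itself.

```python
# Sketch of a nightly freshness job: embed the day's published articles into
# a simple vector index so later generation can be grounded in current,
# verified copy. Endpoint, token, and model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI(base_url="http://llm.internal.example.com/v1", api_key="internal-token")

def embed(texts: list[str]) -> np.ndarray:
    """Vectorize text with an in-house embedding model (placeholder name)."""
    resp = client.embeddings.create(model="in-house-embedder", input=texts)
    return np.array([item.embedding for item in resp.data])

todays_articles = [
    "City council approves the annual budget after a narrow vote ...",
    "Local hospital opens a new pediatric wing ...",
]
article_vectors = embed(todays_articles)  # append to the running index nightly

def freshest_match(query: str) -> str:
    """Return the indexed article most similar to the query (cosine similarity)."""
    q = embed([query])[0]
    scores = article_vectors @ q / (
        np.linalg.norm(article_vectors, axis=1) * np.linalg.norm(q)
    )
    return todays_articles[int(np.argmax(scores))]
```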
Then there are the so-called hallucinations, where AI tools simply make things up. This should be a major concern for media companies, especially those with reputations built on truth and trust. Here, too, proprietary systems can be tuned, and their outputs grounded in verified in-house content, to reduce these issues, adding another layer of assurance around the outputs.
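Grounding is one such technique: retrieve verified copy, continuing the indexing sketch above, and instruct the model to answer only from it. Again a sketch with placeholder names, and a mitigation rather than a guarantee.

```python
# Sketch of grounded generation: constrain the model to verified copy pulled
# from the in-house index, which reduces (but does not eliminate) fabricated
# details. Endpoint, token, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://llm.internal.example.com/v1", api_key="internal-token")

question = "What did the city council decide on the budget?"
source = "City council approves the annual budget after a narrow vote ..."  # retrieved copy

response = client.chat.completions.create(
    model="in-house-newsroom-model",  # placeholder for a self-hosted model
    messages=[
        {"role": "system", "content": (
            "Answer strictly from the provided source. If the source does not "
            "contain the answer, say so rather than guessing."
        )},
        {"role": "user", "content": f"Source:\n{source}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```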
Building for the future
While the questions of ownership are big for media companies, they are even more important for other enterprises exploring generative AI, including law firms and healthcare companies. These organizations may have massive case files they want summarized, but sharing them with a public platform that processes, retains, and may train on that data carries huge privacy, security, and confidentiality implications.

Here at 3Pillar, we’re continuing to monitor the latest developments in AI and to identify how media companies can apply AI to build breakthrough products that transform their business. To get started on a conversation about how AI can apply to your business, contact us.