Bridging the Chasm: Discovering principles of ethical Gen AI in media

Learn how you can use Generative AI in a more ethically responsible way

by Diana Turkova

  • #Broadcast
  • #Media
  • #Cloud
  • #Automation
  • #MediaSupplyChain

Playing with Generative AI superpowers is all fun and games until you look into AI's "head" and discover its unpleasant downsides. Like your favourite sweets, it can bring plenty of satisfaction and joy, but consume it irresponsibly and you will find yourself with extra kilos and, in the worst-case scenario, declining health. Nobody wants that, of course! But without prior knowledge, we remain blissfully unaware of the negative consequences of both sweets and Gen AI.

Before you start guilt-tripping yourself, let's make it clear: we are not here to put you on the spot, but to educate and to keep our media world a safe and secure place for everybody!

But first of all, why even bother?

AI hallucinations. Gen AI cannot reliably distinguish between true and false information. It gets its "knowledge" from scraping the web: if it sees a pattern, it reproduces it in the output. And because most people use AI as a substitute for a search engine, the reliability of the generated output is rarely validated. Further use of such information plants a seed of misinformation and, consequently, erodes the audience's trust in media outlets.

Shadow AI. Getting your hands on AI tools is no longer challenging. Accessing one of these powerful tools has become as easy as opening Facebook. This has accelerated the adoption of Generative AI among teams, who use such tools to streamline their workflows. At the same time, it has created new risks around data and intellectual property protection, as employees feed sensitive or confidential information into external LLMs.
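To make this concrete, here is a minimal, hypothetical sketch of one possible safeguard: a redaction step that scrubs obviously sensitive patterns from a prompt before it leaves the company for an external LLM. The patterns and ID formats are invented for illustration; a real deployment would rely on dedicated data-loss-prevention tooling.

```python
import re

# Hypothetical patterns a company policy might flag; a real policy would be far
# more thorough (named-entity detection, dedicated DLP tooling, etc.).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "CONTRACT_ID": re.compile(r"\bCTR-\d{6}\b"),         # assumed internal ID format
    "EMBARGOED": re.compile(r"\bembargo(ed)?\b", re.I),  # embargoed-story marker
}

def redact(text: str) -> str:
    """Replace sensitive matches with placeholders before the text goes to an external LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the embargoed piece on contract CTR-123456 and email jane@example.com"
print(redact(prompt))
# Summarise the [EMBARGOED REDACTED] piece on contract [CONTRACT_ID REDACTED] and email [EMAIL REDACTED]
```

A filter like this does not replace a policy, but it shows how a team's freedom to experiment can be paired with an automatic guard rail.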

AI bias. AI models learn from data produced by humans, and since no human is completely unbiased, Generative AI inherits that quality too. Bias is more than just false information: it is output based on discriminatory, stereotypical, and unfair historical data. Needless to say, biased information can end up reinforcing unfair practices and shaping discriminatory ideas, which can subsequently damage the public's trust in the media.

Let's come back for a second to the sweets-and-AI metaphor, because it hides another elephant in the room! Despite all the negative "side effects" of both sweets and AI, we still find ourselves returning for more…

However, the core difference is that eating that candy only affects the person who consumed it. Irresponsible use of Generative AI, by contrast, can negatively impact whole cohorts of people over the long term. The media industry has to be especially careful with AI tools, as its content shapes the opinions and attitudes of its audience. Unfortunately, the industry hasn't caught up with the speed of AI adoption yet, which is why the right strategy is still to be determined.

On the bright side, as the media industry keeps experimenting with Generative AI, it is gradually learning the right steps towards using AI more responsibly in media workflows.

With great power comes great responsibility

Let's voice an obvious but unpleasant truth: regardless of the generated output, humans are the ones who make the decisions. Decisions that have the power to influence a tremendous number of people. Taking responsibility means caring about the impact our decisions can have; only then are we empowered to tackle the challenges associated with AI.

Generative AI empowers media professionals across the globe to streamline their work, generate more creative ideas, research various topics and who knows what else! Therefore, it is high time to take responsibility and start moving towards a more responsible use of AI.

Experiment, but protect your interests. The AI scene is a big playground at the moment. Many media companies choose to give their teams the freedom to experiment with it; this way they build a better understanding of such tools and, consequently, learn where to draw the line. However, that freedom is bounded by the company's ethical policies, which can cover data privacy, copyright and other considerations.

Think critically. We shouldn't forget that AI is not an ultimate source of truth. It was trained on human-generated data, and humans cannot avoid making mistakes or being biased. Therefore, AI-generated output has to be taken with a pinch of salt and double-checked against other reliable and unbiased sources.

Strive for transparency. Honesty is the best policy, hands down, especially when it is directly connected to maintaining the audience's trust in the media. Therefore, setting up clear guidance on how to disclose AI-augmented content is crucial.
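As a purely illustrative sketch (the label taxonomy below is invented, not an industry standard), such guidance can be as simple as requiring every piece of content to carry a machine-readable disclosure label that is rendered to the audience:

```python
from dataclasses import dataclass

# Invented label taxonomy for illustration; a newsroom would define its own.
AI_DISCLOSURE_LABELS = {
    "none": "No AI involvement",
    "assisted": "Drafted by a journalist with AI assistance",
    "generated": "Generated by AI and reviewed by an editor",
}

@dataclass
class Article:
    headline: str
    body: str
    ai_disclosure: str = "none"  # must be a key of AI_DISCLOSURE_LABELS

    def public_disclosure(self) -> str:
        """Human-readable disclosure line published alongside the piece."""
        return AI_DISCLOSURE_LABELS[self.ai_disclosure]

piece = Article("Election night recap", "...", ai_disclosure="assisted")
print(piece.public_disclosure())  # Drafted by a journalist with AI assistance
```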

Set up human oversight. Close supervision and approval by another person is an integral part of the ethical application of AI-generated outputs. Such a process is necessary to minimise uses of AI that contradict the company's principles and values.
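A minimal sketch of what such a gate could look like in a publishing pipeline, assuming a simple in-house workflow (all names here are hypothetical): an AI-generated draft simply cannot be published until a named human has approved it.

```python
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

class DraftGate:
    """Blocks publication of an AI-generated draft until a named human approves it."""

    def __init__(self, draft: str):
        self.draft = draft
        self.status = ReviewStatus.PENDING
        self.reviewer: str | None = None

    def review(self, reviewer: str, approve: bool) -> None:
        """Record the human decision; the reviewer's name stays on the audit trail."""
        self.reviewer = reviewer
        self.status = ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED

    def publish(self) -> str:
        if self.status is not ReviewStatus.APPROVED:
            raise PermissionError("No human approval on record for this AI-generated draft.")
        return self.draft

gate = DraftGate("AI-drafted market summary ...")
gate.review(reviewer="j.doe", approve=True)
print(gate.publish())
```

Keeping the reviewer's name on record also makes the oversight auditable, which is what turns a checkbox into genuine accountability.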

Learn, adjust, repeat. AI technology is changing rapidly, and the guidelines set up today might become irrelevant tomorrow. Therefore, they have to be systematically reviewed and updated to ensure they stay aligned with societal interests.

Generative AI is a powerful tool that is unapologetically transforming the way media is produced and consumed. The industry is excited to experiment with it and test its abilities, but the more AI is used, the more we learn about its downsides, which can potentially cost the media people's trust. To maintain that trust, striving to be more ethically responsible, learning from each other's experiences and taking the necessary actions have to be a continuous process inside any media organisation.
