World News Intel

The rapid advancement and adoption of generative artificial intelligence (AI) is revolutionizing the field of communications. AI-powered tools can now generate convincing text, images, audio and video from textual prompts.

While generative AI is powerful, useful and convenient, it introduces significant risks, including misinformation, bias and privacy violations.

Generative AI has already been the cause of some serious communications issues. AI image generators have been used during political campaigns to create fake photos aimed at confusing voters and embarrassing opponents. AI chatbots have provided inaccurate information to customers and damaged organizations’ reputations.

Deepfake videos of public figures making inflammatory statements or endorsing stocks have gone viral. AI-generated social media profiles have also been used in disinformation campaigns.

The rapid pace of AI development presents a challenge. For example, the realism of AI-generated images has improved dramatically, making deepfakes much harder to detect.

Without clear policies for AI in place, organizations risk producing misleading communications that erode public trust, and misusing personal data on an unprecedented scale.

The rapid pace of AI development presents a challenge for both regulators and researchers.
(Shutterstock)

Establishing AI guidelines and regulation

In Canada, several initiatives to develop AI regulation have been underway, to mixed reception. The federal government introduced controversial legislation in 2022 that, if passed, would outline ways to regulate AI and protect data privacy.

The legislation’s Artificial Intelligence and Data Act (AIDA), in particular, has been the subject of strong criticism from a group of 60 organizations, including the Assembly of First Nations (AFN), the Canadian Chamber of Commerce and the Canadian Civil Liberties Association, which have asked for it to be withdrawn and rewritten after more extensive consultation.

In November 2024, Innovation, Science and Economic Development Canada (ISED) announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI). CAISI aims to “support the safe and responsible development and deployment of artificial intelligence” by collaborating with other countries to establish standards and expectations.

With CAISI, Canada joins the United States and other countries that have established similar institutes, which will ideally work together on multilateral AI standards that encourage responsible development while promoting innovation.

The Montreal AI Ethics Institute offers resources like a newsletter, a blog and an interactive AI Ethics Living Dictionary. The University of Toronto’s Schwartz Reisman Institute for Technology and Society and the University of Guelph’s CARE-AI are examples of universities building academic forums for investigating ethical AI.

In the private sector, Telus is the first Canadian telecommunications company to publicly commit to AI transparency and responsibility. Telus’s Responsible AI unit recently published its 2024 AI Report that discusses the company’s commitment to responsible AI through customer and community engagement.






In November 2023, Canada was among 29 nations to sign the Bletchley Declaration following the first international AI Safety Summit. The goal of the declaration was to reach agreement on how to assess and mitigate AI risks in the private sector.

More recently, the governments of Ontario and Québec have introduced legislation on the use and development of AI tools and systems in the public sector.

Looking ahead, the European Union’s AI Act, dubbed “the world’s first comprehensive AI law,” will come into force in January 2025.

Turning frameworks into action

As generative AI use becomes more widespread, the communications industry — including public relations, marketing, digital and social media, and public affairs — must develop clear guidelines for generative AI use.

While progress has been made by governments, universities and industries, more work is needed to turn these frameworks into actionable guidelines that can be adopted by Canada’s communications, media and marketing sectors.

Innovation, Science and Industry Minister François-Philippe Champagne announces the launch of the Canadian Artificial Intelligence Safety Institute in Montréal on Nov. 12, 2024.
THE CANADIAN PRESS/Christinne Muschi

Industry groups like the Canadian Public Relations Society, the International Association of Business Communicators and the Canadian Marketing Association should develop standards and training programs that respond to the needs of public relations, marketing and digital media professionals.

The Canadian Public Relations Society is making strides in this direction, partnering with the Chartered Institute of Public Relations, a professional body for public relations practitioners in the United Kingdom. Together, the two professional associations created the AI in PR Panel, which has produced practical guides for communicators who want to use generative AI responsibly.

Establishing standards for AI

To maximize the benefits of generative AI while limiting its downsides, the communications field needs to adopt professional standards and best practices. The past two years of generative AI use have seen several areas of concern emerge, which should be considered when developing guidelines.

  1. Transparency and disclosure. AI-generated content should be labelled. How and when generative AI is used should be disclosed. AI agents should not be presented as humans to the public.

  2. Accuracy and fact-checking. Professional communicators should uphold the journalistic standard of accuracy by fact-checking AI outputs and correcting errors. Communicators should not use AI to create or spread disinformation or misleading content.

  3. Fairness. AI systems should be regularly checked for bias to make sure they are respectful of the organization’s audiences across variables such as race, gender, age and geographic location, among others. To reduce bias, organizations should ensure that the datasets used to train their generative AI systems are representative of their audiences and users.

  4. Privacy and consent. Users’ privacy rights should be respected and data protection laws followed. Personal data should not be used to train AI systems without users’ express consent. Individuals should be able to opt out of receiving automated communication and having their data collected.

  5. Accountability and oversight. AI decisions should always be subject to human oversight. Clear lines of accountability and reporting should be spelled out. Generative AI systems should be audited regularly. To effect these policies, organizations should appoint a permanent AI task force accountable to the organization’s board and membership. The AI task force should monitor AI use and regularly report findings to appropriate parties.

Generative AI holds immense potential to enhance human creativity and storytelling. By developing and following thoughtful AI guidelines, the communications sector can build public trust and help to maintain the integrity of public information, which is vital to a thriving society and democracy.
