AP guidelines restrict journalists' use of generative artificial intelligence

The Associated Press released guidelines for using generative artificial intelligence, with the prominent news organization emphasizing the role of human oversight in journalism.

Associated Press. Image: Shutterstock

In an effort to head off any potential negative impact on its reporting, The Associated Press issued new guidelines on Wednesday limiting journalists' use of generative artificial intelligence tools in news reporting.

Amanda Barrett, the AP's vice president for standards and inclusion, laid out some constraints on how the AP will handle future AI issues. First, journalists are not allowed to use ChatGPT to create publishable content.

“Any output from generative AI tools should be considered unvetted source material,” Barrett wrote, adding that staff should use their editorial judgment and the media’s sourcing standards when considering any information for release.

Additionally, the AP does not allow the use of generative artificial intelligence to add or remove elements from photos, video or audio. It also will not transmit AI-generated images suspected of depicting something false, known as deepfakes, unless the image is itself the subject of a story and is clearly labeled as such.

Because generative AI can easily spread misinformation, Barrett advised AP reporters to be diligent and exercise their usual caution and skepticism, including trying to identify the source of the original content.

"If journalists have any doubts about the veracity of this material," she wrote, "they should not use it."

While the guidelines restrict AP journalists' use of generative AI, they also struck an optimistic tone in places, suggesting that AI tools could benefit journalists' reporting.

"Accuracy, fairness, and speed are the guiding values of AP news reporting, and we believe that the careful use of artificial intelligence can serve those values and improve the way we work over time," Barrett wrote.

Additionally, she clarified that the 177-year-old news organization does not believe AI can replace reporters, adding that AP reporters are accountable for the accuracy and fairness of the information they share.

Barrett pointed to a licensing deal AP signed with OpenAI last month that gave ChatGPT creators access to the AP's archive of news stories dating back to 1985. In exchange, the agreement provides media outlets with access to OpenAI's suite of products and technologies.

News of the OpenAI deal came days after the artificial intelligence startup committed $5 million to the American Journalism Project. That same month, OpenAI signed a six-year deal with stock media platform Shutterstock to gain access to its vast library of imagery and media.

Amid all the hype about the potential of generative artificial intelligence and the ability to conversationally find information through chatbots, there are growing concerns about the accuracy of some of the information that is ultimately provided to users.

While AI chatbots can generate responses that appear credible, they also have a well-known habit of producing responses that are factually false. Known as AI hallucinations, this phenomenon can generate fabricated content, news, or information about people, events, or facts.
