
A lawsuit accuses Perplexity of fake news hallucinations

Perplexity did not respond to requests for comment.

In an emailed statement to WIRED, News Corp CEO Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI that understand that integrity and creativity are essential if we are to realize the potential of artificial intelligence,” the statement said. “Perplexity is not the only AI company abusing intellectual property, and it is not the only AI company that we will pursue vigorously and consistently. We have made it clear that we would rather court than sue, but for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

However, OpenAI faces its own accusations of brand dilution. In New York Times v. OpenAI, the Times claims that ChatGPT and Bing Chat attribute fabricated quotes to the Times and accuses OpenAI and Microsoft of damaging its reputation through brand dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed the Times had described red wine (in moderation) as a “heart-healthy” food when in fact it had not; the Times argues that its actual reporting has debunked claims about the healthfulness of moderate drinking.

“Copying news articles to power substitute commercial generative AI products is unlawful, as we have made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI,” said Charlie Stadtlander, director of external communications at NYT. “We welcome this lawsuit from Dow Jones and the New York Post, which represents an important step in ensuring publisher content is protected from this type of misappropriation.”

If publishers prevail on their argument that hallucinations can violate trademark law, AI companies could face “immense difficulties,” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

“It is absolutely impossible to guarantee that a language model will not hallucinate,” says Sag. In his view, the way language models work, by predicting words that sound right in response to prompts, is always a kind of hallucination; sometimes the result just sounds more plausible than at other times.

“We only call it a hallucination when it doesn’t match our reality, but the process is exactly the same whether we like the result or not.”
