The fundamental error is treating an AI as an authoritative source capable of original thought or expert analysis. When you "ask an AI what it thinks," you are not getting an opinion; you are getting a text prediction.
AI is Not a Pundit, It's a Pattern-Matcher: A large language model (LLM) like ChatGPT or Gemini doesn't analyse RAJAR and form a novel conclusion. It has processed vast amounts of text from the internet, and it generates whatever response is statistically likely given the patterns in that text. The quote it produced ("its survival depends on pleasing the advertisers and broadcasters who are also its paymasters") is a generic, cynical statement that could apply to almost any industry-funded body. It sounds plausible but contains no specific insight.
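To make the "text prediction" point concrete, here is a minimal sketch of next-word prediction in Python. The word-pair counts are invented for illustration; a real LLM learns these statistics implicitly across billions of parameters rather than in a lookup table, but the principle is the same: the output is whichever continuation was statistically common in the training text, not a judgement about RAJAR itself.

```python
import random

# Hypothetical word-pair counts standing in for "vast amounts of internet text".
# These numbers are made up purely to illustrate the mechanism.
corpus_counts = {
    ("industry", "funded"): {"body": 40, "bias": 25, "survey": 20, "watchdog": 15},
    ("funded", "body"):     {"pleases": 45, "serves": 30, "measures": 25},
}

def predict_next(context):
    """Sample the next word in proportion to how often it followed `context`."""
    options = corpus_counts[context]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# The model "says" whatever was most common after these words online,
# regardless of whether it is true of any specific organisation.
print(predict_next(("industry", "funded")))
```

Scaled up, this is why the AI's quote reads like a template: "pleasing the paymasters" is simply a high-frequency pattern around any industry-funded body.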
The AI's response is a perfect example of superficial knowledge. It identified that RAJAR is funded by the industry, which is common knowledge. However, it completely missed the most important structural detail: RAJAR is a Joint Industry Currency (JIC). The JIC model, under which the body is jointly run by the BBC and its commercial competitors, is specifically designed to prevent the very bias the AI's generic statement implies. A human expert would know this; the AI simply generated a shallow take.
"Asking an AI" Is Not Research: The poster used the AI to generate a premise and then accepted it without question because it confirmed a suspicion. This is a dangerous way to form an argument.
Using an AI as a starting point for research is fine; using it as a substitute for critical thinking is the real flaw here. The phrase "I asked AI and it said..." should be treated with the same scepticism as "I saw a random comment on a blog...": it's an unsubstantiated claim, not evidence.