ChatGPT search tool vulnerable to manipulation and deception, tests show
ChatGPT search tool vulnerable to manipulation and deception, tests show
Exclusive: Guardian testing reveals AI-powered search tools can return false or malicious results if webpages contain hidden text
Summary
A Guardian investigation found vulnerabilities in OpenAI’s ChatGPT search tool, including susceptibility to manipulation via hidden text and prompt injections.
Malicious actors can influence ChatGPT to produce biased results or return harmful code, posing risks for users.
Tests revealed that hidden content on fake websites could manipulate ChatGPT to deliver overly positive product reviews, even contradicting the site’s actual data. A cybersecurity expert warned these flaws create a “high risk” for deceptive practices.
Experts caution users to treat AI-generated content critically, comparing these vulnerabilities to “SEO poisoning.”