When Google Points to a Chatbot Conversation, Be Skeptical
Here’s something new to watch out for: poisoned chatbot conversations surfaced in Google searches. The sharing features in ChatGPT, Claude, Gemini, Grok, and other chatbots let users publish their conversations as public Web pages, which search engines can index and display alongside traditional websites in search results. Attackers can seed those conversations with malicious commands, and the conversations look trustworthy in search results because the URL points to a well-known AI company.

This risk isn’t theoretical—security firm Huntress documented a macOS malware infection that began with a Google search result linking to a shared chatbot conversation that contained malicious Terminal instructions.

Treat chatbot conversations found via Google as you would random forum posts—potentially useful for background or as ideas to start your own conversation, but not as authoritative instructions. Be especially suspicious when they offer step-by-step guidance or ask you to copy anything verbatim.

(Featured image by iStock.com/tadamichi)
Social Media: Hackers have learned how to poison shared chatbot conversations with malicious commands—and get Google to display them in search results. Never trust step-by-step instructions or Terminal commands from user-generated chatbot pages.
