Posts

When Google Points to a Chatbot Conversation, Be Skeptical

Here’s something new to watch out for: poisoned chatbot conversations surfaced in Google searches. The sharing features in ChatGPT, Claude, Gemini, Grok, and other chatbots allow users to publish their conversations as public Web pages, which can be indexed by search engines and appear alongside traditional websites in search results. Attackers can seed those conversations with malicious commands, and the conversations themselves look trustworthy in search results because the URL points to a well-known AI company. This risk isn’t theoretical—security firm Huntress documented a macOS malware infection that began with a Google search result linking to a shared chatbot conversation that contained malicious Terminal instructions. Treat chatbot conversations found via Google as you would random forum posts—potentially useful for background or ideas to start your own conversation, but not as authoritative instructions. Be especially suspicious when they offer step-by-step guidance or ask you to copy anything verbatim.

(Featured image by iStock.com/tadamichi)


Social Media: Hackers have learned how to seed shared chatbot conversations with malicious commands—and get Google to display them in search results. Never trust step-by-step instructions or Terminal commands from user-generated chatbot pages.

How to Encourage Successful AI Use in Your Organization

The AI hype train continues to gain momentum, with breathless reports of rapid user growth, billion-dollar deals, and sky-high company valuations. At the same time, it’s easy to highlight AI pilot failures, problematic uses, and worries about job losses.

As always, reality lies between the extremes. AI is just another technological tool, like spreadsheets, email, and the searchable Web. Like them, casual usage won’t automatically increase an organization’s productivity. So far, most casual users have treated AI chatbots as a smarter search engine, and while that’s a fine start, it’s unlikely to make a notable difference. Many others are skeptics who are uncomfortable with any new technology, let alone one as fuzzy as AI. Even those who are interested and capable are often overwhelmed by their existing work and don’t have time to learn yet another tool.

So how do you set up an organization to make effective—even transformative—use of AI?

Get Buy-In from Management

Ideally, the desire to adopt AI would come from the top of the organization, with leadership discussing and modeling the kind of usage they want to see. But what’s absolutely essential is lower-level management creating the culture, resources, and time necessary for employees to experiment with AI.

Evangelize from the Bottom, Don’t Mandate from the Top

Although management must be on board, a CEO memo mandating immediate AI adoption won’t have the desired effect. Unlike many other technologies, AI solutions tend to be highly specific rather than one-size-fits-all. Frontline employees know where they’re wasting time with inefficient workflows, and they have first-hand knowledge of what customers want, so they’re best positioned to leverage AI tools when they’re involved in developing and deploying them. Solutions created without their participation likely won’t benefit the business’s bottom line, customers, or employees.

Centralize Testing and Support

A top-down approach does make sense for tool analysis and testing. The explosive growth of the AI market means that there are numerous similar options for any desired workflow. To save time, avoid future chaos, and reduce tool jumping, it can be helpful to have a single IT team evaluate the numerous possible tools, make recommendations, suggest best practices, establish basic data handling and privacy guidelines, and provide support.

Adopt a Documentation Mindset

A key to automating workflows with AI is being able to document the necessary tasks clearly first. Some organizations already have a documentation mindset, where they write everything down, define processes, and record decisions. If that’s not the case for your organization, build that documentation before attempting automation, because tools created without it are unlikely to deliver the desired results. AI can even help here: record interviews with the people who understand each workflow, and then have AI extract an outline from the transcript.
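For teams that want to script the interview-to-outline step, here is a minimal sketch. It assumes OpenAI’s Python client (`pip install openai`) and an `OPENAI_API_KEY` environment variable; the prompt wording, the model name, and the `generate_outline` helper are all illustrative assumptions, and any chatbot with an API would work similarly.

```python
def outline_prompt(transcript: str) -> str:
    """Build a prompt asking an AI model to turn an interview into workflow documentation."""
    return (
        "Below is a transcript of an interview about how a task is done today. "
        "Extract a step-by-step outline of the workflow, noting inputs, "
        "outputs, decision points, and exceptions.\n\n" + transcript
    )

def generate_outline(transcript: str) -> str:
    """Send the prompt to a chatbot API. Requires `pip install openai`
    and an OPENAI_API_KEY environment variable; model name is illustrative."""
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model would do
        messages=[{"role": "user", "content": outline_prompt(transcript)}],
    )
    return response.choices[0].message.content
```

From there, the generated outline becomes a starting draft that the interviewee reviews and corrects, rather than a finished document.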

Think of AI Tools Like a Junior Employee

The hard part of using AI is defining your goals and determining where AI can make a difference. It’s much like training a new hire. What are you trying to achieve by hiring them? What do they need to learn to do their job? What level of excellence do you expect? What common mistakes and pitfalls should they avoid? You can only automate something if you have a clear idea of what success entails and precisely what’s necessary to achieve it.

The Bottom Line

Ultimately, successful AI implementation comes down to defining what you want to achieve, giving people the time they need to explore possibilities, and providing guidance rather than mandates.

(Featured image by iStock.com/FabrikaCr)


Social Media: Casual AI use won’t impact your organization. To see real productivity gains with AI projects, avoid top-down mandates and instead empower frontline teams, document workflows, and centralize support.

Keep Sensitive Data Private by Disabling AI Training Options

Most AI chatbots, including ChatGPT, Claude, and Google’s Gemini, let you control whether your conversations will be used to train future models. While allowing this could improve the AI, it also means that sensitive business information and intellectual property could become part of the chatbot’s training data. Once data is incorporated into AI training, it likely can’t be removed. Even with training disabled, you should be cautious about sharing sensitive business details, trade secrets, or proprietary code with any AI system. To reduce risks, disable these training options:

  • ChatGPT: Go to Settings > Data Controls and turn off “Improve the model for everyone.”
  • Claude: Navigate to Settings > Privacy and disable “Help improve Claude.”
  • Gemini: Visit the Your Gemini Apps Activity page and turn off Gemini Apps Activity.
  • Meta AI: Avoid it entirely, as it doesn’t allow you to opt out of training.

(Featured image by iStock.com/wildpixel)


Social Media: Don’t let sensitive business data become part of AI training sets. Here’s how to turn off training options in popular AI chatbots to protect your company’s information.

Watch What You Say in AI-Recorded Meetings

You’re in a meeting with colleagues, and after everyone else has trickled out, you talk about a sensitive topic with a trusted friend. That would typically be no problem with an in-person meeting, but with a modern virtual meeting, where an AI records a transcript, summarizes what was said, and automatically emails it to all participants, you might not want everyone to know about your coworker conflicts, job search, health issues, relationship troubles, or countless other confidential matters.

This issue affects all major videoconferencing platforms—Zoom, Microsoft Teams, Google Meet, and others. Many organizations also use standalone AI recording tools that can join meetings as participants, such as Otter.ai, Fireflies.ai, and tl;dv.

No one should feel ashamed of using AI-generated meeting summaries, nor should these tools be categorically avoided. They’re undeniably helpful, allowing people to focus on the discussion instead of taking notes or worrying about forgetting action items. We know people who consider them life-changing.

However, the fact remains: unlike a person tasked with taking notes, these tools record everything, including pre-meeting chatter, small talk, and personal asides that a human notetaker would know not to include. Making matters worse, AI notetakers are often configured to distribute transcripts and summaries automatically to everyone on the invitation list—including people who didn’t attend. While this helps people catch up on missed meetings, it can cause problems if the absent individuals were themselves the topics of discussion. And we won’t even get into the potential legal and HR implications of certain conversations being made public.

Practical Solutions

Given the utility of AI-generated meeting summaries, what can you do to reduce the chances of potentially embarrassing or problematic conversations being shared inappropriately?

  • Warn attendees: Although most videoconferencing tools alert users that recording is happening, everyone is used to these notifications. For a more explicit warning, the meeting host can remind everyone that summaries will be shared with all attendees.
  • Pause/resume recording: While not all videoconferencing and AI recording tools offer the option to pause and resume, it can be useful. The meeting host can wait to start recording until everyone has arrived and the pre-meeting chatter has died down, and then stop it once the last agenda item has been discussed. The challenge is that this requires the host to remember to start and stop at the right times, and any valuable conversation before or after these points will be lost.
  • Restrict distribution: Another option is to configure the system so meeting summaries are sent only to the host, who can then review and edit them if needed before sharing with the rest of the attendees. The drawbacks here are the extra work for the host and the delay in participants receiving the notes, which can hold them back from starting on action items.
  • Watch what you say: Just as with social media posts, it’s important to think before you say something you might regret. If you assume that everything you say could be shared with your entire organization—including HR and your boss—you’ll be much less likely to get into trouble. Of course, this requires everyone to be sufficiently self-aware to avoid problematic topics.
  • Use private channels: If you anticipate needing to discuss sensitive information with a remote colleague—the kind of thing you’d shut your office door to keep passersby from overhearing—use a private channel like a personal meeting room, direct message, or phone call. And if someone starts to say something problematic in a group meeting, gently suggest moving it to a private channel.

Although having AI-generated summaries of conversations you thought were private circulated to others may feel like a modern problem, variants have been around for a long time: the romantic message misaddressed to the company-wide email list, the list of layoffs left in the copy machine, or even a conversation that continues across bathroom stalls, with neither party realizing someone else has come in. Ultimately, all we can do is be mindful of what we say and who might hear it.

(Featured image by iStock.com/ArnoMassee)


Social Media: In virtual meetings, AI recording tools often capture and share everything—even those casual chats that occur after most attendees have left. Learn how to avoid having sensitive conversations broadcast to your whole team.

Ten Tips for Making the Best Use of AI Chatbots

Since ChatGPT launched in late 2022, people have been using AI chatbots to brainstorm, speed up research, draft content, summarize lengthy documents, analyze data, assist with writing and debugging code, and translate text into other languages. Recently, the major chatbots have gained Web search capabilities, allowing them to access live information beyond their training data.

Using a chatbot effectively requires new approaches to thinking and working, especially when it comes to searching for information. Just as with a human assistant, you need to play to their strengths when figuring out the best ways to get the results you want. Incorporate these tips into your chatbot conversations, and you’ll see significantly better outcomes.

  1. Be specific and complete: Decades of search engine use push us toward short, focused search phrases with keywords that will appear in the results. In contrast, chatbots thrive on specificity and detail. For instance, prompting a chatbot with “iCloud photos syncing” won’t generate nearly as useful a response as “Tell me what might prevent iCloud from syncing photos between my Mac and iPhone.” Also, don’t shy away from negative prompting—tell the chatbot what not to include or consider in its response. You can even be specific about formatting the output as a bullet list, table, or graph.
  2. Every prompt is a conversation: We are accustomed to standalone searches, where, if the search fails, you must start over. You’ll achieve much better results with chatbots if you consider everything a conversation. Even responses to specific, detailed prompts may not fully address your question or could lead you to think of additional ones. Ask follow-up questions, clarify what you want to find out or accomplish, provide feedback, or redirect the conversation as needed. (For the ultimate chatbot conversational experience, try voice mode in the ChatGPT or Claude apps, where they talk back to you. It’s excellent for capturing ideas, refining your thinking, or just doing a brain dump.)
  3. Edit your last prompt: If the most recent response from a chatbot is entirely unsatisfactory, you may have better luck editing and resubmitting your prompt rather than telling the chatbot it made a mistake. An edit link or pencil button usually appears when you hover over the prompt.
  4. Context can help: Most chatbots maintain libraries of previous conversations, allowing you to find old ones easily. Because chatbot responses improve with more context, it can be helpful to return to one of those conversations when you want to explore that topic further. Similarly, if you’re asking a chatbot to create something like work you’ve already done, provide that previous work as an example.
  5. Ask it to role-play: Another way to increase context is to ask the chatbot to “act as” a particular type of professional, such as an editor, coach, marketer, or software developer. In essence, you’re asking the chatbot to respond in the context of a certain role. Conversely, it can be helpful to ask it to tailor its response as if you were a high school student, someone with a basic understanding of the topic, or an expert in the field.
  6. Know when to start over: Although context is key, chatbots have a limited memory, so long conversations can overwhelm what’s called the “context window.” If you notice the chatbot hallucinating, starting to repeat itself, or going off into the weeds, try saying, “Please summarize what we’ve discussed in a prompt I can use to continue working on this topic.” Then, copy that prompt into a new chat before continuing the conversation.
  7. Force Web searches as necessary: Most chatbots make it explicit when they are searching the Web, which means you can also tell when they aren’t searching and are thus relying on potentially outdated training data. If you want to ensure that you’re getting the latest information, tweak your prompt to start with something like “Search for…”
  8. Test its limits: Because a chatbot’s response is based largely on what you say in the prompt, it won’t necessarily go as deep as you would like. Try asking it to critique its own output, generate multiple options, or present the best argument for different perspectives. You can even ask it to be more cautious or more creative. It’s fine to challenge a chatbot in ways that would be socially inappropriate with another person.
  9. Save and reuse effective prompts: When you identify prompts that work particularly well for recurring tasks—such as generating meeting summaries, analyzing data, or drafting specific types of content—save them for reuse so you don’t have to start over each time.
  10. Don’t believe everything you read on the Internet: While chatbots are incredibly confident and often truly astonishing in what they can produce, it’s your responsibility to verify important facts and details (just as with human-created information, which isn’t necessarily any more trustworthy). The statistical models they use can lead to completely fabricated information. Although this is less true with Web searches, even there, they can combine information in ways that simply aren’t accurate.
  11. Try deep research: Bonus tip! Many chatbots offer a so-called deep research mode, which allows the chatbot to go off for 5 or 10 minutes to gather information, analyze it over multiple steps, and produce a much more comprehensive response. Deep research is too slow for a conversation, but it can provide a good foundation when you’re exploring a new topic that requires a lot of detail.
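For the programmatically inclined, the save-and-reuse advice in tip 9 can be as simple as a small prompt library. This sketch stores named templates in a JSON file and fills in placeholders at reuse time; the `prompts.json` file name, the `meeting-summary` example, and the `$placeholder` scheme are illustrative assumptions, not features of any particular chatbot.

```python
import json
from pathlib import Path
from string import Template

# Illustrative location for the prompt library; use any path you like.
LIBRARY = Path("prompts.json")

def save_prompt(name: str, template: str) -> None:
    """Store a reusable prompt template under a short name."""
    data = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    data[name] = template
    LIBRARY.write_text(json.dumps(data, indent=2))

def fill_prompt(name: str, **values: str) -> str:
    """Load a saved template and substitute placeholders like $topic."""
    data = json.loads(LIBRARY.read_text())
    return Template(data[name]).substitute(values)

# Save a recurring prompt once...
save_prompt(
    "meeting-summary",
    "Summarize the following meeting transcript in five bullet points, "
    "highlighting decisions and action items. Audience: $audience.\n\n$transcript",
)
# ...then fill in the specifics each time you need it.
prompt = fill_prompt("meeting-summary", audience="executives", transcript="...")
```

Even without code, the same idea works with a plain text file or notes app: the point is to stop retyping prompts that you’ve already refined.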

While AI chatbots are powerful tools, they work best when you think of them as collaborative partners rather than magical solutions. The key is experimentation—try different approaches, refine your prompting style, and don’t hesitate to push the limits of what they can do. Start with these fundamentals, but remember that becoming proficient is an ongoing process.

(Featured image by iStock.com/Memorystockphoto)


Social Media: Getting the most out of ChatGPT and Claude requires a different approach than using a traditional search engine. Learn ten essential tips for better prompting, from being conversational to leveraging context and even role-playing.