Sample Training Email — What You'd Get
Before you spend anything, read this.
Below is Email #2 from the five-email staff training series, formatted exactly as your employees would receive it in their inbox, with the company name and sender information removed so it's generic. You're seeing the tone, the content, the structure, and the approach. That's enough to tell you whether this is the right voice for your team.
There's no condensed version. No summary. This is the full email as it would land.
Subject: Why Free AI Tools Are Public Services (And What That Means for You)
The biggest mistake I see people make with ChatGPT, Claude, or any free AI tool is treating them like they're private.
They're not. When you type something into a free AI tool, you're typing it into a public service. The company running the tool logs what you enter. It's used to improve the product. It can be subpoenaed. It isn't encrypted in a way that keeps it private to you. It's not your private conversation.
This matters because most of you have some piece of information that should not leave the building.
Maybe it's a customer's name paired with their specific order. Maybe it's your pricing. Maybe it's an employee's personnel record. Maybe it's a contract template or a process detail that gives you an advantage over your competitors. Maybe it's financial data.
All of that is proprietary. All of it should stay inside.
If that information goes into a free AI tool, you've just made a choice to share it with a company you don't control, in a way you can't take back.
So what does that mean for you practically?
It means there are things you can ask an AI tool and things you shouldn't.
You can ask it to help you write a response to a routine customer email (as long as you don't put the customer's name or their sensitive details in the prompt).
You can ask it to help you structure a spreadsheet (as long as you're not putting actual financial data in to show it).
You can ask it to help you draft a process document (as long as you're leaving out the details that make it proprietary).
You can ask it to help you understand a concept or explain something (as long as you're not copying in confidential material to do it).
You cannot put customer records into a free AI tool. You cannot paste your pricing structure. You cannot enter employee personnel information. You cannot feed it a confidential contract or process. You cannot put in anything you'd be embarrassed to share with a company you don't know.
Here's the rule of thumb: if you wouldn't print it and hand it to a stranger on the street, don't put it in a free AI tool.
That sounds extreme, but it's accurate. The tool is being run by a company that you're not in a relationship with, that you don't control, and that you're not paying (which means you're not the customer—you're the product). So yes, the safety bar is that high.
This doesn't mean the tools aren't useful. It just means you have to be thoughtful about what you feed them.
Most work problems don't actually require you to put sensitive information in. You can describe the situation in general terms. You can ask the tool to help you think through it without giving it the specifics. You can use it to brainstorm or structure something and then plug in your actual data afterward.
The tool is still helpful. You're just not sharing anything you shouldn't.
If you're not sure whether something is safe to enter, don't enter it. Ask your manager first.
You won't get in trouble for asking. You will get in trouble if you enter sensitive information and that causes a problem later. So if you're standing there looking at something and thinking "I'm not totally sure about this," take 30 seconds and ask.
This is why our company is putting a policy in place. It's not because we don't want you to use these tools. It's because we want you to use them in a way that doesn't expose our business. There's a difference between "our employees are using AI" and "our employees are using AI in a way we've thought through and signed off on."
You're getting a policy email next week that spells out exactly what we've decided is off limits and what's fair game. Read it. Know it. If you disagree with it or you think it doesn't make sense for your specific role, that's worth a conversation with your manager.
But the underlying principle doesn't change: know what information is sensitive, don't put it in public tools, ask when you're not sure.
That's it. You don't have to understand how the tool works under the hood. You don't have to become an AI expert. You just have to know enough to use it without accidentally becoming a liability.
You're going to get one more email after this one that talks about what these tools are actually useful for. Then one about our company's specific policy. Then one about where you go from here.
But right now, the main thing: these are public services. Treat them accordingly.
End of sample email.
This is the approach throughout the five-email series. Direct. Specific. No hype. No technical jargon. No padding. Every sentence is there because it needs to be.
The tone assumes your employees are intelligent and capable but maybe haven't thought about this from a data security angle. It validates that the tools are useful. It sets a clear boundary. It tells them how to think about the problem, not just what the rule is.
This is how all five emails are written.
If this voice feels right for your team, the rest of the series matches it. If this feels wrong—too harsh, too casual, too technical, too hand-holding—this might not be the right fit for your company culture.
Read it. See if you'd be comfortable with your team receiving this in their inbox. If yes, you know what you're getting into. If no, this probably isn't the kit for you.
[CTA Button: See All Kit Details]
[CTA Button: Buy the Kit — $997]