AI Starter Package

AI Safety and Privacy

What Data Can AI See?

When you type something into an AI tool, that text gets sent to a server where the AI model processes it. This means anything you type could be seen, stored, or used for training — depending on the tool and your settings.

Think of it like sending an email. Once you hit send, the message is out of your hands; any protections need to be in place before that point. The good news: most major AI tools now offer clear privacy controls.

The Golden Rules of AI Safety

  • Never paste passwords, API keys, or credentials into any AI chat. Treat AI like a public conversation — do not share anything you would not put on a whiteboard in an office.
  • Check the privacy policy before using a new AI tool. Look for statements like "We do not train on your data" or "Your conversations are not stored."
  • Turn off training data sharing when the option exists. Most major tools, including ChatGPT and Claude, let you opt out of having your conversations used to improve their models.
  • Be careful with sensitive business information. Client names, financial data, proprietary strategies — think twice before sharing these with any AI.
  • Use business-tier plans when handling company data. Enterprise and team plans typically have stronger privacy guarantees than free tiers.
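The first rule above can even be enforced mechanically. Here is a minimal sketch of screening text for obvious credentials before it ever reaches an AI tool. The patterns and the `redact_secrets` function are illustrative assumptions, not a complete solution; real secret scanners use far larger rule sets:

```python
import re

# A few illustrative patterns for obvious secrets (not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key IDs
    re.compile(r"(?i)password\s*[:=]\s*\S+"),           # "password: hunter2"
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
]

def redact_secrets(text: str) -> str:
    """Replace anything that looks like a credential with [REDACTED]."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

prompt = "Fix my script: password = hunter2, key sk-abcdefghijklmnopqrstuv"
print(redact_secrets(prompt))
# → Fix my script: [REDACTED], key [REDACTED]
```

Running your prompts through a filter like this is a safety net, not a substitute for the rule itself: the safest credential is the one you never pasted in the first place.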

Why Open-Source AI Matters

Some AI models are "open-source" — meaning their code is publicly available for anyone to inspect, modify, and run on their own computers. Models like Llama (from Meta) and Mistral are examples.

Why does this matter for safety?

  • Transparency: Anyone can look at the code and verify what the model does and does not do.
  • Control: You can run the model on your own hardware, meaning your data never leaves your computer.
  • No vendor lock-in: You are not dependent on one company's policies or pricing changes.

The trade-off? Open-source models often require more technical skill to set up and may not be as powerful as the top commercial models. But for privacy-sensitive work, they are an important option to know about.

What is "Dual-Gate Review"?

When you add new tools or capabilities to an AI system, how do you know they are safe? This is where dual-gate review comes in. It is a simple concept:

Before any new tool, plugin, or skill is added to your AI setup, it goes through two checkpoints (gates):

  • Gate 1 — Security Check: Does this tool access data it should not? Could it leak information? Does it connect to untrusted servers? If it fails any security check, it gets rejected.
  • Gate 2 — Quality Check: Does this tool actually work well? Is it reliable? Does it do what it claims? A tool might be safe but still be poorly made.

Think of it like airport security. Gate 1 checks for dangerous items. Gate 2 checks your boarding pass. You must clear both before you get on the plane.

The AI Starter Package uses dual-gate review for all 1,730+ skills in its library. Nothing gets through to your AI system without passing both checks.
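In code, the two gates amount to a pair of independent checks that must both pass. The sketch below is purely illustrative — the `Tool` attributes and check logic are assumptions for teaching purposes, not the AI Starter Package's actual review system:

```python
from dataclasses import dataclass

@dataclass
class Tool:
    # Illustrative attributes a review pipeline might inspect.
    name: str
    accesses_private_data: bool
    contacts_untrusted_servers: bool
    passes_reliability_tests: bool

def security_gate(tool: Tool) -> bool:
    """Gate 1: reject anything that could leak or misuse data."""
    return not (tool.accesses_private_data or tool.contacts_untrusted_servers)

def quality_gate(tool: Tool) -> bool:
    """Gate 2: the tool must actually work as claimed."""
    return tool.passes_reliability_tests

def dual_gate_review(tool: Tool) -> bool:
    # A tool is approved only if it clears BOTH gates.
    return security_gate(tool) and quality_gate(tool)

safe_tool = Tool("summarizer", False, False, True)
leaky_tool = Tool("free-clipboard-sync", True, False, True)
print(dual_gate_review(safe_tool))   # → True  (approved)
print(dual_gate_review(leaky_tool))  # → False (rejected at Gate 1)
```

Note that the gates are deliberately separate functions: a tool that is safe but unreliable fails Gate 2, and a tool that works beautifully but leaks data fails Gate 1. Neither strength can compensate for the other weakness.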

A Simple Safety Checklist

  • Read the privacy policy of any AI tool you use regularly
  • Opt out of training data sharing where possible
  • Never share passwords, keys, or credentials with AI
  • Use business-tier plans for sensitive work
  • Consider open-source models for maximum privacy
  • Use dual-gate reviewed tools and plugins when available