How We Protect Your Data
Your code is your business. Your data is your asset. Here is exactly how we keep it safe.
Your Data Stays Yours
We never train models on your data. Your code, conversations, and business information are never shared, sold, or used for AI training. Period.
Isolated Infrastructure
Every managed agent runs on a dedicated VPS. No shared databases, no co-tenancy, no cross-customer data leaks. Your agent is your agent.
No Vendor Lock-In
All files are standard markdown (CLAUDE.md, SKILL.md, memory.md). Export everything at any time. No proprietary formats, no walled gardens.
Open-Source Foundation
Built on open-source tools (Next.js, OpenClaw, Paperclip). You can audit the codebase. No black boxes.
Dual-Gate Skill Review Process
Every third-party skill, tool, and MCP server passes a two-gate review before we include it in our ecosystem. Both gates must pass. No exceptions.
Gate 1: Security
- No hardcoded API keys or credentials
- No telemetry or data exfiltration
- No phone-home behavior or suspicious network calls
- No malicious code patterns (eval, exec, subprocess abuse)
- License compatible with commercial use
Gate 2: Quality
- Actively maintained (recent commits, responsive maintainer)
- Documentation exists and is accurate
- Relevant to AI Starter Package users
- Community traction (stars, forks, issues)
- Code quality meets standards
To date: 1,730+ skills reviewed and approved. 3 repositories rejected (telemetry, ToS violation, restrictive license).
Managed Agent Security
How we secure the managed AI agents running on your dedicated VPS.
Secret Management
All API keys stored as environment variables. Never committed to code. Rotated on a regular schedule.
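As a minimal sketch of this pattern, the agent process reads its key from the environment at startup and fails fast if the key is missing. The variable name `EXAMPLE_API_KEY` and the placeholder value are illustrative only, not our actual configuration:

```shell
# Hypothetical sketch: the key arrives via the environment, never hardcoded
# in source files or committed to the repo.
export EXAMPLE_API_KEY="sk-proj-placeholder"   # placeholder, not a real key

# The agent refuses to start without its credential:
if [ -z "${EXAMPLE_API_KEY}" ]; then
  echo "EXAMPLE_API_KEY is not set" >&2
  exit 1
fi
echo "key loaded (${#EXAMPLE_API_KEY} characters)"
```

Because the key lives only in the environment, rotation is a matter of updating one variable and restarting the process; no code change, no commit.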
Hook-Based Safety
Automated hooks run before every action. Pre-commit and pre-push checks prevent accidental changes from reaching production. Context health monitoring prevents degradation.
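A minimal sketch of such a guard, written as the check a Git pre-push hook would perform. The protected branch name `production` is an assumption for illustration; the real hooks cover more cases:

```shell
# Hypothetical pre-push guard: refuse any push that targets the protected
# branch. In a real hook, the ref comes from Git's pre-push stdin lines.
check_push() {
  remote_ref="$1"
  case "$remote_ref" in
    refs/heads/production)
      echo "blocked"
      return 1
      ;;
  esac
  echo "allowed"
  return 0
}

check_push "refs/heads/production" || true   # prints "blocked"
check_push "refs/heads/feature-x"            # prints "allowed"
```

The hook runs deterministically before every push, so the safety check does not depend on the agent remembering a rule.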
Auditor Agent
The Auditor agent reviews work quality before knowledge gets promoted. Human-in-the-loop safety — AI cannot silently modify its own rules.
Knowledge Gate
New learnings go through a nomination → review → promotion pipeline. Not every observation becomes a permanent rule. Quality over quantity.
SSH Key Authentication
Managed agent VPS servers use SSH key authentication only. No password access. Fail2ban blocks brute force attempts.
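A key-only setup of this kind typically looks like the following `/etc/ssh/sshd_config` excerpt — a generic OpenSSH hardening sketch, not necessarily our exact server configuration:

```
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```

With password authentication disabled, brute-force attempts have nothing to guess; Fail2ban then bans the offending IPs for good measure.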
Firewall Configuration
Only necessary ports are open. UFW configured by default. All unnecessary services disabled.
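A deny-by-default baseline of the kind described above can be sketched with UFW. This is an illustrative provisioning fragment (run as root); the exact allowed ports depend on what the server exposes:

```shell
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH      # port 22, key-authenticated
ufw allow 443/tcp      # only if the server exposes a web service
ufw --force enable
```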
Automatic Security Updates
Unattended-upgrades enabled on all managed agent servers. Critical patches applied automatically.
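On Debian/Ubuntu systems this is the stock unattended-upgrades mechanism; a standard `/etc/apt/apt.conf.d/20auto-upgrades` excerpt looks like:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```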
Log Monitoring
All agent actions logged to audit trail. Hooks auto-populate logs. Available for review on request.
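A minimal sketch of a hook-populated audit entry. The file name `audit.log` and the tab-separated layout are assumptions for illustration, not the actual log format:

```shell
# Hypothetical audit-log append, run by a hook after each agent action.
AUDIT_LOG="./audit.log"

log_action() {
  # timestamp <tab> action type <tab> detail
  printf '%s\t%s\t%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" >> "$AUDIT_LOG"
}

log_action "shell_command" "git status"
tail -n 1 "$AUDIT_LOG"
```

Because hooks write the entries, the trail is complete by construction — the agent cannot act without leaving a record.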
Runtime Security Layer
Two independent security systems protect your AI agents at runtime — one verifies commands before execution, the other scans for vulnerabilities across the entire system.
Nexus Gate
Deterministic command firewall — intercepts every shell command before execution. Classifies risk via structural data flow analysis (not AI guessing). Blocks exfiltration, credential leaks, and destructive operations. 195+ known tool patterns, 72 attack signatures.
- Zero external dependencies
- Self-protection (AI cannot modify Nexus files)
- Taint tracking across multi-step attacks
- Credential detection (ghp_, sk-proj-, AKIA patterns)
- SHA256 behavioral fingerprinting
- Hashed audit logs (secrets never written to disk)
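The credential prefixes listed above (`ghp_` for GitHub tokens, `sk-proj-` for OpenAI project keys, `AKIA` for AWS access key IDs) lend themselves to deterministic pattern screening. This is an illustrative sketch of the idea, not the actual Nexus Gate implementation:

```shell
# Hypothetical credential screen: flag commands containing known key shapes.
scan_for_credentials() {
  printf '%s' "$1" \
    | grep -E -q 'ghp_[A-Za-z0-9]{20,}|sk-proj-[A-Za-z0-9_-]{20,}|AKIA[A-Z0-9]{16}' \
    && echo "credential detected" \
    || echo "clean"
}

scan_for_credentials "curl -H 'Authorization: token ghp_abcdefghij0123456789'"
scan_for_credentials "echo hello"
```

In line with the list above, a match blocks the command before it runs, and the audit trail records a hash — the secret itself is never written to disk.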
Ruflo Security Engine
Automated vulnerability scanning, CVE detection, threat modeling, and input validation. Runs as part of the swarm orchestration layer — every agent action passes through security boundaries.
- CVE and vulnerability scanning
- Threat modeling for agent architectures
- Input validation at all system boundaries
- Security-aware agent spawning
- Compliance checks (CWE tracking)
- Scheduled security scan automation
MCP Server Security
MCP servers connect your AI to external tools. Here is how we manage the risks.
| Risk | How We Mitigate |
|---|---|
| Malicious MCP servers | All MCP servers in our ecosystem are reviewed. We publish the dual-gate review for every server we recommend. |
| Data leaking to external APIs | MCP servers connect to services you explicitly authorize. No hidden network calls. You control which tools are active. |
| Prompt injection via tools | Hook-based safety checks run before and after tool use. The Auditor agent flags suspicious patterns. |
| API key exposure | Keys stored in environment variables, never in CLAUDE.md or settings.json. OS-level secret storage recommended (macOS Keychain, etc.). |
What We Will Never Do
- ✕ Train AI models on your data, code, or conversations
- ✕ Share your information with third parties
- ✕ Store your API keys on our servers (you manage your own)
- ✕ Access your managed agent VPS without your permission
- ✕ Lock you into proprietary formats (everything is standard markdown)
Report a Security Issue
Found a vulnerability? Please report it responsibly. We take every report seriously and respond within 24 hours.
security@aistarterpackage.com