Stop Sensitive Data from Leaking into AI Tools
Every day, employees paste API keys, passwords, SSNs, and client PII into ChatGPT, Copilot, and Gemini. Locki's Smart Suggestions detects sensitive patterns in real time — and alerts your team before the data is submitted.
AES-256-GCM
Bank-grade encryption
Zero-Knowledge
We never see your data
Local Processing
Encryption in the browser
GDPR Ready
Compliant by design
Open Audit
Transparent cryptography
Free Trial
14 days, no credit card
AI tools see everything your employees type
AI assistants are now part of everyday work. But every prompt is sent to a third-party server. When employees paste sensitive data without thinking, it becomes part of someone else's infrastructure.
Every message sent to ChatGPT, Copilot, or Gemini is transmitted to external model servers. You have no control over how long it is retained, who can access it, or whether it influences future model outputs.
People paste API keys into AI debugging sessions, include client names in AI-drafted emails, or ask AI to summarise documents containing personal data — often without realising the risk.
Unlike email or file sharing, AI tool usage is invisible to IT. There is no audit trail of what was pasted into ChatGPT last Tuesday, no DLP policy that stops it, and no alert when it happens.
Sending a single Social Security Number, private key, or credit card number to an external AI service may constitute a data breach under GDPR, HIPAA, or PCI-DSS — regardless of intent.
Smart Suggestions: real-time DLP, right in the browser
Locki monitors every text field as employees type or paste — including AI chat interfaces. Sensitive patterns are detected before the user clicks Submit, not after.
Eight patterns ship ready to use on day one: credit cards, IBANs, AWS access keys, PEM private keys, Social Security Numbers, generic API secrets, email addresses, and phone numbers.
Admins assign rules to specific user groups — DevOps, HR, Legal — and scope them to specific domains like chatgpt.com or github.com. The right rules apply to the right people, in the right context.
Detection runs as a lightweight regex engine inside the browser extension. No content is ever sent to Locki servers for analysis. The only server contact is to sync the rule list.
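Locki has not published its rule format, but the kind of in-browser regex matching described above can be sketched in a few lines. The pattern names and the `scan` helper below are illustrative assumptions, not Locki's actual rule set or API:

```typescript
// Illustrative detection patterns. Names and regexes are examples,
// not Locki's shipped rule set.
const patterns: Record<string, RegExp> = {
  awsAccessKey: /\bAKIA[0-9A-Z]{16}\b/,                 // AWS access key ID
  usSsn: /\b\d{3}-\d{2}-\d{4}\b/,                       // US Social Security Number
  pemPrivateKey: /-----BEGIN [A-Z ]*PRIVATE KEY-----/,  // PEM private key header
  creditCard: /\b(?:\d[ -]?){13,16}\b/,                 // loose card-number shape
};

// Scan a text field's value locally; nothing leaves the browser.
function scan(text: string): string[] {
  return Object.entries(patterns)
    .filter(([, re]) => re.test(text))
    .map(([name]) => name);
}
```

For example, `scan("key=AKIAIOSFODNN7EXAMPLE")` flags the AWS access key pattern, while ordinary prose returns no matches. A production matcher would need tighter patterns (e.g. a Luhn check for card numbers) to keep false positives low.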
Up and running in four steps
Subscribe to Teams Pro
Smart Suggestions is part of the Locki Teams Pro plan. Subscribe and deploy the browser extension to your team via managed installation or share the download link.
Configure rules in the dashboard
Eight detection patterns are active by default. From the admin dashboard, create custom rules, assign them to user groups (DevOps, HR, Legal), and target specific websites such as chatgpt.com or copilot.microsoft.com.
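The group and domain scoping described in this step could be modelled as sketched below. The `Rule` shape and `rulesFor` helper are hypothetical, since Locki's real schema is not public:

```typescript
// Hypothetical rule shape for illustration; Locki's real schema isn't public.
interface Rule {
  name: string;
  pattern: RegExp;
  groups: string[];   // user groups the rule applies to, e.g. "DevOps"
  domains: string[];  // sites the rule is scoped to, e.g. "chatgpt.com"
}

const rules: Rule[] = [
  { name: "awsAccessKey", pattern: /\bAKIA[0-9A-Z]{16}\b/,
    groups: ["DevOps"], domains: ["chatgpt.com", "github.com"] },
  { name: "usSsn", pattern: /\b\d{3}-\d{2}-\d{4}\b/,
    groups: ["HR", "Legal"], domains: ["chatgpt.com"] },
];

// Select the rules that apply to this user on this site.
function rulesFor(group: string, domain: string): Rule[] {
  return rules.filter(
    (r) => r.groups.includes(group) && r.domains.includes(domain)
  );
}
```

Under this model, a DevOps user on chatgpt.com is checked only against the AWS key rule, and an HR user on github.com sees no alerts at all, which is what "the right rules apply to the right people, in the right context" implies.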
Locki monitors in the background
The extension runs silently across all browser tabs. When an employee types or pastes a matching pattern into any text field — including AI chat interfaces — a real-time in-browser alert appears.
Employee decides what to do next
The alert prompts the user to reconsider. They can encrypt the sensitive content with Locki before sending, redact it, or consciously dismiss the warning — all without leaving the page.
8 patterns, active on day one
Smart Suggestions is a Locki Teams Pro feature
Admins can create unlimited custom patterns, assign them to user groups, and scope them to specific websites — all from the dashboard.
Who uses Smart Suggestions
Employees drafting customer-facing messages often paste client names, email addresses, or account numbers into AI writing assistants. Smart Suggestions catches PII before it leaves the browser.
Developers frequently paste code snippets containing API keys, database credentials, or tokens when asking AI for debugging help. Locki detects these secrets and prompts the dev to sanitise first.
Support agents using AI tools to summarise tickets may paste customer records with personal data. Locki alerts the agent before the data reaches the AI provider.
Compliance teams can enforce data handling policies directly in the browser — without VPNs, proxies, or application-layer controls. Detections are logged in the Locki audit trail.
Prevent .env file contents, SSH private keys, and cloud provider tokens from being pasted into AI coding assistants. Scoped rules mean only DevOps group members see these alerts.
Lawyers drafting with AI and HR teams summarising employee records are high-risk users. Smart Suggestions detects confidential data patterns and stops leaks before they happen.
Detection happens in your browser — nothing leaves your device
Locki's DLP engine runs entirely inside the browser extension as a lightweight regex matcher. No content is ever sent to Locki servers for analysis — the only network call is to sync the rule list (patterns, not text). Smart Suggestions is part of Locki Teams Pro and complements Locki's AES-256-GCM encryption: when sensitive data is detected, users can encrypt it on the spot before deciding whether to send it.
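Locki's encryption internals are not published. As an illustration of what encrypt-on-detect with AES-256-GCM could look like, here is a minimal sketch; it uses Node's `crypto` module for brevity, whereas a browser extension would use the equivalent Web Crypto `SubtleCrypto` calls:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Encrypt a detected snippet with AES-256-GCM before it leaves the page.
// (Illustrative sketch, not Locki's actual implementation.)
function encryptSnippet(plaintext: string, key: Buffer) {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptSnippet(
  box: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer
): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // GCM authenticates as well as encrypts
  return Buffer.concat([
    decipher.update(box.ciphertext),
    decipher.final(),
  ]).toString("utf8");
}
```

The authentication tag means tampered ciphertext fails to decrypt rather than yielding garbage, which is why GCM is the usual choice for this kind of client-side encryption.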