Plain English. No legal boilerplate.
Cognit is built for people who write about deals, strategy, and confidential analysis. We understand what that means. Your notes may contain material non-public information, undisclosed deal flow, or proprietary research.
Every user's data is fully isolated. There is no cross-user data mixing, no shared knowledge base, and no way for one user's content to influence what another user sees.
We do not read your notes. The only system that processes your content is the AI pipeline you explicitly use.
We store three things: your notes; your account information (name, email, and profile photo from GitHub or Google); and the AI-generated outputs Cognit produces from your writing: entity tags, sparks, clusters, and saved patterns.
Nothing else. We do not track your browsing, sell advertising, or build profiles beyond what the product needs.
We do not train AI models on your notes. We do not sell your data. We do not use your writing for advertising. We do not read your content except to run the AI features you explicitly trigger.
When you write a note, Cognit sends it to Anthropic's API for concept detection and entity extraction. Anthropic's API terms prohibit using API data for model training without explicit opt-in. We do not opt in.
Your content is sent only when the AI pipeline runs, not continuously or in real time while you type.
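For readers who want to see the shape of this flow, here is a minimal, illustrative sketch of the kind of request a pipeline like this might send to Anthropic's Messages API. The model name, prompt wording, and helper function are assumptions for illustration, not Cognit's actual implementation.

```python
# Illustrative only: the rough shape of a Messages API request body.
# Model name, prompt, and function name are assumptions, not Cognit's code.
import json


def build_extraction_request(note_text: str) -> dict:
    """Assemble a request body in the shape of Anthropic's Messages API."""
    return {
        "model": "claude-3-5-haiku-latest",  # placeholder model choice
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": "Extract the entities and concepts from this note:\n"
                + note_text,
            }
        ],
    }


request = build_extraction_request("Met with Acme Corp about the Series B.")
print(json.dumps(request, indent=2)[:40])
```

The point of the sketch: the note text travels inside a single request when the pipeline runs, which is why processing happens per-run rather than continuously.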
Your data lives in Neon Postgres, hosted on AWS in us-east-1 (N. Virginia, USA). All data is encrypted in transit via TLS and encrypted at rest via AES-256 by the database provider.
We do not currently offer end-to-end encryption. This means Cognit's servers can technically access your data to run AI processing. We are transparent about this limitation. If your compliance requirements demand E2E encryption, Cognit is not yet the right fit.
You own your data. You can delete your entire account, including all notes, AI outputs, patterns, and account information. Deletion takes effect immediately and cannot be undone.
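One common way "delete everything, immediately" is implemented in a relational database is with foreign keys that cascade, so removing the account row removes every dependent row in the same statement. The sketch below uses SQLite for portability; the table and column names are assumptions, not Cognit's actual schema.

```python
# Illustrative only: cascading deletes as one way to implement full
# account deletion. Schema names are hypothetical, not Cognit's.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute(
    """
    CREATE TABLE notes (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
        body TEXT
    )
    """
)
conn.execute("INSERT INTO users (id, email) VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO notes (user_id, body) VALUES (1, 'deal memo')")

# Deleting the account row removes every dependent note with it.
conn.execute("DELETE FROM users WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0]
print(remaining)  # → 0
```

Postgres (which the document says Cognit uses) supports the same `ON DELETE CASCADE` clause without the pragma step.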
Questions? hello@usecognit.com