Keeping your database environment and source code secure is critically important to us. This page outlines how we approach security for Tembo. Please submit potential vulnerabilities and security-related questions to security@tembo.io. Please note that we are still growing our product and maturing our security posture.
Tembo is SOC 2 Type 1 certified. Please email hi@tembo.io to request a copy of the report.
We commit to at least annual penetration testing by reputable third parties. Please email hi@tembo.io to request an executive summary of the latest report.
We depend on the following subprocessors, organized from most critical to least. Note that data is sent to our servers to power all of Tembo's AI features.
- AWS (sees database and code data): Our front-end and API application infrastructure are hosted on AWS. All servers are in the US.
- Tembo Cloud (stores metadata about system state): Our API stores its data in Postgres databases supplied by Tembo Cloud, a powerful Postgres service that the Tembo team has separately created and operates.
- Anthropic (sees database and code data): We rely on many of Anthropic's models to generate AI responses. We have a zero data retention agreement with Anthropic.
- Langfuse (sees database and code data): We use Langfuse for traces, evals, prompt management, and metrics to debug and improve our LLM processes.
- Sentry (sees no database or code data): We use Sentry for exception tracking.
- Stripe (sees no database or code data): We use Stripe to handle billing. Stripe will store your personal data (name, credit card, address).
- Clerk (sees no database or code data): We use Clerk to handle auth. Clerk may store some personal data (name, email address).
None of our infrastructure is in China. We do not directly use any Chinese company as a subprocessor, and to our knowledge none of our subprocessors do either.
We assign infrastructure access to team members on a least-privilege basis. We enforce multi-factor authentication for AWS. We restrict access to resources using both network-level controls and secrets.
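As a rough illustration of this kind of control (a minimal sketch, not our actual configuration; the policy name and action list are hypothetical), MFA enforcement in AWS is commonly implemented with an IAM policy that denies requests from sessions that did not authenticate with MFA:

```python
# Illustrative sketch only: a common IAM pattern for enforcing MFA, shown with boto3.
# The policy name and the exempted actions are hypothetical, not Tembo's configuration.
import json

import boto3

DENY_WITHOUT_MFA = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWhenNoMFA",
            "Effect": "Deny",
            # Leave a few self-service actions available so users can still enroll an MFA device.
            "NotAction": [
                "iam:ChangePassword",
                "iam:EnableMFADevice",
                "iam:CreateVirtualMFADevice",
                "sts:GetSessionToken",
            ],
            "Resource": "*",
            # Deny any request whose session was not authenticated with MFA.
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="deny-without-mfa",  # hypothetical name
    PolicyDocument=json.dumps(DENY_WITHOUT_MFA),
)
```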
To provide its features, Tembo makes AI requests to our servers. This happens for many different reasons: for example, we send AI requests when you ask questions in chat, and we also send them in the background to build up context or look for issues to show you.
An AI request generally includes context such as your recently viewed files, your conversation history, and relevant pieces of code selected using language server information. This code data is sent to our infrastructure on AWS, and then to the appropriate language model inference provider. Note that requests always pass through our infrastructure on AWS even if you have configured your own API key in the settings.
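For illustration only, a request of this shape might bundle context like the following. The field and type names here are hypothetical and are not Tembo's actual API schema:

```python
# Hypothetical sketch of the context bundled into an AI request.
# Names are illustrative only; this is not Tembo's actual API schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ChatMessage:
    role: str  # "user" or "assistant"
    content: str


@dataclass
class AIRequest:
    # File paths the user recently viewed in the editor.
    recently_viewed_files: List[str] = field(default_factory=list)
    # Prior messages in the current conversation.
    conversation_history: List[ChatMessage] = field(default_factory=list)
    # Relevant code snippets selected using language server information.
    code_snippets: List[str] = field(default_factory=list)
    # This payload goes to Tembo's AWS infrastructure first, then to the model
    # provider (e.g. Anthropic), even when a user-supplied API key is configured.
```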
We currently do not have the ability to route requests directly from Tembo to your enterprise deployment of OpenAI/Azure/Anthropic. We may be able to provide a self-hosted server deployment option.
You own all the code generated by Tembo.
You can delete your account at any time in the Settings dashboard (click "Advanced" and then "Delete Account"). This deletes all data associated with your account. We guarantee complete removal of your data within 30 days: we delete the data immediately, but some of our databases and cloud storage retain backups for no more than 30 days.
It's worth noting that if any of your data was used in model training (which would only happen if you were not in privacy mode at the time), our existing trained models will not be immediately retrained. However, any models trained in the future will not use your data, since it will have been deleted.
If you believe you have found a vulnerability in Tembo, please email security@tembo.io.
We commit to acknowledging vulnerability reports within 5 business days and to addressing them as soon as we are able. We will publish the results as security advisories on our GitHub security page. Critical incidents will be communicated both on the GitHub security page and via email to all users.