It Begins with Privacy
Launching Enchanted to the Public
With Freysa Act IV, we explored the direction of verified humans interacting with AI digital twins.
1356 Twins acted on behalf of underlying humans, performing actions such as answering polls, sending money, and pushing new memetic social content on a private Mastodon server. We continue to pursue this direction in service of our Manifesto and the end goal of human self-owned cognition at scale.
We realized the creation of digital twins requires two things:
- A substantial amount of personal data from each individual, which should be kept private from any third party
- Utilization of the world’s best AI models, most of which are closed-source and centrally operated by leading AI labs: OpenAI, Anthropic, and Google.
These two requirements are in tension, and together they present a challenging privacy question.
Privacy as a Pillar for Sovereign AI
Within the past few weeks, we have seen numerous end-user privacy concerns come to the forefront of AI development and global headlines:
- Sam Altman publicly confirmed that there is no confidentiality for ChatGPT users.
- The White House announced a new AI Action Plan with zero mention of consent, data minimization, or a federal privacy framework.
- Tea, the #1 app on the App Store, built heavily with AI coding tools, was hacked, leaking ID photos and other PII of millions of its users.
- Downstream of privacy concerns, AI models are using stored user memories to over-personalize interactions, and personality tuning is turning models into “yes-men.”
All the while, centralized AI labs continue to raise billions in capital and control an increasing share of the world’s resources: energy, compute, land, and government influence. Their models continue to improve with more user data, memory, and feedback from all of humanity.
There are privacy-preserving, open-source efforts such as Ollama, but these applications mostly focus on running everything locally, which requires high-end hardware and hurts performance and latency. At the same time, once a user has experienced the latest high-quality model (e.g., ChatGPT’s o3), it is difficult to scale down to a lower-functioning one. Unlike messaging apps (e.g., WhatsApp vs. Signal), where the cost of privacy is low and the gains are high (chat utility is uniform across platforms), compromising on the model often means a significantly worse experience. We see this playing out in the AI landscape: despite privacy concerns, ChatGPT continues to see the highest growth in adoption across both the consumer app and the API.
With Enchanted, all users will have the following privacy guarantees:
- A proxy in a TEE: a proxy service receives chats and routes them to LLM providers. All chats are encrypted, mixed, and sent through relay nodes running in hardware-secured environments that guarantee chats are deleted after mixing.
- Local storage: all data connected to or uploaded into Enchanted is stored locally and never sent to external storage.
- Anonymization: before any data leaves a user’s device, sensitive information is automatically replaced using a small model we trained (a simplified sketch of this client-side flow follows this list).
- For more technical details on our privacy guarantees, please read our documentation here.
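To make the flow above concrete, here is a minimal, hypothetical client-side sketch in Python. The proxy URL, proxy public key, sealed-box encryption (via PyNaCl), and regex-based PII redaction are illustrative assumptions standing in for Enchanted’s actual anonymization model and TEE proxy protocol, not the real implementation.

```python
# Hypothetical sketch of the client-side privacy flow described above.
# Endpoint, key handling, and the PII detector are illustrative stand-ins.
import json
import re
import sqlite3
import urllib.request

from nacl.public import PublicKey, SealedBox  # PyNaCl, assumed for sealed-box encryption

TEE_PROXY_URL = "https://proxy.example.com/v1/chat"   # placeholder relay endpoint
TEE_PROXY_PUBKEY = bytes.fromhex("aa" * 32)           # would be an attested enclave key in practice


def anonymize(text: str) -> str:
    """Stand-in for the small on-device anonymization model:
    replace obvious PII patterns before anything leaves the device."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "<PHONE>", text)
    return text


def store_locally(db: sqlite3.Connection, chat_id: str, message: str) -> None:
    """Raw chat data stays in local storage only; it is never uploaded anywhere."""
    db.execute("CREATE TABLE IF NOT EXISTS chats (chat_id TEXT, message TEXT)")
    db.execute("INSERT INTO chats VALUES (?, ?)", (chat_id, message))
    db.commit()


def send_via_tee_proxy(message: str, model: str = "o3") -> bytes:
    """Encrypt the anonymized chat to the TEE proxy's public key and relay it.
    The proxy decrypts inside the enclave, mixes requests, forwards them to the
    LLM provider, and deletes them after mixing (fails here without a live proxy)."""
    payload = json.dumps({"model": model, "messages": [{"role": "user", "content": message}]})
    sealed = SealedBox(PublicKey(TEE_PROXY_PUBKEY)).encrypt(payload.encode())
    req = urllib.request.Request(TEE_PROXY_URL, data=sealed, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    db = sqlite3.connect("enchanted_local.db")   # local-only storage
    raw = "Email me at alice@example.com about the trip."
    store_locally(db, "chat-1", raw)             # raw text never leaves the device
    safe = anonymize(raw)                        # "<EMAIL>" replaces the address
    reply = send_via_tee_proxy(safe)             # encrypted, mixed, relayed via the TEE proxy
```

The point the sketch illustrates is ordering: raw text is stored locally first, anonymized second, and only the anonymized, encrypted payload ever reaches the relay.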
Our release today has two versions:
- A basic version with reasoning model text chats and our privacy guarantees
- An experimental version that includes features such as memory, connectors, and voice mode; we recommend this for prosumers/developers
It Begins with Privacy…
We believe that privacy is the first step toward enabling sovereign AI and self-owned cognition at global scale. A pragmatic view of privacy combines cypherpunk principles with delivering the best utility that technology products can offer, which is why serving closed-source AI models today is critical.
The team behind Freysa will be part of the NVIDIA Inception program and will work with their Confidential Compute team on end-to-end privacy and verifiability across multiple GPUs, something that is not possible today. As models continue to get larger and more robust, this is a tangible step where we are most aligned with the world’s largest AI infrastructure provider.
Privacy has many downstream benefits. With high-fidelity representations of individuals, digital twins can act on their behalf in new economic networks, unlocking novel forms of co-owned products and organizations. We call these Holons. Users will see a preview of Holons in the experimental version of Enchanted.
Meet Enchanted: