Hello IC community! We are excited to finally pull back the curtain on GPT Protocol, an infrastructure project we’ve been building at the intersection of the Internet Computer and Confidential Computing (AMD SEV-SNP).
Our team has spent the last year addressing the “Black Box” problem of centralized AI and the fact that even in “Web3 DeAI,” your prompts eventually end up as plaintext on someone else’s server. We chose to build on the Internet Computer because it is the only ecosystem that provides the orchestration power, reverse gas model, and stable memory storage needed to manage a truly sovereign AI stack.
We’ve just hit our Public Beta milestone and would love to share how we’re using the IC as a verifiable controller for hardware-enforced enclaves.
1. TL;DR
- What is it? A decentralized infrastructure layer for “Sovereign AI” that ensures user prompts and documents are never visible to node operators or protocol developers.
- Key Innovation: Bridging IC smart contracts with hardware-enforced Trusted Execution Environments (AMD SEV-SNP) to create an end-to-end encrypted AI pipeline.
- Current Status: Public Beta (Open Source).
- Links: App (gpt.one) | GitHub Repo
2. The Problem: The AI “Black Box”
Current AI interactions require total surrender of data sovereignty. Whether you use a centralized AI chatbot or a standard “Web3 AI” project, your prompts are eventually stored on a provider’s server. This exposes proprietary code, sensitive documents, and personal context to logging, model training, and potential breaches.
3. The Solution: GPT Protocol
GPT Protocol facilitates Verifiable Confidential Inference. We use a “Double-Lock” encryption strategy where user data is encrypted in the browser and only decrypted inside a hardware-attested enclave.
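The "Double-Lock" flow can be sketched in a few lines of Rust. This is a toy illustration only: the XOR keystream stands in for the real AES-256-GCM encryption, and names like `Enclave` and `seal_in_browser` are invented for the example, not the protocol's actual API.

```rust
/// Ciphertext as it travels over the network: an opaque blob.
struct OpaqueBlob(Vec<u8>);

/// Placeholder symmetric cipher (XOR with a repeating key).
/// The real pipeline uses AES-256-GCM; this only shows the flow.
fn toy_cipher(data: &[u8], key: &[u8]) -> Vec<u8> {
    data.iter().zip(key.iter().cycle()).map(|(b, k)| b ^ k).collect()
}

/// Lock 1: the browser encrypts before anything leaves the device.
fn seal_in_browser(prompt: &str, session_key: &[u8]) -> OpaqueBlob {
    OpaqueBlob(toy_cipher(prompt.as_bytes(), session_key))
}

/// Lock 2: only an attested enclave holding the session key can open it.
struct Enclave {
    session_key: Vec<u8>,
}

impl Enclave {
    fn open(&self, blob: &OpaqueBlob) -> String {
        String::from_utf8(toy_cipher(&blob.0, &self.session_key)).unwrap()
    }
}

fn main() {
    let key = b"demo-session-key".to_vec();
    let blob = seal_in_browser("my private prompt", &key);
    // On the wire and on any relay, only ciphertext is visible.
    assert_ne!(blob.0, b"my private prompt".to_vec());
    let enclave = Enclave { session_key: key };
    assert_eq!(enclave.open(&blob), "my private prompt");
    println!("round-trip ok");
}
```

The point of the type split is that nothing outside `Enclave` ever holds plaintext: the network and node host only ever see `OpaqueBlob`.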
By the time your data hits the network, it is an opaque blob. By the time it hits the compute node for processing by an inference API, it is isolated in CPU-encrypted memory that even the host OS cannot “peek” into. You might ask: “If you use third-party APIs, isn’t the privacy lost?”
We solve this through a Privacy-Preserving Proxy Layer inside the enclave:
- Zero-Retention Providers Only: We strictly route traffic to enterprise-grade API endpoints that offer legally binding Zero-Data-Retention (ZDR) and “No-Training” policies.
- Identity Stripping (Metadata Privacy): The request sent to the API provider is stripped of all user-identifiable information (IP addresses, User IDs, browser fingerprints). To inference providers, every request looks like it comes from the same “GPT Protocol Node,” not a specific individual.
- API Key & Provider Mixing: The protocol rotates through a massive pool of API keys across multiple Node Providers. By “shuffling” the data stream across different accounts and providers, we prevent any single third party from building a longitudinal profile or “shadow persona” of a user based on their prompt history.
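Identity stripping and key mixing can be sketched together. The structs and the simple round-robin pool below are illustrative assumptions (the real protocol draws from a much larger key pool and may randomize selection); what matters is that only the prompt survives the boundary.

```rust
use std::collections::HashMap;

/// Illustrative inbound request carrying user-identifying metadata.
struct InboundRequest {
    user_id: String,
    client_ip: String,
    headers: HashMap<String, String>,
    prompt: String,
}

/// What the upstream inference provider actually sees.
struct OutboundRequest {
    api_key: String,
    prompt: String,
}

/// Pool of (provider, key) pairs, rotated round-robin for this sketch.
struct KeyPool {
    keys: Vec<(String, String)>,
    cursor: usize,
}

impl KeyPool {
    fn next(&mut self) -> (String, String) {
        let pair = self.keys[self.cursor % self.keys.len()].clone();
        self.cursor += 1;
        pair
    }
}

/// Identity stripping: user_id, client_ip, and fingerprinting headers
/// are dropped here, inside the enclave, before the request goes out.
fn strip_and_route(req: InboundRequest, pool: &mut KeyPool) -> (String, OutboundRequest) {
    let (provider, api_key) = pool.next();
    (provider, OutboundRequest { api_key, prompt: req.prompt })
}

fn main() {
    let mut pool = KeyPool {
        keys: vec![
            ("provider-a".to_string(), "key-1".to_string()),
            ("provider-b".to_string(), "key-2".to_string()),
        ],
        cursor: 0,
    };
    let req = InboundRequest {
        user_id: "alice".to_string(),
        client_ip: "203.0.113.7".to_string(),
        headers: HashMap::new(),
        prompt: "summarize this contract".to_string(),
    };
    let (provider, out) = strip_and_route(req, &mut pool);
    assert_eq!(provider, "provider-a");
    assert_eq!(out.prompt, "summarize this contract");
    println!("routed via {} with key {}", provider, out.api_key);
}
```

Because `OutboundRequest` simply has no fields for identity, stripping is enforced by the type system rather than by convention.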
4. Why we built on the IC
The Internet Computer is the only platform capable of acting as the Verifiable Orchestrator for high-performance confidential compute.
- Trustless Registry: We use the IC as a global registry for SEV-SNP Attestation Reports. The IC verifies the hardware’s cryptographic “quote” to ensure the node is genuine AMD silicon running our exact open-source binary.
- Single-Tenant Canisters: Every user gets a dedicated `gpt_user` canister. This provides data isolation at the state level - something not possible on legacy chains.
- Stable Memory Vector Store: Using `ic-stable-structures`, we store multi-gigabyte document chunks and chats on-chain. This allows for Confidential RAG, where the IC manages the encrypted index and only relevant chunks are sent to the enclave.
- Reverse Gas Model: Users interact with the AI without needing to manage cycles or gas for every message; costs are covered by the canister's cycle balance.
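The registry idea from the first bullet reduces, at its core, to a whitelist lookup. The sketch below shows only that step; real attestation verification also checks AMD's VCEK signature chain and TCB versions, and the type names here are illustrative, not the canister's actual interface.

```rust
use std::collections::HashSet;

/// Simplified attestation report: just the launch measurement.
struct AttestationReport {
    measurement_hex: String,
}

/// On-chain registry of whitelisted VM image measurements,
/// mirroring (in spirit) what gpt_index stores.
struct Registry {
    allowed_measurements: HashSet<String>,
}

impl Registry {
    /// A node is genuine only if its measured image is whitelisted.
    fn verify(&self, report: &AttestationReport) -> bool {
        self.allowed_measurements.contains(&report.measurement_hex)
    }
}

fn main() {
    let registry = Registry {
        allowed_measurements: ["aabb01".to_string()].into_iter().collect(),
    };
    let good = AttestationReport { measurement_hex: "aabb01".to_string() };
    let bad = AttestationReport { measurement_hex: "deadbeef".to_string() };
    assert!(registry.verify(&good));
    assert!(!registry.verify(&bad));
    println!("registry checks ok");
}
```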
5. Technical Architecture
The protocol is a Rust-heavy monorepo split into three distinct planes:
A. The Control Plane (gpt_index)
The “Brain” of the network. It manages:
- Node Governance: Whitelists specific `measurement_hex` hashes of VM images containing node binaries.
- TCB Policies: Enforces minimum security versions for AMD firmware to protect against hardware vulnerabilities.
- User Provisioning: Automatically deploys new `gpt_user` instances upon registration - currently limited to one hour of use per user.
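A TCB policy check is essentially a component-wise version comparison. The field names below follow the SEV-SNP TCB_VERSION components (bootloader, TEE, SNP firmware, microcode), but the struct, the function, and the version numbers are a simplified sketch, not the canister's real policy code.

```rust
/// TCB version components reported by SEV-SNP firmware (simplified).
struct TcbVersion {
    bootloader: u8,
    tee: u8,
    snp: u8,
    microcode: u8,
}

/// A node passes policy only if every component meets the minimum;
/// a single stale component (e.g. old microcode) fails the check.
fn meets_policy(reported: &TcbVersion, minimum: &TcbVersion) -> bool {
    reported.bootloader >= minimum.bootloader
        && reported.tee >= minimum.tee
        && reported.snp >= minimum.snp
        && reported.microcode >= minimum.microcode
}

fn main() {
    // Illustrative numbers only.
    let minimum = TcbVersion { bootloader: 3, tee: 0, snp: 8, microcode: 115 };
    let patched = TcbVersion { bootloader: 3, tee: 0, snp: 9, microcode: 200 };
    let stale = TcbVersion { bootloader: 2, tee: 0, snp: 9, microcode: 200 };
    assert!(meets_policy(&patched, &minimum));
    assert!(!meets_policy(&stale, &minimum));
    println!("policy checks ok");
}
```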
B. The Data Plane (gpt_user)
Your personal encrypted vault.
- Chat Storage: Chat history and files are stored as AES-256-GCM encrypted blobs.
- Virtual File System: A hierarchical folder structure implemented on-chain using stable memory.
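A common way to put a hierarchical file system into an ordered key-value store is to flatten paths into map keys. The sketch below uses a plain `BTreeMap` where the real canister would use a `StableBTreeMap` from `ic-stable-structures`; the `Vfs` type and its methods are illustrative assumptions.

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Folder hierarchy flattened into an ordered map: full path -> blob.
/// (In the real data plane the blob would be an encrypted chunk and
/// the map would live in stable memory.)
struct Vfs {
    entries: BTreeMap<String, Vec<u8>>,
}

impl Vfs {
    fn write(&mut self, path: &str, blob: Vec<u8>) {
        self.entries.insert(path.to_string(), blob);
    }

    /// List the direct children of a folder by scanning the key range.
    fn list(&self, folder: &str) -> Vec<String> {
        let prefix = format!("{}/", folder.trim_end_matches('/'));
        self.entries
            .keys()
            .filter(|k| k.starts_with(&prefix))
            .map(|k| k[prefix.len()..].split('/').next().unwrap().to_string())
            .collect::<BTreeSet<_>>() // dedupe subfolders
            .into_iter()
            .collect()
    }
}

fn main() {
    let mut vfs = Vfs { entries: BTreeMap::new() };
    vfs.write("/docs/a.txt", vec![1, 2, 3]);
    vfs.write("/docs/sub/b.txt", vec![4]);
    vfs.write("/notes/c.txt", vec![5]);
    assert_eq!(vfs.list("/docs"), vec!["a.txt", "sub"]);
    println!("vfs ok");
}
```

The ordered map matters: because paths sort lexicographically, a folder listing is one contiguous key-range scan rather than a full table walk.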
C. The Compute Plane (gpt_node + gpt_host)
The confidential worker.
- AMD SEV-SNP Enclave: A minimalist Alpine Linux system running in RAM.
- Identity Extraction: During boot, the node extracts a 32-byte seed from the hardware attestation report and derives its identity from it, which it then uses to retrieve its node specification from the index canister.
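The seed-to-identity step might look roughly like the sketch below. The offset, derivation, and function name are placeholders; the real report layout is defined by AMD's SEV-SNP specification and the derivation by the node code.

```rust
/// Illustrative only: take a 32-byte seed from (here: the start of)
/// the attestation report and hex-encode it as a node identifier.
fn derive_node_id(report: &[u8]) -> Option<String> {
    let seed = report.get(..32)?; // fail on truncated reports
    Some(seed.iter().map(|b| format!("{:02x}", b)).collect())
}

fn main() {
    let fake_report = [0xabu8; 64];
    let id = derive_node_id(&fake_report).unwrap();
    assert_eq!(id.len(), 64); // 32 bytes -> 64 hex characters
    assert!(derive_node_id(&[0u8; 8]).is_none());
    println!("node id: {}", id);
}
```

Because the seed comes from the hardware-signed report, the resulting identity is bound to genuine, attested silicon rather than to anything the host OS could forge.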
6. Visuals
7. Credibility & Open Source
We believe that for privacy to be real, it must be auditable.
- Reproducible Builds: Our OS images (`gpt_vm`) are built via Docker with the necessary compile-time tweaks to ensure bitwise reproducibility. You can hash the binary yourself and compare it to the registry on the IC.
- Open Source: The entire stack, from the kernel assembly to the React frontend, is available under the MIT license.
8. Roadmap for 2026
- Live Billing Integration: On-chain automated payments for compute resources.
- Provider Incentives: Full documentation and deployment tooling on how to start earning rewards as a verified confidential node operator.
- OpenAI-Compatible Bridge: A drop-in proxy layer allowing any existing application built with the OpenAI SDK to migrate to confidential TEE inference simply by updating its `BASE_URL` environment variable.
- GPU Support: Launch of GPU-backed enclaves (H100/H200) for private large-scale model inference.
- Threshold Privacy: Integration of vetKeys for threshold-based decryption logic.
- Autonomous Agents: Allowing AI agents to hold their own ICP/ckBTC in a secure enclave.
9. Links & Resources
- Main Site: https://gpt.one
- Source Code: onecompany/gpt
- Twitter/X: @gpticp
10. Beta Notes & Development Status
Please keep in mind that the project is currently under active development, and we expect to ship significant refinements and features in the very near future. To facilitate rapid iteration and testing during this Public Beta phase, dedicated user canisters are currently limited to a one-hour lifespan and are automatically de-provisioned (erased) afterward.
Be sure to follow this forum thread! We will be posting regular updates here, including information on our full release schedule, latest features, and technical deep-dives.
We are looking for technical feedback from the IC community! Specifically, we’d love to hear your thoughts on our attestation verification logic. Happy to answer any questions below!



