# Managed Keys
Managed keys route your API requests through Gonka Broker's proxy layer, which translates them for the Gonka decentralized network and returns the results in OpenAI-compatible format.
## How it works

Your app → Gonka Broker Proxy → Gonka GPU Network → Response

1. Your application sends a request using the OpenAI API format
2. Gonka Broker receives it and forwards it to the decentralized GPU network
3. The inference runs on available GPUs
4. The response is converted to OpenAI format and returned to your app
From your application’s perspective, this is identical to calling the OpenAI API.
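Concretely, step 1 is just a standard OpenAI chat-completions request body. The sketch below builds one; the model name `"qwen2.5-7b-instruct"` is a placeholder, not a model the proxy is known to serve.

```python
import json

# A minimal sketch of the request body your application sends in step 1.
# The model name is a placeholder; use whatever model your key is set up for.
payload = {
    "model": "qwen2.5-7b-instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

# Standard OpenAI Chat Completions JSON; the proxy accepts it as-is.
body = json.dumps(payload)
```

Nothing Gonka-specific appears in the body itself; the routing happens entirely through the base URL and your key.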
## What's compatible

Managed keys support the OpenAI Chat Completions API, including:

- Chat completions (`/v1/chat/completions`)
- Streaming responses (`stream: true`)
- System, user, and assistant messages
- `temperature`, `top_p`, `max_tokens`, and other standard parameters
See API Compatibility for the full list of supported endpoints and parameters.
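With `stream: true`, the response arrives as OpenAI-style chunks whose `choices[0].delta.content` fragments concatenate into the full message. This sketch shows the reassembly; the chunks are illustrative, not a captured proxy response.

```python
# Illustrative chunks in the shape of an OpenAI streaming response.
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]

# Concatenate the content deltas; chunks without content contribute nothing.
text = "".join(c["choices"][0]["delta"].get("content", "") for c in chunks)
# text == "Hello, world"
```

Any OpenAI client library that handles streaming will do this assembly for you.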
## Base URL

All managed key requests go to:

`https://proxy.gonkabroker.com/v1`

Set this as the `base_url` (Python) or `baseURL` (JavaScript) in your OpenAI client configuration.
## When to use managed keys

| Use case | Managed key? |
|---|---|
| Replacing OpenAI in existing code | Yes |
| Using LangChain, LlamaIndex, or similar frameworks | Yes |
| Using Cursor, Continue, or other AI-powered dev tools | Yes |
| Building a product with the Gonka SDK | No — use a direct key |
## Security

- Your key secret starts with `gk-prx-` and should be treated like any API credential
- Gonka Broker does not log or store your prompts and completions
- If a key is compromised, rotate it immediately; rotation invalidates the old secret and issues a new one
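One way to follow the "treat it like any API credential" advice is to read the secret from the environment rather than hardcoding it. The variable name `GONKA_BROKER_API_KEY` below is an arbitrary choice, not something the proxy requires.

```python
import os

# Demo only: seed the variable so the sketch is self-contained.
# In real code the key is set outside the program (shell, secrets manager).
os.environ.setdefault("GONKA_BROKER_API_KEY", "gk-prx-example")

api_key = os.environ["GONKA_BROKER_API_KEY"]

# Cheap sanity check using the documented key prefix.
if not api_key.startswith("gk-prx-"):
    raise ValueError("GONKA_BROKER_API_KEY does not look like a managed key secret")
```

Keeping the key out of source control also makes rotation painless: update the environment, and no code changes.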