AI Not Responding

Troubleshoot issues with AI features.

QuickContract uses a Bring Your Own Key (BYOK) model — you connect your own AI provider and API key. When AI features stop working, the issue is almost always related to your key, your provider's availability, or your network connection. The sections below cover the most common causes.

API key invalid

If QuickContract shows an "invalid API key" or "authentication failed" error:

  • Check for typos: Go to Settings > AI Provider and re-enter your API key. Keys are long strings and it is easy to miss a character when pasting. Make sure you are copying the full key with no leading or trailing spaces.
  • Key may be expired or revoked: Log in to your provider's dashboard (OpenRouter, Anthropic, OpenAI, or Google AI Studio) and verify the key is still active. Some providers expire keys after a period of inactivity or if billing lapses.
  • Wrong provider selected: Confirm that the provider dropdown in Settings matches the provider that issued the key. An OpenAI key will not work with the Anthropic provider, and vice versa.
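A frequent cause of "invalid key" errors is invisible characters picked up during copy/paste. The helper below is a hypothetical sketch (not part of QuickContract) showing the kind of cleanup worth doing before saving a key — stripping surrounding whitespace and zero-width characters:

```python
# Hypothetical helper: sanity-check a pasted API key before saving it.
# Catches the most common paste errors: surrounding whitespace, stray
# newlines, and zero-width characters copied from rich-text sources.
def clean_api_key(raw: str) -> str:
    key = raw.strip()  # drop leading/trailing spaces and newlines
    # Remove invisible characters some editors insert on copy/paste
    for ch in ("\u200b", "\u200c", "\ufeff"):
        key = key.replace(ch, "")
    if not key:
        raise ValueError("API key is empty after cleaning")
    return key

print(clean_api_key("  sk-example-123\n"))  # → sk-example-123
```

If a key fails even after cleanup like this, verify it directly in your provider's dashboard rather than guessing.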

No response from provider

If the request hangs or times out with no error message:

  • Check your internet connection. All cloud providers (OpenRouter, Anthropic, OpenAI, Gemini) require an active internet connection. Open a browser and verify you can reach other websites.
  • Provider may be down. Check your provider's status page for outage reports — OpenRouter, Anthropic, and OpenAI each publish real-time service status.
  • Firewall or VPN interference. If you are on a corporate network, a firewall or VPN may block API requests. Try disabling the VPN temporarily or switching to a different network.
Tip

If you use Ollama as your provider, no internet connection is needed. Ollama runs models locally on your Mac. If Ollama is not responding, make sure the Ollama application is running — QuickContract connects to it at http://localhost:11434.
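To confirm something is actually listening on Ollama's port, you can test the TCP connection yourself. This is a generic connectivity check, not a QuickContract feature — it only tells you whether a local server answers on port 11434:

```python
import socket

# Generic check: is anything listening on the port Ollama uses (11434)?
# Returns True if a TCP connection succeeds within the timeout.
def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if is_port_open("localhost", 11434):
    print("Something is listening — Ollama appears to be running")
else:
    print("Nothing on localhost:11434 — start the Ollama application")
```

If the port is open but requests still fail, the issue is more likely the selected model (it may not be pulled yet) than the connection.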

Rate limit errors

Rate limit errors (HTTP 429) mean you have sent too many requests in a short period. This is enforced by the AI provider, not by QuickContract.

  • Wait and retry. Most rate limits reset within a minute. QuickContract will automatically retry once after a short delay. If you see repeated rate limit errors, wait 60 seconds before trying again.
  • Upgrade your provider tier. Free-tier API keys often have strict rate limits. Upgrading to a paid tier on your provider's dashboard will significantly increase your allowed request volume.
  • Switch providers temporarily. If one provider is rate-limiting you, you can switch to another provider in Settings > AI Provider while the limit resets.
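The wait-and-retry behavior described above is commonly implemented as exponential backoff. The sketch below illustrates the pattern with a stand-in exception and a stub request function — the names are placeholders, not QuickContract internals or a provider SDK:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the HTTP 429 error a provider SDK might raise."""

def call_with_retry(fn, retries=3, base_delay=1.0):
    # Exponential backoff: wait base_delay, then 2x, then 4x between attempts.
    for attempt in range(retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a stub that is rate-limited twice, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError
    return "ok"

print(call_with_retry(flaky_request, base_delay=0.01))  # prints "ok"
```

Backoff with increasing delays is kinder to the provider than hammering the endpoint, and it is why waiting roughly a minute usually clears repeated 429s.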

Model not available

If you see a "model not found" error, the model you selected may have been deprecated or renamed by the provider.

  • Refresh the model list. Go to Settings > AI Provider and click the refresh button next to the model dropdown. This pulls the latest available models from your provider.
  • Select a different model. If your preferred model was removed, choose an alternative. For contract generation, any model with strong instruction-following works well — Claude, GPT-4, and Gemini Pro are all reliable choices.
  • Check provider requirements. Some models require a paid plan or API access approval. Verify on your provider's documentation that your account has access to the model you selected.
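Conceptually, falling back to an alternative model is just "pick the first preferred model the provider still offers." The sketch below illustrates that logic with placeholder model names (not a live provider list):

```python
# Illustrative only: choose the first preferred model that the provider
# still offers. Model names here are placeholders, not a real model list.
def pick_model(preferred: list[str], available: list[str]) -> str:
    offered = set(available)
    for model in preferred:
        if model in offered:
            return model
    raise LookupError("None of the preferred models are available")

available = ["model-a", "model-b"]  # e.g. from a refreshed model list
print(pick_model(["model-x", "model-b"], available))  # → model-b
```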

Slow responses

If AI features work but take a long time to respond:

  • Large contracts take longer. Generating or analyzing a lengthy contract means the model must process and produce more tokens, so longer waits are normal, especially for models with lower throughput.
  • Try a faster model. Smaller models (e.g., Claude Haiku, GPT-4o mini) respond significantly faster than larger ones. For routine tasks like clause extraction, a smaller model is often sufficient.
  • Ollama hardware requirements. If you are using Ollama, response speed depends on your Mac's hardware. Models run faster on Macs with more unified memory. A 7B parameter model is a good starting point; larger models may be slow on machines with 8 GB of RAM.
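As a rough rule of thumb (an approximation, not an exact figure), a local model needs about parameter count × bytes per parameter of memory, plus runtime overhead. The sketch below uses ~0.5 bytes per parameter for 4-bit quantized models, which is why a 7B model fits comfortably on an 8 GB Mac while much larger models do not:

```python
# Rough rule of thumb: memory ≈ parameters × bytes per parameter.
# bytes_per_param ≈ 0.5 for 4-bit quantization, 2.0 for fp16.
# Real usage is higher due to context and runtime overhead.
def approx_model_gb(params_billion: float, bytes_per_param: float = 0.5) -> float:
    return params_billion * bytes_per_param  # GB, overhead not included

print(approx_model_gb(7))   # ~3.5 GB: a 4-bit 7B model fits in 8 GB RAM
print(approx_model_gb(70))  # ~35 GB: far too large for an 8 GB machine
```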
Check the AI Providers docs

For provider-specific setup instructions and recommended models, see the AI Providers section of the docs.