CRITICAL FIX: Auto-labels, notebook summaries, and other AI features
were not working because eight call sites across six services were
calling getAIProvider() WITHOUT passing the config parameter.
This caused them to fall back to the default 'ollama' provider instead
of the OpenAI provider configured in the database.
ROOT CAUSE ANALYSIS:
Working features (titles):
- title-suggestions/route.ts: getAIProvider(config) ✓
Broken features (auto-labels, summaries):
- contextual-auto-tag.service.ts: getAIProvider() ✗ (2x)
- notebook-summary.service.ts: getAIProvider() ✗
- auto-label-creation.service.ts: getAIProvider() ✗
- notebook-suggestion.service.ts: getAIProvider() ✗
- batch-organization.service.ts: getAIProvider() ✗
- embedding.service.ts: getAIProvider() ✗ (2x)
FIXED: All eight call sites now properly call:
const config = await getSystemConfig()
const provider = getAIProvider(config)
This ensures ALL AI features use the provider configured in the admin
interface (OpenAI) instead of defaulting to Ollama.
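For illustration, a minimal sketch of the corrected pattern in one such
service (the import paths, the caller, and the complete() method are
assumptions, not the actual code):

    // Before (broken): no config argument, so the factory fell back to Ollama
    // const provider = getAIProvider()

    // After (fixed): resolve the provider from the database config
    import { getSystemConfig } from '@/lib/config'   // import path is an assumption
    import { getAIProvider } from '@/lib/ai/factory'

    // summarizeNotebook() is a hypothetical caller; complete() stands in
    // for whatever method the provider interface actually exposes
    export async function summarizeNotebook(text: string): Promise<string> {
      const config = await getSystemConfig()
      const provider = getAIProvider(config)  // OpenAI here, per the admin settings
      return provider.complete(`Summarize this notebook:\n${text}`)
    }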
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The paragraph-refactor service was using OLLAMA_BASE_URL directly from
environment variables instead of using the configured AI provider from
the database. This caused an "OLLAMA error" even when OpenAI was configured
in the admin interface.
Changes:
- paragraph-refactor.service.ts: Now uses getSystemConfig() and
getTagsProvider() from factory instead of direct Ollama calls
- factory.ts: Added proper error messages when API keys are missing
- .env.docker.example: Updated with new provider configuration
variables (AI_PROVIDER_TAGS, AI_PROVIDER_EMBEDDING)
This fixes the issue where AI reformulation features (Clarify, Shorten,
Improve Style) would fail with OLLAMA errors even when OpenAI was
properly configured in the admin settings.
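A minimal sketch of the new flow in paragraph-refactor.service.ts (the
function name, prompt, and complete() method are placeholders for
illustration, not the actual implementation):

    import { getSystemConfig } from '@/lib/config'   // import path is an assumption
    import { getTagsProvider } from '@/lib/ai/factory'

    export async function refactorParagraph(
      text: string,
      mode: 'clarify' | 'shorten' | 'improve-style',
    ): Promise<string> {
      // Resolve the provider from the database config instead of reading
      // OLLAMA_BASE_URL straight from the environment
      const config = await getSystemConfig()
      const provider = getTagsProvider(config)
      return provider.complete(`Rewrite (${mode}) the following paragraph:\n${text}`)
    }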
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Critical fix for Docker deployments where AI features were trying to
connect to localhost:11434 instead of using the configured provider
(the Ollama Docker service or OpenAI).
Problems fixed:
1. Reformulation (clarify/shorten/improve) failing with ECONNREFUSED 127.0.0.1:11434
2. Auto-labels failing with the same error
3. Notebook summaries failing
4. Could not switch from Ollama to OpenAI in admin
Root cause:
- Code had a hardcoded fallback to 'http://localhost:11434' in multiple places
- .env.docker file not created on the server (it is gitignored)
- No validation that required environment variables were set
Changes:
1. lib/ai/factory.ts:
- Remove hardcoded 'http://localhost:11434' fallback
- Only use localhost for local development (NODE_ENV !== 'production')
- Throw error if OLLAMA_BASE_URL not set in production (see the factory sketch after this list)
2. lib/ai/providers/ollama.ts:
- Remove default parameter 'http://localhost:11434' from constructor
- Require baseUrl to be explicitly passed
- Throw error if baseUrl is missing
3. lib/ai/services/paragraph-refactor.service.ts:
- Remove 'http://localhost:11434' fallback (2 locations)
- Require OLLAMA_BASE_URL to be set
- Throw clear error if not configured
4. app/(main)/admin/settings/admin-settings-form.tsx:
- Add debug info showing current provider state
- Display database config value for transparency
- Help troubleshoot provider selection issues (see the debug-panel sketch after this list)
5. DOCKER-SETUP.md:
- Complete guide for Docker configuration
- Instructions for .env.docker setup
- Examples for Ollama Docker, OpenAI, and external Ollama
- Troubleshooting common issues
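A minimal sketch of the guards from items 1 and 2 (the function and
class shapes are assumptions, not the actual code):

    // lib/ai/factory.ts -- sketch of the production guard
    function resolveOllamaBaseUrl(): string {
      const baseUrl = process.env.OLLAMA_BASE_URL
      if (baseUrl) return baseUrl
      // Localhost fallback is now limited to local development
      if (process.env.NODE_ENV !== 'production') return 'http://localhost:11434'
      throw new Error(
        'OLLAMA_BASE_URL is not set. In Docker, point it at the Ollama service, e.g. http://ollama:11434.',
      )
    }

    // lib/ai/providers/ollama.ts -- constructor no longer defaults baseUrl
    export class OllamaProvider {
      constructor(private readonly baseUrl: string) {
        if (!baseUrl) {
          throw new Error('OllamaProvider requires an explicit baseUrl')
        }
      }
    }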
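And a minimal sketch of the debug readout from item 4 (the component,
props, and CSS classes are hypothetical):

    import React from 'react'

    export function ProviderDebugInfo(props: {
      tagsProvider: string
      embeddingProvider: string
    }) {
      // Surfaces the database config values so provider selection
      // issues are visible in the admin UI
      return (
        <div className="text-xs text-muted-foreground">
          <p>Tags provider (from database): {props.tagsProvider}</p>
          <p>Embeddings provider (from database): {props.embeddingProvider}</p>
        </div>
      )
    }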
Usage:
On server, create .env.docker with proper provider configuration:
- Ollama in Docker: OLLAMA_BASE_URL="http://ollama:11434"
- OpenAI: OPENAI_API_KEY="sk-..."
- External Ollama: OLLAMA_BASE_URL="http://SERVER_IP:11434"
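For example, a hypothetical .env.docker for the OpenAI case (the
"openai" values are assumptions about what the AI_PROVIDER_* variables
accept):

    # Create on the server; this file is gitignored
    OPENAI_API_KEY="sk-..."
    AI_PROVIDER_TAGS="openai"
    AI_PROVIDER_EMBEDDING="openai"
    # Or, with Ollama as a Docker service:
    # OLLAMA_BASE_URL="http://ollama:11434"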
Then in admin interface, users can independently configure:
- Tags Provider (for auto-labels, AI features)
- Embeddings Provider (for semantic search)
Result:
✓ Clear errors if Ollama not configured
✓ Can switch to OpenAI freely in admin
✓ No more hardcoded localhost in production
✓ Proper separation between local dev and Docker production
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>