sepehr 8617117dec fix: remove Ollama default fallbacks in factory and Docker
ROOT CAUSE: The factory was defaulting to 'ollama' when no provider
was configured, and docker-compose.yml was always setting OLLAMA_BASE_URL
even when using OpenAI. This caused the app to try connecting to Ollama
even when OpenAI was configured in the admin interface.

CRITICAL CHANGES:
1. lib/ai/factory.ts - Removed 'ollama' default fallback
   - getTagsProvider() now throws error if AI_PROVIDER_TAGS not set
   - getEmbeddingsProvider() now throws error if AI_PROVIDER_EMBEDDING not set
   - Forces explicit configuration instead of silently falling back to Ollama (see the factory sketch after this list)

2. docker-compose.yml - Removed default OLLAMA_BASE_URL
   - Changed: OLLAMA_BASE_URL=${OLLAMA_BASE_URL:-http://ollama:11434}
   - To: OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
   - Only set if explicitly defined in .env.docker (see the compose snippet after this list)

3. Application name: Mento → Memento (correct spelling)
   - Updated in: sidebar, README, deploy.sh, DOCKER_DEPLOYMENT.md

4. app/api/ai/config/route.ts - Return 'not set' instead of 'ollama'
   - Makes it clear when provider is not configured
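
For context, a minimal sketch of what the new factory guards could look like.
The function names and config keys come from the notes above; SystemConfig,
AIProvider, and createProvider are hypothetical stand-ins for the real wiring
in lib/ai/factory.ts:

interface SystemConfig {
  AI_PROVIDER_TAGS?: string
  AI_PROVIDER_EMBEDDING?: string
}

type AIProvider = { name: string }

// Stand-in for the factory's real provider construction.
function createProvider(name: string): AIProvider {
  return { name }
}

export function getTagsProvider(config: SystemConfig): AIProvider {
  // No silent 'ollama' fallback anymore: fail loudly when unconfigured.
  if (!config.AI_PROVIDER_TAGS) {
    throw new Error('AI_PROVIDER_TAGS is not set; configure a provider explicitly')
  }
  return createProvider(config.AI_PROVIDER_TAGS)
}

export function getEmbeddingsProvider(config: SystemConfig): AIProvider {
  if (!config.AI_PROVIDER_EMBEDDING) {
    throw new Error('AI_PROVIDER_EMBEDDING is not set; configure a provider explicitly')
  }
  return createProvider(config.AI_PROVIDER_EMBEDDING)
}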
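
And for docker-compose.yml, the relevant environment entry now passes the
variable through with no default. The service name and surrounding keys here
are assumptions; only the OLLAMA_BASE_URL line comes from the diff:

services:
  app:  # service name is an assumption
    environment:
      # No ':-http://ollama:11434' default; stays empty unless .env.docker sets it.
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL}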

IMPACT: The app will now properly use OpenAI when configured in the
admin interface, instead of silently falling back to Ollama.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-12 23:08:20 +01:00

app/api/ai/config/route.ts (TypeScript, 27 lines, 945 B)

import { NextRequest, NextResponse } from 'next/server'
import { getSystemConfig } from '@/lib/config'

// GET /api/ai/config — report the current AI configuration, masking secrets.
export async function GET(request: NextRequest) {
  try {
    const config = await getSystemConfig()
    return NextResponse.json({
      // Report 'not set' rather than a default so a missing provider is obvious.
      AI_PROVIDER_TAGS: config.AI_PROVIDER_TAGS || 'not set',
      AI_MODEL_TAGS: config.AI_MODEL_TAGS || 'not set',
      AI_PROVIDER_EMBEDDING: config.AI_PROVIDER_EMBEDDING || 'not set',
      AI_MODEL_EMBEDDING: config.AI_MODEL_EMBEDDING || 'not set',
      // Never expose API keys; only indicate whether they are configured.
      OPENAI_API_KEY: config.OPENAI_API_KEY ? '***configured***' : '',
      CUSTOM_OPENAI_API_KEY: config.CUSTOM_OPENAI_API_KEY ? '***configured***' : '',
      CUSTOM_OPENAI_BASE_URL: config.CUSTOM_OPENAI_BASE_URL || '',
      OLLAMA_BASE_URL: config.OLLAMA_BASE_URL || 'not set'
    })
  } catch (error: any) {
    return NextResponse.json(
      { error: error.message || 'Failed to fetch config' },
      { status: 500 }
    )
  }
}
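
A quick way to exercise the endpoint once the app is running. The
/api/ai/config path follows from the route file's location; the local port
and the sample output are assumptions:

// Assumes a local dev server on port 3000.
const res = await fetch('http://localhost:3000/api/ai/config')
console.log(await res.json())
// e.g. { AI_PROVIDER_TAGS: 'openai', ..., OLLAMA_BASE_URL: 'not set' }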