- Thumbs down now increases the similarity threshold by +0.15 for the
notes involved, making it harder for irrelevant connections to reappear
- Thumbs up slightly lowers the threshold by 0.05, making similar
future connections more likely to surface
- Remove duplicate close button in ComparisonModal (kept only the
native Dialog close button)
- Normalize all embeddings to same model/dimension (2560) to fix
random similarity scores caused by mixed embedding models
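The feedback adjustments above can be sketched as follows. The deltas come from the bullets; the base threshold, function name, and pair-keyed map are illustrative assumptions, not the project's actual API:

```typescript
// Sketch of the per-pair threshold feedback described above.
// BASE_THRESHOLD and the names adjustThreshold/pairThresholds are
// hypothetical; only the +0.15 / -0.05 deltas come from the commit.
const BASE_THRESHOLD = 0.5;     // assumed default similarity cutoff
const THUMBS_DOWN_DELTA = 0.15; // harder for the pair to reappear
const THUMBS_UP_DELTA = -0.05;  // slight boost for similar connections

type Feedback = "up" | "down";

// thresholds keyed by a stable pair id, e.g. `${noteA}:${noteB}`
const pairThresholds = new Map<string, number>();

function adjustThreshold(pairId: string, feedback: Feedback): number {
  const current = pairThresholds.get(pairId) ?? BASE_THRESHOLD;
  const delta = feedback === "down" ? THUMBS_DOWN_DELTA : THUMBS_UP_DELTA;
  // clamp so the threshold stays a valid similarity score in [0, 1]
  const next = Math.min(1, Math.max(0, current + delta));
  pairThresholds.set(pairId, next);
  return next;
}
```

Repeated thumbs-down accumulates, so a pair the user keeps rejecting eventually needs near-identical embeddings to resurface.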
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix critical bug: used prisma.systemConfig.findFirst() which returned
a single {key,value} record instead of the full config object needed
by getAIProvider(). Replaced with getSystemConfig() that returns all
config as a proper Record<string,string>.
- Add ensureEmbeddings() to auto-generate embeddings for notes that
lack them before searching for connections. This fixes the case where
notes created without an AI provider configured never got embeddings,
making Memory Echo silently return zero connections.
- Restore demo mode polling (15s interval after dismiss) in the
notification component.
- Integrate ComparisonModal and FusionModal in the notification card
with merge button for direct note fusion from the notification.
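The findFirst bug above comes down to folding all key/value rows into one object instead of grabbing a single row. A minimal sketch of that fold, with the row shape taken from the commit text and the fetching step left abstract:

```typescript
// Row shape of the systemConfig table per the commit: one { key, value }
// pair per setting.
type ConfigRow = { key: string; value: string };

// BROKEN: prisma.systemConfig.findFirst() returned a single row, so
// getAIProvider(config) saw only one setting instead of all of them.

// FIXED: fetch every row (e.g. via findMany()) and fold them into the
// Record<string, string> that getAIProvider() expects.
function foldConfig(rows: ConfigRow[]): Record<string, string> {
  return Object.fromEntries(rows.map((r) => [r.key, r.value] as const));
}

const exampleConfig = foldConfig([
  { key: "AI_PROVIDER_TAGS", value: "openai" },
  { key: "AI_PROVIDER_EMBEDDING", value: "openai" },
]);
```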
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add a 1-1 NoteEmbedding relation table to slim down the Note model
- Fully refactor the semantic AI and Memory Echo actions to use the join
- Cleanly migrate the 85 existing local embeddings
- Add PRAGMA journal_mode=WAL in lib/prisma for concurrency
- Add npm run db:switch for automatic SQLite / PostgreSQL configuration
- Fix the Turbopack compiler and Next-PWA
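The 1-1 relation above might look like this in the Prisma schema. Field names and types here are illustrative guesses, not the project's actual schema:

```prisma
model Note {
  id        String         @id @default(cuid())
  content   String
  embedding NoteEmbedding? // optional back-reference keeps Note slim
}

model NoteEmbedding {
  noteId String @id // shared primary key enforces the 1-1 relation
  vector Bytes      // serialized embedding vector (2560 dimensions)
  note   Note   @relation(fields: [noteId], references: [id], onDelete: Cascade)
}
```

Keeping the vector in its own table means listing or searching notes never loads the large embedding blobs.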
CRITICAL FIX: Auto-labels, notebook summaries, and other AI features
were not working because six services (eight call sites in total) were
calling getAIProvider() WITHOUT passing the config parameter.
This caused them to use the default 'ollama' provider instead of
the configured OpenAI provider from the database.
ROOT CAUSE ANALYSIS:
Working features (titles):
- title-suggestions/route.ts: getAIProvider(config) ✓
Broken features (auto-labels, summaries):
- contextual-auto-tag.service.ts: getAIProvider() ✗ (2x)
- notebook-summary.service.ts: getAIProvider() ✗
- auto-label-creation.service.ts: getAIProvider() ✗
- notebook-suggestion.service.ts: getAIProvider() ✗
- batch-organization.service.ts: getAIProvider() ✗
- embedding.service.ts: getAIProvider() ✗ (2x)
FIXED: All eight call sites now properly call:
const config = await getSystemConfig()
const provider = getAIProvider(config)
This ensures ALL AI features use the provider configured in the admin
interface (OpenAI) instead of defaulting to Ollama.
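A sketch of why the missing argument mattered. The signature and config key are assumptions; the commit only says that calling getAIProvider() without config fell back to 'ollama':

```typescript
type ProviderName = "ollama" | "openai";

// Illustrative stand-in for the factory: with no config object there is
// nothing to read the admin setting from, so the default provider wins.
function getAIProviderSketch(config?: Record<string, string>): ProviderName {
  return (config?.["AI_PROVIDER_TAGS"] as ProviderName | undefined) ?? "ollama";
}

const brokenCall = getAIProviderSketch();                              // no config: defaults
const fixedCall = getAIProviderSketch({ AI_PROVIDER_TAGS: "openai" }); // config: admin choice
```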
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The paragraph-refactor service was using OLLAMA_BASE_URL directly from
environment variables instead of using the configured AI provider from
the database. This caused "OLLAMA error" even when OpenAI was configured
in the admin interface.
Changes:
- paragraph-refactor.service.ts: Now uses getSystemConfig() and
getTagsProvider() from factory instead of direct Ollama calls
- factory.ts: Added proper error messages when API keys are missing
- .env.docker.example: Updated with new provider configuration
variables (AI_PROVIDER_TAGS, AI_PROVIDER_EMBEDDING)
This fixes the issue where AI reformulation features (Clarify, Shorten,
Improve Style) would fail with OLLAMA errors even when OpenAI was
properly configured in the admin settings.
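The shape of the refactor might look like the sketch below. getSystemConfig and getTagsProvider are the factory helpers named in the commit, but their exact signatures here are assumptions, with dependencies injected so the flow is visible:

```typescript
type SystemConfig = Record<string, string>;

interface TextProvider {
  complete(prompt: string): Promise<string>;
}

// BEFORE (broken): fetch(`${process.env.OLLAMA_BASE_URL}/api/generate`, ...)
// regardless of which provider the admin configured.

// AFTER: resolve the provider from the database-backed config.
async function refactorParagraph(
  text: string,
  deps: {
    getSystemConfig: () => Promise<SystemConfig>;
    getTagsProvider: (config: SystemConfig) => TextProvider;
  }
): Promise<string> {
  const config = await deps.getSystemConfig();
  const provider = deps.getTagsProvider(config);
  return provider.complete(`Clarify the following paragraph:\n${text}`);
}
```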
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Critical fix for Docker deployment, where AI features were trying to connect
to localhost:11434 instead of using the configured provider (Ollama Docker
service or OpenAI).
Problems fixed:
1. Reformulation (clarify/shorten/improve) failing with ECONNREFUSED 127.0.0.1:11434
2. Auto-labels failing with same error
3. Notebook summaries failing
4. Could not switch from Ollama to OpenAI in admin
Root cause:
- Code had hardcoded fallback to 'http://localhost:11434' in multiple places
- .env.docker file was never created on the server (it is gitignored)
- No validation that required environment variables were set
Changes:
1. lib/ai/factory.ts:
- Remove hardcoded 'http://localhost:11434' fallback
- Only use localhost for local development (NODE_ENV !== 'production')
- Throw error if OLLAMA_BASE_URL not set in production
2. lib/ai/providers/ollama.ts:
- Remove default parameter 'http://localhost:11434' from constructor
- Require baseUrl to be explicitly passed
- Throw error if baseUrl is missing
3. lib/ai/services/paragraph-refactor.service.ts:
- Remove 'http://localhost:11434' fallback (2 locations)
- Require OLLAMA_BASE_URL to be set
- Throw clear error if not configured
4. app/(main)/admin/settings/admin-settings-form.tsx:
- Add debug info showing current provider state
- Display database config value for transparency
- Help troubleshoot provider selection issues
5. DOCKER-SETUP.md:
- Complete guide for Docker configuration
- Instructions for .env.docker setup
- Examples for Ollama Docker, OpenAI, and external Ollama
- Troubleshooting common issues
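The guard from change (1) can be sketched as below. The env-var names match the commit; the function name and error wording are illustrative:

```typescript
type Env = Record<string, string | undefined>;

function resolveOllamaBaseUrl(env: Env): string {
  const url = env.OLLAMA_BASE_URL;
  if (url) return url;
  if (env.NODE_ENV !== "production") {
    // localhost fallback is only acceptable for local development
    return "http://localhost:11434";
  }
  // production must be explicit: no silent fallback to localhost
  throw new Error(
    'OLLAMA_BASE_URL is not set. In Docker, point it at the Ollama service, ' +
      'e.g. OLLAMA_BASE_URL="http://ollama:11434".'
  );
}
```

Failing fast here is what turns the old silent ECONNREFUSED into the "clear errors if Ollama not configured" result listed below.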
Usage:
On server, create .env.docker with proper provider configuration:
- Ollama in Docker: OLLAMA_BASE_URL="http://ollama:11434"
- OpenAI: OPENAI_API_KEY="sk-..."
- External Ollama: OLLAMA_BASE_URL="http://SERVER_IP:11434"
Then in admin interface, users can independently configure:
- Tags Provider (for auto-labels, AI features)
- Embeddings Provider (for semantic search)
Result:
✓ Clear errors if Ollama not configured
✓ Can switch to OpenAI freely in admin
✓ No more hardcoded localhost in production
✓ Proper separation between local dev and Docker production
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>