Critical fix for Docker deployment where AI features were trying to connect
to localhost:11434 instead of using the configured provider (Ollama Docker
service or OpenAI).
Problems fixed:
1. Reformulation (clarify/shorten/improve) failing with ECONNREFUSED 127.0.0.1:11434
2. Auto-labels failing with same error
3. Notebook summaries failing
4. Could not switch from Ollama to OpenAI in admin
Root cause:
- Code had hardcoded fallback to 'http://localhost:11434' in multiple places
- .env.docker file not created on server (gitignored)
- No validation that required environment variables were set
Changes:
1. lib/ai/factory.ts:
- Remove hardcoded 'http://localhost:11434' fallback
- Only use localhost for local development (NODE_ENV !== 'production')
- Throw error if OLLAMA_BASE_URL not set in production (see the sketch after this list)
2. lib/ai/providers/ollama.ts:
- Remove default parameter 'http://localhost:11434' from constructor
- Require baseUrl to be explicitly passed
- Throw error if baseUrl is missing
3. lib/ai/services/paragraph-refactor.service.ts:
- Remove 'http://localhost:11434' fallback (2 locations)
- Require OLLAMA_BASE_URL to be set
- Throw clear error if not configured
4. app/(main)/admin/settings/admin-settings-form.tsx:
- Add debug info showing current provider state
- Display database config value for transparency
- Help troubleshoot provider selection issues
5. DOCKER-SETUP.md:
- Complete guide for Docker configuration
- Instructions for .env.docker setup
- Examples for Ollama Docker, OpenAI, and external Ollama
- Troubleshooting common issues
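A minimal sketch of the resulting fallback logic (the helper name is assumed for illustration; the same guard applies in factory.ts, ollama.ts, and paragraph-refactor.service.ts):
```typescript
// Hypothetical helper mirroring the guard described above:
// localhost is allowed only outside production; otherwise fail fast.
function resolveOllamaBaseUrl(): string {
  const url = process.env.OLLAMA_BASE_URL;
  if (url) return url;
  if (process.env.NODE_ENV !== "production") {
    return "http://localhost:11434"; // local development only
  }
  throw new Error(
    "OLLAMA_BASE_URL is not set. Configure it in .env.docker " +
      "(e.g. http://ollama:11434 for the Ollama Docker service).",
  );
}
```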
Usage:
On server, create .env.docker with proper provider configuration:
- Ollama in Docker: OLLAMA_BASE_URL="http://ollama:11434"
- OpenAI: OPENAI_API_KEY="sk-..."
- External Ollama: OLLAMA_BASE_URL="http://SERVER_IP:11434"
Then in admin interface, users can independently configure:
- Tags Provider (for auto-labels, AI features)
- Embeddings Provider (for semantic search)
Result:
✓ Clear errors if Ollama not configured
✓ Can switch to OpenAI freely in admin
✓ No more hardcoded localhost in production
✓ Proper separation between local dev and Docker production
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Critical fix for AI features (reformulation, auto-labels) in Docker.
Only title generation was working because it had a hardcoded fallback.
Problem:
- Code uses OLLAMA_BASE_URL environment variable
- docker-compose.yml was setting OLLAMA_API_URL (wrong variable)
- .env.docker was missing on server
- Container couldn't connect to Ollama service
Root cause:
- Environment variable name mismatch
- Missing OLLAMA_BASE_URL configuration
- localhost:11434 doesn't work in Docker (need service name)
Solution:
1. Update docker-compose.yml:
- Change OLLAMA_API_URL → OLLAMA_BASE_URL (the variable name the code reads; see the sketch after this list)
- Add OLLAMA_MODEL environment variable
- Default to http://ollama:11434 (Docker service name)
2. Update .env.docker:
- Add OLLAMA_BASE_URL="http://ollama:11434"
- Add OLLAMA_MODEL="granite4:latest"
- Document AI provider configuration
3. Update .env.docker.example:
- Add examples for different Ollama setups
- Document Docker service name vs external IP
- Add OpenAI configuration example
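For reference, a sketch of the consuming side (shape assumed): the code reads OLLAMA_BASE_URL, so the compose file must set that exact key; values under the old OLLAMA_API_URL name were silently ignored.
```typescript
// Assumed shape of the env read that the rename fixes.
const baseUrl = process.env.OLLAMA_BASE_URL;             // "http://ollama:11434" in Docker
const model = process.env.OLLAMA_MODEL ?? "granite4:latest";
```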
Result:
✓ All AI features now work in Docker (titles, reformulation, auto-labels)
✓ Proper connection to Ollama container via Docker network
✓ Clear documentation for different deployment scenarios
Technical details:
- Docker service name "ollama" resolves to container IP
- Port 11434 accessible within memento-network
- Fallback to localhost only for local development
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes an issue where the interface always defaulted to English for new users.
Now automatically detects and applies browser language on first visit.
Changes:
- lib/i18n/LanguageProvider.tsx:
- Add browser language detection using navigator.language (sketched below)
- Check if detected language is in supported languages list
- Auto-save detected language to localStorage
- Update HTML lang attribute for proper font rendering
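A minimal sketch of the detection flow (constant and storage-key names are assumed):
```typescript
// Assumed names; the real LanguageProvider may differ.
const SUPPORTED = ["en", "fr", "es", "de", "fa", "it", "pt", "ru", "zh", "ja", "ko", "ar", "hi", "nl", "pl"];
const STORAGE_KEY = "language";

function detectInitialLanguage(): string {
  const saved = localStorage.getItem(STORAGE_KEY);
  if (saved && SUPPORTED.includes(saved)) return saved; // returning visit

  // First visit: map e.g. "fr-FR" to "fr" and check support.
  const browser = navigator.language.split("-")[0].toLowerCase();
  const lang = SUPPORTED.includes(browser) ? browser : "en"; // fallback to English

  localStorage.setItem(STORAGE_KEY, lang); // persist the detected language
  document.documentElement.lang = lang;    // keep <html lang> in sync for font rendering
  return lang;
}
```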
Behavior:
- First visit: Detects browser language (e.g., fr-FR → fr)
- Returning visits: Uses saved language from localStorage
- Fallback: Defaults to English if language not supported
Result:
✓ French users see French interface automatically
✓ All supported languages work (en, fr, es, de, fa, it, pt, ru, zh, ja, ko, ar, hi, nl, pl)
✓ Better UX for international users
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes an issue where the notebook dropdown showed the icon name (e.g., "folder")
as text instead of rendering the actual icon.
Problem:
- note-card.tsx was displaying {notebook.icon} as text
- Users saw "folder", "book", etc. instead of icons
Solution:
- Import Lucide icon components (Folder, Book, Briefcase, etc.)
- Add ICON_MAP matching icon names to components
- Use a getNotebookIcon() helper to resolve the icon name to a component (sketched below)
- Render component as <NotebookIcon className="h-4 w-4 mr-2" />
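A sketch of the helper, abridged to three icons (assumes the lucide-react icon set):
```typescript
import { Book, Briefcase, Folder, type LucideIcon } from "lucide-react";

// Map stored icon names to their Lucide components (abridged).
const ICON_MAP: Record<string, LucideIcon> = {
  folder: Folder,
  book: Book,
  briefcase: Briefcase,
};

// Resolve a stored name like "folder" to a renderable component,
// defaulting to Folder for unknown names.
function getNotebookIcon(name: string): LucideIcon {
  return ICON_MAP[name] ?? Folder;
}

// In the dropdown:
// const NotebookIcon = getNotebookIcon(notebook.icon);
// <NotebookIcon className="h-4 w-4 mr-2" />
```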
Changes:
- components/note-card.tsx:
- Add LucideIcon and icon imports
- Add ICON_MAP and getNotebookIcon() helper
- Update notebook dropdown to render icon components
Result:
✓ Notebook icons now display correctly in dropdown menu
✓ Consistent with notebooks-list.tsx implementation
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes a redirect issue where the application redirects to localhost:3000
instead of the server IP when accessed from a remote browser.
Problem:
- Accessing http://192.168.1.190:3000 redirects to http://localhost:3000
- NEXTAUTH_URL was hardcoded to localhost in .env
- Remote users cannot access the application
Solution:
1. Create .env.docker with server-specific configuration
- NEXTAUTH_URL="http://192.168.1.190:3000"
- Ollama URL for container-to-container communication
2. Update docker-compose.yml:
- Add env_file: .env.docker
- Loads Docker-specific environment variables
3. Create .env.docker.example as template:
- Document NEXTAUTH_URL configuration
- Users can copy and customize for their server IP/domain
Files:
- .env.docker: Docker environment (gitignored)
- .env.docker.example: Template for users
- docker-compose.yml: Load .env.docker
- .env.example: Update NEXTAUTH_URL documentation
Usage:
On production server, edit .env.docker with your server IP:
NEXTAUTH_URL="http://YOUR_IP:3000"
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Critical fix for production deployment on Proxmox/Docker.
Problem:
- Runtime error: Prisma Client could not locate Query Engine for "debian-openssl-1.1.x"
- Wrong binary target generated (Windows .dll instead of Linux .so.node)
- Wrong OpenSSL version (3.0.x instead of 1.1.x for Debian 11)
Root cause:
- schema.prisma didn't specify binaryTargets
- Prisma auto-detected Windows during local development
- Debian 11 (bullseye) uses OpenSSL 1.1.x, not 3.0.x
Solution:
1. Add binaryTargets to schema.prisma:
- "debian-openssl-1.1.x" for Docker/Proxmox
- "native" for local development
2. Fix Prisma folder permissions in Docker:
- RUN chown -R nextjs:nodejs /app/prisma
- Ensures Query Engine binary is readable by app user
Changes:
- prisma/schema.prisma: Add binaryTargets = ["debian-openssl-1.1.x", "native"]
- keep-notes/Dockerfile: Add chown for /app/prisma folder
Verification:
✓ libquery_engine-debian-openssl-1.1.x.so.node exists
✓ Permissions: nextjs:nodejs (readable)
✓ Prisma Client loads successfully in container
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes runtime error where Prisma Client could not locate the Query Engine:
"libquery_engine-debian-openssl-1.1.x.so.node" not found
Root cause:
- Next.js standalone output does not include Prisma Query Engine binaries
- The .prisma folder in node_modules contains the required binary files
Solution:
- Copy node_modules/.prisma folder in Docker runner stage
- This includes libquery_engine-debian-openssl-1.1.x.so.node
- Prisma Client can now find and load the Query Engine at runtime
Tested:
✓ Docker build successful
✓ Container starts without Prisma errors
✓ Application ready in 40ms
Changes:
- keep-notes/Dockerfile: Add COPY for node_modules/.prisma folder
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
## Docker Configuration
- Enhance docker-compose.yml with Ollama support for local AI
- Add resource limits and health checks for better stability
- Configure isolated Docker network (keep-network)
- Add persistent volumes for database and uploads
- Include optional Ollama service configuration
## Deployment Files
- Add DOCKER_DEPLOYMENT.md with comprehensive deployment guide
- Add deploy.sh automation script with 10+ commands
- Document Proxmox LXC container setup
- Add backup/restore procedures
- Include SSL/HTTPS and reverse proxy configuration
## Docker Build Optimization
- Improve .dockerignore for faster builds
- Exclude development files and debug logs
- Add comprehensive exclusions for IDE, OS, and testing files
## Features
- Support for OpenAI API (cloud AI)
- Support for Ollama (local AI models)
- Automatic database backups
- Health checks and auto-restart
- Resource limits for VM/LXC environments
## Documentation
- Complete Proxmox deployment guide
- Troubleshooting section
- Security best practices
- Performance tuning recommendations
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
feat(docker): Add Proxmox deployment config with Ollama support
- Enhance docker-compose.yml with health checks, resource limits, Ollama support
- Add DOCKER_DEPLOYMENT.md guide (50+ sections covering Proxmox, SSL, AI setup)
- Add deploy.sh script with build, start, backup, logs commands
- Improve .dockerignore for optimized builds
- Document backup/restore procedures and security best practices
- Support both OpenAI and local Ollama AI providers
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Add AI Provider Testing page (/admin/ai-test) with Tags and Embeddings tests
- Add new AI providers: CustomOpenAI, DeepSeek, OpenRouter
- Add API routes for AI config, models listing, and testing endpoints
- Add UX Design Specification document for Phase 1 MVP AI
- Add PRD Phase 1 MVP AI planning document
- Update admin settings and sidebar navigation
- Fix AI factory for multi-provider support (see the sketch below)
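A sketch of the dispatch (provider ids and env names are assumptions): DeepSeek, OpenRouter, and custom endpoints speak the OpenAI wire format, so one client parameterized by base URL and key can back them all.
```typescript
// Assumed provider ids matching the providers listed above.
type ProviderId = "openai" | "custom-openai" | "deepseek" | "openrouter" | "ollama";

interface ProviderConfig {
  baseUrl: string;
  apiKey?: string;
}

// Resolve an OpenAI-compatible endpoint per provider.
function providerConfig(id: ProviderId): ProviderConfig {
  switch (id) {
    case "openai":
      return { baseUrl: "https://api.openai.com/v1", apiKey: process.env.OPENAI_API_KEY };
    case "deepseek":
      return { baseUrl: "https://api.deepseek.com/v1", apiKey: process.env.DEEPSEEK_API_KEY };
    case "openrouter":
      return { baseUrl: "https://openrouter.ai/api/v1", apiKey: process.env.OPENROUTER_API_KEY };
    case "custom-openai":
      return { baseUrl: process.env.CUSTOM_OPENAI_BASE_URL ?? "", apiKey: process.env.CUSTOM_OPENAI_API_KEY };
    case "ollama":
      // Ollama exposes an OpenAI-compatible API under /v1.
      return { baseUrl: `${process.env.OLLAMA_BASE_URL}/v1` };
  }
}
```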
## Bug Fixes
### Note Card Actions
- Fix broken size change functionality (missing state declaration)
- Implement React 19 useOptimistic for instant UI feedback (sketched below)
- Add startTransition for non-blocking updates
- Ensure smooth animations without page refresh
- All note actions now work: pin, archive, color, size, checklist
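A sketch of the pattern applied to the pin action (note shape and server-action signature assumed):
```typescript
"use client";
import { startTransition, useOptimistic } from "react";
import { useRouter } from "next/navigation";

// Assumed note shape and server action signature, for illustration.
type Note = { id: string; pinned: boolean };

export function usePinToggle(
  note: Note,
  pinNote: (id: string, pinned: boolean) => Promise<void>,
) {
  const router = useRouter();
  const [optimisticNote, applyOptimistic] = useOptimistic(
    note,
    (current, patch: Partial<Note>) => ({ ...current, ...patch }),
  );

  const togglePin = () => {
    const next = !optimisticNote.pinned;
    startTransition(async () => {
      applyOptimistic({ pinned: next }); // instant UI feedback
      await pinNote(note.id, next);      // server action persists the change
      router.refresh();                  // revalidate server-rendered data
    });
  };

  return { optimisticNote, togglePin };
}
```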
### Markdown LaTeX Rendering
- Add remark-math and rehype-katex plugins (wiring sketched below)
- Support inline equations with $...$ syntax
- Support block equations with $$...$$ syntax
- Import KaTeX CSS for proper styling
- Equations now render correctly instead of showing raw LaTeX
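The wiring, assuming the notes are rendered with react-markdown:
```tsx
import ReactMarkdown from "react-markdown";
import remarkMath from "remark-math";
import rehypeKatex from "rehype-katex";
import "katex/dist/katex.min.css"; // KaTeX styles; equations are unreadable without it

// Inline math: $E = mc^2$; block math: $$\int_0^1 x^2\,dx$$
export function NoteMarkdown({ content }: { content: string }) {
  return (
    <ReactMarkdown remarkPlugins={[remarkMath]} rehypePlugins={[rehypeKatex]}>
      {content}
    </ReactMarkdown>
  );
}
```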
## Technical Details
- Replace undefined currentNote references with optimistic state
- Add optimistic updates before server actions for instant feedback
- Use router.refresh() in transitions for smart cache invalidation
- Install remark-math, rehype-katex, and katex packages
## Testing
- Build passes successfully with no TypeScript errors
- Dev server hot-reloads changes correctly
- Added multi-provider AI infrastructure (OpenAI/Ollama)
- Implemented real-time tag suggestions with debounced analysis
- Created AI diagnostics and database maintenance tools in Settings
- Added automated garbage collection for orphan labels
- Refined UX with deterministic color hashing (sketched below) and interactive ghost tags
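For the color hashing, a minimal sketch (palette values assumed): hashing the label name means the same tag always gets the same color, with no stored state.
```typescript
// Assumed palette; any fixed list of colors works.
const PALETTE = ["#f28b82", "#fbbc04", "#fff475", "#ccff90", "#a7ffeb", "#cbf0f8"];

// Hash a tag name to a stable palette index.
function labelColor(name: string): string {
  let hash = 0;
  for (const ch of name) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return PALETTE[hash % PALETTE.length];
}
```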