Add pt-6 to CardFooter for proper visual separation between the last
content section and the action buttons.
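For reference, a minimal sketch of the resulting markup (shadcn/ui import paths assumed):

    import { CardFooter } from "@/components/ui/card"   // path assumed
    import { Button } from "@/components/ui/button"     // path assumed

    export function ExampleFooter() {
      return (
        // pt-6 adds top padding so the footer no longer hugs the last section.
        <CardFooter className="pt-6">
          <Button type="submit">Save</Button>
        </CardFooter>
      )
    }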
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Using form action in Next.js triggers automatic router cache revalidation,
causing the server component to re-render and remount the client component,
which resets all useState values. Switching to onSubmit with e.preventDefault()
prevents this behavior while preserving full functionality.
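A minimal sketch of the onSubmit pattern, with a hypothetical saveSettings server action:

    'use client'

    import { useState, type FormEvent } from 'react'
    import { saveSettings } from './actions'   // hypothetical server action

    export function SettingsForm() {
      const [name, setName] = useState('')

      async function handleSubmit(e: FormEvent<HTMLFormElement>) {
        // Prevent the native submission; without a form action Next.js does
        // not revalidate the router cache, so useState values survive the save.
        e.preventDefault()
        await saveSettings(new FormData(e.currentTarget))
      }

      return (
        <form onSubmit={handleSubmit}>
          <input value={name} onChange={(e) => setName(e.target.value)} />
          <button type="submit">Save</button>
        </form>
      )
    }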
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
revalidatePath was causing the server component to re-render and
potentially remount the client component, resetting all useState values.
Since the client component already updates its local state optimistically
after save, revalidatePath is unnecessary here. Also uninstalls agentation.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The Ollama model useEffect hooks were triggering on every config prop
change (including after revalidatePath), causing a model list refetch
that reset the combobox selection. Limited the dependencies to the provider
state only, so the fetch runs only on mount and on provider switch.
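A sketch of the narrowed effect, as a fragment inside the settings form component (state and variable names assumed; /api/tags is Ollama's model-list endpoint):

    useEffect(() => {
      if (provider !== 'ollama') return
      fetch(`${ollamaBaseUrl}/api/tags`)             // Ollama's model-list endpoint
        .then((res) => res.json())
        .then((data) => setModels(data.models ?? []))
        .catch(() => setModels([]))
      // Deliberately not depending on the config prop: a server re-render
      // after revalidatePath would otherwise refetch and reset the combobox.
      // eslint-disable-next-line react-hooks/exhaustive-deps
    }, [provider])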
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Three bugs fixed:
- Removed the useEffect that synced state from config prop on every
re-render, which caused a race condition resetting model state after
revalidatePath triggered a server re-render.
- Reset selected model to a sensible default when switching providers,
preventing stale model names from one provider appearing in another
provider's model list (which made the select show the first option).
- Model select FormData names already fixed in previous commit to match
provider-specific field names (AI_MODEL_TAGS_OLLAMA etc).
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The model select fields used provider-specific names (AI_MODEL_TAGS_OLLAMA,
AI_MODEL_TAGS_OPENAI, etc.) but handleSaveAI read from non-existent
formData keys (AI_MODEL_TAGS, AI_MODEL_EMBEDDING). This caused model
values to be silently dropped on save, making the provider appear to
revert. Now reads from the correct provider-specific field names.
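A sketch of the corrected lookup in handleSaveAI (the embedding field follows the same naming convention by assumption):

    function readModelFields(
      formData: FormData,
      tagsProvider: string,        // e.g. 'ollama' | 'openai'
      embeddingProvider: string
    ) {
      // Read from the provider-specific field names actually present in the
      // form, e.g. AI_MODEL_TAGS_OLLAMA or AI_MODEL_TAGS_OPENAI.
      return {
        tagsModel: formData.get(`AI_MODEL_TAGS_${tagsProvider.toUpperCase()}`),
        embeddingModel: formData.get(`AI_MODEL_EMBEDDING_${embeddingProvider.toUpperCase()}`),
      }
    }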
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add nodemailer override to force v8.0.4+ across the dependency tree,
fixing the ERESOLVE conflict with next-auth@5.0.0-beta.30 and
eliminating the SMTP command injection vulnerability (GHSA-c7w3-x93f-qmm8).
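The override lives in package.json; a sketch (the exact version range may differ from what was committed):

    {
      "overrides": {
        "nodemailer": "^8.0.4"
      }
    }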
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Unified localStorage key to 'theme-preference' across all components
- Fixed header.tsx using wrong localStorage key ('theme' instead of 'theme-preference')
- Added localStorage hybrid persistence for instant theme changes
- Removed router.refresh(), which was causing a stale-data revert
- Replaced Blue theme with Sepia
- Consolidated auth() calls to prevent race conditions
- Updated UserSettingsData types to include all themes
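A sketch of the localStorage hybrid persistence noted above (the server action name and how the theme is applied to the DOM are assumptions):

    const THEME_KEY = 'theme-preference'

    // Hypothetical server action that persists the theme to user settings.
    declare function saveThemeSetting(theme: string): Promise<void>

    function applyTheme(theme: string) {
      // Write locally first so the change is instant on the client...
      localStorage.setItem(THEME_KEY, theme)
      document.documentElement.dataset.theme = theme
      // ...then persist in the background, without router.refresh().
      void saveThemeSetting(theme)
    }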
Enhanced toast styling to ensure ALL elements in the toaster container
don't block page interaction, while keeping toast buttons clickable.
Added:
- pointer-events: none on all toaster children
- pointer-events: none on ::before and ::after pseudo-elements
- Only actual toast elements have pointer-events: auto
This should fix the issue where users had to refresh (F5) after any
toast notification before buttons became clickable again.
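A sketch of the resulting CSS (selectors follow sonner's data attributes, per the description above):

    /* Nothing in the toaster container may capture clicks... */
    [data-sonner-toaster],
    [data-sonner-toaster] *,
    [data-sonner-toaster]::before,
    [data-sonner-toaster]::after {
      pointer-events: none;
    }

    /* ...except the toasts themselves and their contents (buttons, close). */
    [data-sonner-toaster] [data-sonner-toast],
    [data-sonner-toaster] [data-sonner-toast] * {
      pointer-events: auto;
    }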
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
CRITICAL FIX: Auto-labels, notebook summaries, and other AI features
were not working because 8 services were calling getAIProvider() WITHOUT
passing the config parameter.
This caused them to use the default 'ollama' provider instead of
the configured OpenAI provider from the database.
ROOT CAUSE ANALYSIS:
Working features (titles):
- title-suggestions/route.ts: getAIProvider(config) ✓
Broken features (auto-labels, summaries):
- contextual-auto-tag.service.ts: getAIProvider() ✗ (2x)
- notebook-summary.service.ts: getAIProvider() ✗
- auto-label-creation.service.ts: getAIProvider() ✗
- notebook-suggestion.service.ts: getAIProvider() ✗
- batch-organization.service.ts: getAIProvider() ✗
- embedding.service.ts: getAIProvider() ✗ (2x)
FIXED: All 8 services now properly call:
const config = await getSystemConfig()
const provider = getAIProvider(config)
This ensures ALL AI features use the provider configured in the admin
interface (OpenAI) instead of defaulting to Ollama.
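In each service the call site now looks roughly like this (import paths and the provider method name are assumptions):

    import { getSystemConfig } from '@/lib/config'    // path assumed
    import { getAIProvider } from '@/lib/ai/factory'  // path assumed

    export async function summarizeNotebook(content: string) {
      // Load the admin-configured provider instead of the default.
      const config = await getSystemConfig()
      const provider = getAIProvider(config)
      return provider.generate(`Summarize:\n${content}`)  // method name assumed
    }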
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
CRITICAL FIX: Toast notifications were blocking all UI interaction,
requiring a page refresh (F5) to click buttons again.
Root cause: The [data-sonner-toaster] container was capturing all
pointer events, preventing clicks on the page underneath.
Solution:
- Added CSS: pointer-events: none on [data-sonner-toaster]
- Added CSS: pointer-events: auto on [data-sonner-toast]
- This allows the toast itself to be interactive (buttons, close)
- But lets clicks pass through to the page underneath
After this fix, users can continue working immediately after
a toast appears without refreshing the page.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
URGENT FIX: Admin form was not properly saving AI provider configuration,
causing 'AI_PROVIDER_TAGS is not configured' error even after setting OpenAI.
Changes:
- admin-settings-form.tsx: Added validation and error handling
- admin-settings.ts: Filter empty values before saving to DB
- setup-openai.ts: Script to initialize OpenAI as default provider
This fixes the critical bug where users couldn't use the app after
configuring OpenAI in the admin interface.
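A sketch of the empty-value filtering added in admin-settings.ts (helper name assumed):

    function dropEmptyValues(formData: FormData): Record<string, string> {
      const cleaned: Record<string, string> = {}
      for (const [key, value] of formData.entries()) {
        // Keep only non-empty string fields so blanks don't overwrite
        // existing configuration in the database.
        if (typeof value === 'string' && value.trim() !== '') {
          cleaned[key] = value
        }
      }
      return cleaned
    }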
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
ROOT CAUSE: The factory was defaulting to 'ollama' when no provider
was configured, and docker-compose.yml was always setting OLLAMA_BASE_URL
even when using OpenAI. This caused the app to try connecting to Ollama
even when OpenAI was configured in the admin.
CRITICAL CHANGES:
1. lib/ai/factory.ts - Removed 'ollama' default fallback
- getTagsProvider() now throws error if AI_PROVIDER_TAGS not set
- getEmbeddingsProvider() now throws error if AI_PROVIDER_EMBEDDING not set
- Forces explicit configuration instead of silent fallback to Ollama
2. docker-compose.yml - Removed default OLLAMA_BASE_URL
- Changed: OLLAMA_BASE_URL=${OLLAMA_BASE_URL:-http://ollama:11434}
- To: OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
- Only set if explicitly defined in .env.docker
3. Application name: Mento → Memento (correct spelling)
- Updated in: sidebar, README, deploy.sh, DOCKER_DEPLOYMENT.md
4. app/api/ai/config/route.ts - Return 'not set' instead of 'ollama'
- Makes it clear when provider is not configured
IMPACT: The app will now properly use OpenAI when configured in the
admin interface, instead of silently falling back to Ollama.
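A sketch of the stricter factory behaviour (the config shape and the internal createProvider helper are assumptions):

    // lib/ai/factory.ts
    import { createProvider } from './providers'   // hypothetical helper

    export function getTagsProvider(config: Record<string, string | undefined>) {
      const provider = config.AI_PROVIDER_TAGS
      if (!provider) {
        // No silent fallback to 'ollama': configuration must be explicit.
        throw new Error('AI_PROVIDER_TAGS is not configured')
      }
      return createProvider(provider, config)
    }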
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add comprehensive tests to verify AI provider configuration and ensure
OpenAI is being used correctly instead of hardcoded Ollama.
Changes:
- Add ai-provider.spec.ts: Playwright tests for AI provider validation
- Add /api/debug/config endpoint: Exposes AI configuration for testing
- Tests verify: OpenAI config, connectivity, no OLLAMA errors
All 4 tests pass locally:
✓ AI provider configuration check
✓ OpenAI connectivity test
✓ Embeddings provider verification
✓ No OLLAMA errors validation
Usage on Docker:
TEST_URL=http://192.168.1.190:3000 npx playwright test ai-provider.spec.ts
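One of the checks might look like this (the response shape of /api/debug/config is an assumption; baseURL is expected to come from TEST_URL):

    import { test, expect } from '@playwright/test'

    test('AI provider configuration check', async ({ request }) => {
      // Debug endpoint added in this commit; field names assumed.
      const res = await request.get('/api/debug/config')
      expect(res.ok()).toBeTruthy()
      const config = await res.json()
      expect(config.AI_PROVIDER_TAGS).toBe('openai')
    })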
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The paragraph-refactor service was using OLLAMA_BASE_URL directly from
environment variables instead of using the configured AI provider from
the database. This caused "OLLAMA error" even when OpenAI was configured
in the admin interface.
Changes:
- paragraph-refactor.service.ts: Now uses getSystemConfig() and
getTagsProvider() from factory instead of direct Ollama calls
- factory.ts: Added proper error messages when API keys are missing
- .env.docker.example: Updated with new provider configuration
variables (AI_PROVIDER_TAGS, AI_PROVIDER_EMBEDDING)
This fixes the issue where AI reformulation features (Clarify, Shorten,
Improve Style) would fail with OLLAMA errors even when OpenAI was
properly configured in the admin settings.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Update sidebar.tsx to display "Mento" instead of "Keep"
- Update README.md title from "Keep Notes" to "Mento"
- Update DOCKER_DEPLOYMENT.md references to "Mento"
- Update deploy.sh script comments to use "Mento"
- Add DOCKER-SETUP.md with Docker configuration guide
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Critical fix for Docker deployment where AI features were trying to connect
to localhost:11434 instead of using the configured provider (the Ollama Docker
service or OpenAI).
Problems fixed:
1. Reformulation (clarify/shorten/improve) failing with ECONNREFUSED 127.0.0.1:11434
2. Auto-labels failing with same error
3. Notebook summaries failing
4. Could not switch from Ollama to OpenAI in admin
Root cause:
- Code had hardcoded fallback to 'http://localhost:11434' in multiple places
- .env.docker file not created on server (gitignored)
- No validation that required environment variables were set
Changes:
1. lib/ai/factory.ts:
- Remove hardcoded 'http://localhost:11434' fallback
- Only use localhost for local development (NODE_ENV !== 'production')
- Throw error if OLLAMA_BASE_URL not set in production
2. lib/ai/providers/ollama.ts:
- Remove default parameter 'http://localhost:11434' from constructor
- Require baseUrl to be explicitly passed
- Throw error if baseUrl is missing
3. lib/ai/services/paragraph-refactor.service.ts:
- Remove 'http://localhost:11434' fallback (2 locations)
- Require OLLAMA_BASE_URL to be set
- Throw clear error if not configured
4. app/(main)/admin/settings/admin-settings-form.tsx:
- Add debug info showing current provider state
- Display database config value for transparency
- Help troubleshoot provider selection issues
5. DOCKER-SETUP.md:
- Complete guide for Docker configuration
- Instructions for .env.docker setup
- Examples for Ollama Docker, OpenAI, and external Ollama
- Troubleshooting common issues
Usage:
On server, create .env.docker with proper provider configuration:
- Ollama in Docker: OLLAMA_BASE_URL="http://ollama:11434"
- OpenAI: OPENAI_API_KEY="sk-..."
- External Ollama: OLLAMA_BASE_URL="http://SERVER_IP:11434"
Then in admin interface, users can independently configure:
- Tags Provider (for auto-labels, AI features)
- Embeddings Provider (for semantic search)
Result:
✓ Clear errors if Ollama not configured
✓ Can switch to OpenAI freely in admin
✓ No more hardcoded localhost in production
✓ Proper separation between local dev and Docker production
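The environment handling from change 1 boils down to something like this (helper name assumed):

    // lib/ai/factory.ts
    function resolveOllamaBaseUrl(): string {
      const url = process.env.OLLAMA_BASE_URL
      if (url) return url
      // localhost is only acceptable for local development.
      if (process.env.NODE_ENV !== 'production') {
        return 'http://localhost:11434'
      }
      throw new Error('OLLAMA_BASE_URL is not set; configure it in .env.docker')
    }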
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Critical fix for AI features (reformulation, auto-labels) in Docker.
Only title generation was working because it had a hardcoded fallback.
Problem:
- Code uses OLLAMA_BASE_URL environment variable
- docker-compose.yml was setting OLLAMA_API_URL (wrong variable)
- .env.docker was missing on server
- Container couldn't connect to Ollama service
Root cause:
- Environment variable name mismatch
- Missing OLLAMA_BASE_URL configuration
- localhost:11434 doesn't work in Docker (need service name)
Solution:
1. Update docker-compose.yml:
- Change OLLAMA_API_URL → OLLAMA_BASE_URL
- Add OLLAMA_MODEL environment variable
- Default to http://ollama:11434 (Docker service name)
2. Update .env.docker:
- Add OLLAMA_BASE_URL="http://ollama:11434"
- Add OLLAMA_MODEL="granite4:latest"
- Document AI provider configuration
3. Update .env.docker.example:
- Add examples for different Ollama setups
- Document Docker service name vs external IP
- Add OpenAI configuration example
Result:
✓ All AI features now work in Docker (titles, reformulation, auto-labels)
✓ Proper connection to Ollama container via Docker network
✓ Clear documentation for different deployment scenarios
Technical details:
- Docker service name "ollama" resolves to container IP
- Port 11434 accessible within memento-network
- Fallback to localhost only for local development
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes issue where interface always defaulted to English for new users.
Now automatically detects and applies browser language on first visit.
Changes:
- lib/i18n/LanguageProvider.tsx:
- Add browser language detection using navigator.language
- Check if detected language is in supported languages list
- Auto-save detected language to localStorage
- Update HTML lang attribute for proper font rendering
Behavior:
- First visit: Detects browser language (e.g., fr-FR → fr)
- Returning visits: Uses saved language from localStorage
- Fallback: Defaults to English if language not supported
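A sketch of the detection logic (the localStorage key name is an assumption):

    const SUPPORTED = ['en', 'fr', 'es', 'de', 'fa', 'it', 'pt', 'ru',
                       'zh', 'ja', 'ko', 'ar', 'hi', 'nl', 'pl']
    const STORAGE_KEY = 'language'   // key name assumed

    function detectInitialLanguage(): string {
      const saved = localStorage.getItem(STORAGE_KEY)
      if (saved && SUPPORTED.includes(saved)) return saved
      // navigator.language is e.g. "fr-FR"; keep only the base code.
      const browserLang = navigator.language.split('-')[0]
      const lang = SUPPORTED.includes(browserLang) ? browserLang : 'en'
      localStorage.setItem(STORAGE_KEY, lang)
      document.documentElement.lang = lang   // for proper font rendering
      return lang
    }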
Result:
✓ French users see French interface automatically
✓ All supported languages work (en, fr, es, de, fa, it, pt, ru, zh, ja, ko, ar, hi, nl, pl)
✓ Better UX for international users
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes issue where notebook dropdown showed icon name (e.g., "folder")
instead of the actual icon component.
Problem:
- note-card.tsx was displaying {notebook.icon} as text
- Users saw "folder", "book", etc. instead of icons
Solution:
- Import Lucide icon components (Folder, Book, Briefcase, etc.)
- Add ICON_MAP matching icon names to components
- Use getNotebookIcon() function to resolve icon name to component
- Render component as <NotebookIcon className="h-4 w-4 mr-2" />
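A sketch of the mapping (subset of icons shown; the Folder fallback for unknown names is an assumption):

    import { Folder, Book, Briefcase, type LucideIcon } from 'lucide-react'

    // Map stored icon names to their Lucide components.
    const ICON_MAP: Record<string, LucideIcon> = {
      folder: Folder,
      book: Book,
      briefcase: Briefcase,
    }

    // Resolve a stored name, falling back to Folder for unknown values.
    function getNotebookIcon(name: string): LucideIcon {
      return ICON_MAP[name] ?? Folder
    }

    // In the dropdown:
    //   const NotebookIcon = getNotebookIcon(notebook.icon)
    //   <NotebookIcon className="h-4 w-4 mr-2" />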
Changes:
- components/note-card.tsx:
- Add LucideIcon and icon imports
- Add ICON_MAP and getNotebookIcon() helper
- Update notebook dropdown to render icon components
Result:
✓ Notebook icons now display correctly in dropdown menu
✓ Consistent with notebooks-list.tsx implementation
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes redirect issue where application redirects to localhost:3000
instead of server IP when accessed from remote browser.
Problem:
- Accessing http://192.168.1.190:3000 redirects to http://localhost:3000
- NEXTAUTH_URL was hardcoded to localhost in .env
- Remote users cannot access the application
Solution:
1. Create .env.docker with server-specific configuration
- NEXTAUTH_URL="http://192.168.1.190:3000"
- Ollama URL for container-to-container communication
2. Update docker-compose.yml:
- Add env_file: .env.docker
- Loads Docker-specific environment variables
3. Create .env.docker.example as template:
- Document NEXTAUTH_URL configuration
- Users can copy and customize for their server IP/domain
Files:
- .env.docker: Docker environment (gitignored)
- .env.docker.example: Template for users
- docker-compose.yml: Load .env.docker
- .env.example: Update NEXTAUTH_URL documentation
Usage:
On production server, edit .env.docker with your server IP:
NEXTAUTH_URL="http://YOUR_IP:3000"
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Critical fix for production deployment on Proxmox/Docker.
Problem:
- Runtime error: Prisma Client could not locate Query Engine for "debian-openssl-1.1.x"
- Wrong binary target generated (Windows dll instead of Linux .so.node)
- Wrong OpenSSL version (3.0.x instead of 1.1.x for Debian 11)
Root cause:
- schema.prisma didn't specify binaryTargets
- Prisma auto-detected Windows during local development
- Debian 11 (bullseye) uses OpenSSL 1.1.x, not 3.0.x
Solution:
1. Add binaryTargets to schema.prisma:
- "debian-openssl-1.1.x" for Docker/Proxmox
- "native" for local development
2. Fix Prisma folder permissions in Docker:
- RUN chown -R nextjs:nodejs /app/prisma
- Ensures Query Engine binary is readable by app user
Changes:
- prisma/schema.prisma: Add binaryTargets = ["debian-openssl-1.1.x", "native"]
- keep-notes/Dockerfile: Add chown for /app/prisma folder
Verification:
✓ libquery_engine-debian-openssl-1.1.x.so.node exists
✓ Permissions: nextjs:nodejs (readable)
✓ Prisma Client loads successfully in container
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixes runtime error where Prisma Client could not locate the Query Engine:
"libquery_engine-debian-openssl-1.1.x.so.node" not found
Root cause:
- Next.js standalone output does not include Prisma Query Engine binaries
- The .prisma folder in node_modules contains the required binary files
Solution:
- Copy node_modules/.prisma folder in Docker runner stage
- This includes libquery_engine-debian-openssl-1.1.x.so.node
- Prisma Client can now find and load the Query Engine at runtime
Tested:
✓ Docker build successful
✓ Container starts without Prisma errors
✓ Application ready in 40ms
Changes:
- keep-notes/Dockerfile: Add COPY for node_modules/.prisma folder
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>