| stepsCompleted | workflow_completed | inputDocuments | workflow_type | project_name | user_name | date | focus_area | communication_language | document_output_language | status |
|---|---|---|---|---|---|---|---|---|---|---|
| | true | | create-epics-and-stories | Keep (Memento Phase 1 MVP AI) | Ramez | 2026-01-10 | Phase 1 MVP AI - AI-Powered Note Taking Features | French | English | completed |
Keep (Memento) - Epic Breakdown - Phase 1 MVP AI
Overview
This document provides the complete epic and story breakdown for Keep Phase 1 MVP AI, decomposing the requirements from the Phase 1 PRD, UX Design Specification, and Architecture into implementable stories.
Project Context: Brownfield extension of existing Keep Notes application with AI-powered features. Zero breaking changes to existing functionality.
Implementation Timeline: 12 weeks (4 phases)
Target: Production-ready MVP with 6 core AI features
Requirements Inventory
Functional Requirements - Phase 1 MVP
Core AI Features:
- FR6: Real-time content analysis for concept identification
- FR7: AI-powered tag suggestions based on content analysis
- FR8: User control over AI suggestions (accept/modify/reject)
- FR11: Exact keyword search (title and content)
- FR12: Semantic search by meaning/intention (natural language)
- FR13: Hybrid search combining exact + semantic results
Foundation Features (Already Implemented):
- FR1: CRUD operations for notes (text and checklist)
- FR2: Pin notes to top of list
- FR3: Archive notes
- FR4: Attach images to notes
- FR5: Drag-and-drop reordering (Muuri)
- FR9: Manual tag management
- FR10: Filter and sort by tags
- FR16: Optimistic UI for immediate feedback
Configuration & Administration:
- FR17: AI provider configuration (OpenAI, Ollama)
- FR18: Multi-provider support via Vercel AI SDK
- FR19: Theme customization (dark mode)
Deferred to Phase 2/3:
- FR14: Offline PWA mode
- FR15: Background sync
Non-Functional Requirements
Performance:
- NFR1: Auto-tagging < 1.5s after typing ends
- NFR2: Semantic search < 300ms for 1000 notes
- NFR3: Title suggestions < 2s after detection
Security & Privacy:
- NFR4: API key isolation (server-side only)
- NFR5: Local-first privacy (Ollama = 100% local)
Reliability:
- NFR8: Vector integrity (automatic background updates)
Portability:
- NFR9: Minimal footprint (Zero DevOps)
- NFR10: Node.js LTS support
Phase 1 MVP AI Epic Mapping
Epic 1: Intelligent Title Suggestions ⭐
Focus: AI-powered title generation for untitled notes
FRs covered: FR6, FR8
Architecture Decision: Decision 1 (Database Schema), Decision 3 (Language Detection)
Priority: HIGH (Core user experience feature)
Epic 2: Hybrid Semantic Search 🔍
Focus: Keyword + vector search with RRF fusion
FRs covered: FR11, FR12, FR13
Architecture Decision: Decision 1 (Database Schema - reuses Note.embedding)
Priority: HIGH (Core discovery feature)
Epic 3: Paragraph-Level Reformulation ✍️
Focus: AI-powered text improvement (Clarify, Shorten, Improve Style)
FRs covered: FR6, FR8
Architecture Decision: Decision 1 (Database Schema - no schema change)
Priority: MEDIUM (User productivity feature)
Epic 4: Memory Echo (Proactive Connections) 🧠
Focus: Daily proactive note connections via cosine similarity
FRs covered: FR6
Architecture Decision: Decision 2 (Memory Echo Architecture)
Priority: HIGH (Differentiating feature)
Epic 5: AI Settings Panel ⚙️
Focus: Granular ON/OFF controls per feature + provider selection
FRs covered: FR17, FR18
Architecture Decision: Decision 4 (AI Settings Architecture)
Priority: HIGH (User control requirement)
Epic 6: Language Detection Service 🌐
Focus: Automatic language detection (TinyLD hybrid approach)
FRs covered: FR6 (cross-cutting concern)
Architecture Decision: Decision 3 (Language Detection Strategy)
Priority: HIGH (Enables multilingual prompts)
Epic 1: Intelligent Title Suggestions
Overview
Generate 3 AI-powered title suggestions when a note reaches 50+ words without a title. User can accept, modify, or reject suggestions.
User Stories: 3
Estimated Complexity: Medium
Dependencies: Language Detection Service, AI Provider Factory
Story 1.1: Real-time Word Count Detection
As a user, I want the system to detect when my note reaches 50+ words without a title, so that I can receive title suggestions automatically.
Acceptance Criteria:
- Given an open note editor
- When I type content and the word count reaches 50+
- And the note title field is empty
- Then the system triggers background title generation
- And a non-intrusive toast notification appears: "💡 Title suggestions available"
Technical Requirements:
- Word count check triggered on a `debounce` (300ms after typing stops)
- Detection logic: `content.split(/\s+/).length >= 50`
- Must not interfere with the typing experience (non-blocking)
- Toast notification uses Sonner (Radix UI compatible)
Implementation Files:
- Component: `keep-notes/components/ai/ai-suggestion.tsx` (NEW)
- Hook: `useWordCountDetection` (NEW utility)
- UI: Toast notification with "View" / "Dismiss" actions
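The detection logic above can be sketched as small pure helpers. This is a sketch under stated assumptions: `shouldSuggestTitle` and `debounce` are illustrative names, not the actual `useWordCountDetection` implementation.

```typescript
// Sketch of the word-count trigger: fires only for untitled notes with 50+ words.
export function shouldSuggestTitle(content: string, title: string): boolean {
  const words = content.trim().split(/\s+/).filter(Boolean)
  return title.trim() === '' && words.length >= 50
}

// Minimal debounce so the check runs 300ms after typing stops (non-blocking).
export function debounce<T extends (...args: any[]) => void>(fn: T, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: Parameters<T>) => {
    if (timer) clearTimeout(timer)
    timer = setTimeout(() => fn(...args), ms)
  }
}
```

The hook would wrap `shouldSuggestTitle` in the debounced callback and fire the toast when it returns true.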
Story 1.2: AI Title Generation
As a system, I want to generate 3 relevant title suggestions using AI, so that users can quickly organize their notes.
Acceptance Criteria:
- Given a note with 50+ words of content
- When title generation is triggered
- Then the AI generates 3 distinct title suggestions
- And each title is concise (3-8 words)
- And titles reflect the main concept of the content
- And generation completes within < 2 seconds
Technical Requirements:
- Service: `TitleSuggestionService` in `lib/ai/services/title-suggestion.service.ts`
- Provider: uses the `getAIProvider()` factory (OpenAI or Ollama)
- System Prompt: English (for stability)
- User Data: Local language (FR, EN, ES, DE, FA, etc.)
- Language Detection: Called before generation for multilingual prompts
- Storage: Suggestions stored in memory (not persisted until user accepts)
Prompt Engineering:
System: You are a title generator. Generate 3 concise titles (3-8 words each) that capture the main concept.
User Language: {detected_language}
Content: {note_content}
Output format: JSON array of strings
Error Handling:
- If AI fails: Retry once with different provider (if available)
- If retry fails: Show toast error "Failed to generate suggestions. Please try again."
- Timeout: 5 seconds maximum
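Since the prompt requests a "JSON array of strings", the service needs to validate the model's output before surfacing suggestions. A minimal sketch, assuming a hypothetical `parseTitleSuggestions` helper (not part of the actual `TitleSuggestionService`):

```typescript
// Validate the model's JSON output: exactly 3 titles, each 3-8 words.
// parseTitleSuggestions is a hypothetical helper for illustration.
export function parseTitleSuggestions(raw: string): string[] {
  let parsed: unknown
  try {
    parsed = JSON.parse(raw)
  } catch {
    throw new Error('Model output is not valid JSON')
  }
  if (!Array.isArray(parsed) || parsed.length !== 3) {
    throw new Error('Expected exactly 3 title suggestions')
  }
  return parsed.map(t => {
    if (typeof t !== 'string') throw new Error('Titles must be strings')
    const words = t.trim().split(/\s+/).filter(Boolean)
    if (words.length < 3 || words.length > 8) {
      throw new Error(`Title out of 3-8 word range: "${t}"`)
    }
    return t.trim()
  })
}
```

A validation failure here would count as an AI failure and feed the retry path described above.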
Story 1.3: User Interaction & Feedback
As a user, I want to accept, modify, or reject title suggestions, so that I maintain full control over my note organization.
Acceptance Criteria:
- Given 3 AI-generated title suggestions
- When I click the toast notification
- Then a modal displays the 3 suggestions
- And I can click any suggestion to apply it as the note title
- And I can click "Dismiss" to ignore all suggestions
- And the modal closes automatically after selection or dismissal
Technical Requirements:
- Component: `AiSuggestionModal` (extends `components/ai/ai-suggestion.tsx`)
- Server Action: `updateNote(noteId, { title })`
- Feedback: store the user's choice in the `AiFeedback` table:
  - `feedbackType`: 'thumbs_up' if accepted without modification
  - `feature`: 'title_suggestion'
  - `originalContent`: all 3 suggestions (JSON array)
  - `correctedContent`: user's final choice (or modified title)
UI/UX Requirements (from UX Design Spec):
- Modal design: Clean, centered, with card-style suggestions
- Each suggestion: Clickable card with hover effect
- "Dismiss" button: Secondary action at bottom
- Auto-close after selection (no confirmation dialog)
- If user modifies title: Record as 'correction' feedback
Implementation Files:
- Modal: `components/ai/ai-suggestion.tsx` (NEW)
- Server Action: `app/actions/ai-suggestions.ts` (NEW)
- API Route: `/api/ai/feedback` (NEW); stores feedback
Database Updates:
// When user accepts a title
await prisma.note.update({
where: { id: noteId },
data: {
title: selectedTitle,
autoGenerated: true,
aiProvider: currentProvider,
aiConfidence: 85, // Placeholder - Phase 3 will calculate
lastAiAnalysis: new Date()
}
})
// Store feedback for Phase 3 trust scoring
await prisma.aiFeedback.create({
data: {
noteId,
userId: session.user.id,
feedbackType: 'thumbs_up',
feature: 'title_suggestion',
originalContent: JSON.stringify(allThreeSuggestions),
correctedContent: selectedTitle,
metadata: JSON.stringify({
provider: currentProvider,
model: modelName,
timestamp: new Date()
})
}
})
Epic 2: Hybrid Semantic Search
Overview
Combine exact keyword matching with vector similarity search using Reciprocal Rank Fusion (RRF) for comprehensive results.
User Stories: 3
Estimated Complexity: High
Dependencies: Existing embeddings system, Language Detection (optional)
Story 2.1: Query Embedding Generation
As a system, I want to generate vector embeddings for user search queries, so that I can find notes by meaning.
Acceptance Criteria:
- Given a user search query
- When the search is executed
- Then the system generates a vector embedding for the query
- And the embedding is stored in memory (not persisted)
- And generation completes within < 200ms
Technical Requirements:
- Service: `SemanticSearchService` in `lib/ai/services/semantic-search.service.ts`
- Provider: uses the `getAIProvider()` factory
- Embedding Model: `text-embedding-3-small` (OpenAI) or an Ollama equivalent
- Language Detection: optional (can detect query language for better results)
- Caching: Query embeddings cached in React Cache (5-minute TTL)
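The 5-minute TTL caching mentioned above can be sketched with a plain Map-based cache. This is a stand-in for illustration; the actual caching layer may differ.

```typescript
// Sketch of a TTL cache for query embeddings; `now` is injectable for testing.
export class TTLCache<V> {
  private store = new Map<string, { value: V, expires: number }>()
  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (now > entry.expires) {
      this.store.delete(key) // evict stale entry
      return undefined
    }
    return entry.value
  }

  set(key: string, value: V, now: number = Date.now()): void {
    this.store.set(key, { value, expires: now + this.ttlMs })
  }
}
```

On a cache miss the service would call `generateQueryEmbedding` and store the result; repeated searches for the same query within 5 minutes skip the provider round-trip.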
Implementation:
// lib/ai/services/semantic-search.service.ts
async generateQueryEmbedding(query: string): Promise<number[]> {
const provider = getAIProvider()
const embedding = await provider.generateEmbedding(query)
return embedding
}
Story 2.2: Vector Similarity Calculation
As a system, I want to calculate cosine similarity between query and all user notes, so that I can rank results by meaning.
Acceptance Criteria:
- Given a query embedding and all user note embeddings
- When similarity calculation runs
- Then the system calculates cosine similarity for each note
- And returns notes ranked by similarity score (descending)
- And calculation completes within < 300ms for 1000 notes
Technical Requirements:
- Algorithm: Cosine similarity
- Formula: `similarity = dotProduct(queryEmbedding, noteEmbedding) / (magnitude(query) * magnitude(note))`
- Threshold: notes with similarity < 0.3 are filtered out
- Performance: In-memory calculation (no separate vector DB for Phase 1)
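The formula above as a standalone utility (a sketch; the Memory Echo epic later includes an equivalent helper):

```typescript
// Cosine similarity per the formula above: dot product over the product of magnitudes.
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, magA = 0, magB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    magA += a[i] * a[i]
    magB += b[i] * b[i]
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB))
}
```

Identical vectors score 1, orthogonal vectors score 0, so the 0.3 threshold keeps only meaningfully related notes.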
Implementation:
// lib/ai/services/semantic-search.service.ts
async searchBySimilarity(
queryEmbedding: number[],
userId: string
): Promise<Array<{ note: Note, score: number }>> {
// Fetch all user notes that already have embeddings
// (filter nulls so JSON.parse below cannot crash)
const notes = await prisma.note.findMany({
where: { userId, embedding: { not: null } },
select: { id: true, title: true, content: true, embedding: true }
})
// Calculate cosine similarity
const results = notes
.map(note => ({
note,
score: cosineSimilarity(queryEmbedding, JSON.parse(note.embedding))
}))
.filter(r => r.score > 0.3) // Threshold filter
.sort((a, b) => b.score - a.score)
return results
}
Story 2.3: Hybrid Search with RRF Fusion
As a user, I want to see combined results from keyword search and semantic search, so that I get the most comprehensive results.
Acceptance Criteria:
- Given a search query
- When I execute the search
- Then the system performs BOTH keyword search AND semantic search
- And results are fused using Reciprocal Rank Fusion (RRF)
- And each result displays a badge: "Exact Match" or "Related"
- And total time < 300ms for 1000 notes
Technical Requirements:
- Service: `SemanticSearchService` (extends Stories 2.1 and 2.2)
- Fusion Algorithm: Reciprocal Rank Fusion (RRF)
  - `RRF(rank) = 1 / (k + rank)` where k = 60 (standard value)
  - Combined score = `RRF(keyword_rank) + RRF(semantic_rank)`
- Keyword Search: existing Prisma query (title/content `LIKE %query%`)
- Semantic Search: cosine similarity from Story 2.2
- Result Limit: top 20 notes
RRF Implementation:
// lib/ai/services/semantic-search.service.ts
async hybridSearch(
query: string,
userId: string
): Promise<Array<{ note: Note, keywordScore: number, semanticScore: number, combinedScore: number }>> {
// Parallel execution
const [keywordResults, semanticResults] = await Promise.all([
this.keywordSearch(query, userId), // Existing implementation
this.searchBySimilarity(query, userId) // Story 2.2
])
// Calculate RRF scores
const k = 60
const scoredNotes = new Map<string, any>()
// Add keyword RRF scores
keywordResults.forEach((note, index) => {
const rrf = 1 / (k + index + 1)
scoredNotes.set(note.id, {
note,
keywordScore: rrf,
semanticScore: 0,
combinedScore: rrf
})
})
// Add semantic RRF scores and combine
semanticResults.forEach(({ note, score }, index) => {
const rrf = 1 / (k + index + 1)
if (scoredNotes.has(note.id)) {
const existing = scoredNotes.get(note.id)
existing.semanticScore = rrf
existing.combinedScore += rrf
} else {
scoredNotes.set(note.id, {
note,
keywordScore: 0,
semanticScore: rrf,
combinedScore: rrf
})
}
})
// Convert to array and sort by combined score
return Array.from(scoredNotes.values())
.sort((a, b) => b.combinedScore - a.combinedScore)
.slice(0, 20) // Top 20 results
}
UI Requirements (from UX Design Spec):
- Component: `components/ai/semantic-search-results.tsx` (NEW)
- Badge display:
  - "Exact Match" badge: blue background, shown if `keywordScore > 0`
  - "Related" badge: gray background, shown if `semanticScore > 0` AND `keywordScore === 0`
  - Both badges can appear if a note matches both
- Result card: Displays title, content snippet (100 chars), badges
- Loading state: Skeleton cards while searching (< 300ms)
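The badge rules can be sketched as a small helper. Note the two rules above are slightly in tension ("Related" requires `keywordScore === 0`, yet "both badges can appear"); this hypothetical `badgesFor` helper follows the explicit conditions:

```typescript
// Hypothetical helper deriving result badges from RRF scores, following the
// explicit conditions in the UX spec above.
type Badge = 'Exact Match' | 'Related'

export function badgesFor(keywordScore: number, semanticScore: number): Badge[] {
  const badges: Badge[] = []
  if (keywordScore > 0) badges.push('Exact Match')
  if (semanticScore > 0 && keywordScore === 0) badges.push('Related')
  return badges
}
```

If the "both badges" behavior is the intended one, the `keywordScore === 0` guard would simply be dropped.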
API Route:
- Endpoint: `POST /api/ai/search`
- Request schema: `{ query: string, userId: string }`
- Response: `{ success: true, data: { results: Array<{ note: Note, badges: Array<"Exact Match" | "Related"> }>, totalResults: number, searchTime: number /* milliseconds */ } }`
Epic 3: Paragraph-Level Reformulation
Overview
AI-powered text improvement with 3 options: Clarify, Shorten, Improve Style. Triggered via context menu on text selection.
User Stories: 2
Estimated Complexity: Medium
Dependencies: AI Provider Factory
Story 3.1: Context Menu Integration
As a user, I want to select text and see "Reformulate" options in a context menu, so that I can improve my writing with AI assistance.
Acceptance Criteria:
- Given a note editor with text content
- When I select one or more paragraphs (50-500 words)
- And I right-click or long-press
- Then a context menu appears with "Reformulate" submenu
- And the submenu shows 3 options: "Clarify", "Shorten", "Improve Style"
- When I click any option
- Then the selected text is sent to AI for reformulation
- And a loading indicator appears on the selected text
Technical Requirements:
- Component: `components/ai/paragraph-refactor.tsx` (NEW)
- Context Menu: extends the existing note editor context menu (Radix Dropdown Menu)
- Text Selection: `window.getSelection()` API
- Word Count Validation: 50-500 words (show an error if out of range)
- Loading State: Skeleton or spinner overlay on selected text
UI Implementation:
// components/ai/paragraph-refactor.tsx
'use client'
import { useCallback } from 'react'
import { startTransition } from 'react'
export function ParagraphRefactor({ noteId, content }: { noteId: string, content: string }) {
const handleTextSelection = useCallback(() => {
const selection = window.getSelection()
const selectedText = selection?.toString()
const wordCount = selectedText?.split(/\s+/).length || 0
if (wordCount < 50 || wordCount > 500) {
showError('Please select 50-500 words to reformulate')
return
}
// Show context menu at the selection position (guard against a null selection)
if (selection) showContextMenu(selection.getRangeAt(0))
}, [])
const handleRefactor = async (option: 'clarify' | 'shorten' | 'improve') => {
const selectedText = window.getSelection()?.toString()
startTransition(async () => {
showLoadingState()
const result = await refactorParagraph(noteId, selectedText, option)
hideLoadingState()
showRefactorDialog(result.refactoredText)
})
}
return (
// Context menu integration
<DropdownMenu>
<DropdownMenuTrigger>Reformulate</DropdownMenuTrigger>
<DropdownMenuContent>
<DropdownMenuItem onClick={() => handleRefactor('clarify')}>
Clarify
</DropdownMenuItem>
<DropdownMenuItem onClick={() => handleRefactor('shorten')}>
Shorten
</DropdownMenuItem>
<DropdownMenuItem onClick={() => handleRefactor('improve')}>
Improve Style
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
)
}
Story 3.2: AI Reformulation & Application
As a user, I want to see AI-reformulated text and choose to apply or discard it, so that I can improve my writing while maintaining control.
Acceptance Criteria:
- Given selected text sent for reformulation
- When AI completes processing (< 2 seconds)
- Then a modal displays showing:
- Original text (left side)
- Reformulated text (right side) with diff highlighting
- "Apply" and "Discard" buttons
- When I click "Apply"
- Then the reformulated text replaces the original in the note
- And the change is saved automatically
- When I click "Discard"
- Then the modal closes and no changes are made
Technical Requirements:
- Service: `ParagraphRefactorService` in `lib/ai/services/paragraph-refactor.service.ts`
- Provider: uses the `getAIProvider()` factory
- System Prompt: English (for stability)
- User Data: local language (respects language detection)
- Diff Display: use `react-diff-viewer` or a similar library
Prompt Engineering:
System: You are a text reformulator. Reformulate the text according to the user's chosen option.
User Language: {detected_language}
Option: {clarify|shorten|improve}
Clarify: Make the text clearer and easier to understand
Shorten: Reduce word count by 30-50% while keeping key information
Improve Style: Enhance readability, flow, and professional tone
Original Text:
{selected_text}
Output: Reformulated text only (no explanations)
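The "Shorten" option's 30-50% target can be checked mechanically. A sketch, assuming a hypothetical post-check (not part of the actual `ParagraphRefactorService`):

```typescript
// Hypothetical post-check for the "Shorten" option: verify the rewrite cut
// 30-50% of the words, per the prompt spec above.
export function wordCount(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length
}

export function shortenWithinSpec(original: string, shortened: string): boolean {
  const reduction = 1 - wordCount(shortened) / wordCount(original)
  return reduction >= 0.3 && reduction <= 0.5
}
```

A result outside the band could trigger a single regeneration before falling back to showing the text as-is.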
UI Implementation:
// Modal component (extends paragraph-refactor.tsx)
export function RefactorModal({
originalText,
refactoredText,
onApply,
onDiscard
}) {
return (
<Dialog open={true}>
<DialogContent className="max-w-4xl">
<DialogHeader>
<DialogTitle>Compare & Apply</DialogTitle>
</DialogHeader>
<div className="grid grid-cols-2 gap-4">
<div>
<h4 className="font-medium mb-2">Original</h4>
<div className="p-4 bg-gray-100 rounded">
{originalText}
</div>
</div>
<div>
<h4 className="font-medium mb-2">Refactored</h4>
<div className="p-4 bg-blue-50 rounded">
{refactoredText}
</div>
</div>
</div>
<DialogFooter>
<Button variant="ghost" onClick={onDiscard}>
Discard
</Button>
<Button onClick={onApply}>
Apply Changes
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
)
}
Server Action:
// app/actions/ai-suggestions.ts
'use server'
import { auth } from '@/auth'
import { ParagraphRefactorService } from '@/lib/ai/services/paragraph-refactor.service'
import { updateNote } from './notes'
export async function refactorParagraph(
noteId: string,
selectedText: string,
option: 'clarify' | 'shorten' | 'improve'
) {
const session = await auth()
if (!session?.user?.id) throw new Error('Unauthorized')
const service = new ParagraphRefactorService()
const refactoredText = await service.refactor(selectedText, option)
return {
success: true,
originalText: selectedText,
refactoredText
}
}
export async function applyRefactoring(
noteId: string,
originalText: string,
refactoredText: string
) {
const session = await auth()
if (!session?.user?.id) throw new Error('Unauthorized')
// Get current note content
const note = await prisma.note.findUnique({ where: { id: noteId } })
if (!note?.userId || note.userId !== session.user.id) {
throw new Error('Note not found')
}
// Replace original text with refactored text
const newContent = note.content.replace(originalText, refactoredText)
await updateNote(noteId, { content: newContent })
return { success: true }
}
Feedback Collection:
// Track which reformulation option users prefer
await prisma.aiFeedback.create({
data: {
noteId,
userId: session.user.id,
feedbackType: 'correction', // User chose to apply
feature: 'paragraph_refactor',
originalContent: originalText,
correctedContent: refactoredText,
metadata: JSON.stringify({
option, // 'clarify' | 'shorten' | 'improve'
provider: currentProvider,
timestamp: new Date()
})
}
})
Epic 4: Memory Echo (Proactive Connections)
Overview
Background process that identifies connections between notes using cosine similarity. Displays 1 insight per day (max similarity > 0.75).
User Stories: 2
Estimated Complexity: High
Dependencies: Existing embeddings system, Decision 2 (Server Action + Queue pattern)
Story 4.1: Background Insight Generation
As a system, I want to analyze all user note embeddings daily to find connections, so that I can proactively suggest related notes.
Acceptance Criteria:
- Given a user with 10+ notes (each with embeddings)
- When the user logs in
- And no insight has been generated today
- Then the system triggers background analysis
- And calculates cosine similarity between all note pairs
- And finds the top pair with similarity > 0.75
- And stores the insight in the `MemoryEchoInsight` table
- And UI freeze is < 100ms (only a DB check; processing runs in the background)
Technical Requirements:
- Server Action: `app/actions/ai-memory-echo.ts` (NEW)
- Service: `MemoryEchoService` in `lib/ai/services/memory-echo.service.ts` (NEW)
- Trigger: user login check (in layout or dashboard)
- Constraint: Max 1 insight per user per day (enforced via DB unique constraint)
- Performance: < 100ms UI freeze (async processing)
Implementation:
// app/actions/ai-memory-echo.ts
'use server'
import { auth } from '@/auth'
import { prisma } from '@/lib/prisma'
import { MemoryEchoService } from '@/lib/ai/services/memory-echo.service'
export async function generateMemoryEcho() {
const session = await auth()
if (!session?.user?.id) {
return { success: false, error: 'Unauthorized' }
}
// Check if already generated today
const today = new Date()
today.setHours(0, 0, 0, 0)
const existing = await prisma.memoryEchoInsight.findFirst({
where: {
userId: session.user.id,
insightDate: { gte: today }
},
include: { note1: true, note2: true } // the notification UI reads note titles
})
if (existing) {
return { success: true, insight: existing, alreadyGenerated: true }
}
// Generate new insight (non-blocking background task)
generateInBackground(session.user.id)
// Return immediately (UI doesn't wait)
return { success: true, insight: null, alreadyGenerated: false }
}
async function generateInBackground(userId: string) {
const service = new MemoryEchoService()
try {
const insight = await service.findTopConnection(userId)
if (insight) {
await prisma.memoryEchoInsight.create({
data: {
userId,
note1Id: insight.note1Id,
note2Id: insight.note2Id,
similarityScore: insight.score
}
})
}
} catch (error) {
console.error('Memory Echo background generation error:', error)
}
}
Service Implementation:
// lib/ai/services/memory-echo.service.ts
export class MemoryEchoService {
async findTopConnection(
userId: string
): Promise<{ note1Id: string, note2Id: string, score: number } | null> {
// Fetch all user notes that already have embeddings
// (filter nulls so JSON.parse below cannot crash)
const notes = await prisma.note.findMany({
where: { userId, embedding: { not: null } },
select: { id: true, embedding: true, title: true, content: true }
})
if (notes.length < 2) return null
// Calculate pairwise cosine similarities
const insights = []
const threshold = 0.75
for (let i = 0; i < notes.length; i++) {
for (let j = i + 1; j < notes.length; j++) {
const embedding1 = JSON.parse(notes[i].embedding)
const embedding2 = JSON.parse(notes[j].embedding)
const similarity = cosineSimilarity(embedding1, embedding2)
if (similarity > threshold) {
insights.push({
note1Id: notes[i].id,
note2Id: notes[j].id,
score: similarity
})
}
}
}
// Return top insight (highest similarity)
if (insights.length === 0) return null
insights.sort((a, b) => b.score - a.score)
return insights[0]
}
}
// Cosine similarity utility
function cosineSimilarity(vecA: number[], vecB: number[]): number {
const dotProduct = vecA.reduce((sum, a, i) => sum + a * vecB[i], 0)
const magnitudeA = Math.sqrt(vecA.reduce((sum, a) => sum + a * a, 0))
const magnitudeB = Math.sqrt(vecB.reduce((sum, b) => sum + b * b, 0))
return dotProduct / (magnitudeA * magnitudeB)
}
Story 4.2: Insight Display & Feedback
As a user, I want to see daily note connections and provide feedback, so that I can discover relationships in my knowledge base.
Acceptance Criteria:
- Given a stored Memory Echo insight
- When I log in (or navigate to dashboard)
- Then a toast notification appears: "💡 Memory Echo: Note X relates to Note Y (85% match)"
- When I click the toast
- Then a modal displays both notes side-by-side
- And I can click each note to view it in editor
- And I can provide feedback via 👍 / 👎 buttons
- When I click feedback
- Then the feedback is stored in the `MemoryEchoInsight.feedback` field
Technical Requirements:
- Component: `components/ai/memory-echo-notification.tsx` (NEW)
- Trigger: check on page load (dashboard layout)
- UI: toast notification with Sonner
- Modal: side-by-side note comparison
- Feedback: updates the `MemoryEchoInsight.feedback` field
UI Implementation:
// components/ai/memory-echo-notification.tsx
'use client'
import { useEffect, useState } from 'react'
import { useRouter } from 'next/navigation'
import { toast } from 'sonner'
import { Bell, X, ThumbsUp, ThumbsDown } from 'lucide-react'
import { generateMemoryEcho } from '@/app/actions/ai-memory-echo'
export function MemoryEchoNotification() {
const router = useRouter()
const [insight, setInsight] = useState<any>(null)
const [viewed, setViewed] = useState(false)
useEffect(() => {
checkForInsight()
}, [])
const checkForInsight = async () => {
const result = await generateMemoryEcho()
if (result.success && result.insight) {
const found = result.insight
setInsight(found)
if (!found.viewed) {
// Read from `found`, not the `insight` state, which is still null here
toast('💡 Memory Echo', {
description: `Note "${found.note1.title}" relates to "${found.note2.title}" (${Math.round(found.similarityScore * 100)}% match)`,
action: {
label: 'View',
onClick: () => showInsightModal(found)
}
})
}
}
}
const showInsightModal = (insightData: any) => {
// Open modal with both notes side-by-side
setViewed(true)
markAsViewed(insightData.id)
}
const handleFeedback = async (feedback: 'thumbs_up' | 'thumbs_down') => {
await updateMemoryEchoFeedback(insight.id, feedback)
toast(feedback === 'thumbs_up' ? 'Thanks for your feedback!' : 'We\'ll improve next time')
// Close modal or hide toast
}
if (!insight) return null
return (
// Modal implementation with feedback buttons
<Dialog open={viewed} onOpenChange={setViewed}>
<DialogContent>
<DialogHeader>
<DialogTitle>Memory Echo Discovery</DialogTitle>
</DialogHeader>
<div className="grid grid-cols-2 gap-4">
{/* Note 1 */}
<NoteCard note={insight.note1} onClick={() => router.push(`/notes/${insight.note1.id}`)} />
{/* Note 2 */}
<NoteCard note={insight.note2} onClick={() => router.push(`/notes/${insight.note2.id}`)} />
</div>
<div className="text-center text-sm text-gray-600">
Similarity: {Math.round(insight.similarityScore * 100)}%
</div>
<div className="flex justify-center gap-4">
<Button
variant={insight.feedback === 'thumbs_up' ? 'default' : 'outline'}
size="icon"
onClick={() => handleFeedback('thumbs_up')}
>
<ThumbsUp className="h-4 w-4" />
</Button>
<Button
variant={insight.feedback === 'thumbs_down' ? 'default' : 'outline'}
size="icon"
onClick={() => handleFeedback('thumbs_down')}
>
<ThumbsDown className="h-4 w-4" />
</Button>
</div>
</DialogContent>
</Dialog>
)
}
Server Action for Feedback:
// app/actions/ai-memory-echo.ts
export async function updateMemoryEchoFeedback(
insightId: string,
feedback: 'thumbs_up' | 'thumbs_down'
) {
const session = await auth()
if (!session?.user?.id) throw new Error('Unauthorized')
await prisma.memoryEchoInsight.update({
where: { id: insightId },
data: { feedback }
})
return { success: true }
}
Database Schema (from Architecture Decision 2):
model MemoryEchoInsight {
id String @id @default(cuid())
userId String?
note1Id String
note2Id String
similarityScore Float
insightDate DateTime @default(now())
viewed Boolean @default(false)
feedback String?
note1 Note @relation("EchoNote1", fields: [note1Id], references: [id])
note2 Note @relation("EchoNote2", fields: [note2Id], references: [id])
user User? @relation(fields: [userId], references: [id])
@@unique([userId, insightDate])
@@index([userId, insightDate])
}
Epic 5: AI Settings Panel
Overview
Dedicated settings page at /settings/ai with granular ON/OFF controls for each AI feature and provider selection.
User Stories: 2
Estimated Complexity: Medium
Dependencies: Decision 4 (UserAISettings table), AI Provider Factory
Story 5.1: Granular Feature Toggles
As a user, I want to enable/disable individual AI features, so that I can control which AI assistance I receive.
Acceptance Criteria:
- Given the AI Settings page at `/settings/ai`
- When I navigate to the page
- Then I see toggles for each AI feature:
- Title Suggestions (default: ON)
- Semantic Search (default: ON)
- Paragraph Reformulation (default: ON)
- Memory Echo (default: ON)
- When I toggle any feature OFF
- Then the setting is saved to the `UserAISettings` table
- And the feature is immediately disabled in the UI
- When I toggle any feature ON
- Then the feature is re-enabled immediately
Technical Requirements:
- Page: `app/(main)/settings/ai/page.tsx` (NEW)
- Component: `components/ai/ai-settings-panel.tsx` (NEW)
- Server Action: `app/actions/ai-settings.ts` (NEW)
- Database: `UserAISettings` table (from Decision 4)
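The "default: ON" requirement means missing rows or fields must resolve to enabled. A sketch of that resolution; the field names mirror the panel component and are assumptions about the actual `UserAISettings` schema:

```typescript
// All AI features default to ON, per the acceptance criteria above.
export const DEFAULT_AI_SETTINGS = {
  titleSuggestions: true,
  semanticSearch: true,
  paragraphRefactor: true,
  memoryEcho: true,
}

export type AISettings = typeof DEFAULT_AI_SETTINGS

// Merge a (possibly missing or partial) stored row over the defaults.
export function resolveSettings(stored: Partial<AISettings> | null): AISettings {
  return { ...DEFAULT_AI_SETTINGS, ...(stored ?? {}) }
}
```

`getAISettings` could fetch the user's row and return `resolveSettings(row)`, so new users get all features enabled without a settings row existing.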
UI Implementation:
// app/(main)/settings/ai/page.tsx
import { AISettingsPanel } from '@/components/ai/ai-settings-panel'
import { getAISettings } from '@/lib/ai/settings'
export default async function AISettingsPage() {
const settings = await getAISettings()
return (
<div className="container mx-auto py-8">
<h1 className="text-3xl font-bold mb-6">AI Settings</h1>
<AISettingsPanel initialSettings={settings} />
</div>
)
}
```typescript
// components/ai/ai-settings-panel.tsx
'use client'

import { useState } from 'react'
import { Switch } from '@/components/ui/switch'
import { Label } from '@/components/ui/label'
import { Card } from '@/components/ui/card'
import { updateAISettings } from '@/app/actions/ai-settings'

export function AISettingsPanel({ initialSettings }: { initialSettings: any }) {
  const [settings, setSettings] = useState(initialSettings)

  // Value is boolean | string so the same handler serves both the feature
  // toggles and the Memory Echo frequency selector below
  const handleToggle = async (feature: string, value: boolean | string) => {
    // Optimistic update
    setSettings((prev: any) => ({ ...prev, [feature]: value }))
    // Server update
    await updateAISettings({ [feature]: value })
  }

  return (
    <div className="space-y-6">
      <FeatureToggle
        name="titleSuggestions"
        label="Title Suggestions"
        description="Suggest titles for untitled notes after 50+ words"
        checked={settings.titleSuggestions}
        onChange={(checked) => handleToggle('titleSuggestions', checked)}
      />
      <FeatureToggle
        name="semanticSearch"
        label="Semantic Search"
        description="Find notes by meaning, not just keywords"
        checked={settings.semanticSearch}
        onChange={(checked) => handleToggle('semanticSearch', checked)}
      />
      <FeatureToggle
        name="paragraphRefactor"
        label="Paragraph Reformulation"
        description="AI-powered text improvement options"
        checked={settings.paragraphRefactor}
        onChange={(checked) => handleToggle('paragraphRefactor', checked)}
      />
      <FeatureToggle
        name="memoryEcho"
        label="Memory Echo"
        description="Daily proactive connections between your notes"
        checked={settings.memoryEcho}
        onChange={(checked) => handleToggle('memoryEcho', checked)}
      />
      {settings.memoryEcho && (
        <FrequencySlider
          value={settings.memoryEchoFrequency}
          onChange={(value) => handleToggle('memoryEchoFrequency', value)}
          options={['daily', 'weekly', 'custom']}
        />
      )}
    </div>
  )
}

function FeatureToggle({
  name,
  label,
  description,
  checked,
  onChange
}: {
  name: string
  label: string
  description: string
  checked: boolean
  onChange: (checked: boolean) => void
}) {
  return (
    <Card className="p-4">
      <div className="flex items-center justify-between">
        <div className="space-y-1">
          <Label htmlFor={name}>{label}</Label>
          <p className="text-sm text-gray-500">{description}</p>
        </div>
        <Switch
          id={name}
          checked={checked}
          onCheckedChange={onChange}
        />
      </div>
    </Card>
  )
}
```
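Note that the optimistic update in `handleToggle` never reverts the UI if the server action fails. A rollback-aware variant can be sketched as a pure helper (the function name and injected `persist`/`setState` parameters are hypothetical; the real component would pass `updateAISettings` and its state setter):

```typescript
type Settings = Record<string, boolean | string>

// Hypothetical rollback-aware toggle: apply the change optimistically,
// then restore the previous value if persistence rejects.
export async function toggleWithRollback(
  current: Settings,
  feature: string,
  value: boolean | string,
  persist: (patch: Partial<Settings>) => Promise<void>,
  setState: (next: Settings) => void
): Promise<void> {
  const previous = current[feature]
  setState({ ...current, [feature]: value }) // optimistic update
  try {
    await persist({ [feature]: value })
  } catch {
    setState({ ...current, [feature]: previous }) // roll back on failure
  }
}
```

A toast on rollback would make the failure visible to the user, consistent with the toast patterns used elsewhere in this document.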
Server Action:
```typescript
// app/actions/ai-settings.ts
'use server'

import { revalidatePath } from 'next/cache'
import { auth } from '@/auth'
import { prisma } from '@/lib/prisma'
import type { UserAISettings } from '@prisma/client'

// Single source of truth for defaults (also used for non-logged-in users)
const DEFAULT_AI_SETTINGS = {
  titleSuggestions: true,
  semanticSearch: true,
  paragraphRefactor: true,
  memoryEcho: true,
  memoryEchoFrequency: 'daily',
  aiProvider: 'auto'
}

export async function updateAISettings(settings: Partial<UserAISettings>) {
  const session = await auth()
  if (!session?.user?.id) throw new Error('Unauthorized')

  // Upsert settings (create if not exists)
  await prisma.userAISettings.upsert({
    where: { userId: session.user.id },
    create: {
      userId: session.user.id,
      ...settings
    },
    update: settings
  })

  revalidatePath('/settings/ai')
  return { success: true }
}

export async function getAISettings() {
  const session = await auth()
  if (!session?.user?.id) {
    // Return defaults for non-logged-in users
    return DEFAULT_AI_SETTINGS
  }

  const settings = await prisma.userAISettings.findUnique({
    where: { userId: session.user.id }
  })

  return settings ?? DEFAULT_AI_SETTINGS
}
```
Story 5.2: AI Provider Selection
As a user, I want to choose my AI provider (Auto, OpenAI, or Ollama), so that I can control cost and privacy.
Acceptance Criteria:
- Given the AI Settings page
- When I scroll to the "AI Provider" section
- Then I see three provider options:
  - Auto (Recommended): Ollama when available, OpenAI fallback
  - Ollama (Local): 100% private, runs locally
  - OpenAI (Cloud): most accurate, requires API key
- When I select a provider
- Then the selection is saved to `UserAISettings.aiProvider`
- And the AI provider factory uses my preference
Technical Requirements:
- Component: extends `AISettingsPanel` with a provider selector
- Integration: the `getAIProvider()` factory respects the user's selection
- Validation: API key required for OpenAI (stored in `SystemConfig`)
UI Implementation:
```typescript
// components/ai/ai-settings-panel.tsx (extend existing component)
// Also requires: import { RadioGroup, RadioGroupItem } from '@/components/ui/radio-group'
function ProviderSelector({
  value,
  onChange
}: {
  value: 'auto' | 'openai' | 'ollama'
  onChange: (value: 'auto' | 'openai' | 'ollama') => void
}) {
  const providers = [
    {
      value: 'auto',
      label: 'Auto (Recommended)',
      description: 'Ollama when available, OpenAI fallback'
    },
    {
      value: 'ollama',
      label: 'Ollama (Local)',
      description: '100% private, runs locally on your machine'
    },
    {
      value: 'openai',
      label: 'OpenAI (Cloud)',
      description: 'Most accurate, requires API key'
    }
  ] as const

  return (
    <Card className="p-4">
      <Label className="text-base font-medium">AI Provider</Label>
      {/* RadioGroup emits a plain string, so narrow it to the provider union */}
      <RadioGroup
        value={value}
        onValueChange={(v) => onChange(v as 'auto' | 'openai' | 'ollama')}
      >
        {providers.map(provider => (
          <div key={provider.value} className="flex items-start space-x-2 py-2">
            <RadioGroupItem value={provider.value} id={provider.value} />
            <div className="grid gap-1.5">
              <Label htmlFor={provider.value}>{provider.label}</Label>
              <p className="text-sm text-gray-500">{provider.description}</p>
            </div>
          </div>
        ))}
      </RadioGroup>
      {value === 'openai' && (
        <APIKeyInput />
      )}
    </Card>
  )
}
```
Provider Factory Integration:
```typescript
// lib/ai/factory.ts (existing file, extended to respect user settings)
// getAIProvider() and checkOllamaHealth() are defined earlier in this file
import { getAISettings } from './settings'

export async function getUserAIProvider(): Promise<AIProvider> {
  const userSettings = await getAISettings()
  const systemConfig = await getSystemConfig() // holds the OpenAI API key for the 'openai' path

  let provider = userSettings.aiProvider // 'auto' | 'openai' | 'ollama'

  // Handle 'auto' mode: prefer local Ollama, fall back to OpenAI
  if (provider === 'auto') {
    try {
      const ollamaIsUp = await checkOllamaHealth()
      provider = ollamaIsUp ? 'ollama' : 'openai'
    } catch {
      provider = 'openai' // Fallback to OpenAI
    }
  }

  return getAIProvider(provider)
}
```
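`checkOllamaHealth` is referenced above but not defined in this chunk. A minimal sketch, assuming Ollama's default port (11434) and its `GET /api/tags` endpoint (which lists installed models and responds quickly when the daemon is up); the timeout value is an arbitrary choice:

```typescript
// Minimal health probe for a local Ollama instance.
// Resolves false (instead of throwing) when the daemon is unreachable,
// so 'auto' mode can fall back to OpenAI without a try/catch at the call site.
export async function checkOllamaHealth(timeoutMs = 500): Promise<boolean> {
  try {
    const res = await fetch('http://localhost:11434/api/tags', {
      signal: AbortSignal.timeout(timeoutMs) // bound the probe latency
    })
    return res.ok
  } catch {
    return false // connection refused or timed out → treat as unavailable
  }
}
```

Keeping the timeout short matters here: this probe sits on the critical path of every AI call in 'auto' mode.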
Database Schema (from Decision 4):
```prisma
model UserAISettings {
  userId String @id

  // Feature Flags (granular ON/OFF)
  titleSuggestions  Boolean @default(true)
  semanticSearch    Boolean @default(true)
  paragraphRefactor Boolean @default(true)
  memoryEcho        Boolean @default(true)

  // Configuration
  memoryEchoFrequency String @default("daily") // 'daily' | 'weekly' | 'custom'
  aiProvider          String @default("auto")  // 'auto' | 'openai' | 'ollama'

  // Relation
  user User @relation(fields: [userId], references: [id])

  // Indexes for analytics
  @@index([memoryEcho])
  @@index([aiProvider])
  @@index([memoryEchoFrequency])
}
```
Epic 6: Language Detection Service
Overview
Automatic language detection using TinyLD (62 languages including Persian). Hybrid approach: TinyLD for < 50 words, AI for ≥ 50 words.
User Stories: 2
Estimated Complexity: Medium
Dependencies: Decision 3 (Language Detection Strategy), TinyLD library
Story 6.1: TinyLD Integration for Short Notes
As a system, I want to detect note language efficiently for notes < 50 words using TinyLD, so that I can enable multilingual AI processing.
Acceptance Criteria:
- Given a note with < 50 words
- When the note is saved or analyzed
- Then the system detects language using TinyLD
- And detection completes in < 10ms
- And the detected language is stored in the `Note.language` field
- And the confidence score is stored in the `Note.languageConfidence` field
Technical Requirements:
- Library: `tinyld` (`npm install tinyld`)
- Service: `LanguageDetectionService` in `lib/ai/services/language-detection.service.ts`
- Supported Languages: 62 (including Persian/fa, verified)
- Output Format: ISO 639-1 codes (fr, en, es, de, fa, etc.)
Implementation:
```typescript
// lib/ai/services/language-detection.service.ts
import { detectAll } from 'tinyld' // tinyld exports detect() / detectAll()
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'

export class LanguageDetectionService {
  private readonly MIN_WORDS_FOR_AI = 50
  private readonly MIN_CONFIDENCE = 0.7

  async detectLanguage(content: string): Promise<{
    language: string // 'fr' | 'en' | 'es' | 'de' | 'fa' | 'unknown'
    confidence: number // 0.0-1.0
    method: 'tinyld' | 'ai' | 'manual'
  }> {
    const wordCount = content.trim().split(/\s+/).filter(Boolean).length

    // Short notes: TinyLD (fast, TypeScript native)
    if (wordCount < this.MIN_WORDS_FOR_AI) {
      // detectAll() returns candidates as [{ lang, accuracy }], best first
      const [top] = detectAll(content)
      return {
        language: top ? this.mapToISO(top.lang) : 'unknown',
        confidence: top?.accuracy ?? 0,
        method: 'tinyld'
      }
    }

    // Long notes: AI for better accuracy
    const response = await generateText({
      model: openai('gpt-4o-mini'), // or ollama/llama3.2
      prompt: `Detect the language of this text. Respond ONLY with ISO 639-1 code (fr, en, es, de, fa):\n\n${content.substring(0, 500)}`
    })

    return {
      language: response.text.toLowerCase().trim(),
      confidence: 0.9,
      method: 'ai'
    }
  }

  // Normalize ISO 639-2/3 codes to ISO 639-1. TinyLD already emits 639-1
  // codes, so this is a safety net for other detectors.
  private mapToISO(code: string): string {
    const mapping: Record<string, string> = {
      fra: 'fr',
      eng: 'en',
      spa: 'es',
      deu: 'de',
      fas: 'fa',
      pes: 'fa', // Persian (Farsi)
      por: 'pt',
      ita: 'it',
      rus: 'ru',
      zho: 'zh'
    }
    return mapping[code] || code.substring(0, 2)
  }
}
```
Trigger Points:
- Note creation (on save)
- Note update (on save)
- Before AI processing (title generation, reformulation, etc.)
Database Update:
```typescript
// app/actions/notes.ts (extend existing createNote/updateNote)
export async function createNote(data: { title: string; content: string }) {
  const session = await auth()
  if (!session?.user?.id) throw new Error('Unauthorized')

  // Detect language (detectLanguage returns { language, confidence, method })
  const languageService = new LanguageDetectionService()
  const { language, confidence } = await languageService.detectLanguage(data.content)

  const note = await prisma.note.create({
    data: {
      ...data,
      userId: session.user.id,
      language,
      languageConfidence: confidence
    }
  })

  return note
}
```
Story 6.2: AI Fallback for Long Notes
As a system, I want to use AI language detection for notes ≥ 50 words, so that I can achieve higher accuracy for longer content.
Acceptance Criteria:
- Given a note with ≥ 50 words
- When the note is saved or analyzed
- Then the system detects language using AI (OpenAI or Ollama)
- And detection completes in < 500ms
- And the detected language is stored in the `Note.language` field
- And the confidence score is 0.9 (AI is more accurate)
Technical Requirements:
- Provider: uses the `getAIProvider()` factory
- Model: `gpt-4o-mini` (OpenAI) or `llama3.2` (Ollama)
- Prompt: minimal (language detection only)
- Output: ISO 639-1 code only
AI Prompt (from Story 6.1):
```
Detect the language of this text. Respond ONLY with ISO 639-1 code (fr, en, es, de, fa):

{content (first 500 chars)}
```
Performance Target:
- TinyLD detection: ~8ms for < 50 words ✅
- AI detection: ~200-500ms for ≥ 50 words ✅
- Overall impact: Negligible for UX
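The word-count routing that produces these two performance profiles can be expressed as a pure helper (the function name is hypothetical; the 50-word threshold comes from Story 6.1):

```typescript
// Hypothetical routing rule: decide which detection method handles a note.
// Mirrors the MIN_WORDS_FOR_AI = 50 threshold from LanguageDetectionService.
export function chooseDetectionMethod(content: string): 'tinyld' | 'ai' {
  const wordCount = content.trim().split(/\s+/).filter(Boolean).length
  return wordCount < 50 ? 'tinyld' : 'ai'
}
```

Keeping this rule pure makes the latency budget testable without touching TinyLD or an AI provider.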
Implementation Phases
Phase 1: Foundation (Week 1-2)
Goal: Database schema and base infrastructure
Stories:
- Epic 1-6: All Prisma migrations (3 new tables, extend Note model)
- Epic 6: Language Detection Service (TinyLD integration)
- Epic 5: AI Settings page + UserAISettings table
Deliverables:
- ✅ Prisma migrations created and applied
- ✅ `LanguageDetectionService` implemented
- ✅ `/settings/ai` page functional
- ✅ Base AI service layer structure created
Phase 2: Infrastructure (Week 3-4)
Goal: Core services and AI provider integration
Stories:
- Epic 1: Title Suggestion Service
- Epic 2: Semantic Search Service (part 1 - embeddings)
- Epic 3: Paragraph Refactor Service
- Epic 4: Memory Echo Service (part 1 - background job)
Deliverables:
- ✅ All AI services implemented
- ✅ Provider factory extended for new services
- ✅ Server actions created for all features
- ✅ Integration tests passing
Phase 3: AI Features (Week 5-9)
Goal: UI components and user-facing features
Stories:
- Epic 1: Title Suggestions UI (Stories 1.1, 1.2, 1.3)
- Epic 2: Semantic Search UI (Stories 2.1, 2.2, 2.3)
- Epic 3: Paragraph Reformulation UI (Stories 3.1, 3.2)
- Epic 4: Memory Echo UI (Stories 4.1, 4.2)
Deliverables:
- ✅ All AI components implemented
- ✅ Toast notifications working
- ✅ Modals and dialogs functional
- ✅ Feedback collection active
Phase 4: Polish & Testing (Week 10-12)
Goal: Quality assurance and performance optimization
Stories:
- Epic 1-6: E2E Playwright tests
- Epic 1-6: Performance testing and optimization
- Epic 1-6: Multi-language testing (FR, EN, ES, DE, FA)
- Epic 1-6: Bug fixes and refinement
Deliverables:
- ✅ E2E test coverage for all AI features
- ✅ Performance targets met (search < 300ms, titles < 2s, Memory Echo < 100ms UI freeze)
- ✅ Multi-language verification complete
- ✅ Production deployment ready
Dependencies & Critical Path
Critical Path Implementation
```
Prisma Migrations → Language Detection Service → AI Settings Page
                              ↓
                       All AI Services
                              ↓
                        UI Components
                              ↓
                      Testing & Polish
```
Parallel Development Opportunities
- Week 1-2: Language Detection + AI Settings (independent)
- Week 3-4: All AI services (can be developed in parallel)
- Week 5-9: UI components (can be developed in parallel per epic)
- Week 10-12: Testing (all features tested together)
Cross-Epic Dependencies
- All Epics → Epic 6 (Language Detection): Must detect language before AI processing
- All Epics → Epic 5 (AI Settings): Must check feature flags before executing
- Epic 2 (Semantic Search) → Existing Embeddings: reuses the `Note.embedding` field
- Epic 4 (Memory Echo) → Epic 2 (Semantic Search): uses cosine similarity from Epic 2
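For reference, the cosine similarity that Epic 4 borrows from Epic 2 is a short pure function over embedding vectors. This is a sketch; the project's actual implementation lives in the Epic 2 search service:

```typescript
// Cosine similarity between two embedding vectors: 1 = same direction,
// 0 = orthogonal. Returns 0 for zero-magnitude vectors to avoid NaN.
export function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vectors must have equal length')
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB)
  return denom === 0 ? 0 : dot / denom
}
```

Sharing one implementation between the two epics keeps their similarity scores directly comparable.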
Definition of Done
Per Story
- Code implemented following `project-context.md` rules
- TypeScript strict mode compliance
- Server actions have the `'use server'` directive
- Components have the `'use client'` directive (if interactive)
- All imports use the `@/` alias
- Error handling with `try/catch` and `console.error()`
- API responses follow the `{ success, data, error }` format
- `auth()` check in all server actions
- `revalidatePath('/')` after mutations
- E2E Playwright test written
- Manual testing completed
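The `{ success, data, error }` response convention from the checklist above can be captured as a shared discriminated union (the type name, helper, and file location are hypothetical):

```typescript
// Hypothetical shared result type for server actions and API routes.
// The discriminant lets callers narrow on `success` without null checks.
export type ActionResult<T> =
  | { success: true; data: T; error?: never }
  | { success: false; data?: never; error: string }

// Example: wrapping a fallible operation into the shared shape.
export function toResult<T>(fn: () => T): ActionResult<T> {
  try {
    return { success: true, data: fn() }
  } catch (e) {
    return { success: false, error: e instanceof Error ? e.message : 'Unknown error' }
  }
}
```

A single exported type like this makes the checklist item mechanically enforceable via TypeScript rather than by review.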
Per Epic
- All stories completed
- Integration tests passing
- Performance targets met
- User acceptance criteria validated
- Documentation updated
Phase 1 MVP AI
- All 6 epics completed
- Zero breaking changes to existing features
- All NFRs met (performance, security, privacy)
- Multi-language verified (FR, EN, ES, DE, FA)
- Production deployment ready
- User feedback collected and analyzed
Generated: 2026-01-10
Author: Winston (Architect Agent) with the Create Epics & Stories workflow
Based on: PRD Phase 1 MVP AI + UX Design Spec + Architecture (2784 lines)
Status: READY FOR IMPLEMENTATION