feat: AI provider testing page + multi-provider support + UX design spec

- Add AI Provider Testing page (/admin/ai-test) with Tags and Embeddings tests
- Add new AI providers: CustomOpenAI, DeepSeek, OpenRouter
- Add API routes for AI config, models listing, and testing endpoints
- Add UX Design Specification document for Phase 1 MVP AI
- Add PRD Phase 1 MVP AI planning document
- Update admin settings and sidebar navigation
- Fix AI factory for multi-provider support
sepehr 2026-01-10 11:23:22 +01:00
parent 640fcb26f7
commit fc2c40249e
21 changed files with 5971 additions and 138 deletions


@@ -28,7 +28,8 @@
"Bash(curl:*)",
"Bash(python:*)",
"Bash(npm test:*)",
-        "Skill(bmad:bmm:agents:ux-designer)"
+        "Skill(bmad:bmm:agents:ux-designer)",
+        "Skill(bmad:bmm:workflows:create-prd)"
]
}
}


@@ -0,0 +1,443 @@
---
stepsCompleted: [1, 2, 3]
inputDocuments: []
session_topic: 'Improving AI usage in Memento'
session_goals: 'Explore relevant AI use cases, define the multilingual architecture, prioritize features by user value'
selected_approach: 'ai-recommended'
techniques_used: ['SCAMPER Method', 'Future Self Interview', 'Six Thinking Hats']
ideas_generated: ['20+ SCAMPER ideas', '3-layer trust solution', '7 Six Hats creative alternatives']
context_file: ''
session_status: 'completed'
completion_date: '2026-01-09'
---
# Brainstorming Session Results
**Facilitator:** AI Brainstorming Guide
**Date:** 2026-01-09
## Session Overview
**Topic:** Improving AI usage in Memento
**Goals:**
- Explore relevant AI use cases for a note-taking app
- Define the multilingual architecture (system prompts in English, data in the user's language)
- Prioritize features by user value
## Technique Selection
**Approach:** AI-Recommended Techniques
**Analysis Context:** Improving AI in Memento, with a focus on relevant use cases, multilingual architecture, and prioritization
**Recommended Techniques:**
### Phase 1: SCAMPER Method (Structured) ✅ DONE
**Why this fits:** You already have 3 base ideas. SCAMPER expands them systematically across 7 creative dimensions
**Expected outcome:** 15-20 variants and improvements of the 3 initial ideas ✅ ACHIEVED
### Phase 2: Future Self Interview (Introspective Delight) ✅ DONE
**Why this builds on Phase 1:** Projecting into the future to uncover real user needs and potential friction points
**Expected outcome:** Deep understanding of actual needs and usage problems
### Phase 3: Six Thinking Hats (Structured)
**Why this concludes effectively:** A complete view of the technical, UX, and business implications of the multilingual architecture
**Expected outcome:** A robust multilingual architecture validated from multiple perspectives
---
## Phase 1: SCAMPER Method - Results
### S - Substitute: ON/OFF Pattern
**Key ideas:**
- Auto image description ON/OFF → transparent button on the image when OFF
- Auto rewording ON/OFF → pencil button on paragraphs + context menu
- Auto titles ON/OFF → 3 AI suggestions below the title field
- AI Settings page with a checkbox per feature
- Philosophy: "Zero friction by default, but controllable"
### C - Combine: Smart Hybrids
**Key ideas:**
- Images + Titles → photo without a title → automatic analysis + title
- Rewording + Titles → "Optimize note" button → content + title
- "Super AI" mode → one button that does EVERYTHING at once
- Hybrid tags → hierarchical AI categories + custom user tags
### A - Adapt: Contextual Extensions
**Key ideas:**
- Links/URLs → AI button to summarize OR extract key points (configurable in settings)
- Code/quotes → the AI explains the context
- Semantic search → "search by meaning" instead of keywords
- Multilingual → automatic detection per note + regenerate button
### M - Modify: UX Improvements
**Key ideas:**
- Hybrid tags → AI categories (hierarchical) + personal tags
- Configurable options (summary vs. bullet points vs. analysis)
- Language suggestion → the AI detects + proposes/confirms before generating
- Button placement → decide via A/B testing later (pragmatic iteration)
### P - Put to Other Uses: Future Extensions
**Key ideas:**
- Audio → transcription + summaries of voice notes (for later)
- AI prioritization → automatic note organization
- Business model → freemium with paid AI (n8n-style, "buy me a coffee" payment)
- Zero-DevOps constraint → managed solutions (Vercel, Netlify)
### E - Eliminate: Simplification
**Key ideas:**
- REINSTATED: keep AUTOMATIC language detection (more predictable)
- Button placement → A/B test the scenarios for an iterative decision
### R - Reverse: Innovative Inversions
**Key ideas:**
- Reversed workflow → the AI proposes drafts based on historical patterns
- Reversed role → the AI gives organization and structuring advice
- Reversed priority → the AI suggests logical next steps after each note
- Background work (NO) → no AI running in the background overnight
**Total ideas generated:** 20+ concrete concepts
---
## Phase 2: Future Self Interview - Results ✅
**Approach:** Temporal projection to uncover real user needs
### Interview Insights:
**Most appreciated feature:**
- 🎯 **"AI suggests logical next steps"** - Saves time, prevents forgetting, keeps the workflow fluid
**Main challenge identified:**
- ⚠️ **AI hallucinations** - Errors, fabrications, loss of trust
### Proposed Elegant Solution: 3-Layer Trust System
**1. Confidence Score (Transparency)**
- A % score displayed for every AI generation
- >90% = ✅ Solid (auto-applied)
- 70-89% = ⚠️ Medium (needs review)
- <70% = Weak (no auto-generation)
**2. Feedback & Learning**
- 👍👎 buttons next to every generation
- "It works!" → the AI retains the positive patterns
- "Wrong" → the AI learns and avoids the mistake
**3. Conservative Mode (Safety First)**
- Auto-generation only when confidence is >90%
- When in doubt, the AI asks for confirmation
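The three tiers above can be sketched as a single decision function. This is a minimal TypeScript sketch; the function name and the encoding of the thresholds are illustrative assumptions, not existing Memento code:

```typescript
type AiAction = 'auto_apply' | 'ask_confirmation' | 'suppress'

// Hypothetical helper: maps a 0-100 confidence score to the three
// tiers described above (solid / medium / weak).
function decideAiAction(confidence: number): AiAction {
  if (confidence > 90) return 'auto_apply'        // ✅ solid: applied automatically
  if (confidence >= 70) return 'ask_confirmation' // ⚠️ medium: user reviews first
  return 'suppress'                               // weak: no auto-generation
}
```

Conservative mode is then simply the rule that only the first branch ever runs without user involvement.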
---
## Phase 3: Six Thinking Hats - Results ✅
**Approach:** Multi-perspective review to fully validate the multilingual architecture and the AI features
---
### 🎩 White Hat - Facts & Technical Reality (Architecture)
**Current technical facts:**
- Stack: Next.js 15 + Prisma + SQLite
- Supported AI providers: Ollama, OpenAI, Custom OpenAI
- AI tags already implemented, with embeddings
- Existing database with User, Note, Label
- Working auth system
**Identified technical needs:**
- Embeddings API for semantic search (vector search)
- Generation API for titles, summaries, rewordings
- Embedding storage in the DB (new column or a vector DB)
- Confidence scoring (internal AI mechanism or meta-layer)
- User feedback system (new table, e.g. user_feedback)
- File upload for images (OCR/description)
- Multi-provider configuration (in the admin Settings)
**Multilingual architecture:**
- System prompts in English (stability)
- Automatic language detection per note (user data)
- Multilingual embeddings supported
**Constraints:**
- Zero DevOps → Vercel/Netlify hosting
- SQLite in production (no separate vector DB)
- Local models via Ollama, or external APIs
---
### ❤️ Red Hat - Emotions & User Feelings
**What users will feel:**
- 😊 **Relief**: "It just works, I don't have to do anything"
- 🤩 **Delight**: "Wow, it guessed what I wanted to do!"
- 😰 **Potential frustration**: "Why did the AI get it wrong?"
- 😕 **Confusion**: "How does this confidence score work?"
- 🎯 **Control**: "I can turn it off if I want"
**Emotional friction points identified:**
- Hallucinations = rapid loss of trust
- Too many options = overwhelm
- AI too present = feeling of being watched
- Invisible AI = "magic", but also a lack of understanding
**Recommended emotional design:**
- Transparency about what the AI is doing
- Immediate feedback (spinners, toast notifications)
- User control ALWAYS available
- Human messages, not technical ones
---
### 🌞 Yellow Hat - Benefits & Value
**Direct user value:**
- ⏱️ **Time saved**: auto titles, auto tags, quick rewordings
- 🧠 **Less cognitive load**: the AI handles organization, the user focuses on content
- 🔍 **Findability**: semantic search = finding by meaning, not keywords
- 📈 **Quality**: rewordings improve the clarity of notes
- 🎯 **Flow**: suggested next steps = nothing forgotten, smooth continuation
**Business value (freemium model):**
- 💰 **Revenue**: subscription for advanced AI features
- 🎁 **Attraction**: free tier = user acquisition
- ☕ **Payment friendly**: "buy me a coffee" = low friction
- 🚀 **Scalability**: zero DevOps = controlled costs
**Technical value:**
- 🔧 **Maintainability**: modular architecture (factory pattern for providers)
- 🌍 **International**: multi-language support out of the box
- 🛡️ **Trust**: feedback system = continuous improvement
**Differentiation vs. competitors:**
- Google Keep: no advanced AI
- Notion: AI is paid-only and complex
- Memento: simple + progressive AI + privacy-respecting (local Ollama)
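The "factory pattern for providers" could look roughly like this. The sketch below uses stub implementations so it is self-contained; the interface, names, and stubs are assumptions, and the real factory in the codebase (which selects between Ollama, OpenAI, and custom OpenAI-compatible backends) may differ:

```typescript
type ProviderKind = 'ollama' | 'openai' | 'custom'

interface AiProvider {
  readonly kind: ProviderKind
  generateTags(text: string): Promise<string[]>
  embed(text: string): Promise<number[]>
}

// Stub implementation so the sketch runs standalone; a real provider
// would call the Ollama or OpenAI HTTP API here.
function makeStub(kind: ProviderKind): AiProvider {
  return {
    kind,
    async generateTags(_text: string) { return [] },
    async embed(_text: string) { return [] },
  }
}

// Hypothetical factory: one entry point, one implementation per kind.
function createAiProvider(kind: ProviderKind): AiProvider {
  switch (kind) {
    case 'ollama': return makeStub('ollama')
    case 'openai': return makeStub('openai')
    case 'custom': return makeStub('custom')
  }
}
```

The point of the factory is that callers (tagging, search) depend only on the `AiProvider` interface, so tags and embeddings can be served by different backends.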
---
### ⚫ Black Hat - Risks & Challenges
**Technical risks:**
- ⚠️ **Performance**: embeddings = slowdowns once there are many notes
- 💾 **Storage**: SQLite with embeddings = DB size grows quickly
- 🔐 **Security**: image file upload = validation required
- 🐛 **Hallucinations**: the AI can still generate falsehoods, even with a confidence score
- 🌐 **API limits**: OpenAI = costs and rate limits; Ollama = requires a local install
**UX risks:**
- 😤 **Frustration**: an AI that gets it wrong = abandonment
- 🤔 **Complexity**: too many features = overwhelm
- 🎭 **Inconsistency**: AI tags that make no sense to the user
- 🔔 **Spam**: overly frequent AI notifications = users turn them off
**Business risks:**
- 💸 **AI costs**: OpenAI API = margin pressure with many users
- 📉 **Adoption**: users who don't see the AI's value = no freemium conversion
- 🏃 **Churn**: one bad AI experience = a lost user
- ⚖️ **Competition**: Notion and Obsidian are adding AI too
**Adoption risks:**
- 🔒 **Privacy**: users worried about the AI reading their notes
- 🏠 **Local setup**: Ollama = an entry barrier for non-technical users
- 📊 **Data usage**: users on limited connections = embeddings consume data
**Identified mitigations:**
- Trust system + feedback = reduces the impact of hallucinations
- Conservative mode = fewer automatic errors
- Granular ON/OFF = user control = less frustration
- Managed hosting = zero DevOps, but hosting costs
- Ollama optional = OpenAI fallback for non-technical users
---
### 🌱 Green Hat - Creative Alternatives
**New ideas arising from the analysis:**
**1. Contextual AI (Smart Context)**
- The AI adapts its behavior to the type of note
- Code note = technical suggestions
- List note = checkboxes, organization
- Reflection note = synthesis questions
**2. AI-Enhanced Templates**
- The AI generates personalized templates from user patterns
- "Meeting notes", "Brainstorming", "Project planning"
- Auto-completion of sections
**3. Collaborative AI**
- "Brainstorm with AI" mode = the AI proposes ideas
- The AI plays devil's advocate = challenges the ideas
- The AI suggests connections between notes
**4. Subtle Gamification**
- "Note of the day" = the AI surfaces a note worth rereading
- "Patterns discovered" = the AI shows writing trends
- "Insight of the week" = the AI summarizes recurring themes
**5. Predictive AI**
- The AI suggests creating a note before you even ask
- "You often create X notes on Tuesdays, want a template?"
- Anticipation based on history
**6. "AI Focus" Mode**
- Simplified interface with the AI front and center
- Everything automatic, minimal UI
- For users who want zero friction
**7. AI + Voice (future-proofing)**
- Prepare the architecture for voice transcription
- Voice commands: "Create a note about X"
- Dictation with real-time AI rewording
---
### 🔵 Blue Hat - Process & Organization
**Synthesis of the 3 phases:**
**20+ ideas generated (SCAMPER):**
- Breakdown: UX (5), Architecture (4), Business (3), Features (8)
**Critical problem identified (Future Self):**
- Hallucinations → Solution: 3-layer trust system ✅
**Multi-perspective validation (Six Hats):**
- Technical: feasible with the current stack plus a few additions
- Emotional: transparency + control are required
- Value: time savings + clear differentiation
- Risks: mitigable with a solid architecture
- Creative: 7 innovative new directions
---
### 📊 Feature Prioritization
**Phase 1 - AI MVP (Maximum Value, Quick Wins):**
1. ✅ **Automatic AI tags** (already implemented)
2. 🎯 **Auto titles** (3 suggestions, no auto-generation)
3. 🔍 **Semantic search** (vector search with embeddings)
4. 🎨 **Rewording button** (manual, per paragraph)
**Phase 2 - Experience Enhancement:**
5. 🖼️ **Image description** (OCR + description)
6. 🔗 **URL summaries** (key-point extraction)
7. 💡 **Suggested next steps** (after each note)
8. ⚙️ **Granular AI settings** (ON/OFF per feature)
**Phase 3 - Trust & Intelligence:**
9. 📊 **Confidence score** (transparency)
10. 👍👎 **Feedback learning** (continuous improvement)
11. 🛡️ **Conservative mode** (safety first)
12. 🌍 **Automatic language detection** (multilingual)
**Phase 4 - Advanced Features (Freemium):**
13. 🎙️ **Audio transcription** (voice notes)
14. 📁 **Auto organization** (the AI proposes folders/categories)
15. 🧠 **Personalized AI templates** (user patterns)
16. 🤖 **"Super AI" mode** (full note optimization)
---
### 🎯 Recommended Technical Architecture
**Database (Prisma + SQLite):**
```
Note (existing)
+ embedding: Vector (512)  // embeddings for semantic search
+ autoGenerated: Boolean   // true if the title/tags were AI-generated
+ aiConfidence: Int?       // 0-100 score when generated by the AI
+ language: String?        // detected language: 'fr', 'en', etc.
AiFeedback (new)
+ id: ID
+ noteId: Note
+ userId: User
+ feedbackType: Enum (thumbs_up, thumbs_down, correction)
+ originalContent: String
+ correctedContent: String?
+ createdAt: DateTime
```
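Note that SQLite has no native vector column type, so the `Vector (512)` field above would have to be serialized in practice. One common workaround (an assumption here, not a decision recorded in the session) is storing the embedding as JSON in a plain text column; the helper names below are illustrative:

```typescript
// Serialize an embedding for a SQLite TEXT column, and read it back
// with basic validation. Hypothetical helpers, not repository code.
function serializeEmbedding(vec: number[]): string {
  return JSON.stringify(vec)
}

function deserializeEmbedding(raw: string): number[] {
  const parsed: unknown = JSON.parse(raw)
  if (!Array.isArray(parsed) || !parsed.every((v) => typeof v === 'number')) {
    throw new Error('Invalid embedding payload')
  }
  return parsed
}
```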
**API Routes:**
- `/api/ai/tags` (existing)
- `/api/ai/embeddings` (generate note embeddings)
- `/api/ai/search` (semantic search)
- `/api/ai/titles` (title suggestions)
- `/api/ai/refactor` (text rewording)
- `/api/ai/image` (OCR description)
- `/api/ai/url-summary` (URL summary)
- `/api/ai/feedback` (feedback collection)
- `/api/ai/next-steps` (next-step suggestions)
**Components:**
- `<AiButton />` (generic button with loading state)
- `<AiSuggestion />` (suggestion with confidence score)
- `<AiFeedbackButtons />` (👍👎 with tooltip)
- `<AiSettingsPanel />` (granular ON/OFF)
- `<ConfidenceBadge />` (score display)
**Services:**
- `ai.service.ts` (orchestrates AI calls)
- `confidence.service.ts` (confidence score computation)
- `feedback.service.ts` (feedback collection and analysis)
- `embedding.service.ts` (embedding generation and storage)
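For the semantic search route, `embedding.service.ts` would also need a similarity measure to rank notes against a query embedding. Cosine similarity is the usual choice; this is a sketch of that measure, not the repository's actual code:

```typescript
// Cosine similarity between two embeddings: 1 means same direction,
// near 0 means unrelated. The search route would sort notes by this score.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Embedding dimension mismatch')
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  if (normA === 0 || normB === 0) return 0
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

With embeddings precomputed per note, a query only costs one embedding call plus one linear scan, which is workable at MVP scale without a vector database.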
---
### 🚀 Concrete Next Steps
**Immediate (this week):**
1. ✅ Validate the technical architecture with the team
2. 📝 Create the PRD for the Phase 1 features
3. 🔧 Set up the embedding infrastructure (vector column in the DB)
4. 🧪 Test Ollama + OpenAI models for titles/rewording
**Short term (2-4 weeks):**
5. 💻 Implement semantic search (MVP+)
6. 🎨 Build title suggestions
7. ✨ Rewording button UX
8. 🧪 User tests with a small cohort
**Medium term (1-2 months):**
9. 🖼️ Image description + OCR
10. 🔗 URL summaries
11. ⚙️ Granular AI settings
12. 📊 Feedback system + confidence scoring
**Long term (3+ months):**
13. 🎙️ Audio transcription
14. 🤖 "Super AI" mode
15. 🧠 Smart templates
16. 💰 Freemium launch + payments
---
## 🎉 Brainstorming Session Conclusion
**Executive summary:**
- **20+ AI ideas generated** via SCAMPER
- **Critical problem identified**: hallucinations → elegant solution proposed
- **Multilingual architecture validated**: English prompts, multilingual user data
- **Clear prioritization**: 4 phases, from MVP to advanced features
- **Business model defined**: freemium with "buy me a coffee", zero DevOps
**Key decision:**
"Zero hassle" = automatic by default, user control ALWAYS available
**Next step:**
Create a detailed PRD for Phase 1 (AI MVP) with technical specs + UX mockups
---
✅ **Session completed successfully!**
**Date:** 2026-01-09
**Duration:** 3 phases (SCAMPER, Future Self Interview, Six Thinking Hats)
**Output:** validated architecture + prioritized roadmap + concrete next steps
---

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -0,0 +1,258 @@
'use client'
import { Button } from '@/components/ui/button'
import { Card, CardContent } from '@/components/ui/card'
import { Badge } from '@/components/ui/badge'
import { useState, useEffect } from 'react'
import { toast } from 'sonner'
import { Loader2, CheckCircle2, XCircle, Clock, Zap, Info } from 'lucide-react'
interface TestResult {
success: boolean
provider?: string
model?: string
responseTime?: number
tags?: Array<{ tag: string; confidence: number }>
embeddingLength?: number
firstValues?: number[]
error?: string
details?: any
}
export function AI_TESTER({ type }: { type: 'tags' | 'embeddings' }) {
const [isLoading, setIsLoading] = useState(false)
const [result, setResult] = useState<TestResult | null>(null)
const [config, setConfig] = useState<any>(null)
useEffect(() => {
fetchConfig()
}, [])
const fetchConfig = async () => {
try {
const response = await fetch('/api/ai/config')
const data = await response.json()
setConfig(data)
// Set previous result if available
if (data.previousTest) {
setResult(data.previousTest[type] || null)
}
} catch (error) {
console.error('Failed to fetch config:', error)
}
}
const runTest = async () => {
setIsLoading(true)
const startTime = Date.now()
try {
const response = await fetch(`/api/ai/test-${type}`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' }
})
const endTime = Date.now()
const data = await response.json()
setResult({
...data,
responseTime: endTime - startTime
})
if (data.success) {
toast.success(
`${type === 'tags' ? 'Tags' : 'Embeddings'} Test Successful!`,
{
description: `Provider: ${data.provider} | Time: ${endTime - startTime}ms`
}
)
} else {
toast.error(
`${type === 'tags' ? 'Tags' : 'Embeddings'} Test Failed`,
{
description: data.error || 'Unknown error'
}
)
}
} catch (error: any) {
const endTime = Date.now()
const errorResult = {
success: false,
error: error.message || 'Network error',
responseTime: endTime - startTime
}
setResult(errorResult)
toast.error(`❌ Test Error: ${error.message}`)
} finally {
setIsLoading(false)
}
}
const getProviderInfo = () => {
if (!config) return { provider: 'Loading...', model: 'Loading...' }
if (type === 'tags') {
return {
provider: config.AI_PROVIDER_TAGS || 'ollama',
model: config.AI_MODEL_TAGS || 'granite4:latest'
}
} else {
return {
provider: config.AI_PROVIDER_EMBEDDING || 'ollama',
model: config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest'
}
}
}
const providerInfo = getProviderInfo()
return (
<div className="space-y-4">
{/* Provider Info */}
<div className="space-y-3 p-4 bg-muted/50 rounded-lg">
<div className="flex items-center justify-between">
<span className="text-sm font-medium">Provider:</span>
<Badge variant="outline" className="text-xs">
{providerInfo.provider.toUpperCase()}
</Badge>
</div>
<div className="flex items-center justify-between">
<span className="text-sm font-medium">Model:</span>
<span className="text-sm text-muted-foreground font-mono">
{providerInfo.model}
</span>
</div>
</div>
{/* Test Button */}
<Button
onClick={runTest}
disabled={isLoading}
className="w-full"
variant={result?.success ? 'default' : result?.success === false ? 'destructive' : 'default'}
>
{isLoading ? (
<>
<Loader2 className="mr-2 h-4 w-4 animate-spin" />
Testing...
</>
) : (
<>
<Zap className="mr-2 h-4 w-4" />
Run Test
</>
)}
</Button>
{/* Results */}
{result && (
<Card className={result.success ? 'border-green-200 dark:border-green-900' : 'border-red-200 dark:border-red-900'}>
<CardContent className="pt-6">
{/* Status */}
<div className="flex items-center gap-2 mb-4">
{result.success ? (
<>
<CheckCircle2 className="h-5 w-5 text-green-600" />
<span className="font-semibold text-green-600 dark:text-green-400">Test Passed</span>
</>
) : (
<>
<XCircle className="h-5 w-5 text-red-600" />
<span className="font-semibold text-red-600 dark:text-red-400">Test Failed</span>
</>
)}
</div>
{/* Response Time */}
{result.responseTime && (
<div className="flex items-center gap-2 text-sm text-muted-foreground mb-4">
<Clock className="h-4 w-4" />
<span>Response time: {result.responseTime}ms</span>
</div>
)}
{/* Tags Results */}
{type === 'tags' && result.success && result.tags && (
<div className="space-y-3">
<div className="flex items-center gap-2">
<Info className="h-4 w-4 text-blue-600" />
<span className="text-sm font-medium">Generated Tags:</span>
</div>
<div className="flex flex-wrap gap-2">
{result.tags.map((tag, idx) => (
<Badge
key={idx}
variant="secondary"
className="text-sm"
>
{tag.tag}
<span className="ml-1 text-xs opacity-70">
({Math.round(tag.confidence * 100)}%)
</span>
</Badge>
))}
</div>
</div>
)}
{/* Embeddings Results */}
{type === 'embeddings' && result.success && result.embeddingLength && (
<div className="space-y-3">
<div className="flex items-center gap-2">
<Info className="h-4 w-4 text-green-600" />
<span className="text-sm font-medium">Embedding Dimensions:</span>
</div>
<div className="p-3 bg-muted rounded-lg">
<div className="text-2xl font-bold text-center">
{result.embeddingLength}
</div>
<div className="text-xs text-center text-muted-foreground mt-1">
vector dimensions
</div>
</div>
{result.firstValues && result.firstValues.length > 0 && (
<div className="space-y-1">
<span className="text-xs font-medium">First 5 values:</span>
<div className="p-2 bg-muted rounded font-mono text-xs">
[{result.firstValues.slice(0, 5).map((v, i) => v.toFixed(4)).join(', ')}]
</div>
</div>
)}
</div>
)}
{/* Error Details */}
{!result.success && result.error && (
<div className="mt-4 p-3 bg-red-50 dark:bg-red-950/20 rounded-lg border border-red-200 dark:border-red-900">
<p className="text-sm font-medium text-red-900 dark:text-red-100">Error:</p>
<p className="text-sm text-red-700 dark:text-red-300 mt-1">{result.error}</p>
{result.details && (
<details className="mt-2">
<summary className="text-xs cursor-pointer text-red-600 dark:text-red-400">
Technical details
</summary>
<pre className="mt-2 text-xs overflow-auto p-2 bg-red-100 dark:bg-red-900/30 rounded">
{JSON.stringify(result.details, null, 2)}
</pre>
</details>
)}
</div>
)}
</CardContent>
</Card>
)}
{/* Loading State */}
{isLoading && (
<div className="text-center py-4">
<Loader2 className="h-8 w-8 animate-spin mx-auto text-blue-600" />
<p className="text-sm text-muted-foreground mt-2">
Testing {type === 'tags' ? 'tags generation' : 'embeddings'}...
</p>
</div>
)}
</div>
)
}


@@ -0,0 +1,106 @@
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'
import { Button } from '@/components/ui/button'
import { auth } from '@/auth'
import { redirect } from 'next/navigation'
import Link from 'next/link'
import { ArrowLeft, TestTube } from 'lucide-react'
import { AI_TESTER } from './ai-tester'
export default async function AITestPage() {
const session = await auth()
if ((session?.user as any)?.role !== 'ADMIN') {
redirect('/')
}
return (
<div className="container mx-auto py-10 px-4 max-w-6xl">
<div className="flex justify-between items-center mb-8">
<div className="flex items-center gap-3">
<Link href="/admin/settings">
<Button variant="outline" size="icon">
<ArrowLeft className="h-4 w-4" />
</Button>
</Link>
<div>
<h1 className="text-3xl font-bold flex items-center gap-2">
<TestTube className="h-8 w-8" />
AI Provider Testing
</h1>
<p className="text-muted-foreground mt-1">
Test your AI providers for tag generation and semantic search embeddings
</p>
</div>
</div>
</div>
<div className="grid gap-6 md:grid-cols-2">
{/* Tags Provider Test */}
<Card className="border-blue-200 dark:border-blue-900">
<CardHeader className="bg-blue-50/50 dark:bg-blue-950/20">
<CardTitle className="flex items-center gap-2">
<span className="text-2xl">🏷</span>
Tags Generation Test
</CardTitle>
<CardDescription>
Test the AI provider responsible for automatic tag suggestions
</CardDescription>
</CardHeader>
<CardContent className="pt-6">
<AI_TESTER type="tags" />
</CardContent>
</Card>
{/* Embeddings Provider Test */}
<Card className="border-green-200 dark:border-green-900">
<CardHeader className="bg-green-50/50 dark:bg-green-950/20">
<CardTitle className="flex items-center gap-2">
<span className="text-2xl">🔍</span>
Embeddings Test
</CardTitle>
<CardDescription>
Test the AI provider responsible for semantic search embeddings
</CardDescription>
</CardHeader>
<CardContent className="pt-6">
<AI_TESTER type="embeddings" />
</CardContent>
</Card>
</div>
{/* Info Section */}
<Card className="mt-6">
<CardHeader>
<CardTitle>How Testing Works</CardTitle>
</CardHeader>
<CardContent className="space-y-4 text-sm">
<div>
<h4 className="font-semibold mb-2">🏷 Tags Generation Test:</h4>
<ul className="list-disc list-inside space-y-1 text-muted-foreground">
<li>Sends a sample note to the AI provider</li>
<li>Requests 3-5 relevant tags based on the content</li>
<li>Displays the generated tags with confidence scores</li>
<li>Measures response time</li>
</ul>
</div>
<div>
<h4 className="font-semibold mb-2">🔍 Embeddings Test:</h4>
<ul className="list-disc list-inside space-y-1 text-muted-foreground">
<li>Sends a sample text to the embedding provider</li>
<li>Generates a vector representation (list of numbers)</li>
<li>Displays embedding dimensions and sample values</li>
<li>Verifies the vector is valid and properly formatted</li>
</ul>
</div>
<div className="bg-amber-50 dark:bg-amber-950/20 p-4 rounded-lg border border-amber-200 dark:border-amber-900">
<p className="font-semibold text-amber-900 dark:text-amber-100">💡 Tip:</p>
<p className="text-amber-800 dark:text-amber-200 mt-1">
You can use different providers for tags and embeddings! For example, use Ollama (free) for tags
and OpenAI (best quality) for embeddings to optimize costs and performance.
</p>
</div>
</CardContent>
</Card>
</div>
)
}


@@ -4,36 +4,66 @@ import { Button } from '@/components/ui/button'
import { Input } from '@/components/ui/input'
import { Checkbox } from '@/components/ui/checkbox'
import { Card, CardContent, CardDescription, CardFooter, CardHeader, CardTitle } from '@/components/ui/card'
import { Label } from '@/components/ui/label'
import { updateSystemConfig, testSMTP } from '@/app/actions/admin-settings'
import { toast } from 'sonner'
import { useState, useEffect } from 'react'
import Link from 'next/link'
import { TestTube, ExternalLink } from 'lucide-react'
type AIProvider = 'ollama' | 'openai' | 'custom'
interface AvailableModels {
tags: string[]
embeddings: string[]
}
const MODELS_2026 = {
ollama: {
tags: ['llama3:latest', 'llama3.2:latest', 'granite4:latest', 'mistral:latest', 'mixtral:latest', 'phi3:latest', 'gemma2:latest', 'qwen2:latest'],
embeddings: ['embeddinggemma:latest', 'mxbai-embed-large:latest', 'nomic-embed-text:latest']
},
openai: {
tags: ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-4', 'gpt-3.5-turbo'],
embeddings: ['text-embedding-3-small', 'text-embedding-3-large', 'text-embedding-ada-002']
},
custom: {
tags: ['gpt-4o-mini', 'gpt-4o', 'claude-3-haiku', 'claude-3-sonnet', 'llama-3.1-8b'],
embeddings: ['text-embedding-3-small', 'text-embedding-3-large', 'text-embedding-ada-002']
}
}
export function AdminSettingsForm({ config }: { config: Record<string, string> }) {
const [isSaving, setIsSaving] = useState(false)
const [isTesting, setIsTesting] = useState(false)
// Local state for Checkbox
const [allowRegister, setAllowRegister] = useState(config.ALLOW_REGISTRATION !== 'false')
const [smtpSecure, setSmtpSecure] = useState(config.SMTP_SECURE === 'true')
const [smtpIgnoreCert, setSmtpIgnoreCert] = useState(config.SMTP_IGNORE_CERT === 'true')
// AI Provider state - separated for tags and embeddings
const [tagsProvider, setTagsProvider] = useState<AIProvider>((config.AI_PROVIDER_TAGS as AIProvider) || 'ollama')
const [embeddingsProvider, setEmbeddingsProvider] = useState<AIProvider>((config.AI_PROVIDER_EMBEDDING as AIProvider) || 'ollama')
// Sync state with config when server revalidates
useEffect(() => {
setAllowRegister(config.ALLOW_REGISTRATION !== 'false')
setSmtpSecure(config.SMTP_SECURE === 'true')
setSmtpIgnoreCert(config.SMTP_IGNORE_CERT === 'true')
setTagsProvider((config.AI_PROVIDER_TAGS as AIProvider) || 'ollama')
setEmbeddingsProvider((config.AI_PROVIDER_EMBEDDING as AIProvider) || 'ollama')
}, [config])
const handleSaveSecurity = async (formData: FormData) => {
setIsSaving(true)
// We override the formData get because the hidden input might be tricky
const data = {
ALLOW_REGISTRATION: allowRegister ? 'true' : 'false',
}
const result = await updateSystemConfig(data)
setIsSaving(false)
if (result.error) {
toast.error('Failed to update security settings')
} else {
@@ -43,20 +73,45 @@ export function AdminSettingsForm({ config }: { config: Record<string, string> }
const handleSaveAI = async (formData: FormData) => {
setIsSaving(true)
-    const data = {
-      AI_PROVIDER: formData.get('AI_PROVIDER') as string,
-      OLLAMA_BASE_URL: formData.get('OLLAMA_BASE_URL') as string,
-      AI_MODEL_EMBEDDING: formData.get('AI_MODEL_EMBEDDING') as string,
-      OPENAI_API_KEY: formData.get('OPENAI_API_KEY') as string,
+    const data: Record<string, string> = {}
// Tags provider configuration
const tagsProv = formData.get('AI_PROVIDER_TAGS') as AIProvider
data.AI_PROVIDER_TAGS = tagsProv
data.AI_MODEL_TAGS = formData.get('AI_MODEL_TAGS') as string
if (tagsProv === 'ollama') {
data.OLLAMA_BASE_URL = formData.get('OLLAMA_BASE_URL_TAGS') as string
} else if (tagsProv === 'openai') {
data.OPENAI_API_KEY = formData.get('OPENAI_API_KEY') as string
} else if (tagsProv === 'custom') {
data.CUSTOM_OPENAI_API_KEY = formData.get('CUSTOM_OPENAI_API_KEY_TAGS') as string
data.CUSTOM_OPENAI_BASE_URL = formData.get('CUSTOM_OPENAI_BASE_URL_TAGS') as string
}
// Embeddings provider configuration
const embedProv = formData.get('AI_PROVIDER_EMBEDDING') as AIProvider
data.AI_PROVIDER_EMBEDDING = embedProv
data.AI_MODEL_EMBEDDING = formData.get('AI_MODEL_EMBEDDING') as string
if (embedProv === 'ollama') {
data.OLLAMA_BASE_URL = formData.get('OLLAMA_BASE_URL_EMBEDDING') as string
} else if (embedProv === 'openai') {
data.OPENAI_API_KEY = formData.get('OPENAI_API_KEY') as string
} else if (embedProv === 'custom') {
data.CUSTOM_OPENAI_API_KEY = formData.get('CUSTOM_OPENAI_API_KEY_EMBEDDING') as string
data.CUSTOM_OPENAI_BASE_URL = formData.get('CUSTOM_OPENAI_BASE_URL_EMBEDDING') as string
}
const result = await updateSystemConfig(data)
setIsSaving(false)
if (result.error) {
toast.error('Failed to update AI settings')
} else {
toast.success('AI Settings updated')
setTagsProvider(tagsProv)
setEmbeddingsProvider(embedProv)
}
}
@@ -71,10 +126,10 @@ export function AdminSettingsForm({ config }: { config: Record<string, string> }
SMTP_IGNORE_CERT: smtpIgnoreCert ? 'true' : 'false',
SMTP_SECURE: smtpSecure ? 'true' : 'false',
}
const result = await updateSystemConfig(data)
setIsSaving(false)
if (result.error) {
toast.error('Failed to update SMTP settings')
} else {
@@ -108,8 +163,8 @@ export function AdminSettingsForm({ config }: { config: Record<string, string> }
<form action={handleSaveSecurity}>
<CardContent className="space-y-4">
<div className="flex items-center space-x-2">
<Checkbox
id="ALLOW_REGISTRATION"
checked={allowRegister}
onCheckedChange={(c) => setAllowRegister(!!c)}
/>
@@ -133,40 +188,219 @@ export function AdminSettingsForm({ config }: { config: Record<string, string> }
<Card>
<CardHeader>
<CardTitle>AI Configuration</CardTitle>
<CardDescription>Configure the AI provider for auto-tagging and semantic search.</CardDescription>
<CardDescription>Configure AI providers for auto-tagging and semantic search. You can use a different provider for each task.</CardDescription>
</CardHeader>
<form action={handleSaveAI}>
<CardContent className="space-y-4">
<div className="space-y-2">
<label htmlFor="AI_PROVIDER" className="text-sm font-medium">Provider</label>
<select
id="AI_PROVIDER"
name="AI_PROVIDER"
defaultValue={config.AI_PROVIDER || 'ollama'}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
<option value="ollama">Ollama (Local)</option>
<option value="openai">OpenAI</option>
</select>
</div>
<div className="space-y-2">
<label htmlFor="OLLAMA_BASE_URL" className="text-sm font-medium">Ollama Base URL</label>
<Input id="OLLAMA_BASE_URL" name="OLLAMA_BASE_URL" defaultValue={config.OLLAMA_BASE_URL || 'http://localhost:11434'} placeholder="http://localhost:11434" />
<CardContent className="space-y-6">
{/* Tags Generation Section */}
<div className="space-y-4 p-4 border rounded-lg bg-blue-50/50 dark:bg-blue-950/20">
<h3 className="text-base font-semibold flex items-center gap-2">
<span className="text-blue-600">🏷</span> Tags Generation Provider
</h3>
<p className="text-xs text-muted-foreground">AI provider for automatic tag suggestions. Recommended: Ollama (free, local).</p>
<div className="space-y-2">
<Label htmlFor="AI_PROVIDER_TAGS">Provider</Label>
<select
id="AI_PROVIDER_TAGS"
name="AI_PROVIDER_TAGS"
value={tagsProvider}
onChange={(e) => setTagsProvider(e.target.value as AIProvider)}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
<option value="ollama">🦙 Ollama (Local & Free)</option>
<option value="openai">🤖 OpenAI (GPT-4o, GPT-4)</option>
<option value="custom">🔧 Custom OpenAI-Compatible</option>
</select>
</div>
{/* Ollama Tags Config */}
{tagsProvider === 'ollama' && (
<div className="space-y-3">
<div className="space-y-2">
<Label htmlFor="OLLAMA_BASE_URL_TAGS">Base URL</Label>
<Input id="OLLAMA_BASE_URL_TAGS" name="OLLAMA_BASE_URL_TAGS" defaultValue={config.OLLAMA_BASE_URL || 'http://localhost:11434'} placeholder="http://localhost:11434" />
</div>
<div className="space-y-2">
<Label htmlFor="AI_MODEL_TAGS_OLLAMA">Model</Label>
<select
id="AI_MODEL_TAGS_OLLAMA"
name="AI_MODEL_TAGS"
defaultValue={config.AI_MODEL_TAGS || 'granite4:latest'}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
{MODELS_2026.ollama.tags.map((model) => (
<option key={model} value={model}>{model}</option>
))}
</select>
<p className="text-xs text-muted-foreground">Select an Ollama model installed on your system</p>
</div>
</div>
)}
{/* OpenAI Tags Config */}
{tagsProvider === 'openai' && (
<div className="space-y-3">
<div className="space-y-2">
<Label htmlFor="OPENAI_API_KEY">API Key</Label>
<Input id="OPENAI_API_KEY" name="OPENAI_API_KEY" type="password" defaultValue={config.OPENAI_API_KEY || ''} placeholder="sk-..." />
<p className="text-xs text-muted-foreground">Your OpenAI API key from <a href="https://platform.openai.com/api-keys" target="_blank" rel="noopener noreferrer" className="text-blue-500 hover:underline">platform.openai.com</a></p>
</div>
<div className="space-y-2">
<Label htmlFor="AI_MODEL_TAGS_OPENAI">Model</Label>
<select
id="AI_MODEL_TAGS_OPENAI"
name="AI_MODEL_TAGS"
defaultValue={config.AI_MODEL_TAGS || 'gpt-4o-mini'}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
{MODELS_2026.openai.tags.map((model) => (
<option key={model} value={model}>{model}</option>
))}
</select>
<p className="text-xs text-muted-foreground"><strong className="text-green-600">gpt-4o-mini</strong> = Best value · <strong className="text-blue-600">gpt-4o</strong> = Best quality</p>
</div>
</div>
)}
{/* Custom OpenAI Tags Config */}
{tagsProvider === 'custom' && (
<div className="space-y-3">
<div className="space-y-2">
<Label htmlFor="CUSTOM_OPENAI_BASE_URL_TAGS">Base URL</Label>
<Input id="CUSTOM_OPENAI_BASE_URL_TAGS" name="CUSTOM_OPENAI_BASE_URL_TAGS" defaultValue={config.CUSTOM_OPENAI_BASE_URL || ''} placeholder="https://api.example.com/v1" />
</div>
<div className="space-y-2">
<Label htmlFor="CUSTOM_OPENAI_API_KEY_TAGS">API Key</Label>
<Input id="CUSTOM_OPENAI_API_KEY_TAGS" name="CUSTOM_OPENAI_API_KEY_TAGS" type="password" defaultValue={config.CUSTOM_OPENAI_API_KEY || ''} placeholder="sk-..." />
</div>
<div className="space-y-2">
<Label htmlFor="AI_MODEL_TAGS_CUSTOM">Model</Label>
<select
id="AI_MODEL_TAGS_CUSTOM"
name="AI_MODEL_TAGS"
defaultValue={config.AI_MODEL_TAGS || 'gpt-4o-mini'}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
{MODELS_2026.custom.tags.map((model) => (
<option key={model} value={model}>{model}</option>
))}
</select>
<p className="text-xs text-muted-foreground">Common models for OpenAI-compatible APIs</p>
</div>
</div>
)}
</div>
<div className="space-y-2">
<label htmlFor="AI_MODEL_EMBEDDING" className="text-sm font-medium">Embedding Model</label>
<Input id="AI_MODEL_EMBEDDING" name="AI_MODEL_EMBEDDING" defaultValue={config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest'} placeholder="embeddinggemma:latest" />
</div>
{/* Embeddings Section */}
<div className="space-y-4 p-4 border rounded-lg bg-green-50/50 dark:bg-green-950/20">
<h3 className="text-base font-semibold flex items-center gap-2">
<span className="text-green-600">🔍</span> Embeddings Provider
</h3>
<p className="text-xs text-muted-foreground">AI provider for semantic search embeddings. Recommended: OpenAI (best quality).</p>
<div className="space-y-2">
<label htmlFor="OPENAI_API_KEY" className="text-sm font-medium">OpenAI API Key (if using OpenAI)</label>
<Input id="OPENAI_API_KEY" name="OPENAI_API_KEY" type="password" defaultValue={config.OPENAI_API_KEY || ''} placeholder="sk-..." />
<div className="space-y-2">
<Label htmlFor="AI_PROVIDER_EMBEDDING">Provider</Label>
<select
id="AI_PROVIDER_EMBEDDING"
name="AI_PROVIDER_EMBEDDING"
value={embeddingsProvider}
onChange={(e) => setEmbeddingsProvider(e.target.value as AIProvider)}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
<option value="ollama">🦙 Ollama (Local & Free)</option>
<option value="openai">🤖 OpenAI (text-embedding-3)</option>
<option value="custom">🔧 Custom OpenAI-Compatible</option>
</select>
</div>
{/* Ollama Embeddings Config */}
{embeddingsProvider === 'ollama' && (
<div className="space-y-3">
<div className="space-y-2">
<Label htmlFor="OLLAMA_BASE_URL_EMBEDDING">Base URL</Label>
<Input id="OLLAMA_BASE_URL_EMBEDDING" name="OLLAMA_BASE_URL_EMBEDDING" defaultValue={config.OLLAMA_BASE_URL || 'http://localhost:11434'} placeholder="http://localhost:11434" />
</div>
<div className="space-y-2">
<Label htmlFor="AI_MODEL_EMBEDDING_OLLAMA">Model</Label>
<select
id="AI_MODEL_EMBEDDING_OLLAMA"
name="AI_MODEL_EMBEDDING"
defaultValue={config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest'}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
{MODELS_2026.ollama.embeddings.map((model) => (
<option key={model} value={model}>{model}</option>
))}
</select>
<p className="text-xs text-muted-foreground">Select an embedding model installed on your system</p>
</div>
</div>
)}
{/* OpenAI Embeddings Config */}
{embeddingsProvider === 'openai' && (
<div className="space-y-3">
<div className="space-y-2">
<Label htmlFor="OPENAI_API_KEY">API Key</Label>
<Input id="OPENAI_API_KEY" name="OPENAI_API_KEY" type="password" defaultValue={config.OPENAI_API_KEY || ''} placeholder="sk-..." />
<p className="text-xs text-muted-foreground">Your OpenAI API key from <a href="https://platform.openai.com/api-keys" target="_blank" rel="noopener noreferrer" className="text-blue-500 hover:underline">platform.openai.com</a></p>
</div>
<div className="space-y-2">
<Label htmlFor="AI_MODEL_EMBEDDING_OPENAI">Model</Label>
<select
id="AI_MODEL_EMBEDDING_OPENAI"
name="AI_MODEL_EMBEDDING"
defaultValue={config.AI_MODEL_EMBEDDING || 'text-embedding-3-small'}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
{MODELS_2026.openai.embeddings.map((model) => (
<option key={model} value={model}>{model}</option>
))}
</select>
<p className="text-xs text-muted-foreground"><strong className="text-green-600">text-embedding-3-small</strong> = Best value · <strong className="text-blue-600">text-embedding-3-large</strong> = Best quality</p>
</div>
</div>
)}
{/* Custom OpenAI Embeddings Config */}
{embeddingsProvider === 'custom' && (
<div className="space-y-3">
<div className="space-y-2">
<Label htmlFor="CUSTOM_OPENAI_BASE_URL_EMBEDDING">Base URL</Label>
<Input id="CUSTOM_OPENAI_BASE_URL_EMBEDDING" name="CUSTOM_OPENAI_BASE_URL_EMBEDDING" defaultValue={config.CUSTOM_OPENAI_BASE_URL || ''} placeholder="https://api.example.com/v1" />
</div>
<div className="space-y-2">
<Label htmlFor="CUSTOM_OPENAI_API_KEY_EMBEDDING">API Key</Label>
<Input id="CUSTOM_OPENAI_API_KEY_EMBEDDING" name="CUSTOM_OPENAI_API_KEY_EMBEDDING" type="password" defaultValue={config.CUSTOM_OPENAI_API_KEY || ''} placeholder="sk-..." />
</div>
<div className="space-y-2">
<Label htmlFor="AI_MODEL_EMBEDDING_CUSTOM">Model</Label>
<select
id="AI_MODEL_EMBEDDING_CUSTOM"
name="AI_MODEL_EMBEDDING"
defaultValue={config.AI_MODEL_EMBEDDING || 'text-embedding-3-small'}
className="flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2"
>
{MODELS_2026.custom.embeddings.map((model) => (
<option key={model} value={model}>{model}</option>
))}
</select>
<p className="text-xs text-muted-foreground">Common embedding models for OpenAI-compatible APIs</p>
</div>
</div>
)}
</div>
</CardContent>
<CardFooter>
<Button type="submit" disabled={isSaving}>Save AI Settings</Button>
<CardFooter className="flex justify-between">
<Button type="submit" disabled={isSaving}>{isSaving ? 'Saving...' : 'Save AI Settings'}</Button>
<Link href="/admin/ai-test">
<Button type="button" variant="outline" className="gap-2">
<TestTube className="h-4 w-4" />
Open AI Test Panel
<ExternalLink className="h-3 w-3" />
</Button>
</Link>
</CardFooter>
</form>
</Card>
@ -188,12 +422,12 @@ export function AdminSettingsForm({ config }: { config: Record<string, string> }
<Input id="SMTP_PORT" name="SMTP_PORT" defaultValue={config.SMTP_PORT || '587'} placeholder="587" />
</div>
</div>
<div className="space-y-2">
<label htmlFor="SMTP_USER" className="text-sm font-medium">Username</label>
<Input id="SMTP_USER" name="SMTP_USER" defaultValue={config.SMTP_USER || ''} />
</div>
<div className="space-y-2">
<label htmlFor="SMTP_PASS" className="text-sm font-medium">Password</label>
<Input id="SMTP_PASS" name="SMTP_PASS" type="password" defaultValue={config.SMTP_PASS || ''} />
@ -205,10 +439,10 @@ export function AdminSettingsForm({ config }: { config: Record<string, string> }
</div>
<div className="flex items-center space-x-2">
<Checkbox
id="SMTP_SECURE"
checked={smtpSecure}
onCheckedChange={(c) => setSmtpSecure(!!c)}
/>
<label
htmlFor="SMTP_SECURE"
@ -219,8 +453,8 @@ export function AdminSettingsForm({ config }: { config: Record<string, string> }
</div>
<div className="flex items-center space-x-2">
<Checkbox
id="SMTP_IGNORE_CERT"
checked={smtpIgnoreCert}
onCheckedChange={(c) => setSmtpIgnoreCert(!!c)}
/>

View File

@ -10,11 +10,11 @@ export default async function MainLayout({
const session = await auth();
return (
<div className="flex min-h-screen flex-col">
<div className="flex h-screen flex-col">
<HeaderWrapper user={session?.user} />
<div className="flex flex-1">
<Sidebar className="shrink-0 border-r" user={session?.user} />
<main className="flex-1">
<div className="flex flex-1 overflow-hidden">
<Sidebar className="shrink-0 border-r overflow-y-auto" user={session?.user} />
<main className="flex-1 overflow-y-auto">
{children}
</main>
</div>

View File

@ -0,0 +1,27 @@
import { NextRequest, NextResponse } from 'next/server'
import { getSystemConfig } from '@/lib/config'
export async function GET(request: NextRequest) {
try {
const config = await getSystemConfig()
return NextResponse.json({
AI_PROVIDER_TAGS: config.AI_PROVIDER_TAGS || 'ollama',
AI_MODEL_TAGS: config.AI_MODEL_TAGS || 'granite4:latest',
AI_PROVIDER_EMBEDDING: config.AI_PROVIDER_EMBEDDING || 'ollama',
AI_MODEL_EMBEDDING: config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest',
OPENAI_API_KEY: config.OPENAI_API_KEY ? '***configured***' : '',
CUSTOM_OPENAI_API_KEY: config.CUSTOM_OPENAI_API_KEY ? '***configured***' : '',
CUSTOM_OPENAI_BASE_URL: config.CUSTOM_OPENAI_BASE_URL || '',
OLLAMA_BASE_URL: config.OLLAMA_BASE_URL || 'http://localhost:11434'
})
} catch (error: any) {
console.error('Error fetching AI config:', error)
return NextResponse.json(
{
error: error.message || 'Failed to fetch config'
},
{ status: 500 }
)
}
}
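The config route never returns raw API keys; it only reports whether one is set. A minimal sketch of that masking rule (hypothetical standalone helper):

```typescript
// Sketch of the secret-masking rule applied in the config route above:
// a configured key is reported as a placeholder, never as its real value.
function maskSecret(value: string | undefined): string {
  return value ? '***configured***' : '';
}
```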

View File

@ -0,0 +1,98 @@
import { NextRequest, NextResponse } from 'next/server'
import { getSystemConfig } from '@/lib/config'
// Popular models for each provider (2025)
const PROVIDER_MODELS = {
ollama: {
tags: [
'llama3:latest',
'llama3.2:latest',
'granite4:latest',
'mistral:latest',
'mixtral:latest',
'phi3:latest',
'gemma2:latest',
'qwen2:latest'
],
embeddings: [
'embeddinggemma:latest',
'mxbai-embed-large:latest',
'nomic-embed-text:latest'
]
},
openai: {
tags: [
'gpt-4o',
'gpt-4o-mini',
'gpt-4-turbo',
'gpt-4',
'gpt-3.5-turbo'
],
embeddings: [
'text-embedding-3-small',
'text-embedding-3-large',
'text-embedding-ada-002'
]
},
custom: {
tags: [], // Will be loaded dynamically
embeddings: [] // Will be loaded dynamically
}
}
export async function GET(request: NextRequest) {
try {
const config = await getSystemConfig()
const provider = (config.AI_PROVIDER || 'ollama').toLowerCase()
let models = PROVIDER_MODELS[provider as keyof typeof PROVIDER_MODELS] || { tags: [], embeddings: [] }
// For Ollama, try to fetch the actual installed-model list from the local API
if (provider === 'ollama') {
try {
const ollamaBaseUrl = config.OLLAMA_BASE_URL || process.env.OLLAMA_BASE_URL || 'http://localhost:11434'
const response = await fetch(`${ollamaBaseUrl}/api/tags`, {
method: 'GET',
headers: { 'Content-Type': 'application/json' }
})
if (response.ok) {
const data = await response.json()
const allModels = data.models || []
// Split models into tag models and embedding models
const tagModels = allModels
.filter((m: any) => !m.name.includes('embed') && !m.name.includes('Embedding'))
.map((m: any) => m.name)
.slice(0, 20) // Limit to 20 models
const embeddingModels = allModels
.filter((m: any) => m.name.includes('embed') || m.name.includes('Embedding'))
.map((m: any) => m.name)
models = {
tags: tagModels.length > 0 ? tagModels : models.tags,
embeddings: embeddingModels.length > 0 ? embeddingModels : models.embeddings
}
}
} catch (error) {
console.warn('Could not fetch Ollama models, using defaults:', error)
// Keep the default model list
}
}
return NextResponse.json({
provider,
models: models || { tags: [], embeddings: [] }
})
} catch (error: any) {
console.error('Error fetching models:', error)
return NextResponse.json(
{
error: error.message || 'Failed to fetch models',
models: { tags: [], embeddings: [] }
},
{ status: 500 }
)
}
}
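The Ollama branch above splits the installed models into chat and embedding lists by name. Extracted as a standalone sketch (same filtering logic, hypothetical function name):

```typescript
// Sketch of the model-splitting logic from the route above: models whose name
// mentions "embed"/"Embedding" go to the embeddings list, the rest to tags.
function splitOllamaModels(names: string[]): { tags: string[]; embeddings: string[] } {
  const isEmbedding = (n: string) => n.includes('embed') || n.includes('Embedding');
  return {
    tags: names.filter((n) => !isEmbedding(n)).slice(0, 20), // cap at 20 chat models
    embeddings: names.filter(isEmbedding),
  };
}
```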

View File

@ -0,0 +1,91 @@
import { NextRequest, NextResponse } from 'next/server'
import { getEmbeddingsProvider } from '@/lib/ai/factory'
import { getSystemConfig } from '@/lib/config'
function getProviderDetails(config: Record<string, string>, providerType: string) {
const provider = providerType.toLowerCase()
switch (provider) {
case 'ollama':
return {
provider: 'Ollama',
baseUrl: config.OLLAMA_BASE_URL || 'http://localhost:11434',
model: config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest'
}
case 'openai':
return {
provider: 'OpenAI',
baseUrl: 'https://api.openai.com/v1',
model: config.AI_MODEL_EMBEDDING || 'text-embedding-3-small'
}
case 'custom':
return {
provider: 'Custom OpenAI',
baseUrl: config.CUSTOM_OPENAI_BASE_URL || 'Not configured',
model: config.AI_MODEL_EMBEDDING || 'text-embedding-3-small'
}
default:
return {
provider: provider,
baseUrl: 'unknown',
model: config.AI_MODEL_EMBEDDING || 'unknown'
}
}
}
export async function POST(request: NextRequest) {
try {
const config = await getSystemConfig()
const provider = getEmbeddingsProvider(config)
const testText = 'test'
const startTime = Date.now()
const embeddings = await provider.getEmbeddings(testText)
const endTime = Date.now()
if (!embeddings || embeddings.length === 0) {
const providerType = config.AI_PROVIDER_EMBEDDING || 'ollama'
const details = getProviderDetails(config, providerType)
return NextResponse.json(
{
success: false,
error: 'No embeddings returned',
provider: providerType,
model: config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest',
details
},
{ status: 500 }
)
}
const providerType = config.AI_PROVIDER_EMBEDDING || 'ollama'
const details = getProviderDetails(config, providerType)
return NextResponse.json({
success: true,
provider: providerType,
model: config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest',
embeddingLength: embeddings.length,
firstValues: embeddings.slice(0, 5),
responseTime: endTime - startTime,
details
})
} catch (error: any) {
console.error('AI embeddings test error:', error)
const config = await getSystemConfig()
const providerType = config.AI_PROVIDER_EMBEDDING || 'ollama'
const details = getProviderDetails(config, providerType)
return NextResponse.json(
{
success: false,
error: error.message || 'Unknown error',
provider: providerType,
model: config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest',
details,
stack: process.env.NODE_ENV === 'development' ? error.stack : undefined
},
{ status: 500 }
)
}
}
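A client consuming this endpoint receives either a success payload (dimensions, latency) or an error payload. A hedged sketch of a client-side summary, with the interface reconstructed from the response shapes above:

```typescript
// Response shape reconstructed from the embeddings test route above;
// the summarizer function is illustrative, not part of the commit.
interface EmbeddingsTestResult {
  success: boolean;
  provider: string;
  model: string;
  embeddingLength?: number;
  responseTime?: number;
  error?: string;
}

function summarizeResult(r: EmbeddingsTestResult): string {
  return r.success
    ? `${r.provider}/${r.model}: ${r.embeddingLength} dims in ${r.responseTime}ms`
    : `${r.provider}/${r.model} failed: ${r.error}`;
}
```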

View File

@ -0,0 +1,50 @@
import { NextRequest, NextResponse } from 'next/server'
import { getTagsProvider } from '@/lib/ai/factory'
import { getSystemConfig } from '@/lib/config'
export async function POST(request: NextRequest) {
try {
const config = await getSystemConfig()
const provider = getTagsProvider(config)
const testContent = "This is a test note about artificial intelligence and machine learning. It contains keywords like AI, ML, neural networks, and deep learning."
const startTime = Date.now()
const tags = await provider.generateTags(testContent)
const endTime = Date.now()
if (!tags || tags.length === 0) {
return NextResponse.json(
{
success: false,
error: 'No tags generated',
provider: config.AI_PROVIDER_TAGS || 'ollama',
model: config.AI_MODEL_TAGS || 'granite4:latest'
},
{ status: 500 }
)
}
return NextResponse.json({
success: true,
provider: config.AI_PROVIDER_TAGS || 'ollama',
model: config.AI_MODEL_TAGS || 'granite4:latest',
tags: tags,
responseTime: endTime - startTime
})
} catch (error: any) {
console.error('AI tags test error:', error)
const config = await getSystemConfig()
return NextResponse.json(
{
success: false,
error: error.message || 'Unknown error',
provider: config.AI_PROVIDER_TAGS || 'ollama',
model: config.AI_MODEL_TAGS || 'granite4:latest',
stack: process.env.NODE_ENV === 'development' ? error.stack : undefined
},
{ status: 500 }
)
}
}

View File

@ -1,55 +1,88 @@
import { NextRequest, NextResponse } from 'next/server'
import { getAIProvider } from '@/lib/ai/factory'
import { getTagsProvider, getEmbeddingsProvider } from '@/lib/ai/factory'
import { getSystemConfig } from '@/lib/config'
function getProviderDetails(config: Record<string, string>, providerType: string) {
const provider = providerType.toLowerCase()
switch (provider) {
case 'ollama':
return {
provider: 'Ollama',
baseUrl: config.OLLAMA_BASE_URL || process.env.OLLAMA_BASE_URL || 'http://localhost:11434',
model: config.AI_MODEL_EMBEDDING || 'embeddinggemma:latest'
}
case 'openai':
return {
provider: 'OpenAI',
baseUrl: 'https://api.openai.com/v1',
model: config.AI_MODEL_EMBEDDING || 'text-embedding-3-small'
}
case 'custom':
return {
provider: 'Custom OpenAI',
baseUrl: config.CUSTOM_OPENAI_BASE_URL || process.env.CUSTOM_OPENAI_BASE_URL || 'Not configured',
model: config.AI_MODEL_EMBEDDING || 'text-embedding-3-small'
}
default:
return {
provider: provider,
baseUrl: 'unknown',
model: config.AI_MODEL_EMBEDDING || 'unknown'
}
}
}
export async function GET(request: NextRequest) {
try {
const config = await getSystemConfig()
const provider = getAIProvider(config)
const tagsProvider = getTagsProvider(config)
const embeddingsProvider = getEmbeddingsProvider(config)
// Test with a simple embedding request
const testText = 'test'
const embeddings = await provider.getEmbeddings(testText)
// Test embeddings provider
const embeddings = await embeddingsProvider.getEmbeddings(testText)
if (!embeddings || embeddings.length === 0) {
const providerType = config.AI_PROVIDER_EMBEDDING || 'ollama'
const details = getProviderDetails(config, providerType)
return NextResponse.json(
{
success: false,
provider: config.AI_PROVIDER || 'ollama',
tagsProvider: config.AI_PROVIDER_TAGS || 'ollama',
embeddingsProvider: providerType,
error: 'No embeddings returned',
details: {
provider: config.AI_PROVIDER || 'ollama',
baseUrl: config.OLLAMA_BASE_URL || process.env.OLLAMA_BASE_URL || 'http://localhost:11434',
model: config.AI_MODEL_EMBEDDING || process.env.OLLAMA_EMBEDDING_MODEL || 'embeddinggemma:latest'
}
details
},
{ status: 500 }
)
}
const tagsProviderType = config.AI_PROVIDER_TAGS || 'ollama'
const embeddingsProviderType = config.AI_PROVIDER_EMBEDDING || 'ollama'
const details = getProviderDetails(config, embeddingsProviderType)
return NextResponse.json({
success: true,
provider: config.AI_PROVIDER || 'ollama',
tagsProvider: tagsProviderType,
embeddingsProvider: embeddingsProviderType,
embeddingLength: embeddings.length,
firstValues: embeddings.slice(0, 5),
details: {
provider: config.AI_PROVIDER || 'ollama',
baseUrl: config.OLLAMA_BASE_URL || process.env.OLLAMA_BASE_URL || 'http://localhost:11434',
model: config.AI_MODEL_EMBEDDING || process.env.OLLAMA_EMBEDDING_MODEL || 'embeddinggemma:latest'
}
details
})
} catch (error: any) {
console.error('AI test error:', error)
const config = await getSystemConfig()
const providerType = config.AI_PROVIDER_EMBEDDING || 'ollama'
const details = getProviderDetails(config, providerType)
return NextResponse.json(
{
success: false,
error: error.message || 'Unknown error',
stack: process.env.NODE_ENV === 'development' ? error.stack : undefined,
details: {
provider: process.env.AI_PROVIDER || 'ollama',
baseUrl: process.env.OLLAMA_BASE_URL || 'http://localhost:11434',
model: process.env.OLLAMA_EMBEDDING_MODEL || 'embeddinggemma:latest'
}
details
},
{ status: 500 }
)

View File

@ -25,7 +25,6 @@ import { useLabels } from '@/context/LabelContext'
import { LabelManagementDialog } from './label-management-dialog'
import { LabelFilter } from './label-filter'
import { NotificationPanel } from './notification-panel'
import { UserNav } from './user-nav'
import { updateTheme } from '@/app/actions/profile'
interface HeaderProps {
@ -316,7 +315,6 @@ export function Header({
</DropdownMenu>
<NotificationPanel />
<UserNav user={user} />
</div>
</div>

View File

@ -1,19 +1,32 @@
'use client'
import { useState, useEffect } from 'react'
import { useState } from 'react'
import Link from 'next/link'
import { usePathname, useSearchParams } from 'next/navigation'
import { cn } from '@/lib/utils'
import { StickyNote, Bell, Archive, Trash2, Tag, ChevronDown, ChevronUp, Settings, User, Shield, Coffee } from 'lucide-react'
import { StickyNote, Bell, Archive, Trash2, Tag, ChevronDown, ChevronUp, Settings, User, Shield, Coffee, LogOut } from 'lucide-react'
import { useLabels } from '@/context/LabelContext'
import { LabelManagementDialog } from './label-management-dialog'
import { useSession } from 'next-auth/react'
import { useSession, signOut } from 'next-auth/react'
import { Avatar, AvatarFallback, AvatarImage } from '@/components/ui/avatar'
import { useRouter } from 'next/navigation'
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuGroup,
DropdownMenuItem,
DropdownMenuLabel,
DropdownMenuSeparator,
DropdownMenuTrigger,
} from '@/components/ui/dropdown-menu'
import { Button } from '@/components/ui/button'
import { LABEL_COLORS } from '@/lib/types'
export function Sidebar({ className, user }: { className?: string, user?: any }) {
const pathname = usePathname()
const searchParams = useSearchParams()
const router = useRouter()
const { labels, getLabelColor } = useLabels()
const [isLabelsExpanded, setIsLabelsExpanded] = useState(false)
const { data: session } = useSession()
@ -27,6 +40,11 @@ export function Sidebar({ className, user }: { className?: string, user?: any })
const displayedLabels = isLabelsExpanded ? labels : labels.slice(0, 5)
const hasMoreLabels = labels.length > 5
const userRole = (currentUser as any)?.role || 'USER'
const userInitials = currentUser?.name
? currentUser.name.split(' ').map((n: string) => n[0]).join('').toUpperCase().substring(0, 2)
: 'U'
const NavItem = ({ href, icon: Icon, label, active, onClick, iconColorClass }: any) => (
<Link
href={href}
@ -44,8 +62,68 @@ export function Sidebar({ className, user }: { className?: string, user?: any })
)
return (
<aside className={cn("w-[280px] flex-col gap-1 py-2 overflow-y-auto overflow-x-hidden hidden md:flex", className)}>
<NavItem
<aside className={cn("w-[280px] flex-col gap-1 overflow-y-auto overflow-x-hidden hidden md:flex", className)}>
{/* User Profile Section - Top of Sidebar */}
{currentUser && (
<div className="p-4 border-b border-gray-200 dark:border-zinc-800 bg-gray-50/50 dark:bg-zinc-900/50">
<DropdownMenu>
<DropdownMenuTrigger asChild>
<button className="flex items-center gap-3 w-full p-2 rounded-lg hover:bg-gray-100 dark:hover:bg-zinc-800 transition-colors text-left">
<Avatar className="h-10 w-10 ring-2 ring-amber-500/20">
<AvatarImage src={currentUser.image || ''} alt={currentUser.name || ''} />
<AvatarFallback className="bg-amber-500 text-white font-medium">
{userInitials}
</AvatarFallback>
</Avatar>
<div className="flex-1 min-w-0">
<p className="text-sm font-medium text-gray-900 dark:text-gray-100 truncate">
{currentUser.name}
</p>
<p className="text-xs text-gray-500 dark:text-gray-400 truncate">
{currentUser.email}
</p>
</div>
</button>
</DropdownMenuTrigger>
<DropdownMenuContent align="start" className="w-56" forceMount>
<DropdownMenuLabel className="font-normal">
<div className="flex flex-col space-y-1">
<p className="text-sm font-medium leading-none">{currentUser.name}</p>
<p className="text-xs leading-none text-muted-foreground">
{currentUser.email}
</p>
</div>
</DropdownMenuLabel>
<DropdownMenuSeparator />
<DropdownMenuGroup>
<DropdownMenuItem onClick={() => router.push('/settings/profile')}>
<User className="mr-2 h-4 w-4" />
<span>Profile</span>
</DropdownMenuItem>
{userRole === 'ADMIN' && (
<DropdownMenuItem onClick={() => router.push('/admin')}>
<Shield className="mr-2 h-4 w-4" />
<span>Admin Dashboard</span>
</DropdownMenuItem>
)}
<DropdownMenuItem onClick={() => router.push('/settings')}>
<Settings className="mr-2 h-4 w-4" />
<span>Diagnostics</span>
</DropdownMenuItem>
</DropdownMenuGroup>
<DropdownMenuSeparator />
<DropdownMenuItem onClick={() => signOut({ callbackUrl: '/login' })}>
<LogOut className="mr-2 h-4 w-4" />
<span>Log out</span>
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
)}
{/* Navigation Items */}
<div className="py-2">
<NavItem
href="/"
icon={StickyNote}
label="Notes"
@ -103,30 +181,12 @@ export function Sidebar({ className, user }: { className?: string, user?: any })
label="Archive"
active={pathname === '/archive'}
/>
<NavItem
href="/trash"
icon={Trash2}
label="Trash"
active={pathname === '/trash'}
/>
<div className="my-2 border-t border-gray-200 dark:border-zinc-800" />
<NavItem
href="/settings/profile"
icon={User}
label="Profile"
active={pathname === '/settings/profile'}
/>
{(currentUser as any)?.role === 'ADMIN' && (
<NavItem
href="/admin"
icon={Shield}
label="Admin"
active={pathname === '/admin'}
/>
)}
<NavItem
href="/support"
@ -134,13 +194,7 @@ export function Sidebar({ className, user }: { className?: string, user?: any })
label="Support Memento ☕"
active={pathname === '/support'}
/>
<NavItem
href="/settings"
icon={Settings}
label="Diagnostics"
active={pathname === '/settings'}
/>
</div>
</aside>
)
}
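The avatar fallback in the sidebar computes two-letter initials from the user's name. The same logic as a standalone sketch (hypothetical function name):

```typescript
// Sketch of the initials computation from the sidebar above:
// first letter of each name part, uppercased, capped at two characters.
function getInitials(name?: string | null): string {
  if (!name) return 'U';
  return name
    .split(' ')
    .map((n) => n[0])
    .join('')
    .toUpperCase()
    .substring(0, 2);
}
```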

View File

@ -1,30 +1,77 @@
import { OpenAIProvider } from './providers/openai';
import { OllamaProvider } from './providers/ollama';
import { CustomOpenAIProvider } from './providers/custom-openai';
import { AIProvider } from './types';
export function getAIProvider(config?: Record<string, string>): AIProvider {
const providerType = config?.AI_PROVIDER || process.env.AI_PROVIDER || 'ollama';
type ProviderType = 'ollama' | 'openai' | 'custom';
switch (providerType.toLowerCase()) {
function createOllamaProvider(config: Record<string, string>, modelName: string, embeddingModelName: string): OllamaProvider {
let baseUrl = config?.OLLAMA_BASE_URL || process.env.OLLAMA_BASE_URL || 'http://localhost:11434';
// Ensure baseUrl doesn't end with /api, we'll add it in OllamaProvider
if (baseUrl.endsWith('/api')) {
baseUrl = baseUrl.slice(0, -4); // Remove /api
}
return new OllamaProvider(baseUrl, modelName, embeddingModelName);
}
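The URL handling in `createOllamaProvider` strips a trailing `/api` so the provider can append it itself. As an isolated sketch:

```typescript
// Sketch of the base-URL normalization in createOllamaProvider: a trailing
// "/api" is removed because OllamaProvider appends it when building requests.
function normalizeOllamaBaseUrl(url: string): string {
  return url.endsWith('/api') ? url.slice(0, -4) : url;
}
```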
function createOpenAIProvider(config: Record<string, string>, modelName: string, embeddingModelName: string): OpenAIProvider {
const apiKey = config?.OPENAI_API_KEY || process.env.OPENAI_API_KEY || '';
if (!apiKey) {
console.warn('OPENAI_API_KEY is not configured.');
}
return new OpenAIProvider(apiKey, modelName, embeddingModelName);
}
function createCustomOpenAIProvider(config: Record<string, string>, modelName: string, embeddingModelName: string): CustomOpenAIProvider {
const apiKey = config?.CUSTOM_OPENAI_API_KEY || process.env.CUSTOM_OPENAI_API_KEY || '';
const baseUrl = config?.CUSTOM_OPENAI_BASE_URL || process.env.CUSTOM_OPENAI_BASE_URL || '';
if (!apiKey) {
console.warn('CUSTOM_OPENAI_API_KEY is not configured.');
}
if (!baseUrl) {
console.warn('CUSTOM_OPENAI_BASE_URL is not configured.');
}
return new CustomOpenAIProvider(apiKey, baseUrl, modelName, embeddingModelName);
}
function getProviderInstance(providerType: ProviderType, config: Record<string, string>, modelName: string, embeddingModelName: string): AIProvider {
switch (providerType) {
case 'ollama':
return createOllamaProvider(config, modelName, embeddingModelName);
case 'openai':
return createOpenAIProvider(config, modelName, embeddingModelName);
case 'custom':
return createCustomOpenAIProvider(config, modelName, embeddingModelName);
default:
console.warn(`Provider AI inconnu: ${providerType}, utilisation de Ollama par défaut`);
return createOllamaProvider(config, modelName, embeddingModelName);
}
}
export function getTagsProvider(config?: Record<string, string>): AIProvider {
const providerType = (config?.AI_PROVIDER_TAGS || process.env.AI_PROVIDER_TAGS || 'ollama').toLowerCase() as ProviderType;
const modelName = config?.AI_MODEL_TAGS || process.env.AI_MODEL_TAGS || 'granite4:latest';
const embeddingModelName = config?.AI_MODEL_EMBEDDING || process.env.AI_MODEL_EMBEDDING || 'embeddinggemma:latest';
return getProviderInstance(providerType, config || {}, modelName, embeddingModelName);
}
export function getEmbeddingsProvider(config?: Record<string, string>): AIProvider {
const providerType = (config?.AI_PROVIDER_EMBEDDING || process.env.AI_PROVIDER_EMBEDDING || 'ollama').toLowerCase() as ProviderType;
const modelName = config?.AI_MODEL_TAGS || process.env.AI_MODEL_TAGS || 'granite4:latest';
const embeddingModelName = config?.AI_MODEL_EMBEDDING || process.env.AI_MODEL_EMBEDDING || 'embeddinggemma:latest';
return getProviderInstance(providerType, config || {}, modelName, embeddingModelName);
}
// Legacy function for backward compatibility
export function getAIProvider(config?: Record<string, string>): AIProvider {
return getTagsProvider(config);
}


@ -0,0 +1,59 @@
import { createOpenAI } from '@ai-sdk/openai';
import { generateObject, embed } from 'ai';
import { z } from 'zod';
import { AIProvider, TagSuggestion } from '../types';
export class CustomOpenAIProvider implements AIProvider {
private model: any;
private embeddingModel: any;
constructor(
apiKey: string,
baseUrl: string,
modelName: string = 'gpt-4o-mini',
embeddingModelName: string = 'text-embedding-3-small'
) {
// Create OpenAI-compatible client with custom base URL
const customClient = createOpenAI({
baseURL: baseUrl,
apiKey: apiKey,
});
this.model = customClient(modelName);
this.embeddingModel = customClient.embedding(embeddingModelName);
}
async generateTags(content: string): Promise<TagSuggestion[]> {
try {
const { object } = await generateObject({
model: this.model,
schema: z.object({
tags: z.array(z.object({
tag: z.string().describe('Le nom du tag, court et en minuscules'),
confidence: z.number().min(0).max(1).describe('Le niveau de confiance entre 0 et 1')
}))
}),
prompt: `Analyse la note suivante et suggère entre 1 et 5 tags pertinents.
Contenu de la note: "${content}"`,
});
return object.tags;
} catch (e) {
console.error('Erreur génération tags Custom OpenAI:', e);
return [];
}
}
async getEmbeddings(text: string): Promise<number[]> {
try {
const { embedding } = await embed({
model: this.embeddingModel,
value: text,
});
return embedding;
} catch (e) {
console.error('Erreur embeddings Custom OpenAI:', e);
return [];
}
}
}


@ -0,0 +1,54 @@
import { createOpenAI } from '@ai-sdk/openai';
import { generateObject, embed } from 'ai';
import { z } from 'zod';
import { AIProvider, TagSuggestion } from '../types';
export class DeepSeekProvider implements AIProvider {
private model: any;
private embeddingModel: any;
constructor(apiKey: string, modelName: string = 'deepseek-chat', embeddingModelName: string = 'deepseek-embedding') {
// Create OpenAI-compatible client for DeepSeek
const deepseek = createOpenAI({
baseURL: 'https://api.deepseek.com/v1',
apiKey: apiKey,
});
this.model = deepseek(modelName);
this.embeddingModel = deepseek.embedding(embeddingModelName);
}
async generateTags(content: string): Promise<TagSuggestion[]> {
try {
const { object } = await generateObject({
model: this.model,
schema: z.object({
tags: z.array(z.object({
tag: z.string().describe('Le nom du tag, court et en minuscules'),
confidence: z.number().min(0).max(1).describe('Le niveau de confiance entre 0 et 1')
}))
}),
prompt: `Analyse la note suivante et suggère entre 1 et 5 tags pertinents.
Contenu de la note: "${content}"`,
});
return object.tags;
} catch (e) {
console.error('Erreur génération tags DeepSeek:', e);
return [];
}
}
async getEmbeddings(text: string): Promise<number[]> {
try {
const { embedding } = await embed({
model: this.embeddingModel,
value: text,
});
return embedding;
} catch (e) {
console.error('Erreur embeddings DeepSeek:', e);
return [];
}
}
}


@ -1,13 +1,20 @@
import { createOpenAI } from '@ai-sdk/openai';
import { generateObject, embed } from 'ai';
import { z } from 'zod';
import { AIProvider, TagSuggestion } from '../types';
export class OpenAIProvider implements AIProvider {
private model: any;
private embeddingModel: any;
constructor(apiKey: string, modelName: string = 'gpt-4o-mini', embeddingModelName: string = 'text-embedding-3-small') {
// Create OpenAI client with API key
const openaiClient = createOpenAI({
apiKey: apiKey,
});
this.model = openaiClient(modelName);
this.embeddingModel = openaiClient.embedding(embeddingModelName);
}
async generateTags(content: string): Promise<TagSuggestion[]> {
@ -20,7 +27,7 @@ export class OpenAIProvider implements AIProvider {
confidence: z.number().min(0).max(1).describe('Le niveau de confiance entre 0 et 1')
}))
}),
prompt: `Analyse la note suivante et suggère entre 1 et 5 tags pertinents.
Contenu de la note: "${content}"`,
});
@ -34,7 +41,7 @@ export class OpenAIProvider implements AIProvider {
async getEmbeddings(text: string): Promise<number[]> {
try {
const { embedding } = await embed({
model: this.embeddingModel,
value: text,
});
return embedding;


@ -0,0 +1,54 @@
import { createOpenAI } from '@ai-sdk/openai';
import { generateObject, embed } from 'ai';
import { z } from 'zod';
import { AIProvider, TagSuggestion } from '../types';
export class OpenRouterProvider implements AIProvider {
private model: any;
private embeddingModel: any;
constructor(apiKey: string, modelName: string = 'anthropic/claude-3-haiku', embeddingModelName: string = 'openai/text-embedding-3-small') {
// Create OpenAI-compatible client for OpenRouter
const openrouter = createOpenAI({
baseURL: 'https://openrouter.ai/api/v1',
apiKey: apiKey,
});
this.model = openrouter(modelName);
this.embeddingModel = openrouter.embedding(embeddingModelName);
}
async generateTags(content: string): Promise<TagSuggestion[]> {
try {
const { object } = await generateObject({
model: this.model,
schema: z.object({
tags: z.array(z.object({
tag: z.string().describe('Le nom du tag, court et en minuscules'),
confidence: z.number().min(0).max(1).describe('Le niveau de confiance entre 0 et 1')
}))
}),
prompt: `Analyse la note suivante et suggère entre 1 et 5 tags pertinents.
Contenu de la note: "${content}"`,
});
return object.tags;
} catch (e) {
console.error('Erreur génération tags OpenRouter:', e);
return [];
}
}
async getEmbeddings(text: string): Promise<number[]> {
try {
const { embedding } = await embed({
model: this.embeddingModel,
value: text,
});
return embedding;
} catch (e) {
console.error('Erreur embeddings OpenRouter:', e);
return [];
}
}
}
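DeepSeekProvider and OpenRouterProvider are both thin wrappers over `createOpenAI` with a fixed `baseURL` and their own default model names. Note that `getProviderInstance` in the factory only handles `'ollama'`, `'openai'`, and `'custom'`, so these two providers currently fall through to the default branch. A hedged sketch of the defaults each wrapper bakes in — the constants are taken from the constructors in this commit, but the lookup-table shape itself is illustrative, not code from the diff:

```typescript
// Per-provider defaults as declared by the constructors above.
// OPENAI_COMPATIBLE_DEFAULTS is a hypothetical name; only the
// values come from this commit.
const OPENAI_COMPATIBLE_DEFAULTS: Record<
  string,
  { baseURL: string; model: string; embeddingModel: string }
> = {
  deepseek: {
    baseURL: 'https://api.deepseek.com/v1',
    model: 'deepseek-chat',
    embeddingModel: 'deepseek-embedding',
  },
  openrouter: {
    baseURL: 'https://openrouter.ai/api/v1',
    model: 'anthropic/claude-3-haiku',
    embeddingModel: 'openai/text-embedding-3-small',
  },
};
```

Centralizing these defaults in one table would make it straightforward to add the missing `'deepseek'` and `'openrouter'` cases to the factory switch later.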

Binary file not shown.