# aPersona - AI-Powered Personal Assistant

A fully local, offline AI-powered personal assistant that learns from your personal files, preferences, and behavior to act as your intelligent secretary.
## 🔹 Key Features

- **100% Local & Offline**: No cloud dependencies, complete data privacy
- **User Authentication**: Secure local user management
- **File Analysis**: Automatic categorization of documents, images, and PDFs
- **Semantic Search**: Vector-based search through your personal data
- **Local LLM Integration**: Powered by Ollama with RAG capabilities (see the sketch after this list)
- **Auto-Learning**: Adaptive behavior based on user interactions
- **Smart Reminders**: Context-aware suggestions and notifications
- **Personal Context**: Deep understanding of your preferences and habits
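The semantic search and local LLM features come together as a retrieval-augmented generation (RAG) loop: embed the query, pull the closest chunks from the vector store, and hand them to the local model as context. The sketch below is illustrative only; the collection name `personal_files`, the embedding model choice, and the prompt format are assumptions rather than the project's exact code, which lives under `backend/ai_core/`.

```python
# Illustrative RAG flow: embed the query, retrieve similar chunks, ask the local LLM.
# Collection name, model names, and prompt format are assumptions, not the real module.
import chromadb
import requests
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")            # local embedding model (assumed choice)
client = chromadb.PersistentClient(path="data/vectors")       # local vector store
collection = client.get_or_create_collection("personal_files")

def answer(question: str) -> str:
    # 1. Embed the question and fetch the most similar stored chunks.
    query_vec = embedder.encode(question).tolist()
    hits = collection.query(query_embeddings=[query_vec], n_results=3)
    context = "\n".join(hits["documents"][0])

    # 2. Ask the local Ollama model, grounding it in the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=120,
    )
    return resp.json()["response"]
```

The same loop could instead use the `nomic-embed-text` model through Ollama for embeddings; either way, nothing leaves the machine.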
## 🛠 Technology Stack

### Backend

- **FastAPI**: Modern Python web framework
- **SQLAlchemy**: Database ORM
- **ChromaDB**: Vector database for embeddings
- **SentenceTransformers**: Text embeddings
- **Ollama**: Local LLM runtime
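As a rough sketch of how these pieces might meet at the API layer: the route path, request schema, and `semantic_search` helper below are hypothetical placeholders, not aPersona's actual endpoints (those live under `backend/app/api/`).

```python
# Hypothetical FastAPI route showing the shape of a search endpoint.
# Route names and schemas are illustrative, not the project's real API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="aPersona API")

class SearchRequest(BaseModel):
    query: str
    top_k: int = 5

def semantic_search(query: str, limit: int) -> list[str]:
    # Stub: the real service would query ChromaDB (see the RAG sketch above).
    return []

@app.post("/api/search")
def search(req: SearchRequest) -> dict:
    results = semantic_search(req.query, limit=req.top_k)
    return {"query": req.query, "results": results}
```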
### Frontend

- **React**: Modern UI framework
- **TailwindCSS**: Utility-first CSS framework
- **Vite**: Fast build tool
### AI/ML

- **Hugging Face Transformers**: Pre-trained models
- **PyTorch**: ML framework
- **Pillow**: Image processing
- **PyPDF2**: PDF text extraction
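As a rough illustration of how Pillow and PyPDF2 could feed the file-analysis step (the actual pipeline presumably lives under `backend/app/services/` and `ai_core/`, and may differ):

```python
# Illustrative extraction helpers: pull text from a PDF and basic metadata from an
# image so they can be embedded and categorized. Not the project's exact pipeline.
from PIL import Image
from PyPDF2 import PdfReader

def extract_pdf_text(path: str) -> str:
    """Concatenate the text of every page; scanned PDFs may return empty strings."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def describe_image(path: str) -> dict:
    """Return lightweight metadata that can be indexed alongside the file."""
    with Image.open(path) as img:
        return {"format": img.format, "size": img.size, "mode": img.mode}
```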
## 📁 Project Structure

```
apersona/
├── backend/                 # FastAPI backend
│   ├── app/
│   │   ├── api/             # API routes
│   │   ├── core/            # Core configuration
│   │   ├── db/              # Database models
│   │   ├── services/        # Business logic
│   │   └── main.py          # FastAPI app
│   ├── ai_core/             # AI/ML components
│   │   ├── embeddings/      # Text embeddings
│   │   ├── llm/             # LLM integration
│   │   ├── rag/             # RAG system
│   │   └── auto_learning/   # Adaptive learning
│   └── requirements.txt
├── frontend/                # React frontend
│   ├── src/
│   │   ├── components/      # React components
│   │   ├── pages/           # Page components
│   │   ├── services/        # API services
│   │   └── utils/           # Utility functions
│   └── package.json
├── data/                    # Local data storage
│   ├── uploads/             # User uploaded files
│   ├── processed/           # Processed files
│   └── vectors/             # Vector embeddings
└── docs/                    # Documentation
```
## 🚀 Quick Start

### Prerequisites

- Python 3.11+
- Node.js 18+
- Ollama installed locally
### Backend Setup

```bash
cd backend
pip install -r requirements.txt
uvicorn app.main:app --reload
```
### Frontend Setup

```bash
cd frontend
npm install
npm run dev
```
### AI Setup

```bash
# Install Ollama models
ollama pull mistral
ollama pull nomic-embed-text
```
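After pulling the models you can sanity-check the local runtime from Python. This is just a smoke test against Ollama's default HTTP API on port 11434, not part of the backend code:

```python
# Smoke test: confirm the local Ollama server answers generation and embedding requests.
import requests

gen = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
gen.raise_for_status()
print(gen.json()["response"])

emb = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "hello"},
    timeout=60,
)
emb.raise_for_status()
print(f"embedding dimensions: {len(emb.json()['embedding'])}")
```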
## 🧠 Auto-Learning System

The auto-learning module continuously adapts to user behavior through:

- **Interaction Patterns**: Learning from user queries and responses
- **Preference Tracking**: Monitoring file usage and search patterns (sketched below)
- **Context Building**: Understanding the user's work and personal contexts
- **Response Optimization**: Improving answer relevance over time
- **Proactive Suggestions**: Anticipating user needs based on patterns
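As a toy illustration of the preference-tracking idea only (the real implementation lives under `backend/ai_core/auto_learning/` and will look different), a tracker could keep per-category usage counts and use them to nudge ranking:

```python
# Toy preference tracker: count which file categories the user actually opens,
# then nudge search results toward the categories used most often.
# This illustrates the idea; it is not the project's auto_learning module.
from collections import Counter

class PreferenceTracker:
    def __init__(self) -> None:
        self.category_usage: Counter[str] = Counter()

    def record_interaction(self, category: str) -> None:
        """Called whenever the user opens or reuses a file of a given category."""
        self.category_usage[category] += 1

    def boost(self, category: str) -> float:
        """Small score bonus proportional to how often this category was used."""
        total = sum(self.category_usage.values()) or 1
        return self.category_usage[category] / total

tracker = PreferenceTracker()
tracker.record_interaction("invoices")
tracker.record_interaction("invoices")
tracker.record_interaction("travel")
print(tracker.boost("invoices"))  # ~0.67, so invoices get ranked slightly higher
```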
## 🔒 Privacy & Security

- All data stored locally
- No external API calls
- Encrypted user authentication (see the sketch below)
- Secure file handling
- Optional data anonymization
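On the authentication point, local credentials can be protected with nothing but the standard library. The snippet below is a generic salted-hash sketch, one reasonable approach rather than aPersona's actual auth code:

```python
# Generic local credential handling: store a salted PBKDF2 hash, never the password.
# Illustrative only; the backend's real auth layer may use a different scheme.
import hashlib
import hmac
import secrets

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```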
## 📚 Documentation

- [API Documentation](./docs/api.md)
- [AI Integration Guide](./docs/ai-integration.md)
- [Auto-Learning Architecture](./docs/auto-learning.md)
- [Deployment Guide](./docs/deployment.md)
## 🤝 Contributing

This is a personal project focused on privacy and local execution. Feel free to fork and adapt it for your needs.
## 📄 License

MIT License - See LICENSE file for details.