# aPersona Quick Start Guide

## Prerequisites

Before you begin, ensure you have the following installed:

- **Python 3.11+**: [Download here](https://python.org/downloads/)
- **Node.js 18+**: [Download here](https://nodejs.org/)
- **Ollama**: [Install guide](https://ollama.ai/download)

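These version floors can be checked from the shell. `version_ge` below is an illustrative helper (not part of the project) that relies on version-aware sorting via GNU `sort -V`:

```bash
# Illustrative helper: succeed if version $1 >= version $2.
# Relies on sort -V (GNU coreutils) for version-aware ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge "$(python3 -c 'import platform; print(platform.python_version())')" "3.11" \
  && echo "Python version OK" \
  || echo "Python 3.11+ required"
```

The same helper works for the Node.js check with `node --version | tr -d v`.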
## 🚀 Automated Setup (Recommended)

The easiest way to get started is using our setup script:

```bash
# Make the setup script executable
chmod +x setup.sh

# Run the setup script
./setup.sh
```

This script will:

- Check your system requirements
- Install dependencies for both backend and frontend
- Set up the AI models
- Create necessary directories and configuration files

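The requirements check in the first step can be sketched as follows; this is an assumption about what `setup.sh` does, not its actual contents:

```bash
# Hypothetical sketch of setup.sh's prerequisite checks -- the real
# script may differ.
have() { command -v "$1" >/dev/null 2>&1; }

for cmd in python3 node ollama; do
  if have "$cmd"; then
    echo "$cmd found"
  else
    echo "warning: $cmd is not installed" >&2
  fi
done
```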
## 🔧 Manual Setup

If you prefer to set up manually:

### 1. Clone and Setup Backend

```bash
# Navigate to backend directory
cd backend

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create environment file
cp .env.example .env  # Edit with your preferences
```

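The available settings are defined in `.env.example`. Purely as an illustration (these entries are hypothetical, not the project's actual keys), a local-first configuration might look like:

```bash
# Hypothetical .env entries -- consult .env.example for the real keys
DATABASE_URL=sqlite:///./apersona.db
OLLAMA_BASE_URL=http://localhost:11434
UPLOAD_DIR=./uploads
```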
### 2. Setup Frontend

```bash
# Navigate to frontend directory
cd frontend

# Install dependencies
npm install

# Start the development server to verify the setup
npm run dev
```

### 3. Setup AI Services

```bash
# Start Ollama service
ollama serve

# In another terminal, pull required models
ollama pull mistral            # Main LLM model
ollama pull nomic-embed-text   # Embedding model
```

## 🏃‍♂️ Running the Application

### Start the Backend

```bash
cd backend
source venv/bin/activate  # If not already activated
uvicorn app.main:app --reload
```

The backend will be available at: `http://localhost:8000`

### Start the Frontend

```bash
cd frontend
npm run dev
```

The frontend will be available at: `http://localhost:3000`

### Start Ollama (if not running)

```bash
ollama serve
```

## 🎯 First Steps

1. **Open your browser** and go to `http://localhost:3000`
2. **Create an account** using the registration form
3. **Upload some documents** to get started:
   - PDFs, Word documents, text files, or images
   - The system will automatically process and categorize them
4. **Start chatting** with your AI assistant:
   - Ask questions about your uploaded files
   - The AI will provide context-aware responses
   - Give feedback to help the system learn your preferences

## 🔍 Verify Everything is Working

### Check System Health

Visit: `http://localhost:8000/health`

You should see:

```json
{
  "status": "healthy",
  "services": {
    "database": "healthy",
    "ollama": "healthy",
    "embeddings": "healthy",
    "vector_store": "healthy"
  }
}
```

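If you want to script this check, the response can be validated programmatically. A minimal sketch: the sample response from above is inlined here, and in real use you would pipe in `curl -s http://localhost:8000/health` instead:

```bash
# Fail loudly if any service in the /health response is not "healthy".
# The sample response is inlined for illustration; replace the echo with
#   curl -s http://localhost:8000/health
echo '{"status": "healthy", "services": {"database": "healthy", "ollama": "healthy", "embeddings": "healthy", "vector_store": "healthy"}}' |
python3 -c '
import json, sys

data = json.load(sys.stdin)
bad = [k for k, v in data["services"].items() if v != "healthy"]
print("all services healthy" if not bad else "unhealthy: %s" % bad)
'
```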
### Check API Documentation

Visit: `http://localhost:8000/docs`

This will show the interactive API documentation.

## 🐛 Troubleshooting

### Common Issues

#### 1. Ollama Not Running

```bash
# Error: Connection refused to Ollama
# Solution: start the Ollama service
ollama serve
```

#### 2. Models Not Downloaded

```bash
# Error: Model not found
# Solution: download the required models
ollama pull mistral
ollama pull nomic-embed-text
```

#### 3. Port Already in Use

```bash
# Backend port 8000 in use: run on another port
uvicorn app.main:app --reload --port 8001

# Frontend port 3000 in use: run on another port
npm run dev -- --port 3001
```
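Before falling back to another port, you can test whether the default one is actually taken. `port_free` is an illustrative helper (not part of the project) that uses Python's socket module for portability:

```bash
# Illustrative helper (not part of the repo): succeeds if the given
# TCP port can be bound on localhost, i.e. the port is free.
port_free() {
  python3 - "$1" <<'EOF'
import socket, sys

s = socket.socket()
try:
    s.bind(("127.0.0.1", int(sys.argv[1])))
except OSError:
    sys.exit(1)  # already in use
finally:
    s.close()
EOF
}

port_free 8000 && echo "port 8000 is free" || echo "port 8000 is in use"
```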
#### 4. Python Dependency Issues

```bash
# Create a fresh virtual environment
rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

#### 5. Node Dependency Issues

```bash
# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install
```

### Performance Tips

1. **First Run**: The first time you upload files and ask questions, responses may take longer while the models load and caches are built.
2. **Memory Usage**: The system runs local AI models, which require significant RAM. Ensure you have at least 8GB of RAM available.
3. **Storage**: Vector embeddings and model files require disk space. Ensure you have at least 5GB of free disk space.

## 📊 System Requirements

### Minimum Requirements

- **RAM**: 8GB
- **Storage**: 5GB free space
- **CPU**: Multi-core processor (4+ cores recommended)
- **OS**: Windows 10+, macOS 10.14+, Linux (Ubuntu 18.04+)

### Recommended Requirements

- **RAM**: 16GB+
- **Storage**: 10GB+ free space
- **CPU**: 8+ cores
- **GPU**: NVIDIA GPU with CUDA support (optional, for faster processing)
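On Linux you can check your installed RAM against these floors with a one-liner; a small sketch (macOS users would use `sysctl hw.memsize` instead):

```bash
# Report total RAM in GB (Linux; reads /proc/meminfo)
awk '/^MemTotal/ {printf "total RAM: %.1f GB\n", $2 / 1024 / 1024}' /proc/meminfo
```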
## 🎉 You're Ready!

Once everything is running:

1. **Upload your documents** (PDFs, Word docs, images, etc.)
2. **Ask questions** about your content
3. **Set reminders** and let the AI help organize your life
4. **Watch it learn** and adapt to your preferences over time

## 🆘 Need Help?

- Check the [Architecture Documentation](docs/ARCHITECTURE.md) for technical details
- Review the API documentation at `http://localhost:8000/docs`
- Confirm all services are running via the health check endpoint at `http://localhost:8000/health`

## 🔒 Privacy Note

Remember: **All your data stays local**. aPersona runs entirely on your machine without any cloud dependencies. Your files, conversations, and personal information never leave your device.