# aPersona Quick Start Guide
## Prerequisites

Before you begin, ensure you have the following installed:

- **Python 3.11+**
- **Node.js 18+**
- **Ollama**
## 🚀 Automated Setup (Recommended)

The easiest way to get started is to use the setup script:

```bash
# Make the setup script executable
chmod +x setup.sh

# Run the setup script
./setup.sh
```
This script will:
- Check your system requirements
- Install dependencies for both backend and frontend
- Set up the AI models
- Create necessary directories and configuration files
## 🔧 Manual Setup

If you prefer to set up manually:

### 1. Clone and Set Up the Backend

```bash
# Navigate to the backend directory
cd backend

# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create the environment file
cp .env.example .env  # Edit with your preferences
```
### 2. Set Up the Frontend

```bash
# Navigate to the frontend directory
cd frontend

# Install dependencies
npm install

# Start the development server
npm run dev
```
### 3. Set Up the AI Services

```bash
# Start the Ollama service
ollama serve

# In another terminal, pull the required models
ollama pull mistral           # Main LLM
ollama pull nomic-embed-text  # Embedding model
```
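If you want to confirm the pulls succeeded from a script rather than by eyeballing `ollama list`, here is a minimal sketch. It assumes the response shape of Ollama's `/api/tags` endpoint (`{"models": [{"name": "mistral:latest"}, ...]}`, served at `http://localhost:11434/api/tags` by default); the sample payload below is illustrative, not real output:

```python
# Check a tag listing (as returned by GET http://localhost:11434/api/tags)
# for the two models this guide requires.
REQUIRED = {"mistral", "nomic-embed-text"}

def missing_models(tags: dict) -> set:
    """Return the required models absent from an /api/tags payload."""
    installed = {m["name"].split(":")[0] for m in tags.get("models", [])}
    return REQUIRED - installed

# Illustrative payload: only mistral has been pulled so far.
sample = {"models": [{"name": "mistral:latest"}]}
print(missing_models(sample))  # {'nomic-embed-text'}
```

With the service running, you could feed it the real listing via `json.load(urllib.request.urlopen("http://localhost:11434/api/tags"))`.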
## 🏃‍♂️ Running the Application

### Start the Backend

```bash
cd backend
source venv/bin/activate  # If not already activated
uvicorn app.main:app --reload
```

The backend will be available at http://localhost:8000.

### Start the Frontend

```bash
cd frontend
npm run dev
```

The frontend will be available at http://localhost:3000.

### Start Ollama (if not already running)

```bash
ollama serve
```
## 🎯 First Steps

1. **Open your browser** and go to http://localhost:3000
2. **Create an account** using the registration form
3. **Upload some documents** to get started:
   - PDFs, Word documents, text files, or images
   - The system will automatically process and categorize them
4. **Start chatting** with your AI assistant:
   - Ask questions about your uploaded files
   - The AI will provide context-aware responses
   - Give feedback to help the system learn your preferences
## 🔍 Verify Everything Is Working

### Check System Health

Visit http://localhost:8000/health. You should see:

```json
{
  "status": "healthy",
  "services": {
    "database": "healthy",
    "ollama": "healthy",
    "embeddings": "healthy",
    "vector_store": "healthy"
  }
}
```

### Check the API Documentation

Visit http://localhost:8000/docs for the interactive API documentation.
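If you would rather script the health check, a minimal sketch (assuming the JSON shape shown in the health response above):

```python
import json
from urllib.request import urlopen  # used in the commented fetch below

def all_healthy(payload: dict) -> bool:
    """True if the overall status and every listed service report 'healthy'."""
    return (payload.get("status") == "healthy"
            and all(v == "healthy" for v in payload.get("services", {}).values()))

# With the backend running, fetch the live payload:
#   payload = json.load(urlopen("http://localhost:8000/health"))
# Demonstrated here on the sample payload from the guide:
sample = {
    "status": "healthy",
    "services": {"database": "healthy", "ollama": "healthy",
                 "embeddings": "healthy", "vector_store": "healthy"},
}
print(all_healthy(sample))  # True
```

This could be dropped into a cron job or CI step to fail fast when any service degrades.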
## 🐛 Troubleshooting

### Common Issues

**1. Ollama not running**

```bash
# Error: connection refused to Ollama
# Solution: start the Ollama service
ollama serve
```

**2. Models not downloaded**

```bash
# Error: model not found
# Solution: download the required models
ollama pull mistral
ollama pull nomic-embed-text
```

**3. Port already in use**

```bash
# Backend port 8000 in use: choose another port
uvicorn app.main:app --reload --port 8001

# Frontend port 3000 in use: choose another port
npm run dev -- --port 3001
```
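Rather than guessing that 8001 or 3001 is free, you can ask the operating system for an unused port; a small Python sketch:

```python
import socket

def free_port(host: str = "127.0.0.1") -> int:
    """Bind to port 0 so the OS assigns an unused ephemeral port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return s.getsockname()[1]

# Pass the result to uvicorn: --port <free_port()>
print(free_port())
```

Note the port is only guaranteed free at the moment of the check; start the server promptly to avoid a race with other processes.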
**4. Python dependency issues**

```bash
# Create a fresh virtual environment
rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

**5. Node dependency issues**

```bash
# Clear the cache and reinstall
rm -rf node_modules package-lock.json
npm install
```
### Performance Tips

- **First run**: The first time you upload files and ask questions, responses may take longer while models load and caches are built.
- **Memory usage**: The local AI models require significant RAM; ensure at least 8 GB is available.
- **Storage**: Vector embeddings and model files require disk space; ensure at least 5 GB is free.
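To check the 5 GB storage requirement before pulling models, a quick sketch using only Python's standard library:

```python
import shutil

GIB = 1024 ** 3  # bytes per GiB

def free_gib(path: str = ".") -> float:
    """Free disk space at `path`, in GiB."""
    return shutil.disk_usage(path).free / GIB

def meets_minimum(path: str = ".", required_gib: float = 5.0) -> bool:
    """True if `path` has at least `required_gib` GiB free."""
    return free_gib(path) >= required_gib

print(f"{free_gib():.1f} GiB free; minimum met: {meets_minimum()}")
```

Run it from the project root so it checks the disk the models and embeddings will actually live on.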
## 📊 System Requirements

### Minimum

- **RAM**: 8 GB
- **Storage**: 5 GB free space
- **CPU**: Multi-core processor (4+ cores recommended)
- **OS**: Windows 10+, macOS 10.14+, Linux (Ubuntu 18.04+)

### Recommended

- **RAM**: 16 GB+
- **Storage**: 10 GB+ free space
- **CPU**: 8+ cores
- **GPU**: NVIDIA GPU with CUDA support (optional, for faster processing)
## 🎉 You're Ready!

Once everything is running:

- Upload your documents (PDFs, Word docs, images, etc.)
- Ask questions about your content
- Set reminders and let the AI help organize your life
- Watch it learn and adapt to your preferences over time
## 🆘 Need Help?

- Check the Architecture Documentation for technical details
- Review the API documentation at http://localhost:8000/docs
- Ensure all services are running with the health check endpoint
## 🔒 Privacy Note
Remember: All your data stays local. aPersona runs entirely on your machine without any cloud dependencies. Your files, conversations, and personal information never leave your device.