
aPersona Quick Start Guide

Prerequisites

Before you begin, ensure you have the following installed:

  • Python 3 (for the backend)
  • Node.js and npm (for the frontend)
  • Ollama (for the local AI models)

⚡ Automated Setup

The easiest way to get started is using our setup script:

# Make the setup script executable
chmod +x setup.sh

# Run the setup script
./setup.sh

This script will:

  • Check your system requirements
  • Install dependencies for both backend and frontend
  • Set up the AI models
  • Create necessary directories and configuration files
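
Before running it, you can confirm the required tools are on your PATH (a small standalone check, not part of setup.sh itself):

```shell
# Report which prerequisites are installed; prints MISSING for any
# command that cannot be found on PATH.
for cmd in python3 npm ollama; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```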

🔧 Manual Setup

If you prefer to set up manually:

1. Set Up the Backend

# Navigate to backend directory
cd backend

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create environment file
cp .env.example .env  # Edit with your preferences

2. Set Up the Frontend

# Navigate to frontend directory
cd frontend

# Install dependencies
npm install

# Optional: confirm the dev server starts (press Ctrl+C to stop it)
npm run dev

3. Set Up AI Services

# Start Ollama service
ollama serve

# In another terminal, pull required models
ollama pull mistral       # Main LLM model
ollama pull nomic-embed-text  # Embedding model
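
To double-check that both models were pulled successfully, you can grep the output of `ollama list` (a quick sketch; it prints "missing" for anything not yet downloaded):

```shell
# Report whether each required model is available locally.
for model in mistral nomic-embed-text; do
  if ollama list 2>/dev/null | grep -q "$model"; then
    echo "$model: installed"
  else
    echo "$model: missing (run: ollama pull $model)"
  fi
done
```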

🏃‍♂️ Running the Application

Start the Backend

cd backend
source venv/bin/activate  # If not already activated
uvicorn app.main:app --reload

The backend will be available at: http://localhost:8000

Start the Frontend

cd frontend
npm run dev

The frontend will be available at: http://localhost:3000

Start Ollama (if not running)

ollama serve
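
If you are not sure whether Ollama is already running, you can probe its API (assuming Ollama's default port, 11434):

```shell
# Ollama answers on http://localhost:11434 by default; /api/tags lists
# the locally installed models, so a successful response means it's up.
if curl -fs http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama is running"
else
  echo "ollama is not running -- start it with: ollama serve"
fi
```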

🎯 First Steps

  1. Open your browser and go to http://localhost:3000

  2. Create an account using the registration form

  3. Upload some documents to get started:

    • PDFs, Word documents, text files, or images
    • The system will automatically process and categorize them

  4. Start chatting with your AI assistant:

    • Ask questions about your uploaded files
    • The AI will provide context-aware responses
    • Give feedback to help the system learn your preferences

🔍 Verify Everything is Working

Check System Health

Visit: http://localhost:8000/health

You should see:

{
  "status": "healthy",
  "services": {
    "database": "healthy",
    "ollama": "healthy",
    "embeddings": "healthy",
    "vector_store": "healthy"
  }
}
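
The same check can be scripted from a terminal (a minimal sketch using curl; it degrades gracefully if the backend is down):

```shell
# Hit the health endpoint; -f makes curl fail on HTTP errors, so the
# else branch catches both connection and server problems.
if curl -fs http://localhost:8000/health 2>/dev/null; then
  echo
  echo "backend reachable"
else
  echo "backend not reachable -- is uvicorn running?"
fi
```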

Check API Documentation

Visit: http://localhost:8000/docs

This will show the interactive API documentation.

🐛 Troubleshooting

Common Issues

1. Ollama Not Running

# Error: Connection refused to Ollama
# Solution: Start Ollama service
ollama serve

2. Models Not Downloaded

# Error: Model not found
# Solution: Download required models
ollama pull mistral
ollama pull nomic-embed-text

3. Port Already in Use

# Backend port 8000 in use
uvicorn app.main:app --reload --port 8001

# Frontend port 3000 in use
npm run dev -- --port 3001
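
Before switching ports, you may want to see which process is actually holding the old one (uses `lsof`, which ships with macOS and most Linux distributions):

```shell
# Print the PID bound to the conflicting port, if any.
PORT=8000
PID=$(lsof -ti tcp:"$PORT" 2>/dev/null || true)
if [ -n "$PID" ]; then
  echo "port $PORT is held by PID $PID"
else
  echo "port $PORT looks free"
fi
```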

4. Python Dependencies Issues

# Create fresh virtual environment
rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

5. Node Dependencies Issues

# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install

Performance Tips

  1. First Run: The first time you upload files and ask questions, it may take longer as models are loading and caches are being built.

  2. Memory Usage: The system uses local AI models which require significant RAM. Ensure you have at least 8GB RAM available.

  3. Storage: Vector embeddings and model files require disk space. Ensure you have at least 5GB free disk space.
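
You can check the available space from the directory where aPersona lives (`df` is portable, though its column layout differs slightly between Linux and macOS):

```shell
# Show free space on the filesystem containing the current directory.
df -h .
```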

📊 System Requirements

Minimum Requirements

  • RAM: 8GB
  • Storage: 5GB free space
  • CPU: Multi-core processor (4+ cores recommended)
  • OS: Windows 10+, macOS 10.14+, Linux (Ubuntu 18.04+)

Recommended Requirements

  • RAM: 16GB+
  • Storage: 10GB+ free space
  • CPU: 8+ cores
  • GPU: NVIDIA GPU with CUDA support (optional, for faster processing)

🎉 You're Ready!

Once everything is running:

  1. Upload your documents (PDFs, Word docs, images, etc.)
  2. Ask questions about your content
  3. Set reminders and let the AI help organize your life
  4. Watch it learn and adapt to your preferences over time

🆘 Need Help?

  • Check the Architecture Documentation for technical details
  • Review the API documentation at http://localhost:8000/docs
  • Ensure all services are running with the health check endpoint

🔒 Privacy Note

Remember: All your data stays local. aPersona runs entirely on your machine without any cloud dependencies. Your files, conversations, and personal information never leave your device.