# Terraphim AI Frontend Testing Guide
This guide covers the comprehensive end-to-end testing setup for Terraphim AI, including chat functionality, document summarization, and Ollama integration.
## Overview
The testing suite provides comprehensive coverage for:
- Chat Functionality: Complete chat system with Ollama integration
- Document Summarization: AI-powered summarization with local LLM
- Ollama Integration: Local LLM connectivity and model management
- Configuration Management: LLM provider setup and validation
- Haystack Integration: External service configuration with secrets
## Quick Start
### 1. Environment Setup
```bash
# Copy the environment template (the template filename is an assumption;
# adjust to whatever the repository ships)
cp .env.example .env

# Edit .env with your actual API keys and secrets.
# These are required for haystack integrations (Atomic Server, ClickUp, etc.)
```

### 2. Automated Setup
```bash
# Run the complete setup script
npm run setup:test
```
This script will:
- Check and install Ollama
- Pull required models (llama3.2:3b)
- Start backend server with Ollama configuration
- Install frontend dependencies
- Setup Playwright browsers
### 3. Validate Ollama
```bash
# Validate the Ollama configuration
npm run validate:ollama
```
### 4. Run Tests
```bash
# Run all LLM-related tests
npm run test:llm

# Run specific test suites
npm run test:chat
npm run test:summarization
npm run test:ollama
npm run test:config

# Run the comprehensive test suite
npm run test:comprehensive
```
### 5. Cleanup
```bash
# Clean up the test environment
npm run cleanup:test
```
## Test Suites
### Chat Functionality (`chat-functionality.spec.ts`)
Tests the complete chat system including:
- Interface Initialization: Chat UI components and navigation
- Message Handling: Send/receive messages with Ollama
- Context Management: Add, edit, delete conversation context
- KG Search Integration: Knowledge graph search within chat
- Conversation Management: Create, load, persist conversations
- Error Handling: Graceful handling of LLM failures
Key Test Cases:
- Chat interface displays correctly
- Messages send and receive with Ollama
- Context panel management
- KG search modal integration
- Error recovery and retry functionality
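The spec file owns the real selectors; purely as an illustration, a minimal chat round-trip test might look like the sketch below (the `/chat` route and the `data-testid` values are assumptions, not the actual markup):

```typescript
import { test, expect } from '@playwright/test';

test('sends a message and receives an Ollama reply', async ({ page }) => {
  await page.goto('/chat'); // assumed route; adjust to the real app
  await page.getByTestId('chat-input').fill('What is Rust?'); // hypothetical test id
  await page.getByTestId('chat-send').click();                // hypothetical test id
  // LLM responses are slow; allow up to the 60 s response timeout.
  await expect(page.getByTestId('chat-message').last())
    .toContainText(/\w+/, { timeout: 60_000 });
});
```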
### Summarization (`summarization.spec.ts`)
Tests document summarization features:
- Basic Summarization: Generate AI summaries for search results
- Auto-summarization: Automatic summarization based on role config
- Cache Management: Summary caching and regeneration
- Error Handling: Timeout and service unavailable scenarios
- Performance Testing: Loading indicators and cancellation
Key Test Cases:
- Generate summaries for search results
- Respect auto-summarize configuration
- Handle Ollama service failures gracefully
- Provide visual feedback during generation
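Service-failure cases can be exercised without touching Ollama at all by intercepting the network call. A hedged sketch, assuming the frontend reaches summarization through a `/summarize` backend endpoint (hypothetical path; adjust to the real API):

```typescript
import { test, expect } from '@playwright/test';

test('surfaces an error when summarization fails', async ({ page }) => {
  // Force a 503 from the (assumed) summarization endpoint.
  await page.route('**/summarize*', route =>
    route.fulfill({ status: 503, body: 'service unavailable' })
  );
  await page.goto('/');
  // ...trigger a summary from a search result here...
  await expect(page.getByText(/unavailable|failed/i)).toBeVisible();
});
```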
### Ollama Integration (`ollama-integration.spec.ts`)
Tests Ollama connectivity and functionality:
- Health Checks: Service availability and model validation
- Model Quality: Response coherence and relevance
- Streaming Responses: Real-time response generation
- Performance Testing: Concurrent requests and memory usage
- Configuration: Model switching and parameter validation
Key Test Cases:
- Verify Ollama service is running
- Validate model availability (llama3.2:3b)
- Test response quality for programming questions
- Handle service unavailable scenarios
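A health check needs no UI at all: Playwright's `request` fixture can hit Ollama's `/api/tags` endpoint, which lists installed models. A minimal sketch:

```typescript
import { test, expect } from '@playwright/test';

const OLLAMA = process.env.OLLAMA_BASE_URL ?? 'http://127.0.0.1:11434';

test('Ollama is up and llama3.2:3b is installed', async ({ request }) => {
  const res = await request.get(`${OLLAMA}/api/tags`, { timeout: 10_000 });
  expect(res.ok()).toBeTruthy();
  const { models } = await res.json();
  expect(models.some((m: { name: string }) => m.name.startsWith('llama3.2'))).toBeTruthy();
});
```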
### Configuration Wizard (`config-wizard-complete.spec.ts`)
Tests the complete configuration system:
- LLM Provider Setup: Ollama and OpenRouter configuration
- Validation: API key and connectivity testing
- Haystack Configuration: External service setup with secrets
- Role Management: Create, edit, delete roles
- Persistence: Configuration save and reload
Key Test Cases:
- Configure Ollama with base URL and model
- Configure OpenRouter with API key validation
- Setup haystack services (Atomic Server, ClickUp)
- Validate required fields and handle conflicts
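As a rough illustration of the wizard flow (the route and field labels below are assumptions; the real selectors live in `config-wizard-complete.spec.ts`):

```typescript
import { test, expect } from '@playwright/test';

test('configures the Ollama provider', async ({ page }) => {
  await page.goto('/config/wizard'); // assumed route
  await page.getByLabel('Base URL').fill('http://127.0.0.1:11434'); // assumed label
  await page.getByLabel('Model').fill('llama3.2:3b');               // assumed label
  await page.getByRole('button', { name: /save/i }).click();
  await expect(page.getByText(/saved|success/i)).toBeVisible();
});
```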
## Configuration
### Environment Variables
Required environment variables in `.env`:

```bash
# Ollama Configuration
OLLAMA_BASE_URL=http://127.0.0.1:11434
OLLAMA_MODEL=llama3.2:3b

# Atomic Server
ATOMIC_SERVER_URL=http://localhost:9883
ATOMIC_SERVER_SECRET=your_secret_here

# OpenRouter (optional)
OPENROUTER_API_KEY=sk-or-v1-your-key-here

# ClickUp (optional)
CLICKUP_API_TOKEN=pk_your_token_here
CLICKUP_TEAM_ID=your_team_id

# GitHub (optional)
GITHUB_TOKEN=ghp_your_token_here
```

### Test Timeouts
Test timeouts are configured for LLM operations (a configuration sketch follows this list):
- Standard tests: 120 seconds
- LLM response timeout: 60 seconds
- Summarization timeout: 45 seconds
- Health check timeout: 10 seconds
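These limits map naturally onto Playwright's configuration. A sketch of how they might appear in `playwright.config.ts` (the option names are standard Playwright, but treat the file contents as illustrative rather than a verbatim copy of the project's config):

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  timeout: 120_000,            // standard per-test budget
  expect: { timeout: 60_000 }, // assertions waiting on LLM responses
  workers: 1,                  // serialize tests so Ollama isn't overloaded
});
```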
### Model Requirements
Required Ollama models:
- `llama3.2:3b` (primary model for testing)
- `llama3:8b` (alternative model)
## NPM Scripts
### Setup and Validation
- `npm run setup:test` - Complete environment setup
- `npm run validate:ollama` - Validate Ollama configuration
- `npm run cleanup:test` - Clean up test environment
### Individual Test Suites
- `npm run test:chat` - Chat functionality tests
- `npm run test:summarization` - Summarization tests
- `npm run test:ollama` - Ollama integration tests
- `npm run test:config` - Configuration wizard tests
### Test Variations
- `npm run test:chat:headed` - Run with browser UI visible
- `npm run test:chat:ci` - CI mode with retries and reporting
- `npm run test:llm` - All LLM-related tests
- `npm run test:comprehensive` - Complete test suite
### CI/CD Scripts
- `npm run test:comprehensive:ci` - Full CI test run with reporting
- All `:ci` variants include retries and structured reporting
## Troubleshooting
### Common Issues
#### Ollama Not Running

```bash
# Start the Ollama service
ollama serve

# Verify it's running (the /api/tags endpoint lists installed models)
curl http://127.0.0.1:11434/api/tags
```
#### Model Not Available

```bash
# Pull the required model
ollama pull llama3.2:3b

# List available models
ollama list
```
#### Backend Server Not Starting

```bash
# Check whether the port is already in use (substitute your backend's port)
lsof -i :8000 | grep LISTEN

# Start the backend manually with the Ollama configuration
# (use your project's backend start command)
```
#### Test Failures Due to Timeouts
- Increase timeout values in test files (see the sketch after this list)
- Check system resources (CPU, memory)
- Verify Ollama model is loaded and warm
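A per-test override is often enough; `test.setTimeout` is the standard Playwright API for this:

```typescript
import { test } from '@playwright/test';

test('slow summarization case', async ({ page }) => {
  test.setTimeout(180_000); // extend beyond the 120 s default for this test only
  // ...test body...
});
```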
#### Environment Variable Issues

```bash
# Verify the .env file exists and has correct values
cat .env

# Check that the variables are loaded (OLLAMA_* shown as an example)
printenv | grep OLLAMA
```
### Debug Mode

Run tests with debug output:

```bash
# Debug a specific test (standard Playwright inspector)
npx playwright test chat-functionality.spec.ts --debug

# Run with the browser visible
npm run test:chat:headed

# Verbose Playwright logging
DEBUG=pw:* npm run test:chat
```

### Performance Issues
If tests are slow:
- Ensure Ollama model is pre-loaded
- Check available system memory
- Reduce concurrent test workers
- Use smaller models for testing
## CI/CD Integration
### GitHub Actions Example
```yaml
name: E2E Tests with Ollama

on: [push, pull_request]  # trigger list was elided in the original; adjust as needed

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install Ollama
        run: |
          curl https://ollama.ai/install.sh | sh
          ollama serve &
          sleep 10
          ollama pull llama3.2:3b

      - name: Install dependencies
        run: |
          cd desktop
          npm install

      - name: Setup test environment
        run: |
          cd desktop
          npm run setup:test

      - name: Run comprehensive tests
        run: |
          cd desktop
          npm run test:comprehensive:ci

      - name: Upload test results
        uses: actions/upload-artifact@v3
        with:
          name: test-results
          path: desktop/test-results/
```

### Environment Secrets
Configure these secrets in your CI environment:
- `ATOMIC_SERVER_SECRET`
- `OPENROUTER_API_KEY`
- `CLICKUP_API_TOKEN`
- `GITHUB_TOKEN`
## Test Coverage
The test suite provides comprehensive coverage across:
- Frontend Components: All major UI components tested
- API Integration: Backend API calls and responses
- LLM Integration: Complete Ollama workflow
- Configuration: All configuration scenarios
- Error Handling: Network failures, timeouts, invalid inputs
- Performance: Load testing and concurrent operations
### Coverage Reports

Generate test coverage reports:

```bash
# Run tests with Playwright's HTML reporter (the project's exact coverage
# tooling may differ; these are standard Playwright commands)
npx playwright test --reporter=html

# View the report
npx playwright show-report
```
## Monitoring and Metrics
Tests include performance monitoring:
- Response Times: LLM response latency
- Memory Usage: Frontend and Ollama memory consumption
- Error Rates: Failed requests and timeouts
- Model Performance: Token generation speed
View metrics in test output and reports.
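As an illustration, a test can also capture latency itself and attach it to the report (the route and test ids below are the same hypothetical ones used in the chat sketch above):

```typescript
import { test, expect } from '@playwright/test';

test('records LLM response latency', async ({ page }, testInfo) => {
  await page.goto('/chat'); // assumed route
  const started = Date.now();
  await page.getByTestId('chat-input').fill('ping'); // hypothetical test id
  await page.getByTestId('chat-send').click();       // hypothetical test id
  await expect(page.getByTestId('chat-message').last())
    .toBeVisible({ timeout: 60_000 });
  const elapsedMs = Date.now() - started;
  // Attach the measurement so it shows up in the test report.
  await testInfo.attach('llm-latency-ms', { body: String(elapsedMs) });
});
```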
## Additional Resources
## Contributing
When adding new tests:
- Follow existing test patterns
- Include proper error handling
- Add timeout configurations
- Update this documentation
- Test in both CI and local environments
## License
This testing suite is part of the Terraphim AI project and follows the same license terms.