Desktop App for Terraphim AI Assistant
This is a Tauri-based desktop application with a Svelte frontend for the Terraphim AI assistant.
Architecture
- Backend: Rust with Tauri for system integration, search, and configuration
- Frontend: Svelte with Bulma CSS for the user interface
- Features: System tray, global shortcuts, multi-theme support, typeahead search
Development
To run in development mode:
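A typical invocation (a sketch assuming Yarn and the Tauri CLI; adjust to your package manager) is:

```sh
# Install frontend dependencies, then start the app with hot reload
yarn install
yarn tauri dev
```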
Testing
The testing strategy covers five areas: backend, frontend, end-to-end, visual regression, and performance.
Backend Tests (Rust)
Tests include:
- Unit tests for Tauri commands (search, config, thesaurus)
- Integration tests for state management
- Error handling and edge cases
- Async functionality testing
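Keeping command logic in plain functions makes it testable without a running Tauri app. The sketch below illustrates the pattern; the names (`normalize_query`, `CmdError`) are illustrative, not the actual Terraphim API:

```rust
// Hypothetical sketch: command logic extracted into a plain function so it
// can be unit-tested without the Tauri runtime. Names are illustrative.
#[derive(Debug, PartialEq)]
pub enum CmdError {
    EmptyQuery,
}

/// Validate and normalize a search query before it reaches the backend.
pub fn normalize_query(raw: &str) -> Result<String, CmdError> {
    let trimmed = raw.trim();
    if trimmed.is_empty() {
        return Err(CmdError::EmptyQuery);
    }
    Ok(trimmed.to_lowercase())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rejects_empty_queries() {
        assert_eq!(normalize_query("   "), Err(CmdError::EmptyQuery));
    }

    #[test]
    fn normalizes_case_and_whitespace() {
        assert_eq!(normalize_query("  Terraphim  "), Ok("terraphim".to_string()));
    }
}
```

The `#[tauri::command]` wrapper then stays thin, delegating to the tested function.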
Frontend Tests (Svelte)
Tests include:
- Component tests for Search, ThemeSwitcher, etc.
- Store and state management tests
- User interaction tests
- Mock Tauri API integration
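Mocking the Tauri IPC bridge lets component tests run without the desktop runtime. A minimal sketch of the idea (the real tests use Vitest and `@tauri-apps/api`; the command name and result shape below are assumptions):

```typescript
// Hypothetical sketch of mocking the Tauri invoke bridge in frontend tests.
type InvokeFn = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface Doc {
  title: string;
  rank: number;
}

// A stand-in for @tauri-apps/api's invoke(), returning canned results.
const mockInvoke: InvokeFn = async (cmd, args) => {
  if (cmd === "search") {
    const term = String(args?.term ?? "");
    return term ? [{ title: `result for ${term}`, rank: 1 }] : [];
  }
  throw new Error(`unmocked command: ${cmd}`);
};

// Code under test takes invoke as a dependency, so the mock can be swapped
// in without touching Tauri itself.
async function searchDocs(invoke: InvokeFn, term: string): Promise<Doc[]> {
  return (await invoke("search", { term })) as Doc[];
}
```

In Vitest the same effect is achieved with `vi.mock` on the `@tauri-apps/api` module.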
End-to-End Tests
Tests include:
- Complete user workflows
- Search functionality
- Navigation and routing
- Theme switching
- Error handling
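The e2e suite is driven by Playwright; typical invocations (paths match the test structure below) are:

```sh
# Run the e2e suite headlessly
npx playwright test tests/e2e

# Debug a single spec with the Playwright inspector
npx playwright test tests/e2e/search.spec.ts --debug
```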
Visual Regression Tests
Tests include:
- Theme consistency across all 22 themes
- Responsive design testing
- Component visual consistency
- Accessibility visual checks
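Visual tests compare against checked-in screenshot baselines; after an intentional UI change the baselines are regenerated with Playwright's standard flag:

```sh
# Re-run visual tests and update the stored screenshot baselines
npx playwright test tests/visual --update-snapshots
```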
Performance Tests
Performance tests require Lighthouse CI (`@lhci/cli`).
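An illustrative invocation (the URL and configuration are assumptions; `lhci` reads project settings from `lighthouserc` if present):

```sh
# Install Lighthouse CI and audit the locally served app
npm install -g @lhci/cli
lhci autorun --collect.url=http://localhost:1420
```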
Test Structure
```
desktop/
├── src-tauri/
│   ├── tests/
│   │   └── cmd_tests.rs          # Backend unit tests
│   └── src/
│       ├── cmd.rs                # Commands with test coverage
│       └── lib.rs                # Exposed for testing
├── src/
│   ├── lib/
│   │   ├── Search/
│   │   │   └── Search.test.ts    # Search component tests
│   │   └── ThemeSwitcher.test.ts # Theme tests
│   └── test-utils/
│       └── setup.ts              # Test configuration
├── tests/
│   ├── e2e/
│   │   ├── search.spec.ts        # E2E search tests
│   │   └── navigation.spec.ts    # E2E navigation tests
│   ├── visual/
│   │   └── themes.spec.ts        # Visual regression tests
│   ├── global-setup.ts           # Test data setup
│   └── global-teardown.ts        # Test cleanup
├── vitest.config.ts              # Frontend test config
└── playwright.config.ts          # E2E test config
```
Testing Best Practices
- Isolation: Each test is independent and can run in any order
- Mocking: External dependencies are properly mocked
- Coverage: Aim for >80% code coverage
- Performance: Tests run efficiently in CI/CD
- Reliability: Tests are stable and don't have flaky behavior
Continuous Integration
Tests run automatically on:
- Push to main/develop branches
- Pull requests
- Multiple platforms (Ubuntu, macOS, Windows)
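A matrix job along these lines covers all three platforms (a hedged sketch assuming GitHub Actions; job and script names are illustrative):

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - run: yarn install
      - run: yarn test
```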
Test results include:
- Unit test results and coverage
- E2E test results with screenshots/videos
- Visual regression differences
- Performance metrics
- Security audit results
Production
To build for production:
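A typical build (assuming Yarn and the Tauri CLI) bundles the frontend and produces platform installers:

```sh
# Build the frontend and bundle the desktop app
yarn tauri build
```

The bundles land under `src-tauri/target/release/bundle/`.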
Testing Coverage Goals
- Backend: >90% coverage for business logic
- Frontend: >85% coverage for components and stores
- E2E: Cover all major user workflows
- Visual: Test all themes and responsive breakpoints
- Performance: Maintain Lighthouse scores >80
Running All Tests
To run the complete test suite (the script names assume common Yarn conventions; adjust to your `package.json`):

```sh
# Install dependencies
yarn install

# Run all tests (unit, e2e, visual)
yarn test && yarn test:e2e && yarn test:visual
```