System Architecture
Overview
The RACE Management Console follows a modern Flask-based MVC architecture with service layers for API integration, background processing, and AI-powered analysis. The system is designed for industrial data monitoring with real-time event processing and autonomous AI investigation capabilities.
RACE Framework Architecture
The RACE Management Console implements the Rule-Action-Cognition-Events framework as shown in the architectural overview:

Figure 1: RACE MES Management Console architecture - a Manufacturing Execution System built on the AI-powered Rule-Action-Cognition-Events framework, featuring event-driven workflows, a variable cascade engine, a workflow visualizer, and multi-provider AI function calling for industrial automation and real-time monitoring
RACE Components
Rule Engine
- AI Normalization: Intelligent rule processing with AI-enhanced logic
- Event Processing: Real-time evaluation of industrial data streams
- Template System: Reusable rule definitions with placeholder-based configuration
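To make the template idea above concrete, here is a minimal sketch of placeholder resolution for rule templates. The function name and the `{{placeholder}}` token syntax are illustrative assumptions, not the actual RACE `PlaceholderResolver` API:

```python
# Illustrative sketch of placeholder-based rule templates; the token syntax
# and function name are assumptions, not the real PlaceholderResolver.
import re

def resolve_placeholders(template: str, mappings: dict) -> str:
    """Replace {{placeholder}} tokens with instance-specific values."""
    def substitute(match):
        key = match.group(1)
        if key not in mappings:
            raise KeyError(f"Unmapped placeholder: {key}")
        return str(mappings[key])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

# A reusable template deployed against a specific piece of equipment:
template = "{{equipment}}.Temperature > {{threshold}}"
condition = resolve_placeholders(
    template, {"equipment": "Furnace01", "threshold": 450}
)
# condition == "Furnace01.Temperature > 450"
```

Deploying the same template against different equipment then only requires a new mapping, not a new rule definition.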
Action Engine
- Workflow Automation: Automated response execution based on rule triggers
- Microservices Integration: Distributed action processing
- External System Integration: REST API connectivity to external systems
Cognition Engine
- AI-Powered Investigation: Autonomous analysis of MES operations
- Multi-Provider Support: OpenAI, Anthropic, Google AI integration
- Function Calling: Dynamic data retrieval and context analysis
Events Engine
- Lifecycle Management: Complete event tracking from creation to resolution
- Real-time Visualization: Timeline-based event monitoring
- Data Enrichment: Context-aware event data enhancement
Technical Architecture Layers
┌─────────────────────────────────────────────────────────────────┐
│ Frontend Layer │
├─────────────────────────────────────────────────────────────────┤
│ Bootstrap 5 UI │ Chart.js │ Feather Icons │ Vanilla JS │
│ Modern Industrial Navigation │ Connection Sources Menu │
└─────────────────────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────────────────────┐
│ Flask Application Layer │
├─────────────────────────────────────────────────────────────────┤
│ Routes │ Templates │ Static Assets │ Session Management │
└─────────────────────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────────────────────┐
│ Service Layer │
├─────────────────────────────────────────────────────────────────┤
│ Rule Engine │ Monitoring │ AI Services │ Event Processing │
│ PlaceholderResolver │ FunctionCalling │ ContextExtractor │
└─────────────────────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────────────────────┐
│ Data Layer │
├─────────────────────────────────────────────────────────────────┤
│ SQLAlchemy ORM │ PostgreSQL │ Database Models │
└─────────────────────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────────────────────┐
│ External Integrations │
├─────────────────────────────────────────────────────────────────┤
│ CONNECT Data │ Azure IoT │ OPC UA │ MQTT │ Modbus │ REST API │
│ OpenAI │ Anthropic │ Google AI │ Azure OpenAI │
└─────────────────────────────────────────────────────────────────┘
Data Flow Architecture
The system processes data through multiple interconnected pathways:
- CDS/SDS Data Sources: Industrial data from AVEVA CONNECT Data Services
- Events Data: Real-time event streams and historical data
- REST API: External system integration and data exchange
- Rule Engine: Central processing hub with AI normalization
- Action Engine: Automated workflow execution and microservices coordination
- Cognition Engine: AI-powered analysis and autonomous investigation
RACE Integration Points
Rule Engine Integration
- Input Sources: CDS/SDS data streams, Events data, REST API endpoints
- AI Normalization: Intelligent data preprocessing and rule condition evaluation
- Event Generation: Automatic event creation based on rule triggers
- Output: Structured events fed to Action Engine and Events system
Action Engine Integration
- Input: Rule-triggered events and external system requests
- Processing: Workflow automation and microservices orchestration
- External Integration: REST API calls to external systems
- Feedback Loop: Action results fed back to Rule Engine for further processing
Cognition Engine Integration
- AI Providers: Multi-vendor AI integration (OpenAI, Anthropic, Google AI)
- Context Awareness: Real-time access to all system data (Events, Rules, Assets, Streams)
- Function Calling: Autonomous data retrieval and analysis
- Investigation Mode: Proactive MES operations analysis and recommendations
Events Engine Integration
- Event Lifecycle: Complete tracking from creation through resolution
- Data Enrichment: Context-aware enhancement with production data
- Real-time Processing: Immediate event processing and visualization
- Historical Analysis: Long-term trend analysis and pattern recognition
Core Components
1. Flask Application (app.py)
- Purpose: Main application factory and configuration
- Key Features:
- SQLAlchemy integration with PostgreSQL
- Session management and security
- Background engine initialization
- WSGI proxy configuration for production
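The features above follow Flask's application-factory pattern. A minimal sketch, assuming environment-variable names and defaults that may differ from the project's actual app.py:

```python
# Minimal app-factory sketch; config keys and env-var names are illustrative
# assumptions, not the project's actual app.py.
import os
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

def create_app() -> Flask:
    app = Flask(__name__)
    # Session management: secret key from the environment, dev-only fallback
    app.secret_key = os.environ.get("SESSION_SECRET", "dev-only-secret")
    # SQLAlchemy integration: PostgreSQL in production, SQLite in development
    app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get(
        "DATABASE_URL", "sqlite:///dev.db"
    )
    # WSGI proxy fix so scheme/host are resolved correctly behind a reverse proxy
    app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)
    return app
```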
2. Data Models (models.py)
- Package: Top-level organization unit
- RuleTemplate: Reusable rule definitions with placeholders
- TemplateInstance: Deployed instances with specific equipment mappings
- Rule: Individual rule configurations within templates
- RuleEvent: Generated events from rule triggers
- AIProvider: AI service configurations
- AssistantType: AI assistant behavior definitions
- ConversationSession: AI conversation tracking
- MonitoredStream: CONNECT data stream configurations
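Two of the models above can be sketched in SQLAlchemy's declarative style. The column names and relationship shape are assumptions for illustration, not the real models.py:

```python
# Hypothetical sketch of the RuleTemplate/TemplateInstance relationship;
# column names are assumptions, not the project's actual models.py.
from datetime import datetime
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class RuleTemplate(Base):
    __tablename__ = "rule_templates"
    id = Column(Integer, primary_key=True)
    name = Column(String(128), nullable=False)
    condition_template = Column(String, nullable=False)  # holds placeholders
    instances = relationship("TemplateInstance", back_populates="template")

class TemplateInstance(Base):
    __tablename__ = "template_instances"
    id = Column(Integer, primary_key=True)
    template_id = Column(Integer, ForeignKey("rule_templates.id"), nullable=False)
    equipment = Column(String(128), nullable=False)  # equipment mapping
    deployed_at = Column(DateTime, default=datetime.utcnow)
    template = relationship("RuleTemplate", back_populates="instances")
```

The one-to-many relationship mirrors the deployment model: a single reusable template can back many instances, each bound to specific equipment.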
3. Service Layer
Rule Engine (services/rule_engine.py)
class RuleEngine:
    def evaluate_rules_for_stream(self, stream_name, stream_value): ...
    def trigger_rule(self, rule, instance, stream_value): ...
    def close_existing_events(self, rule, instance): ...
Monitoring Engine (services/monitoring_engine.py)
class MonitoringEngine:
    def start_monitoring(self): ...
    def monitor_streams(self): ...
    def evaluate_stream_value(self, stream, new_value): ...
AI Services
- AIConversationService: Multi-provider AI conversation handling
- FunctionCallingService: Autonomous data retrieval
- ContextExtractor: MES context data extraction
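The core of autonomous data retrieval is mapping a model-requested function call onto a local query. A hedged sketch of that dispatch step, with a registry and function names that are purely illustrative (not the real FunctionCallingService):

```python
# Hypothetical function-calling dispatch; registry and function names are
# illustrative, not the actual FunctionCallingService.
import json

FUNCTION_REGISTRY = {}

def register(name):
    """Decorator that exposes a local function to the AI model by name."""
    def wrap(fn):
        FUNCTION_REGISTRY[name] = fn
        return fn
    return wrap

@register("get_recent_events")
def get_recent_events(limit: int = 5):
    # The real service would query the rule_events table here.
    return [{"id": i, "status": "open"} for i in range(limit)]

def dispatch(call: dict) -> str:
    """Execute a model-requested function call and return a JSON result."""
    fn = FUNCTION_REGISTRY.get(call["name"])
    if fn is None:
        return json.dumps({"error": f"unknown function {call['name']}"})
    args = json.loads(call.get("arguments") or "{}")
    return json.dumps(fn(**args))

result = dispatch({"name": "get_recent_events", "arguments": '{"limit": 2}'})
# result == '[{"id": 0, "status": "open"}, {"id": 1, "status": "open"}]'
```

Because all providers (OpenAI, Anthropic, Google AI) emit a function name plus JSON arguments, a single registry like this can serve every backend.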
4. Background Processing
- APScheduler: Periodic monitoring (30-second intervals)
- Stream Monitoring: Real-time data collection from CONNECT
- Event Processing: Rule evaluation and event generation
- Cleanup Jobs: Hourly retention and cap enforcement for monitoring events (age-based expiry plus global and per-stream caps, processed in batches)
Data Flow
1. Configuration Flow
User Config → API Credentials → CONNECT Authentication → Asset Discovery → Stream Registration
2. Monitoring Flow
Scheduled Job → Stream Polling → Value Comparison → Rule Evaluation → Event Generation → UI Update
3. AI Investigation Flow
User Query → Context Selection → Function Calling → Data Retrieval → AI Processing → Response Display
Database Schema
Core Tables
- packages - Top-level rule organization
- rule_templates - Reusable rule definitions
- template_instances - Deployed template instances
- template_placeholders - Instance-specific mappings
- rules - Individual rule configurations
- rule_events - Generated events
- monitored_streams - CONNECT data streams
AI Tables
- ai_providers - AI service configurations
- assistant_types - AI behavior definitions
- conversation_sessions - AI conversation tracking
- conversation_messages - Message history
- function_call_logs - Function execution logs
- api_usage_logs - API usage statistics
Security Architecture
Authentication & Authorization
- Session Management: Flask sessions with secure cookies
- API Authentication: OAuth2 client credentials for CONNECT
- AI Provider Security: Encrypted API key storage
- CSRF Protection: Built-in Flask CSRF protection
Data Protection
- Environment Variables: Sensitive configuration isolation
- Database Encryption: Connection string encryption
- API Rate Limiting: Configurable request throttling
- Input Validation: Comprehensive data sanitization
Scalability Considerations
Performance Optimization
- Database Indexing: Optimized queries for large datasets
- Connection Pooling: Efficient database connection management
- Caching: Stream value caching to reduce API calls
- Background Processing: Non-blocking monitoring operations
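The stream-value caching point above can be sketched as a small TTL cache: remember the last fetched value per stream so unchanged values skip repeated API calls and rule evaluation. Class and method names are illustrative:

```python
# Minimal sketch of per-stream value caching with a TTL; names are
# illustrative, not the actual monitoring-engine implementation.
import time

class StreamValueCache:
    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._entries = {}  # stream_name -> (value, fetched_at)

    def get(self, stream_name):
        entry = self._entries.get(stream_name)
        if entry is None:
            return None
        value, fetched_at = entry
        if time.monotonic() - fetched_at > self.ttl:
            del self._entries[stream_name]  # stale: force a fresh API call
            return None
        return value

    def put(self, stream_name, value):
        self._entries[stream_name] = (value, time.monotonic())

    def has_changed(self, stream_name, new_value) -> bool:
        """True when the value differs from the cached one (or is uncached)."""
        return self.get(stream_name) != new_value
```

A monitoring tick would call `has_changed()` first and only run rule evaluation when it returns `True`.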
Horizontal Scaling
- Stateless Design: Workers hold no in-memory state; user state lives in sessions and the database
- Database Separation: Configurable database endpoints
- Load Balancing: WSGI-compatible deployment
- Microservice Ready: Modular service architecture
Technology Stack
Backend Technologies
- Flask 2.3+: Web framework
- SQLAlchemy 2.0+: ORM and database abstraction
- APScheduler 3.10+: Background job scheduling
- Requests: HTTP client for external APIs
- PostgreSQL: Production database
- Gunicorn: WSGI HTTP server
Frontend Technologies
- Bootstrap 5: UI framework with dark theme
- Chart.js: Data visualization
- Feather Icons: Icon library
- Vanilla JavaScript: Client-side functionality
External Dependencies
- AVEVA CONNECT Data Services: Industrial data platform
- OpenAI API: GPT models and function calling
- Anthropic Claude: Advanced reasoning capabilities
- Google AI: Gemini models
- Azure OpenAI: Enterprise AI services
Deployment Architecture
Development Environment
- Local Database: SQLite for development
- Debug Mode: Enhanced logging and hot reload
- Mock Services: Optional service mocking
Production Environment
- PostgreSQL: Production database with connection pooling
- Gunicorn: Multi-worker WSGI server
- Reverse Proxy: Nginx for static content and load balancing
- SSL/TLS: HTTPS enforcement
- Environment Separation: Configuration isolation
Monitoring & Observability
Application Monitoring
- Structured Logging: Comprehensive application logging
- Performance Metrics: Response time and throughput tracking
- Error Tracking: Exception monitoring and alerting
- Health Checks: Application and dependency health monitoring
Business Monitoring
- Event Metrics: Rule trigger rates and event volumes
- AI Usage: Function call statistics and response times
- Stream Monitoring: Data collection success rates
- User Activity: Session and feature usage tracking