Configuration Reference
Overview
The RACE Management Console uses multiple configuration sources to provide flexibility across different environments and deployment scenarios.
Configuration Hierarchy
- Environment Variables (Highest Priority)
- Configuration Files (config/app_config.json)
- Database Settings (User-configured via UI)
- Default Values (Lowest Priority)
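The precedence above can be sketched as a simple lookup chain. The helper below is illustrative only — the function names, the fallback file path, and the defaults are assumptions, not the console's actual resolver.

```python
import json
import os

# Hypothetical defaults; the real application defines its own.
DEFAULTS = {"items_per_page": 20, "log_level": "INFO"}

def load_file_config(path="config/app_config.json"):
    """Load the optional file-based configuration layer."""
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        return {}

def resolve(key, db_settings=None):
    """Environment variable > config file > database setting > default."""
    env_val = os.environ.get(key.upper())
    if env_val is not None:
        return env_val
    file_cfg = load_file_config()
    if key in file_cfg:
        return file_cfg[key]
    if db_settings and key in db_settings:
        return db_settings[key]
    return DEFAULTS.get(key)
```

Note that environment values arrive as strings, so callers must coerce types themselves.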
Environment Variables
Flask Application Settings
Required Variables
# Flask Secret Key (Required)
FLASK_SECRET_KEY=your-very-secure-secret-key-here
# Database Connection (Required)
DATABASE_URL=postgresql://username:password@localhost:5432/race_console
Optional Flask Settings
# Flask Environment
FLASK_ENV=production # development, production
DEBUG=False # True, False
# Session Configuration
SESSION_COOKIE_SECURE=True # True for HTTPS
SESSION_COOKIE_HTTPONLY=True # Block JavaScript access to the session cookie

SESSION_COOKIE_SAMESITE=Lax # Lax, Strict, None
SESSION_PERMANENT=False # Session persistence
# Security Headers
FORCE_HTTPS=True # Redirect HTTP to HTTPS
Database Configuration
PostgreSQL (Production)
DATABASE_URL=postgresql://username:password@host:port/database
SQLALCHEMY_ENGINE_OPTIONS='{"pool_recycle": 300, "pool_pre_ping": true}'
SQLite (Development)
DATABASE_URL=sqlite:///race_console.db
Connection Pool Settings
SQLALCHEMY_POOL_SIZE=10 # Connection pool size
SQLALCHEMY_POOL_TIMEOUT=20 # Connection timeout (seconds)
SQLALCHEMY_POOL_RECYCLE=3600 # Connection recycle time (seconds)
SQLALCHEMY_MAX_OVERFLOW=20 # Max overflow connections
AI Provider API Keys
OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key
OPENAI_ORGANIZATION=org-your-organization-id
Anthropic Configuration
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key
Google AI Configuration
GEMINI_API_KEY=your-gemini-api-key
Azure OpenAI Configuration
AZURE_OPENAI_API_KEY=your-azure-api-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_VERSION=2023-12-01-preview
AVEVA CONNECT Configuration
# API Credentials (Can also be configured via UI)
CONNECT_CLIENT_ID=your-client-id
CONNECT_CLIENT_SECRET=your-client-secret
CONNECT_REGION=us # us, eu, ap
CONNECT_TENANT_ID=your-tenant-id
CONNECT_NAMESPACE_ID=your-namespace-id
# API Settings
CONNECT_API_TIMEOUT=30 # Request timeout (seconds)
CONNECT_RETRY_ATTEMPTS=3 # Retry attempts
CONNECT_CACHE_DURATION=300 # Token cache duration (seconds)
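Since environment variables are always strings, the numeric settings above need type coercion when read. A minimal helper, sketched here with assumed names (the console's actual loader may differ):

```python
import os

def env_int(name, default):
    """Read an integer environment variable, falling back to a default."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        raise ValueError(f"{name} must be an integer, got {raw!r}")

# Illustrative usage with the CONNECT settings above.
CONNECT_API_TIMEOUT = env_int("CONNECT_API_TIMEOUT", 30)
CONNECT_RETRY_ATTEMPTS = env_int("CONNECT_RETRY_ATTEMPTS", 3)
```

Failing fast on a malformed value is usually preferable to silently falling back, since a typo like `CONNECT_API_TIMEOUT=3O` would otherwise go unnoticed.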
Monitoring and Performance
Background Processing
# Monitoring Engine Settings
MONITORING_INTERVAL=30 # Stream monitoring interval (seconds)
MAX_CONCURRENT_STREAMS=50 # Maximum concurrent stream monitoring
STREAM_TIMEOUT=10 # Stream request timeout (seconds)
# Scheduler Settings
SCHEDULER_TIMEZONE=UTC # Scheduler timezone
MAX_WORKER_THREADS=5 # Maximum worker threads
Logging Configuration
# Logging Levels
LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR, CRITICAL
SQL_LOG_LEVEL=WARNING # Database query logging level
# Log File Settings
LOG_FILE_PATH=/var/log/race-console/app.log
LOG_MAX_BYTES=10485760 # 10MB
LOG_BACKUP_COUNT=5 # Number of backup files
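The log file settings above map directly onto Python's stdlib rotating handler. A sketch of the wiring, assuming the application uses stdlib `logging` (the logger name and format are illustrative):

```python
import logging
import logging.handlers

def build_logger(log_path, level="INFO", max_bytes=10 * 1024 * 1024, backups=5):
    """Create a logger that rotates at LOG_MAX_BYTES, keeping LOG_BACKUP_COUNT files."""
    logger = logging.getLogger("race_console")
    logger.setLevel(getattr(logging, level))
    handler = logging.handlers.RotatingFileHandler(
        log_path, maxBytes=max_bytes, backupCount=backups
    )
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    logger.addHandler(handler)
    return logger
```

With `max_bytes=10485760` and `backups=5`, rotation produces `app.log`, `app.log.1` … `app.log.5`, matching the settings above.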
Security Settings
Authentication and Authorization
# Session Security
SESSION_TIMEOUT=3600 # Session timeout (seconds)
MAX_LOGIN_ATTEMPTS=5 # Maximum login attempts
LOCKOUT_DURATION=300 # Account lockout duration (seconds)
# CSRF Protection
WTF_CSRF_ENABLED=True # Enable CSRF protection
WTF_CSRF_TIME_LIMIT=3600 # CSRF token timeout (seconds)
API Rate Limiting
# Rate Limiting
RATE_LIMIT_ENABLED=True # Enable rate limiting
DEFAULT_RATE_LIMIT=100 # Requests per minute per IP
AI_RATE_LIMIT=20 # AI API requests per minute
HEAVY_OPERATION_RATE_LIMIT=10 # Heavy operations per minute
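The per-minute limits above can be enforced with a fixed-window counter. The class below is an illustrative sketch of the idea — the console itself may use a library such as Flask-Limiter rather than this hand-rolled version:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per key per window (default: one minute)."""

    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)

    def allow(self, key, now=None):
        """Return True if `key` still has quota in the current window."""
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))
        if self.counts[bucket] >= self.limit:
            return False
        self.counts[bucket] += 1
        return True
```

For example, `FixedWindowLimiter(100)` models `DEFAULT_RATE_LIMIT=100`, keyed by client IP; a separate `FixedWindowLimiter(20)` would cover `AI_RATE_LIMIT`.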
Connection Sources
The Configuration menu is organized around industrial connectivity. The Connection Sources section provides access to the supported data integration protocols:
Available Connection Types
Connect Data Source (AVEVA CONNECT)
Primary data source for AVEVA CONNECT Data Services integration:
- OAuth2 client credentials authentication
- Multi-regional support (US, EU, AP)
- Real-time stream monitoring
- Asset and namespace discovery
- RESTful API access
Configuration Location: Configuration → Connection Sources → Connect Data Source
Azure IoT Hub
Enterprise cloud IoT platform integration:
- Device-to-cloud messaging
- Cloud-to-device commands
- Device twin synchronization
- IoT Central integration
- Azure Monitor integration
Status: Coming Soon - UI placeholder available
OPC UA Server
Industrial automation standard protocol:
- Client/server architecture
- Secure communication channels
- Data modeling and discovery
- Historical data access
- Subscription-based updates
Status: Coming Soon - UI placeholder available
MQTT Broker
Lightweight messaging protocol for IoT:
- Publish/subscribe messaging
- Quality of Service levels
- Retained messages
- Will messages
- Topic-based routing
Status: Coming Soon - UI placeholder available
Modbus TCP
Industrial communication protocol:
- TCP/IP network communication
- Master/slave architecture
- Register-based data access
- Function code support
- Multiple device support
Status: Coming Soon - UI placeholder available
REST API
HTTP-based web service integration:
- RESTful endpoints
- JSON data exchange
- Authentication support
- Rate limiting
- Error handling
Status: Coming Soon - UI placeholder available
Menu Organization
The Configuration dropdown is organized into logical sections:
- Connection Sources - Data integration protocols
- AI & System Configuration - AI providers and assistant types
- System Management - System monitoring and parameters
This structure provides intuitive navigation while covering all industrial connectivity needs from enterprise cloud platforms to traditional fieldbus protocols.
Configuration Files
Application Configuration (config/app_config.json)
{
  "app": {
    "name": "RACE Management Console",
    "version": "1.0.0",
    "description": "Rule-Action-Cognition-Events Management System"
  },
  "ui": {
    "theme": "dark",
    "default_language": "en",
    "items_per_page": 20,
    "auto_refresh_interval": 30000,
    "chart_colors": {
      "primary": "#007bff",
      "success": "#28a745",
      "warning": "#ffc107",
      "danger": "#dc3545",
      "info": "#17a2b8"
    }
  },
  "monitoring": {
    "default_interval": 30,
    "max_events_history": 10000,
    "event_retention_days": 365,
    "cleanup_interval_hours": 24
  },
  "ai": {
    "default_max_tokens": 4000,
    "default_temperature": 0.7,
    "function_call_timeout": 30,
    "max_context_items": 50,
    "conversation_history_limit": 100
  },
  "api": {
    "pagination": {
      "default_limit": 20,
      "max_limit": 100
    },
    "timeouts": {
      "default": 30,
      "heavy_operations": 120
    },
    "retry": {
      "max_attempts": 3,
      "backoff_factor": 2
    }
  }
}
System Tags Configuration (config/system_tags.json)
{
  "tags": [
    {
      "name": "EQUIPMENT",
      "description": "Equipment identifier placeholder",
      "type": "string",
      "validation_pattern": "^[A-Za-z0-9_-]+$"
    },
    {
      "name": "LINE",
      "description": "Production line identifier",
      "type": "string",
      "validation_pattern": "^[A-Za-z0-9_-]+$"
    },
    {
      "name": "ZONE",
      "description": "Plant zone identifier",
      "type": "string",
      "validation_pattern": "^[A-Za-z0-9_-]+$"
    },
    {
      "name": "AREA",
      "description": "Plant area identifier",
      "type": "string",
      "validation_pattern": "^[A-Za-z0-9_-]+$"
    }
  ],
  "validation": {
    "strict_mode": true,
    "require_mapping": true
  }
}
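Enforcing the `validation_pattern` entries above amounts to a regex full-match per tag. A minimal sketch, assuming the file layout shown (the function name and strict-mode behavior are illustrative, not the console's actual validator):

```python
import re

def validate_tag_value(tags_config, tag_name, value):
    """Check `value` against the validation_pattern for `tag_name`."""
    for tag in tags_config["tags"]:
        if tag["name"] == tag_name:
            return re.fullmatch(tag["validation_pattern"], value) is not None
    # Unknown tags are rejected when strict_mode is enabled.
    if tags_config["validation"]["strict_mode"]:
        return False
    return True
```

With the pattern `^[A-Za-z0-9_-]+$`, a value like `PUMP_01` passes while anything containing spaces or punctuation fails.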
Database Configuration
Connection Parameters
Production PostgreSQL Settings
-- postgresql.conf optimizations
shared_buffers = 256MB
effective_cache_size = 1GB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 4MB
max_connections = 100
Database Indexes
-- Performance indexes
CREATE INDEX idx_rule_events_active ON rule_events(is_active, start_time);
CREATE INDEX idx_rule_events_template_instance ON rule_events(template_instance_id);
CREATE INDEX idx_monitored_streams_active ON monitored_streams(is_active);
CREATE INDEX idx_conversation_sessions_provider ON conversation_sessions(ai_provider_id);
CREATE INDEX idx_function_call_logs_session ON function_call_logs(session_id);
Backup Configuration
Automated Backup Script
#!/bin/bash
# /opt/race-console/scripts/backup.sh
set -euo pipefail

BACKUP_DIR="/backups/postgresql"
DATE=$(date +%Y%m%d_%H%M%S)
DB_NAME="race_console"
DB_USER="race_user"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Perform backup
pg_dump -h localhost -U "$DB_USER" "$DB_NAME" | gzip > "$BACKUP_DIR/race_console_$DATE.sql.gz"

# Cleanup old backups (keep last 30 days)
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

# Log backup completion
echo "$(date): Backup completed - race_console_$DATE.sql.gz" >> /var/log/race-console/backup.log
AI Provider Configuration
OpenAI Configuration
Model Settings
{
  "openai": {
    "models": {
      "gpt-4": {
        "max_tokens": 8192,
        "temperature": 0.7,
        "supports_function_calling": true
      },
      "gpt-4-turbo": {
        "max_tokens": 128000,
        "temperature": 0.7,
        "supports_function_calling": true
      }
    },
    "default_model": "gpt-4",
    "timeout": 60,
    "max_retries": 3
  }
}
Anthropic Configuration
Claude Settings
{
  "anthropic": {
    "models": {
      "claude-3-sonnet-20240229": {
        "max_tokens": 4096,
        "temperature": 0.7,
        "supports_function_calling": true
      },
      "claude-3-opus-20240229": {
        "max_tokens": 4096,
        "temperature": 0.7,
        "supports_function_calling": true
      }
    },
    "default_model": "claude-3-sonnet-20240229",
    "timeout": 60,
    "max_retries": 3
  }
}
Google AI Configuration
Gemini Settings
{
  "google": {
    "models": {
      "gemini-pro": {
        "max_tokens": 4096,
        "temperature": 0.7,
        "supports_function_calling": true
      },
      "gemini-pro-vision": {
        "max_tokens": 4096,
        "temperature": 0.7,
        "supports_function_calling": false
      }
    },
    "default_model": "gemini-pro",
    "timeout": 60,
    "max_retries": 3
  }
}
Monitoring Configuration
APScheduler Settings
# Scheduler configuration
import os

SCHEDULER_CONFIG = {
    'apscheduler.jobstores.default': {
        'type': 'sqlalchemy',
        'url': os.environ.get('DATABASE_URL')
    },
    'apscheduler.executors.default': {
        'class': 'apscheduler.executors.pool:ThreadPoolExecutor',
        'max_workers': 5
    },
    'apscheduler.job_defaults.coalesce': False,
    'apscheduler.job_defaults.max_instances': 3,
    'apscheduler.timezone': 'UTC'
}
Event Monitoring
Stream Monitoring Settings
{
  "monitoring": {
    "stream_polling": {
      "interval_seconds": 30,
      "timeout_seconds": 10,
      "max_concurrent": 50,
      "retry_attempts": 3
    },
    "event_processing": {
      "batch_size": 100,
      "processing_timeout": 30,
      "enrichment_timeout": 10
    },
    "cleanup": {
      "interval_hours": 24,
      "retention_days": 365,
      "batch_size": 1000
    }
  }
}
Performance Tuning
Caching Configuration
Redis Settings (Optional)
# Redis configuration
REDIS_URL=redis://localhost:6379/0
CACHE_TYPE=redis
CACHE_DEFAULT_TIMEOUT=300
CACHE_KEY_PREFIX=race_console_
In-Memory Caching
# Flask-Caching configuration
CACHE_CONFIG = {
    'CACHE_TYPE': 'simple',
    'CACHE_DEFAULT_TIMEOUT': 300
}
Database Performance
Connection Pool Tuning
# SQLAlchemy engine options
ENGINE_OPTIONS = {
    'pool_size': 10,
    'pool_timeout': 20,
    'pool_recycle': 3600,
    'max_overflow': 20,
    'pool_pre_ping': True
}
Query Optimization
# Pagination settings
PAGINATION_CONFIG = {
    'default_per_page': 20,
    'max_per_page': 100,
    'error_out': False
}
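Enforcing the pagination limits above is a small clamping exercise. The helper below is an illustrative sketch (the function name and defaults mirror the config, but are not the console's actual code):

```python
def clamp_page_size(requested, default=20, maximum=100):
    """Fall back to the default when missing or invalid; cap at the maximum."""
    try:
        size = int(requested)
    except (TypeError, ValueError):
        return default
    if size < 1:
        return default
    return min(size, maximum)
```

This mirrors `default_per_page=20` and `max_per_page=100`: an absent or malformed `per_page` query parameter yields 20, and an oversized request is capped at 100 rather than rejected.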
Security Configuration
HTTPS and SSL
SSL Certificate Configuration
# Nginx SSL configuration
ssl_certificate /path/to/certificate.crt;
ssl_certificate_key /path/to/private.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
Security Headers
# Flask security headers
SECURITY_HEADERS = {
    'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
    'X-Content-Type-Options': 'nosniff',
    'X-Frame-Options': 'DENY',
    'X-XSS-Protection': '1; mode=block',
    'Referrer-Policy': 'strict-origin-when-cross-origin'
}
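In Flask, a header map like this is typically applied to every response from an `@app.after_request` hook. A framework-agnostic sketch of the merge step (the function name is illustrative, and `setdefault` is used so route-specific headers are not overwritten):

```python
# Same header map as above, repeated so this sketch is self-contained.
SECURITY_HEADERS = {
    'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
    'X-Content-Type-Options': 'nosniff',
    'X-Frame-Options': 'DENY',
    'X-XSS-Protection': '1; mode=block',
    'Referrer-Policy': 'strict-origin-when-cross-origin',
}

def apply_security_headers(headers):
    """Merge the security headers into a response header mapping in place."""
    for name, value in SECURITY_HEADERS.items():
        headers.setdefault(name, value)
    return headers
```

Inside Flask this body would run in an `after_request` function against `response.headers` before the response is returned.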
Input Validation
Request Validation
# Input validation settings
VALIDATION_CONFIG = {
    'max_content_length': 16 * 1024 * 1024,  # 16MB
    'allowed_file_types': ['.json', '.csv', '.txt'],
    'max_string_length': 1000,
    'sql_injection_protection': True
}
Development Configuration
Development Environment
# Development settings
FLASK_ENV=development
DEBUG=True
TESTING=False
# Database
DATABASE_URL=sqlite:///dev_race_console.db
# Logging
LOG_LEVEL=DEBUG
SQL_ECHO=True
# AI Providers (optional for development)
OPENAI_API_KEY=sk-your-dev-key
Testing Configuration
# Test environment
FLASK_ENV=testing
TESTING=True
DATABASE_URL=sqlite:///:memory:
# Disable external calls
MOCK_EXTERNAL_APIS=True
SKIP_AI_PROVIDERS=True
Configuration Validation
Environment Validation Script
#!/usr/bin/env python3
# scripts/validate_config.py
import os
import sys
from urllib.parse import urlparse

def validate_database_url():
    """Validate database URL format"""
    db_url = os.environ.get('DATABASE_URL')
    if not db_url:
        print("ERROR: DATABASE_URL not set")
        return False
    try:
        parsed = urlparse(db_url)
        # SQLite URLs (sqlite:///path) have no network location.
        if parsed.scheme.startswith('sqlite'):
            return True
        if not all([parsed.scheme, parsed.netloc]):
            print("ERROR: Invalid DATABASE_URL format")
            return False
    except Exception as e:
        print(f"ERROR: Database URL validation failed: {e}")
        return False
    return True

def validate_secret_key():
    """Validate Flask secret key"""
    secret_key = os.environ.get('FLASK_SECRET_KEY')
    if not secret_key:
        print("ERROR: FLASK_SECRET_KEY not set")
        return False
    if len(secret_key) < 32:
        print("ERROR: FLASK_SECRET_KEY must be at least 32 characters")
        return False
    return True

def main():
    """Run all configuration validations"""
    checks = [
        validate_database_url,
        validate_secret_key,
    ]
    failed_checks = []
    for check in checks:
        if not check():
            failed_checks.append(check.__name__)
    if failed_checks:
        print(f"Configuration validation failed: {failed_checks}")
        sys.exit(1)
    print("Configuration validation passed")
    sys.exit(0)

if __name__ == '__main__':
    main()
Usage
# Validate configuration before deployment
python scripts/validate_config.py
Troubleshooting Configuration
Common Issues
Database Connection Problems
# Test database connection
python -c "
from sqlalchemy import text
from app import app, db
with app.app_context():
    try:
        db.session.execute(text('SELECT 1'))
        print('Database connection successful')
    except Exception as e:
        print(f'Database connection failed: {e}')
"
AI Provider Connection Issues
# Test AI provider connection
python -c "
import os
from services.ai_conversation import AIConversationService
service = AIConversationService()
# Test specific provider
"
Configuration File Loading
# Check configuration file syntax
python -c "
import json
try:
    with open('config/app_config.json') as f:
        config = json.load(f)
    print('Configuration file is valid JSON')
except Exception as e:
    print(f'Configuration file error: {e}')
"
Debug Mode
Enable Debug Logging
# Temporary debug configuration
import logging
logging.basicConfig(level=logging.DEBUG)
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
Configuration Dump
# Debug configuration values
def dump_config():
    """Print current configuration for debugging"""
    from app import app
    print("Current Configuration:")
    for key, value in app.config.items():
        if 'SECRET' in key or 'PASSWORD' in key:
            print(f"{key}: {'*' * len(str(value))}")
        else:
            print(f"{key}: {value}")
Monitoring Event Retention & Cleanup
The monitoring_event table is kept bounded by time-based retention and record-count caps. Configure these under monitoring in config/app_config.json (overrides) or config/default_config.json (defaults):
{
  "monitoring": {
    "event_cleanup_days": 7,
    "max_total_events": 300000,
    "max_events_per_stream": 10000,
    "batch_cleanup_size": 10000
  }
}
- event_cleanup_days: Delete events older than N days
- max_total_events: Global cap on total events; deletes oldest records beyond this cap
- max_events_per_stream: Per-stream cap; deletes oldest records beyond this cap for noisy streams
- batch_cleanup_size: Batch size for deletions to avoid long locks
Cleanup runs hourly via APScheduler. You can also trigger manual cleanup from the Admin UI (see User Guide → Administration).
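The batched deletion described by batch_cleanup_size can be sketched as a loop that deletes the oldest rows in chunks until none remain. The example below uses SQLite for portability and assumes hypothetical column names (`id`, `event_time`); the actual cleanup job runs against PostgreSQL via SQLAlchemy:

```python
import sqlite3

def delete_old_events(conn, cutoff_ts, batch_size=10000):
    """Delete events older than cutoff_ts in batches to avoid long locks."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM monitoring_event WHERE id IN ("
            "  SELECT id FROM monitoring_event"
            "  WHERE event_time < ? ORDER BY event_time LIMIT ?)",
            (cutoff_ts, batch_size),
        )
        conn.commit()  # release locks between batches
        if cur.rowcount == 0:
            return total
        total += cur.rowcount
```

Deleting via a subquery with `LIMIT` keeps each transaction small, so concurrent inserts from the monitoring engine are never blocked for long.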