Compute
Generate statistical summaries of data over time periods for trend analysis.
Compute Executor: Statistical Data Summaries
The Compute Executor generates statistical summaries of data point values over configurable time periods. Calculate averages, minimums, maximums, standard deviations, and other statistical measures to gain insights from your time-series data and reduce storage requirements for long-term analysis.
Info: This is an Executor node type: it performs its work when executed by a parent node. When triggered, it retrieves historical data from the time-series database and computes the requested statistical operation.
Info: This feature uses TargetingNodeMetaData, which means it can be configured to read from a source Data Point’s historical data and write computed results to a target Data Point. This enables distributing computed summaries across your server mesh.
Overview
Compute Executors analyze historical time-series data to produce meaningful statistical summaries. Instead of processing every individual data point, you can aggregate data over hours, days, or weeks to identify trends, monitor performance, and archive long-term statistics while conserving storage space.
Key Features
- Time Range Selection: Minute, Hour, Day, Week, Month, Year
- Multiple Operations: Average/Mean, Median, High, Low, Sum, Count, Range, First, Last, StdDev
- Historical Analysis: Queries time-series database for data samples
- Remote Storage: Send computed results to data points on other servers
- Trend Detection: Identify patterns over time
- Data Reduction: Compress large datasets into summary statistics
- Flexible Scheduling: Trigger via cron for periodic summary generation
Compute Processing Flow
```mermaid
graph TD
    A[Parent Node Triggers] --> B[Read Source Data Point]
    B --> C[Calculate Time Range]
    C --> D[Query Time-Series Database]
    D --> E{Data Available?}
    E -->|No| F[Log Warning]
    E -->|Yes| G[Extract Values]
    G --> H[Apply Statistical Operation]
    H --> I[Create Summary Snapshot]
    I --> J[Update Target Data Point]
    J --> K[Execute Children]
```
Supported Statistical Operations
| Operation | Description | Use Case |
|---|---|---|
| AVERAGE / MEAN | Arithmetic mean of all values | Temperature trends |
| MEDIAN | Middle value when sorted | Robust central value (less sensitive to outliers) |
| HIGH / MAX | Maximum value in range | Peak detection |
| LOW / MIN | Minimum value in range | Trough detection |
| SUM | Total of all values | Energy consumption totals |
| COUNT | Number of data points | Activity level |
| RANGE | Difference between high and low | Variability measure |
| FIRST | First value in time range | Starting value |
| LAST | Most recent value | Current value |
| STDDEV | Standard deviation | Data spread/consistency |
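As a rough sketch of what these operations compute, the table above can be expressed with Python's standard `statistics` module. The mapping and function names here are illustrative assumptions, not the executor's actual internals:

```python
import statistics

# Hypothetical operation table; the real Compute Executor's internals
# are not documented here.
OPERATIONS = {
    "AVERAGE": statistics.fmean,   # alias of MEAN
    "MEAN": statistics.fmean,
    "MEDIAN": statistics.median,
    "HIGH": max,                   # alias of MAX
    "LOW": min,                    # alias of MIN
    "SUM": sum,
    "COUNT": len,
    "RANGE": lambda vs: max(vs) - min(vs),
    "FIRST": lambda vs: vs[0],     # values assumed sorted by timestamp
    "LAST": lambda vs: vs[-1],
    "STDDEV": statistics.stdev,    # sample standard deviation assumed
}

def compute(operation: str, values: list[float]) -> float:
    """Apply one of the supported statistical operations to a value list."""
    if not values:
        raise ValueError("no samples in range")
    return OPERATIONS[operation](values)
```

Whether STDDEV uses the sample or population formula is not specified in this documentation; the sketch assumes the sample form.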
Time Range Options
| Range | Description | Sample Count (at 1 sample/sec) |
|---|---|---|
| MINUTE | Last 60 seconds | 60+ samples |
| HOUR | Last 60 minutes | 3,600+ samples |
| DAY | Last 24 hours | 86,400+ samples |
| WEEK | Last 7 days | 604,800+ samples |
| MONTH | Last 30 days | 2.6M+ samples |
| YEAR | Last 365 days | 31.5M+ samples |
How It Works
When a Compute Executor runs:
1. Trigger: Parent node (typically a Cron Timer) activates the executor
2. Source Identification: Identifies which Data Point to analyze
3. Time Range Calculation: Computes the start time based on the range selection
4. Database Query: Retrieves all samples in the range from the time-series database
5. Value Extraction: Converts samples to numeric values
6. Statistical Computation: Applies the selected operation to the values
7. Result Snapshot: Creates a new snapshot with the computed value
8. Target Update: Writes the summary to the target Data Point
9. Distribution: The target can be on any Krill Server in the mesh network
Configuration
| Field | Description | Required |
|---|---|---|
| operation | Statistical operation to perform | Yes |
| range | Time range to analyze | Yes |
| sources | Source Data Point ID (historical data) | Yes |
| targets | Target Data Point ID (summary destination) | Yes |
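As a sketch only, a configuration for an hourly average might look like the following. The serialization format, the array shape of `sources`/`targets`, and the data point IDs are all assumptions; only the four field names come from the table above:

```json
{
  "operation": "AVERAGE",
  "range": "HOUR",
  "sources": ["temperature-raw"],
  "targets": ["temperature-hourly-avg"]
}
```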
Use Cases
- Hourly Averages: Compute average temperature each hour
- Daily Summaries: Calculate daily energy consumption totals
- Weekly Reports: Generate weekly performance statistics
- Trend Analysis: Track maximum/minimum values over time
- Data Archival: Store summaries while purging raw data
- Alerting: Trigger alerts based on statistical thresholds
- Dashboard Metrics: Display summary statistics in real-time
- Performance Monitoring: Track equipment efficiency over time
Example Workflows
Hourly Temperature Average:
- Trigger: Cron Timer (every hour on the hour)
- Executor: Compute
- Operation: AVERAGE
- Range: HOUR
- Source: Temperature Data Point (raw readings)
- Target: Hourly Temp Average Data Point
- Result: One summary value per hour
Daily Energy Consumption:
- Trigger: Cron Timer (midnight daily)
- Executor: Compute
- Operation: SUM
- Range: DAY
- Source: Power Usage Data Point (watts)
- Target: Daily Energy Total (kWh)
- Result: Total energy consumed per day
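One detail worth noting in this workflow: summing raw watt readings gives watt-seconds (joules), not kWh, so a unit conversion is implied somewhere. Assuming one sample per second, the arithmetic is:

```python
def watt_samples_to_kwh(watt_samples: list[float], sample_interval_s: float = 1.0) -> float:
    """Convert per-sample power readings (watts) into energy (kWh).

    SUM of watt samples times the sample interval gives watt-seconds
    (joules); 1 kWh = 3,600,000 joules.
    """
    watt_seconds = sum(watt_samples) * sample_interval_s
    return watt_seconds / 3_600_000
```

For example, one hour of steady 1,000 W sampled once per second sums to 3.6 million watt-seconds, i.e. exactly 1 kWh.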
Weekly Peak Detection:
- Trigger: Cron Timer (Sunday at midnight)
- Executor: Compute
- Operation: HIGH
- Range: WEEK
- Source: Pressure Data Point
- Target: Weekly Peak Pressure
- Result: Maximum pressure recorded each week
Statistical Quality Control:
- Trigger: Cron Timer (every 6 hours)
- Executor: Compute (AVERAGE)
- Executor: Compute (STDDEV)
- Executor: Calculation (coefficient of variation)
- Trigger: Threshold (alert if too variable)
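The coefficient of variation computed by the Calculation step above is simply the standard deviation divided by the mean. A minimal sketch (the function name is hypothetical):

```python
import statistics

def coefficient_of_variation(values: list[float]) -> float:
    """CV = stddev / mean: a dimensionless measure of relative variability."""
    mean = statistics.fmean(values)
    if mean == 0:
        raise ValueError("CV is undefined for zero mean")
    return statistics.stdev(values) / mean
```

A high CV would then feed the Threshold trigger to flag an unusually variable signal.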
Data Archival Strategy
Compute enables efficient long-term data retention:
```
Raw Data (1 sample/sec)
└─> Hourly Averages (keep 1 week)
    └─> Daily Summaries (keep 1 month)
        └─> Monthly Stats (keep forever)
```
This reduces storage from millions of samples to hundreds while preserving trends.
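The "millions to hundreds" claim can be checked with quick arithmetic, assuming one raw sample per second (row counts only; on-disk size depends on the storage engine):

```python
# Each tier of the retention ladder replaces many rows with one summary.
raw_per_day = 86_400                 # 1 sample/sec for 24 hours
hourly_summaries_per_day = 24        # one AVERAGE per hour
reduction_factor = raw_per_day // hourly_summaries_per_day

print(reduction_factor)  # 3600: each hourly summary replaces 3600 raw samples
```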
Integration with Distributed Architecture
Since targets can be on any Krill Server:
```
Edge Server (collects raw data)
└─> Compute Executor
    └─> Target: Central Archive Server
```
This allows edge devices to process and summarize data locally, then send only summaries to central servers for long-term storage.
Multi-Operation Analysis
Chain multiple Compute executors for comprehensive analysis:
- Compute AVERAGE → Average value
- Compute STDDEV → Variability
- Compute HIGH → Peak value
- Compute LOW → Minimum value
- Compute RANGE → Total variation
All from the same time period of source data.
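An illustrative sketch of what such a chain produces for one window of source values. In practice each Compute Executor writes its result to its own target Data Point; here the five summaries are just collected in a dict:

```python
import statistics

# One shared window of source values (illustrative readings).
window = [20.1, 21.4, 19.8, 22.0, 20.7]

summary = {
    "AVERAGE": statistics.fmean(window),
    "STDDEV": statistics.stdev(window),
    "HIGH": max(window),
    "LOW": min(window),
    "RANGE": max(window) - min(window),
}

print(summary["HIGH"], summary["LOW"])  # 22.0 19.8
```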
Performance Considerations
- Sample Count: Larger time ranges process more samples
- Database I/O: Queries hit the time-series database
- Frequency: Balance summary frequency vs. system load
- Cleanup: Use summaries to justify purging old raw data
Best Practices
- Schedule Wisely: Run hourly summaries at :00, daily at midnight, etc.
- Target Organization: Create dedicated summary data points
- Data Retention: Define retention policies for different granularities
- Validation: Ensure source data points have sufficient history
- Monitoring: Log compute operations for debugging
- Backup Strategies: Archive summaries to remote servers
- Documentation: Label summary data points clearly
Example: Complete Monitoring System
```
Temperature Sensor (1 sample/sec)
├─> Hourly Compute (AVERAGE) → Hourly Temp
├─> Daily Compute (HIGH) → Daily Max Temp
├─> Daily Compute (LOW) → Daily Min Temp
└─> Weekly Compute (STDDEV) → Weekly Temp Variation
```
Each summary provides different insights while reducing storage needs.
The Compute Executor is essential for transforming raw time-series data into actionable statistical insights and managing long-term data efficiently.