mirror of
https://github.com/The-Low-Code-Foundation/OpenNoodl.git
synced 2026-03-08 01:53:30 +01:00
feat: Phase 5 BYOB foundation + Phase 3 GitHub integration
Phase 5 - BYOB Backend (TASK-007A/B):
- LocalSQL Adapter with full CloudStore API compatibility
- QueryBuilder translates Parse-style queries to SQL
- SchemaManager with PostgreSQL/Supabase export
- LocalBackendServer with REST endpoints
- BackendManager with IPC handlers for Electron
- In-memory fallback when better-sqlite3 unavailable

Phase 3 - GitHub Panel (GIT-004):
- Issues tab with list/detail views
- Pull Requests tab with list/detail views
- GitHub API client with OAuth support
- Repository info hook integration

Phase 3 - Editor UX Bugfixes (TASK-013):
- Legacy runtime detection banners
- Read-only enforcement for legacy projects
- Code editor modal close improvements
- Property panel stuck state fix
- Blockly node deletion and UI polish

Phase 11 - Cloud Functions Planning:
- Architecture documentation for workflow automation
- Execution history storage schema design
- Canvas overlay concept for debugging

Docs: Updated LEARNINGS.md and COMMON-ISSUES.md
# CF11-001: Logic Nodes (IF/Switch/ForEach/Merge)

## Metadata

| Field | Value |
| ------------------ | ------------------------------------ |
| **ID** | CF11-001 |
| **Phase** | Phase 11 |
| **Series** | 1 - Advanced Workflow Nodes |
| **Priority** | 🟡 High |
| **Difficulty** | 🟡 Medium |
| **Estimated Time** | 12-16 hours |
| **Prerequisites** | Phase 5 TASK-007C (Workflow Runtime) |
| **Branch** | `feature/cf11-001-logic-nodes` |

## Objective

Create advanced workflow logic nodes that enable conditional branching, multi-way routing, array iteration, and parallel execution paths. These are the control-flow essentials for building non-trivial automation workflows.

## Background

Current Noodl nodes are designed for UI and data flow but lack the control-flow constructs needed for workflow automation. To compete with n8n/Zapier, we need:

- **IF Node**: Route data based on conditions
- **Switch Node**: Multi-way branching (like a switch statement)
- **For Each Node**: Iterate over arrays
- **Merge Node**: Combine multiple execution paths

## Current State

- A basic condition node exists, but it isn't suited for workflows
- No iteration nodes
- No way to merge parallel execution paths

## Desired State

- IF node with visual expression builder
- Switch node with multiple case outputs
- For Each node for array iteration
- Merge node to combine paths
- All nodes work in the CloudRunner workflow context

## Scope

### In Scope

- [ ] IF Node implementation
- [ ] Switch Node implementation
- [ ] For Each Node implementation
- [ ] Merge Node implementation
- [ ] Property editor integrations
- [ ] CloudRunner execution support

### Out of Scope

- UI runtime nodes (frontend-only)
- Visual expression builder (can use existing or defer)

## Technical Approach

### IF Node

Routes execution based on a boolean condition.

```typescript
// packages/noodl-viewer-cloud/src/nodes/logic/IfNode.ts

const IfNode = {
  name: 'Workflow IF',
  displayName: 'IF',
  category: 'Workflow Logic',
  color: 'logic',

  inputs: {
    condition: {
      type: 'boolean',
      displayName: 'Condition',
      description: 'Boolean expression to evaluate'
    },
    data: {
      type: '*',
      displayName: 'Data',
      description: 'Data to pass through'
    },
    run: {
      type: 'signal',
      displayName: 'Run',
      description: 'Trigger to evaluate condition'
    }
  },

  outputs: {
    onTrue: {
      type: 'signal',
      displayName: 'True',
      description: 'Triggered when condition is true'
    },
    onFalse: {
      type: 'signal',
      displayName: 'False',
      description: 'Triggered when condition is false'
    },
    data: {
      type: '*',
      displayName: 'Data',
      description: 'Pass-through data'
    }
  },

  run(context) {
    const condition = context.inputs.condition;
    context.outputs.data = context.inputs.data;

    if (condition) {
      context.triggerOutput('onTrue');
    } else {
      context.triggerOutput('onFalse');
    }
  }
};
```

### Switch Node

Routes to one of multiple outputs based on value matching.

```typescript
// packages/noodl-viewer-cloud/src/nodes/logic/SwitchNode.ts

const SwitchNode = {
  name: 'Workflow Switch',
  displayName: 'Switch',
  category: 'Workflow Logic',
  color: 'logic',

  inputs: {
    value: {
      type: '*',
      displayName: 'Value',
      description: 'Value to switch on'
    },
    data: {
      type: '*',
      displayName: 'Data'
    },
    run: {
      type: 'signal',
      displayName: 'Run'
    }
  },

  outputs: {
    default: {
      type: 'signal',
      displayName: 'Default',
      description: 'Triggered if no case matches'
    },
    data: {
      type: '*',
      displayName: 'Data'
    }
  },

  // Dynamic outputs for cases - configured via property panel
  dynamicports: {
    outputs: {
      cases: {
        type: 'signal'
        // Generated from cases array: case_0, case_1, etc.
      }
    }
  },

  setup(context) {
    // Register cases from configuration
    const cases = context.parameters.cases || [];
    cases.forEach((caseValue, index) => {
      context.registerOutput(`case_${index}`, {
        type: 'signal',
        displayName: `Case: ${caseValue}`
      });
    });
  },

  run(context) {
    const value = context.inputs.value;
    const cases = context.parameters.cases || [];
    context.outputs.data = context.inputs.data;

    // Note: indexOf uses strict equality. Cases configured in the property
    // panel are strings, so coerce the input value if numeric matching is needed.
    const matchIndex = cases.indexOf(value);
    if (matchIndex >= 0) {
      context.triggerOutput(`case_${matchIndex}`);
    } else {
      context.triggerOutput('default');
    }
  }
};
```

### For Each Node

Iterates over an array, executing the output for each item.

```typescript
// packages/noodl-viewer-cloud/src/nodes/logic/ForEachNode.ts

const ForEachNode = {
  name: 'Workflow For Each',
  displayName: 'For Each',
  category: 'Workflow Logic',
  color: 'logic',

  inputs: {
    items: {
      type: 'array',
      displayName: 'Items',
      description: 'Array to iterate over'
    },
    run: {
      type: 'signal',
      displayName: 'Run'
    }
  },

  outputs: {
    iteration: {
      type: 'signal',
      displayName: 'For Each Item',
      description: 'Triggered for each item'
    },
    currentItem: {
      type: '*',
      displayName: 'Current Item'
    },
    currentIndex: {
      type: 'number',
      displayName: 'Index'
    },
    completed: {
      type: 'signal',
      displayName: 'Completed',
      description: 'Triggered when iteration is complete'
    },
    allResults: {
      type: 'array',
      displayName: 'Results',
      description: 'Collected results from all iterations'
    }
  },

  async run(context) {
    const items = context.inputs.items || [];
    const results = [];

    for (let i = 0; i < items.length; i++) {
      context.outputs.currentItem = items[i];
      context.outputs.currentIndex = i;

      // Trigger and wait for downstream to complete
      const result = await context.triggerOutputAndWait('iteration');
      if (result !== undefined) {
        results.push(result);
      }
    }

    context.outputs.allResults = results;
    context.triggerOutput('completed');
  }
};
```

### Merge Node

Waits for all input paths before continuing.

```typescript
// packages/noodl-viewer-cloud/src/nodes/logic/MergeNode.ts

const MergeNode = {
  name: 'Workflow Merge',
  displayName: 'Merge',
  category: 'Workflow Logic',
  color: 'logic',

  inputs: {
    // Dynamic inputs based on configuration
  },

  outputs: {
    merged: {
      type: 'signal',
      displayName: 'Merged',
      description: 'Triggered when all inputs received'
    },
    data: {
      type: 'object',
      displayName: 'Data',
      description: 'Combined data from all inputs'
    }
  },

  dynamicports: {
    inputs: {
      branches: {
        type: 'signal'
        // Generated: branch_0, branch_1, etc.
      },
      branchData: {
        type: '*'
        // Generated: data_0, data_1, etc.
      }
    }
  },

  setup(context) {
    const branchCount = context.parameters.branchCount || 2;
    context._receivedBranches = new Set();
    context._branchData = {};

    for (let i = 0; i < branchCount; i++) {
      context.registerInput(`branch_${i}`, {
        type: 'signal',
        displayName: `Branch ${i + 1}`
      });
      context.registerInput(`data_${i}`, {
        type: '*',
        displayName: `Data ${i + 1}`
      });
    }
  },

  onInputChange(context, inputName, value) {
    if (inputName.startsWith('branch_')) {
      const index = parseInt(inputName.split('_')[1], 10);
      context._receivedBranches.add(index);
      context._branchData[index] = context.inputs[`data_${index}`];

      const branchCount = context.parameters.branchCount || 2;
      if (context._receivedBranches.size >= branchCount) {
        context.outputs.data = { ...context._branchData };
        context.triggerOutput('merged');

        // Reset for next execution
        context._receivedBranches.clear();
        context._branchData = {};
      }
    }
  }
};
```

### Key Files to Create

| File | Purpose |
| ---------------------------- | ------------------------ |
| `nodes/logic/IfNode.ts` | IF node definition |
| `nodes/logic/SwitchNode.ts` | Switch node definition |
| `nodes/logic/ForEachNode.ts` | For Each node definition |
| `nodes/logic/MergeNode.ts` | Merge node definition |
| `nodes/logic/index.ts` | Module exports |
| `tests/logic-nodes.test.ts` | Unit tests |

## Implementation Steps

### Step 1: IF Node (3h)

1. Create node definition
2. Implement run logic
3. Add to node registry
4. Test with CloudRunner

### Step 2: Switch Node (4h)

1. Create node with dynamic ports
2. Implement case matching logic
3. Property editor for case configuration
4. Test edge cases

### Step 3: For Each Node (4h)

1. Create node definition
2. Implement async iteration
3. Handle `triggerOutputAndWait` pattern
4. Test with arrays and objects

### Step 4: Merge Node (3h)

1. Create node with dynamic inputs
2. Implement branch tracking
3. Handle reset logic
4. Test parallel paths

### Step 5: Integration & Testing (2h)

1. Register all nodes
2. Integration tests
3. Manual testing in editor

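Several of the node sketches lean on a `triggerOutputAndWait` primitive that the workflow runtime (TASK-007C) must provide. Its exact API is not defined in this document, so the following is only a minimal sketch of the assumed shape: triggering a signal output returns a promise that settles once every downstream handler on that path has finished, with the last handler's result returned so For Each can collect per-iteration values.

```typescript
// Hypothetical shape of the runtime primitive; names are illustrative.
type Listener = () => Promise<unknown>;

class WorkflowContext {
  private listeners = new Map<string, Listener[]>();

  // The runtime would wire downstream nodes up as listeners on an output.
  onOutput(name: string, listener: Listener): void {
    const list = this.listeners.get(name) ?? [];
    list.push(listener);
    this.listeners.set(name, list);
  }

  // Run downstream handlers sequentially; a thrown error propagates to the
  // caller (which is what Try/Catch and Retry rely on). The last result is
  // returned so For Each can collect per-iteration values.
  async triggerOutputAndWait(name: string): Promise<unknown> {
    const list = this.listeners.get(name) ?? [];
    let result: unknown;
    for (const listener of list) {
      result = await listener();
    }
    return result;
  }
}
```

Whether handlers run sequentially or in parallel is a runtime design decision; the sequential version shown here is the simpler one to reason about for error propagation.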
## Testing Plan

### Unit Tests

- [ ] IF Node routes correctly on true/false
- [ ] Switch Node matches correct case
- [ ] Switch Node triggers default when no match
- [ ] For Each iterates all items
- [ ] For Each handles empty arrays
- [ ] For Each collects results
- [ ] Merge waits for all branches
- [ ] Merge combines data correctly

### Integration Tests

- [ ] IF → downstream nodes execute correctly
- [ ] Switch → multiple paths work
- [ ] For Each → nested workflows work
- [ ] Merge → parallel execution converges

## Success Criteria

- [ ] IF Node works in CloudRunner
- [ ] Switch Node with dynamic cases works
- [ ] For Each iterates and collects results
- [ ] Merge combines parallel paths
- [ ] All nodes appear in node picker
- [ ] Property panels work correctly
- [ ] All tests pass

## Risks & Mitigations

| Risk | Mitigation |
| ------------------------ | --------------------------------------- |
| For Each performance | Limit max iterations, batch if needed |
| Merge race conditions | Use Set for tracking, atomic operations |
| Dynamic ports complexity | Follow existing patterns from BYOB |

## References

- [Node Patterns Guide](../../../reference/NODE-PATTERNS.md)
- [LEARNINGS-NODE-CREATION](../../../reference/LEARNINGS-NODE-CREATION.md)
- [n8n IF Node](https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.if/) - Reference

# CF11-002: Error Handling Nodes (Try/Catch/Retry)

## Metadata

| Field | Value |
| ------------------ | ------------------------------------ |
| **ID** | CF11-002 |
| **Phase** | Phase 11 |
| **Series** | 1 - Advanced Workflow Nodes |
| **Priority** | 🟡 High |
| **Difficulty** | 🟡 Medium |
| **Estimated Time** | 8-10 hours |
| **Prerequisites** | Phase 5 TASK-007C (Workflow Runtime) |
| **Branch** | `feature/cf11-002-error-nodes` |

## Objective

Create error handling nodes for workflows: Try/Catch for graceful error recovery and Retry for transient failure handling. Both are critical for building reliable automation.

## Background

External API calls fail. Databases time out. Services go down. Production workflows need error handling:

- **Try/Catch**: Wrap operations and handle failures gracefully
- **Retry**: Automatically retry failed operations with configurable backoff
- **Stop/Error**: Explicitly fail a workflow with a message

Without these, any external failure crashes the entire workflow.

## Scope

### In Scope

- [ ] Try/Catch Node implementation
- [ ] Retry Node implementation
- [ ] Stop/Error Node implementation
- [ ] Configurable retry strategies

### Out of Scope

- Global error handlers (future)
- Error reporting/alerting (future)

## Technical Approach

### Try/Catch Node

```typescript
const TryCatchNode = {
  name: 'Workflow Try Catch',
  displayName: 'Try / Catch',
  category: 'Workflow Logic',

  inputs: {
    run: { type: 'signal', displayName: 'Try' }
  },

  outputs: {
    try: { type: 'signal', displayName: 'Try Block' },
    catch: { type: 'signal', displayName: 'Catch Block' },
    finally: { type: 'signal', displayName: 'Finally' },
    error: { type: 'object', displayName: 'Error' },
    success: { type: 'boolean', displayName: 'Success' }
  },

  async run(context) {
    try {
      await context.triggerOutputAndWait('try');
      context.outputs.success = true;
    } catch (error) {
      context.outputs.error = { message: error.message, stack: error.stack };
      context.outputs.success = false;
      context.triggerOutput('catch');
    } finally {
      context.triggerOutput('finally');
    }
  }
};
```

### Retry Node

```typescript
// Small promise-based delay helper; could live in a shared utils module.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const RetryNode = {
  name: 'Workflow Retry',
  displayName: 'Retry',
  category: 'Workflow Logic',

  inputs: {
    run: { type: 'signal' },
    maxAttempts: { type: 'number', default: 3 },
    delayMs: { type: 'number', default: 1000 },
    backoffMultiplier: { type: 'number', default: 2 }
  },

  outputs: {
    attempt: { type: 'signal', displayName: 'Attempt' },
    success: { type: 'signal' },
    failure: { type: 'signal' },
    attemptNumber: { type: 'number' },
    lastError: { type: 'object' }
  },

  async run(context) {
    const maxAttempts = context.inputs.maxAttempts || 3;
    const baseDelay = context.inputs.delayMs || 1000;
    const multiplier = context.inputs.backoffMultiplier || 2;

    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
      context.outputs.attemptNumber = attempt;

      try {
        await context.triggerOutputAndWait('attempt');
        context.triggerOutput('success');
        return;
      } catch (error) {
        context.outputs.lastError = { message: error.message };

        if (attempt < maxAttempts) {
          const delay = baseDelay * Math.pow(multiplier, attempt - 1);
          await sleep(delay);
        }
      }
    }

    context.triggerOutput('failure');
  }
};
```

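With the defaults above (`delayMs = 1000`, `backoffMultiplier = 2`), the delay before each re-attempt doubles. A quick check of the formula `baseDelay * multiplier^(attempt - 1)` used in the Retry node:

```typescript
const baseDelay = 1000;
const multiplier = 2;

// Delay applied after failed attempts 1 through 4:
const delays = [1, 2, 3, 4].map((attempt) => baseDelay * Math.pow(multiplier, attempt - 1));
// → [1000, 2000, 4000, 8000]
```

So with the default three attempts, a fully failing operation takes roughly 3 seconds of accumulated backoff before the `failure` signal fires.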
### Stop/Error Node

```typescript
// WorkflowError is assumed to be provided by the workflow runtime (TASK-007C);
// throwing it marks the execution as failed with the given message.
const StopNode = {
  name: 'Workflow Stop',
  displayName: 'Stop / Error',
  category: 'Workflow Logic',

  inputs: {
    run: { type: 'signal' },
    errorMessage: { type: 'string', default: 'Workflow stopped' },
    isError: { type: 'boolean', default: true }
  },

  run(context) {
    const message = context.inputs.errorMessage || 'Workflow stopped';
    if (context.inputs.isError) {
      throw new WorkflowError(message);
    }
    // Non-error stop - just terminates this path
  }
};
```

## Implementation Steps

1. **Try/Catch Node (3h)** - Error boundary, output routing
2. **Retry Node (3h)** - Attempt loop, backoff logic, timeout handling
3. **Stop/Error Node (1h)** - Simple error throwing
4. **Testing (2h)** - Unit tests, integration tests

## Success Criteria

- [ ] Try/Catch captures downstream errors
- [ ] Retry attempts with exponential backoff
- [ ] Stop/Error terminates workflow with message
- [ ] Error data captured in execution history

## References

- [CF11-001 Logic Nodes](../CF11-001-logic-nodes/README.md) - Same patterns
- [n8n Error Handling](https://docs.n8n.io/flow-logic/error-handling/)

# CF11-003: Wait/Delay Nodes

## Metadata

| Field | Value |
| ------------------ | ------------------------------------ |
| **ID** | CF11-003 |
| **Phase** | Phase 11 |
| **Series** | 1 - Advanced Workflow Nodes |
| **Priority** | 🟢 Medium |
| **Difficulty** | 🟢 Low |
| **Estimated Time** | 4-6 hours |
| **Prerequisites** | Phase 5 TASK-007C (Workflow Runtime) |
| **Branch** | `feature/cf11-003-delay-nodes` |

## Objective

Create timing-related workflow nodes: Wait for explicit delays, Wait Until for scheduled execution, and debounce utilities. These are essential for rate limiting and scheduled workflows.

## Background

Workflows often need timing control:

- **Wait**: Pause execution for a duration (rate limiting APIs)
- **Wait Until**: Execute at a specific time (scheduled tasks)
- **Debounce**: Prevent rapid repeated execution

## Scope

### In Scope

- [ ] Wait Node (delay for X milliseconds)
- [ ] Wait Until Node (wait until a specific time)
- [ ] Debounce Node (rate-limit execution)

### Out of Scope

- Cron scheduling (handled by triggers)
- Throttle node (future)

## Technical Approach

### Wait Node

```typescript
const WaitNode = {
  name: 'Workflow Wait',
  displayName: 'Wait',
  category: 'Workflow Logic',

  inputs: {
    run: { type: 'signal' },
    duration: { type: 'number', displayName: 'Duration (ms)', default: 1000 },
    unit: {
      type: 'enum',
      options: ['milliseconds', 'seconds', 'minutes', 'hours'],
      default: 'milliseconds'
    }
  },

  outputs: {
    done: { type: 'signal', displayName: 'Done' }
  },

  async run(context) {
    let ms = context.inputs.duration || 1000;
    const unit = context.inputs.unit || 'milliseconds';

    // Convert to milliseconds
    const multipliers = {
      milliseconds: 1,
      seconds: 1000,
      minutes: 60 * 1000,
      hours: 60 * 60 * 1000
    };
    ms = ms * (multipliers[unit] || 1);

    await new Promise((resolve) => setTimeout(resolve, ms));
    context.triggerOutput('done');
  }
};
```

### Wait Until Node

```typescript
const WaitUntilNode = {
  name: 'Workflow Wait Until',
  displayName: 'Wait Until',
  category: 'Workflow Logic',

  inputs: {
    run: { type: 'signal' },
    targetTime: { type: 'string', displayName: 'Target Time (ISO)' },
    targetDate: { type: 'date', displayName: 'Target Date' }
  },

  outputs: {
    done: { type: 'signal' },
    skipped: { type: 'signal', displayName: 'Already Passed' }
  },

  async run(context) {
    const target = context.inputs.targetDate || new Date(context.inputs.targetTime);
    const now = Date.now();
    const targetMs = target.getTime();

    // Invalid date (missing/bad input) yields NaN; treat it like a past time
    // rather than scheduling a bogus timeout.
    if (Number.isNaN(targetMs) || targetMs <= now) {
      context.triggerOutput('skipped');
      return;
    }

    const waitTime = targetMs - now;
    await new Promise((resolve) => setTimeout(resolve, waitTime));
    context.triggerOutput('done');
  }
};
```

### Debounce Node

```typescript
const DebounceNode = {
  name: 'Workflow Debounce',
  displayName: 'Debounce',
  category: 'Workflow Logic',

  inputs: {
    run: { type: 'signal' },
    delay: { type: 'number', default: 500 }
  },

  outputs: {
    trigger: { type: 'signal' }
  },

  setup(context) {
    context._debounceTimer = null;
  },

  run(context) {
    if (context._debounceTimer) {
      clearTimeout(context._debounceTimer);
    }

    context._debounceTimer = setTimeout(() => {
      context.triggerOutput('trigger');
      context._debounceTimer = null;
    }, context.inputs.delay || 500);
  }
};
```

## Implementation Steps

1. **Wait Node (2h)** - Simple delay, unit conversion
2. **Wait Until Node (2h)** - Date parsing, time calculation
3. **Debounce Node (1h)** - Timer management
4. **Testing (1h)** - Unit tests

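The unit conversion in the Wait node is the piece most worth unit-testing, and it is easiest to verify if factored into a pure helper. A sketch (the helper name `toMilliseconds` is illustrative, not part of the spec):

```typescript
type TimeUnit = 'milliseconds' | 'seconds' | 'minutes' | 'hours';

const MULTIPLIERS: Record<TimeUnit, number> = {
  milliseconds: 1,
  seconds: 1000,
  minutes: 60 * 1000,
  hours: 60 * 60 * 1000
};

// Pure conversion helper; unknown units fall back to milliseconds,
// matching the `multipliers[unit] || 1` behaviour in WaitNode.run.
function toMilliseconds(duration: number, unit: TimeUnit): number {
  return duration * (MULTIPLIERS[unit] ?? 1);
}
```

With the conversion extracted, the node's `run` reduces to `await sleep(toMilliseconds(duration, unit))` and the tests never need fake timers.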
## Success Criteria

- [ ] Wait delays for specified duration
- [ ] Wait supports multiple time units
- [ ] Wait Until triggers at target time
- [ ] Wait Until handles past times gracefully
- [ ] Debounce prevents rapid triggers

## References

- [CF11-001 Logic Nodes](../CF11-001-logic-nodes/README.md)
- [n8n Wait Node](https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.wait/)

# CF11-004: Execution Storage Schema

## Metadata

| Field | Value |
| ------------------ | ------------------------------------------- |
| **ID** | CF11-004 |
| **Phase** | Phase 11 |
| **Series** | 2 - Execution History |
| **Priority** | 🔴 Critical |
| **Difficulty** | 🟡 Medium |
| **Estimated Time** | 8-10 hours |
| **Prerequisites** | Phase 5 TASK-007A (LocalSQL Adapter) |
| **Branch** | `feature/cf11-004-execution-storage-schema` |

## Objective

Create the SQLite database schema and TypeScript interfaces for storing workflow execution history, enabling full visibility into every workflow run with node-by-node data capture.

## Background

Workflow debugging is currently impossible in OpenNoodl. When a workflow fails, users have no visibility into:

- What data flowed through each node
- Where exactly the failure occurred
- What inputs caused the failure
- How long each step took

n8n provides complete execution history - every workflow run is logged with input/output data for each node. This is the **#1 feature** needed for OpenNoodl to be production-ready.

This task creates the storage foundation. Subsequent tasks (CF11-005, CF11-006, CF11-007) will build the logging integration and UI.

## Current State

- No execution history storage exists
- CloudRunner executes workflows but discards all intermediate data
- Users cannot debug failed workflows
- No performance metrics available

## Desired State

- SQLite tables store all execution data
- TypeScript interfaces define the data structures
- Query APIs enable efficient retrieval
- Retention policies prevent unbounded storage growth
- Foundation ready for CF11-005 logger integration

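The retention policy mentioned above can start as a simple count-based cleanup. A sketch of the SQL-building half, kept as a pure function so it is testable without a database (the function name and exact SQL are assumptions, not the final store API):

```typescript
// Builds a DELETE that keeps only the `maxCount` most recent executions,
// optionally scoped to one workflow. Execution steps are removed
// automatically via the ON DELETE CASCADE foreign key.
function buildCleanupByCount(maxCount: number, workflowId?: string): { sql: string; params: unknown[] } {
  const innerScope = workflowId ? 'WHERE workflow_id = ?' : '';
  const outerScope = workflowId ? 'AND workflow_id = ?' : '';
  const sql = `DELETE FROM workflow_executions
    WHERE id NOT IN (
      SELECT id FROM workflow_executions ${innerScope}
      ORDER BY started_at DESC LIMIT ?
    ) ${outerScope}`.trim();
  // Placeholder order must match their appearance in the SQL text.
  const params: unknown[] = workflowId ? [workflowId, maxCount, workflowId] : [maxCount];
  return { sql, params };
}
```

With better-sqlite3 the result feeds into `db.prepare(sql).run(...params)`, and `run(...)` reports the number of deleted rows via `.changes`.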
## Scope

### In Scope

- [ ] SQLite table schema design
- [ ] TypeScript interfaces for execution data
- [ ] ExecutionStore class with CRUD operations
- [ ] Query methods for filtering/pagination
- [ ] Retention/cleanup utilities
- [ ] Unit tests for storage operations

### Out of Scope

- CloudRunner integration (CF11-005)
- UI components (CF11-006)
- Canvas overlay (CF11-007)

## Technical Approach

### Database Schema

```sql
-- Workflow execution records
CREATE TABLE workflow_executions (
  id TEXT PRIMARY KEY,
  workflow_id TEXT NOT NULL,
  workflow_name TEXT NOT NULL,
  trigger_type TEXT NOT NULL, -- 'webhook', 'schedule', 'manual', 'db_change'
  trigger_data TEXT, -- JSON: request body, cron expression, etc.
  status TEXT NOT NULL, -- 'running', 'success', 'error'
  started_at INTEGER NOT NULL, -- Unix timestamp ms
  completed_at INTEGER,
  duration_ms INTEGER,
  error_message TEXT,
  error_stack TEXT,
  metadata TEXT, -- JSON: additional context
  FOREIGN KEY (workflow_id) REFERENCES components(id)
);

-- Individual node execution steps
CREATE TABLE execution_steps (
  id TEXT PRIMARY KEY,
  execution_id TEXT NOT NULL,
  node_id TEXT NOT NULL,
  node_type TEXT NOT NULL,
  node_name TEXT,
  step_index INTEGER NOT NULL,
  started_at INTEGER NOT NULL,
  completed_at INTEGER,
  duration_ms INTEGER,
  status TEXT NOT NULL, -- 'running', 'success', 'error', 'skipped'
  input_data TEXT, -- JSON (truncated if large)
  output_data TEXT, -- JSON (truncated if large)
  error_message TEXT,
  FOREIGN KEY (execution_id) REFERENCES workflow_executions(id) ON DELETE CASCADE
);

-- Indexes for common queries
CREATE INDEX idx_executions_workflow ON workflow_executions(workflow_id);
CREATE INDEX idx_executions_status ON workflow_executions(status);
CREATE INDEX idx_executions_started ON workflow_executions(started_at DESC);
CREATE INDEX idx_steps_execution ON execution_steps(execution_id);
CREATE INDEX idx_steps_node ON execution_steps(node_id);
```

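The schema comments above note that `input_data` / `output_data` are "truncated if large". A sketch of the serialization guard that would enforce this before inserting a step (the 16 KB cap and the `__truncated` marker are assumptions to be tuned later):

```typescript
const MAX_JSON_BYTES = 16 * 1024; // assumed cap per column, not final

// Serializes a value for the input_data/output_data columns.
// Returns null for absent values, and a small marker object when the
// payload is oversized or not JSON-serializable (e.g. circular refs).
function serializeCapped(value: unknown, maxLen: number = MAX_JSON_BYTES): string | null {
  if (value === undefined) return null;
  let json: string;
  try {
    json = JSON.stringify(value);
  } catch {
    return JSON.stringify({ __truncated: true, reason: 'unserializable' });
  }
  if (json.length <= maxLen) return json;
  return JSON.stringify({ __truncated: true, preview: json.slice(0, maxLen) });
}
```

Keeping a preview rather than dropping the payload entirely preserves enough context for the debugging UI planned in CF11-006.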
### TypeScript Interfaces
|
||||
|
||||
```typescript
|
||||
// packages/noodl-viewer-cloud/src/execution-history/types.ts
|
||||
|
||||
export type ExecutionStatus = 'running' | 'success' | 'error';
|
||||
export type StepStatus = 'running' | 'success' | 'error' | 'skipped';
|
||||
export type TriggerType = 'webhook' | 'schedule' | 'manual' | 'db_change' | 'internal_event';
|
||||
|
||||
export interface WorkflowExecution {
|
||||
id: string;
|
||||
workflowId: string;
|
||||
workflowName: string;
|
||||
triggerType: TriggerType;
|
||||
triggerData?: Record<string, unknown>;
|
||||
status: ExecutionStatus;
|
||||
startedAt: number;
|
||||
completedAt?: number;
|
||||
durationMs?: number;
|
||||
errorMessage?: string;
|
||||
errorStack?: string;
|
||||
metadata?: Record<string, unknown>;
|
||||
}
|
||||
|
||||
export interface ExecutionStep {
|
||||
id: string;
|
||||
executionId: string;
|
||||
nodeId: string;
|
||||
nodeType: string;
|
||||
nodeName?: string;
|
||||
stepIndex: number;
|
||||
startedAt: number;
|
||||
completedAt?: number;
|
||||
durationMs?: number;
|
||||
status: StepStatus;
|
||||
inputData?: Record<string, unknown>;
|
||||
outputData?: Record<string, unknown>;
|
||||
errorMessage?: string;
|
||||
}
|
||||
|
||||
export interface ExecutionQuery {
|
||||
workflowId?: string;
|
||||
status?: ExecutionStatus;
|
||||
triggerType?: TriggerType;
|
||||
startedAfter?: number;
|
||||
startedBefore?: number;
|
||||
limit?: number;
|
||||
offset?: number;
|
||||
orderBy?: 'started_at' | 'duration_ms';
|
||||
orderDir?: 'asc' | 'desc';
|
||||
}
|
||||
|
||||
export interface ExecutionWithSteps extends WorkflowExecution {
|
||||
steps: ExecutionStep[];
|
||||
}
|
||||
```
|
||||
|
||||
### ExecutionStore Class

```typescript
// packages/noodl-viewer-cloud/src/execution-history/ExecutionStore.ts

export class ExecutionStore {
  constructor(private db: Database.Database) {
    this.initSchema();
  }

  private initSchema(): void {
    // Create tables if not exist
  }

  // === Execution CRUD ===

  async createExecution(execution: Omit<WorkflowExecution, 'id'>): Promise<string> {
    const id = this.generateId();
    // Insert and return ID
    return id;
  }

  async updateExecution(id: string, updates: Partial<WorkflowExecution>): Promise<void> {
    // Update execution record
  }

  async getExecution(id: string): Promise<WorkflowExecution | null> {
    // Get single execution
  }

  async getExecutionWithSteps(id: string): Promise<ExecutionWithSteps | null> {
    // Get execution with all steps
  }

  async queryExecutions(query: ExecutionQuery): Promise<WorkflowExecution[]> {
    // Query with filters and pagination
  }

  async deleteExecution(id: string): Promise<void> {
    // Delete execution and steps (cascade)
  }

  // === Step CRUD ===

  async addStep(step: Omit<ExecutionStep, 'id'>): Promise<string> {
    // Add step to execution
  }

  async updateStep(id: string, updates: Partial<ExecutionStep>): Promise<void> {
    // Update step
  }

  async getStepsForExecution(executionId: string): Promise<ExecutionStep[]> {
    // Get all steps for execution
  }

  // === Retention ===

  async cleanupOldExecutions(maxAgeMs: number): Promise<number> {
    // Delete executions older than maxAge
  }

  async cleanupByCount(maxCount: number, workflowId?: string): Promise<number> {
    // Keep only N most recent executions
  }

  // === Stats ===

  async getExecutionStats(workflowId?: string): Promise<ExecutionStats> {
    // Get aggregated stats
  }
}
```
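
`queryExecutions` has to translate an `ExecutionQuery` into parameterized SQL. The translation can be kept out of the class as a pure, easily testable function; the table name `workflow_executions` and the default limit below are assumptions for illustration, not final names:

```typescript
// Shape mirrors the ExecutionQuery interface above (re-declared for a
// self-contained sketch).
interface ExecutionQuery {
  status?: string;
  workflowId?: string;
  startedAfter?: number;
  startedBefore?: number;
  limit?: number;
  offset?: number;
  orderBy?: 'started_at' | 'duration_ms';
  orderDir?: 'asc' | 'desc';
}

// Builds a parameterized SELECT for queryExecutions(). The table name and
// default limit are illustrative assumptions.
function buildExecutionQuery(q: ExecutionQuery): { sql: string; params: unknown[] } {
  const where: string[] = [];
  const params: unknown[] = [];

  if (q.workflowId) {
    where.push('workflow_id = ?');
    params.push(q.workflowId);
  }
  if (q.status) {
    where.push('status = ?');
    params.push(q.status);
  }
  if (q.startedAfter !== undefined) {
    where.push('started_at >= ?');
    params.push(q.startedAfter);
  }
  if (q.startedBefore !== undefined) {
    where.push('started_at <= ?');
    params.push(q.startedBefore);
  }

  let sql = 'SELECT * FROM workflow_executions';
  if (where.length) sql += ' WHERE ' + where.join(' AND ');
  // orderBy is restricted to a known column union, so interpolation is safe
  sql += ` ORDER BY ${q.orderBy ?? 'started_at'} ${q.orderDir === 'asc' ? 'ASC' : 'DESC'}`;
  sql += ' LIMIT ? OFFSET ?';
  params.push(q.limit ?? 50, q.offset ?? 0);
  return { sql, params };
}
```

Keeping the builder pure means the filter logic can be unit-tested without touching better-sqlite3 at all; the store would feed the result into `db.prepare(sql).all(...params)`.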

### Key Files to Create

| File                                                                  | Purpose               |
| --------------------------------------------------------------------- | --------------------- |
| `packages/noodl-viewer-cloud/src/execution-history/types.ts`          | TypeScript interfaces |
| `packages/noodl-viewer-cloud/src/execution-history/schema.sql`        | SQLite schema         |
| `packages/noodl-viewer-cloud/src/execution-history/ExecutionStore.ts` | ExecutionStore class  |
| `packages/noodl-viewer-cloud/src/execution-history/index.ts`          | Module exports        |
| `packages/noodl-viewer-cloud/tests/execution-history.test.ts`         | Unit tests            |

### Dependencies

- [ ] Phase 5 TASK-007A (LocalSQL Adapter) provides SQLite integration
- [ ] `better-sqlite3` package (already in project)

## Implementation Steps

### Step 1: Create Type Definitions (2h)

1. Create `types.ts` with all interfaces
2. Define enums for status types
3. Add JSDoc documentation

### Step 2: Create Schema (1h)

1. Create `schema.sql` with table definitions
2. Add indexes for performance
3. Document schema decisions
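
As a concrete starting point for this step, the schema could look like the sketch below. All table and column names are a proposal matching the interfaces above, not a final design. `ON DELETE CASCADE` gives the cascading step deletion that `deleteExecution` describes; note that SQLite only enforces it when `PRAGMA foreign_keys = ON` is set on each connection.

```sql
-- Proposed schema.sql (names are illustrative, not final)
CREATE TABLE IF NOT EXISTS workflow_executions (
  id TEXT PRIMARY KEY,
  workflow_id TEXT NOT NULL,
  workflow_name TEXT NOT NULL,
  trigger_type TEXT NOT NULL,
  trigger_data TEXT,           -- JSON, possibly truncated
  status TEXT NOT NULL,        -- 'running' | 'success' | 'error'
  started_at INTEGER NOT NULL, -- epoch milliseconds
  completed_at INTEGER,
  duration_ms INTEGER,
  error_message TEXT,
  error_stack TEXT
);

CREATE TABLE IF NOT EXISTS execution_steps (
  id TEXT PRIMARY KEY,
  execution_id TEXT NOT NULL REFERENCES workflow_executions(id) ON DELETE CASCADE,
  node_id TEXT NOT NULL,
  node_type TEXT NOT NULL,
  node_name TEXT,
  step_index INTEGER NOT NULL,
  status TEXT NOT NULL,
  started_at INTEGER NOT NULL,
  completed_at INTEGER,
  input_data TEXT,             -- JSON, possibly truncated
  output_data TEXT,            -- JSON, possibly truncated
  error_message TEXT
);

-- Indexes backing queryExecutions() filters and step ordering
CREATE INDEX IF NOT EXISTS idx_exec_workflow ON workflow_executions(workflow_id, started_at);
CREATE INDEX IF NOT EXISTS idx_steps_execution ON execution_steps(execution_id, step_index);
```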

### Step 3: Implement ExecutionStore (4h)

1. Create `ExecutionStore.ts` with the ExecutionStore class
2. Implement CRUD operations
3. Implement query with filters
4. Implement retention utilities
5. Add error handling

### Step 4: Write Tests (2h)

1. Test CRUD operations
2. Test query filtering
3. Test retention cleanup
4. Test edge cases (large data, concurrent access)

## Testing Plan

### Unit Tests

- [ ] Create execution record
- [ ] Update execution status
- [ ] Add steps to execution
- [ ] Query with filters
- [ ] Pagination works correctly
- [ ] Cleanup by age
- [ ] Cleanup by count
- [ ] Handle large input/output data (truncation)
- [ ] Concurrent write access

### Manual Testing

- [ ] Schema creates correctly on first run
- [ ] Data persists across restarts
- [ ] Query performance acceptable (<100ms for 1000 records)

## Success Criteria

- [ ] All TypeScript interfaces defined
- [ ] SQLite schema creates tables correctly
- [ ] CRUD operations work for executions and steps
- [ ] Query filtering and pagination work
- [ ] Retention cleanup works
- [ ] All unit tests pass
- [ ] No TypeScript errors
- [ ] Documentation complete

## Risks & Mitigations

| Risk                           | Mitigation                                      |
| ------------------------------ | ----------------------------------------------- |
| Large data causing slow writes | Truncate input/output data at configurable size |
| Unbounded storage growth       | Implement retention policies from day 1         |
| SQLite lock contention         | Use WAL mode, batch writes where possible       |

## References

- [Phase 5 TASK-007A LocalSQL Adapter](../../phase-5-multi-target-deployment/01-byob-backend/TASK-007-integrated-backend/TASK-007A-localsql-adapter.md)
- [Original Cloud Functions Plan](../cloud-functions-revival-plan.md) - Execution History section
- [better-sqlite3 docs](https://github.com/WiseLibs/better-sqlite3)

@@ -0,0 +1,344 @@

# CF11-005: Execution Logger Integration

## Metadata

| Field              | Value                                        |
| ------------------ | -------------------------------------------- |
| **ID**             | CF11-005                                     |
| **Phase**          | Phase 11                                     |
| **Series**         | 2 - Execution History                        |
| **Priority**       | 🔴 Critical                                  |
| **Difficulty**     | 🟡 Medium                                    |
| **Estimated Time** | 8-10 hours                                   |
| **Prerequisites**  | CF11-004 (Storage Schema), Phase 5 TASK-007C |
| **Branch**         | `feature/cf11-005-execution-logger`          |

## Objective

Integrate execution logging into the CloudRunner workflow engine so that every workflow execution is automatically captured with full node-by-node data.

## Background

CF11-004 provides the storage layer for execution history. This task connects that storage to the actual workflow execution engine, capturing:

- When workflows start/complete
- Input/output data for each node
- Timing information
- Error details when failures occur

This is the "bridge" between runtime and storage - without it, the database remains empty.

## Current State

- ExecutionStore exists (from CF11-004)
- CloudRunner executes workflows
- **No connection between them** - executions are not logged

## Desired State

- Every workflow execution creates a record
- Each node execution creates a step record
- Data flows automatically without explicit logging calls
- Configurable data capture (can disable for performance)

## Scope

### In Scope

- [ ] ExecutionLogger class wrapping ExecutionStore
- [ ] Integration hooks in CloudRunner
- [ ] Node execution instrumentation
- [ ] Configuration for capture settings
- [ ] Data truncation for large payloads
- [ ] Unit tests

### Out of Scope

- UI components (CF11-006)
- Canvas overlay (CF11-007)
- Real-time streaming (future enhancement)

## Technical Approach

### ExecutionLogger Class

```typescript
// packages/noodl-viewer-cloud/src/execution-history/ExecutionLogger.ts
import { ExecutionStore } from './ExecutionStore';
import { TriggerType } from './types';

export interface LoggerConfig {
  enabled: boolean;
  captureInputs: boolean;
  captureOutputs: boolean;
  maxDataSize: number; // bytes, truncate above this
  retentionDays: number;
}

export class ExecutionLogger {
  private store: ExecutionStore;
  private config: LoggerConfig;
  private currentExecution: string | null = null;
  private executionStartedAt = 0;
  private stepIndex = 0;

  constructor(store: ExecutionStore, config?: Partial<LoggerConfig>) {
    this.store = store;
    this.config = {
      enabled: true,
      captureInputs: true,
      captureOutputs: true,
      maxDataSize: 100_000, // 100KB default
      retentionDays: 30,
      ...config
    };
  }

  // === Execution Lifecycle ===

  async startExecution(params: {
    workflowId: string;
    workflowName: string;
    triggerType: TriggerType;
    triggerData?: Record<string, unknown>;
  }): Promise<string> {
    if (!this.config.enabled) return '';

    this.executionStartedAt = Date.now();
    const executionId = await this.store.createExecution({
      workflowId: params.workflowId,
      workflowName: params.workflowName,
      triggerType: params.triggerType,
      triggerData: params.triggerData,
      status: 'running',
      startedAt: this.executionStartedAt
    });

    this.currentExecution = executionId;
    this.stepIndex = 0;
    return executionId;
  }

  async completeExecution(success: boolean, error?: Error): Promise<void> {
    if (!this.config.enabled || !this.currentExecution) return;

    await this.store.updateExecution(this.currentExecution, {
      status: success ? 'success' : 'error',
      completedAt: Date.now(),
      durationMs: Date.now() - this.executionStartedAt,
      errorMessage: error?.message,
      errorStack: error?.stack
    });

    this.currentExecution = null;
  }

  // === Node Lifecycle ===

  async startNode(params: {
    nodeId: string;
    nodeType: string;
    nodeName?: string;
    inputData?: Record<string, unknown>;
  }): Promise<string> {
    if (!this.config.enabled || !this.currentExecution) return '';

    const stepId = await this.store.addStep({
      executionId: this.currentExecution,
      nodeId: params.nodeId,
      nodeType: params.nodeType,
      nodeName: params.nodeName,
      stepIndex: this.stepIndex++,
      startedAt: Date.now(),
      status: 'running',
      inputData: this.config.captureInputs ? this.truncateData(params.inputData) : undefined
    });

    return stepId;
  }

  async completeNode(
    stepId: string,
    success: boolean,
    outputData?: Record<string, unknown>,
    error?: Error
  ): Promise<void> {
    if (!this.config.enabled || !stepId) return;

    await this.store.updateStep(stepId, {
      status: success ? 'success' : 'error',
      completedAt: Date.now(),
      outputData: this.config.captureOutputs ? this.truncateData(outputData) : undefined,
      errorMessage: error?.message
    });
  }

  // === Utilities ===

  private truncateData(data?: Record<string, unknown>): Record<string, unknown> | undefined {
    if (!data) return undefined;
    const json = JSON.stringify(data);
    if (json.length <= this.config.maxDataSize) return data;

    return {
      _truncated: true,
      _originalSize: json.length,
      _preview: json.substring(0, 1000) + '...'
    };
  }

  async runRetentionCleanup(): Promise<number> {
    const maxAge = this.config.retentionDays * 24 * 60 * 60 * 1000;
    return this.store.cleanupOldExecutions(maxAge);
  }
}
```

### CloudRunner Integration Points

The CloudRunner needs hooks at these points:

```typescript
// packages/noodl-viewer-cloud/src/cloudrunner.ts

class CloudRunner {
  private logger: ExecutionLogger;

  async executeWorkflow(workflow: Component, trigger: TriggerInfo): Promise<void> {
    // 1. Start execution logging
    const executionId = await this.logger.startExecution({
      workflowId: workflow.id,
      workflowName: workflow.name,
      triggerType: trigger.type,
      triggerData: trigger.data
    });

    try {
      // 2. Execute nodes (with per-node logging)
      for (const node of this.getExecutionOrder(workflow)) {
        await this.executeNode(node, executionId);
      }

      // 3. Complete successfully
      await this.logger.completeExecution(true);
    } catch (error) {
      // 4. Complete with error
      await this.logger.completeExecution(false, error as Error);
      throw error;
    }
  }

  private async executeNode(node: RuntimeNode, executionId: string): Promise<void> {
    // Get input data from connected nodes
    const inputData = this.collectNodeInputs(node);

    // Start node logging
    const stepId = await this.logger.startNode({
      nodeId: node.id,
      nodeType: node.type,
      nodeName: node.label,
      inputData
    });

    try {
      // Actually execute the node
      await node.execute();

      // Get output data
      const outputData = this.collectNodeOutputs(node);

      // Complete node logging
      await this.logger.completeNode(stepId, true, outputData);
    } catch (error) {
      await this.logger.completeNode(stepId, false, undefined, error as Error);
      throw error;
    }
  }
}
```

### Key Files to Modify/Create

| File                                   | Action | Purpose              |
| -------------------------------------- | ------ | -------------------- |
| `execution-history/ExecutionLogger.ts` | Create | Logger wrapper class |
| `execution-history/index.ts`           | Update | Export logger        |
| `cloudrunner.ts`                       | Modify | Add logging hooks    |
| `tests/execution-logger.test.ts`       | Create | Unit tests           |

## Implementation Steps

### Step 1: Create ExecutionLogger Class (3h)

1. Create `ExecutionLogger.ts`
2. Implement execution lifecycle methods
3. Implement node lifecycle methods
4. Implement data truncation
5. Add configuration handling

### Step 2: Integrate with CloudRunner (3h)

1. Identify hook points in CloudRunner
2. Add logger initialization
3. Instrument workflow execution
4. Instrument individual node execution
5. Handle errors properly

### Step 3: Add Configuration (1h)

1. Add project-level settings for logging
2. Environment variable overrides
3. Runtime toggle capability
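
The override order in this step (built-in defaults, then project settings, then environment) can be resolved in one small function. The environment variable names (`NOODL_EXEC_LOGGING` and friends) are placeholders, not an agreed-upon convention:

```typescript
// LoggerConfig re-declared so the sketch is self-contained.
interface LoggerConfig {
  enabled: boolean;
  captureInputs: boolean;
  captureOutputs: boolean;
  maxDataSize: number;
  retentionDays: number;
}

const DEFAULTS: LoggerConfig = {
  enabled: true,
  captureInputs: true,
  captureOutputs: true,
  maxDataSize: 100_000,
  retentionDays: 30
};

// Env values are strings; only the variables that are actually set override.
// Variable names are placeholders for illustration.
function resolveLoggerConfig(
  projectSettings: Partial<LoggerConfig>,
  env: Record<string, string | undefined>
): LoggerConfig {
  const fromEnv: Partial<LoggerConfig> = {};
  if (env.NOODL_EXEC_LOGGING !== undefined) fromEnv.enabled = env.NOODL_EXEC_LOGGING !== 'false';
  if (env.NOODL_EXEC_MAX_DATA_SIZE !== undefined) fromEnv.maxDataSize = Number(env.NOODL_EXEC_MAX_DATA_SIZE);
  if (env.NOODL_EXEC_RETENTION_DAYS !== undefined) fromEnv.retentionDays = Number(env.NOODL_EXEC_RETENTION_DAYS);
  // Spread order encodes the precedence: defaults < project < environment
  return { ...DEFAULTS, ...projectSettings, ...fromEnv };
}
```

The runtime toggle then reduces to re-running this resolution and handing the result to the logger.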

### Step 4: Write Tests (2h)

1. Test logger with mock store
2. Test data truncation
3. Test error handling
4. Integration test with CloudRunner

## Testing Plan

### Unit Tests

- [ ] Logger creates execution on start
- [ ] Logger updates execution on complete
- [ ] Logger handles success path
- [ ] Logger handles error path
- [ ] Node steps are recorded correctly
- [ ] Data truncation works for large payloads
- [ ] Disabled logger is a no-op
- [ ] Retention cleanup works

### Integration Tests

- [ ] Full workflow execution is captured
- [ ] All nodes have step records
- [ ] Input/output data is captured
- [ ] Error workflows have error details
- [ ] Multiple concurrent workflows work

## Success Criteria

- [ ] ExecutionLogger class implemented
- [ ] CloudRunner integration complete
- [ ] All workflow executions create records
- [ ] Node steps are captured with data
- [ ] Errors are captured with details
- [ ] Data truncation prevents storage bloat
- [ ] Configuration allows disabling
- [ ] All tests pass

## Risks & Mitigations

| Risk                            | Mitigation                         |
| ------------------------------- | ---------------------------------- |
| Performance overhead            | Make logging async, configurable   |
| Large data payloads             | Truncation with configurable limit |
| Failed logging crashes workflow | Wrap in try/catch, fail gracefully |
| CloudRunner changes in Phase 5  | Coordinate with Phase 5 TASK-007C  |
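
The "fail gracefully" mitigation can be sketched as a small wrapper: any logging call that throws is reported and replaced by a fallback value, so a broken store never aborts the workflow. The helper names are illustrative:

```typescript
// Runs a logging operation; a logging failure must never fail the workflow.
async function safeLog<T>(op: () => Promise<T>, fallback: T): Promise<T> {
  try {
    return await op();
  } catch (err) {
    // Surface for diagnostics, but swallow the error
    console.warn('[ExecutionLogger] logging failed:', err);
    return fallback;
  }
}

// Example: a startNode call degrades to the "no step id" convention ('')
// instead of aborting node execution when the store throws.
async function startNodeSafely(log: () => Promise<string>): Promise<string> {
  return safeLog(log, '');
}
```

CloudRunner would route every `this.logger.*` call through such a wrapper (or the logger could apply it internally to each public method).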

## References

- [CF11-004 Execution Storage Schema](../CF11-004-execution-storage-schema/README.md)
- [Phase 5 TASK-007C Workflow Runtime](../../phase-5-multi-target-deployment/01-byob-backend/TASK-007-integrated-backend/TASK-007C-workflow-runtime.md)

@@ -0,0 +1,411 @@

# CF11-006: Execution History Panel UI

## Metadata

| Field              | Value                                      |
| ------------------ | ------------------------------------------ |
| **ID**             | CF11-006                                   |
| **Phase**          | Phase 11                                   |
| **Series**         | 2 - Execution History                      |
| **Priority**       | 🔴 Critical                                |
| **Difficulty**     | 🟡 Medium                                  |
| **Estimated Time** | 12-16 hours                                |
| **Prerequisites**  | CF11-004, CF11-005                         |
| **Branch**         | `feature/cf11-006-execution-history-panel` |

## Objective

Create a sidebar panel in the editor that displays workflow execution history, allowing users to view past executions, inspect node data, and debug failed workflows.

## Background

With execution data being captured (CF11-004, CF11-005), users need a way to:

- View all past executions for a workflow
- See execution status at a glance (success/error)
- Drill into individual executions to see node-by-node data
- Quickly identify where workflows fail

This is the primary debugging interface for workflow developers.

## Current State

- Execution data is stored in SQLite
- No UI to view execution history
- Users cannot debug failed workflows

## Desired State

- New "Execution History" panel in sidebar
- List of past executions with status, duration, timestamp
- Expandable execution detail view
- Node step list with input/output data
- Search/filter capabilities
- Delete/clear history options

## Scope

### In Scope

- [ ] ExecutionHistoryPanel React component
- [ ] ExecutionList component
- [ ] ExecutionDetail component with node steps
- [ ] Data display for inputs/outputs (JSON viewer)
- [ ] Filter by status, date range
- [ ] Integration with sidebar navigation
- [ ] Proper styling with design tokens

### Out of Scope

- Canvas overlay (CF11-007)
- Real-time streaming of executions
- Export/import of execution data

## Technical Approach

### Component Structure

```
ExecutionHistoryPanel/
├── index.ts
├── ExecutionHistoryPanel.tsx          # Main panel container
├── ExecutionHistoryPanel.module.scss
├── components/
│   ├── ExecutionList/
│   │   ├── ExecutionList.tsx          # List of executions
│   │   ├── ExecutionList.module.scss
│   │   ├── ExecutionItem.tsx          # Single execution row
│   │   └── ExecutionItem.module.scss
│   ├── ExecutionDetail/
│   │   ├── ExecutionDetail.tsx        # Expanded execution view
│   │   ├── ExecutionDetail.module.scss
│   │   ├── NodeStepList.tsx           # List of node steps
│   │   ├── NodeStepList.module.scss
│   │   ├── NodeStepItem.tsx           # Single step row
│   │   └── NodeStepItem.module.scss
│   └── ExecutionFilters/
│       ├── ExecutionFilters.tsx       # Filter controls
│       └── ExecutionFilters.module.scss
└── hooks/
    ├── useExecutionHistory.ts         # Data fetching hook
    └── useExecutionDetail.ts          # Single execution hook
```

### Main Panel Component

```tsx
// ExecutionHistoryPanel.tsx
import React, { useState } from 'react';

import { PanelHeader } from '@noodl-core-ui/components/sidebar/PanelHeader';

import { ExecutionDetail } from './components/ExecutionDetail';
import { ExecutionFilters } from './components/ExecutionFilters';
import { ExecutionList } from './components/ExecutionList';
import styles from './ExecutionHistoryPanel.module.scss';
import { useExecutionHistory } from './hooks/useExecutionHistory';

// Filter state; named separately so it does not clash with the
// ExecutionFilters component imported above
interface ExecutionFiltersState {
  status?: 'success' | 'error' | 'running';
  startDate?: Date;
  endDate?: Date;
}

export function ExecutionHistoryPanel() {
  const [selectedExecutionId, setSelectedExecutionId] = useState<string | null>(null);
  const [filters, setFilters] = useState<ExecutionFiltersState>({});

  const { executions, loading, refresh } = useExecutionHistory(filters);

  return (
    <div className={styles.Panel}>
      <PanelHeader title="Execution History" onRefresh={refresh} />

      <ExecutionFilters filters={filters} onChange={setFilters} />

      {selectedExecutionId ? (
        <ExecutionDetail executionId={selectedExecutionId} onBack={() => setSelectedExecutionId(null)} />
      ) : (
        <ExecutionList executions={executions} loading={loading} onSelect={setSelectedExecutionId} />
      )}
    </div>
  );
}
```

### Execution List Item

```tsx
// ExecutionItem.tsx
import { WorkflowExecution } from '@noodl-viewer-cloud/execution-history';
import React from 'react';

import { Icon } from '@noodl-core-ui/components/common/Icon';

import styles from './ExecutionItem.module.scss';

interface Props {
  execution: WorkflowExecution;
  onSelect: () => void;
}

export function ExecutionItem({ execution, onSelect }: Props) {
  const statusIcon =
    execution.status === 'success' ? 'check-circle' : execution.status === 'error' ? 'x-circle' : 'loader';

  const statusColor =
    execution.status === 'success'
      ? 'var(--theme-color-success)'
      : execution.status === 'error'
        ? 'var(--theme-color-error)'
        : 'var(--theme-color-fg-default)';

  return (
    <div className={styles.Item} onClick={onSelect}>
      <Icon icon={statusIcon} style={{ color: statusColor }} />
      <div className={styles.Info}>
        <span className={styles.Name}>{execution.workflowName}</span>
        <span className={styles.Time}>{formatRelativeTime(execution.startedAt)}</span>
      </div>
      <div className={styles.Meta}>
        <span className={styles.Duration}>{formatDuration(execution.durationMs)}</span>
        <span className={styles.Trigger}>{execution.triggerType}</span>
      </div>
    </div>
  );
}
```
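
`ExecutionItem` assumes two display helpers, `formatRelativeTime` and `formatDuration`, that are not defined anywhere in this task. One possible implementation, purely illustrative:

```typescript
// Assumed display helpers used by ExecutionItem; implementations are a sketch.
function formatDuration(ms?: number): string {
  if (ms === undefined) return '-'; // still running or not recorded
  if (ms < 1000) return `${ms}ms`;
  if (ms < 60_000) return `${(ms / 1000).toFixed(1)}s`;
  const min = Math.floor(ms / 60_000);
  const sec = Math.round((ms % 60_000) / 1000);
  return `${min}m ${sec}s`;
}

// `now` is injectable to keep the function testable
function formatRelativeTime(epochMs: number, now: number = Date.now()): string {
  const diff = Math.max(0, now - epochMs);
  if (diff < 60_000) return 'just now';
  if (diff < 3_600_000) return `${Math.floor(diff / 60_000)}m ago`;
  if (diff < 86_400_000) return `${Math.floor(diff / 3_600_000)}h ago`;
  return new Date(epochMs).toLocaleDateString();
}
```

These would live in a shared utils file under the panel folder so `ExecutionDetail` (which also calls `formatDuration` and a `formatTime` variant) can reuse them.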

### Execution Detail View

```tsx
// ExecutionDetail.tsx
import React from 'react';

import { JSONViewer } from '@noodl-core-ui/components/json-editor';

import { useExecutionDetail } from '../../hooks/useExecutionDetail';
import styles from './ExecutionDetail.module.scss';
import { NodeStepList } from './NodeStepList';

interface Props {
  executionId: string;
  onBack: () => void;
  onPinToCanvas?: () => void; // For CF11-007 integration
}

export function ExecutionDetail({ executionId, onBack, onPinToCanvas }: Props) {
  const { execution, loading } = useExecutionDetail(executionId);

  if (loading || !execution) {
    return <div>Loading...</div>;
  }

  return (
    <div className={styles.Detail}>
      <header className={styles.Header}>
        <button onClick={onBack}>← Back</button>
        <h3>{execution.workflowName}</h3>
        {onPinToCanvas && <button onClick={onPinToCanvas}>Pin to Canvas</button>}
      </header>

      <section className={styles.Summary}>
        <div className={styles.Status} data-status={execution.status}>
          {execution.status}
        </div>
        <div>Started: {formatTime(execution.startedAt)}</div>
        <div>Duration: {formatDuration(execution.durationMs)}</div>
        <div>Trigger: {execution.triggerType}</div>
      </section>

      {execution.errorMessage && (
        <section className={styles.Error}>
          <h4>Error</h4>
          <pre>{execution.errorMessage}</pre>
          {execution.errorStack && (
            <details>
              <summary>Stack Trace</summary>
              <pre>{execution.errorStack}</pre>
            </details>
          )}
        </section>
      )}

      {execution.triggerData && (
        <section className={styles.TriggerData}>
          <h4>Trigger Data</h4>
          <JSONViewer data={execution.triggerData} />
        </section>
      )}

      <section className={styles.Steps}>
        <h4>Node Execution Steps ({execution.steps.length})</h4>
        <NodeStepList steps={execution.steps} />
      </section>
    </div>
  );
}
```

### Data Fetching Hooks

```typescript
// useExecutionHistory.ts
import { CloudService } from '@noodl-editor/services/CloudService';
import { WorkflowExecution, ExecutionQuery } from '@noodl-viewer-cloud/execution-history';
import { useState, useEffect, useCallback } from 'react';

// Filter state accepted by the hook (status values assumed from CF11-004)
export interface ExecutionFilters {
  status?: 'success' | 'error' | 'running';
  startDate?: Date;
  endDate?: Date;
}

export function useExecutionHistory(filters: ExecutionFilters) {
  const [executions, setExecutions] = useState<WorkflowExecution[]>([]);
  const [loading, setLoading] = useState(true);

  const fetch = useCallback(async () => {
    setLoading(true);
    try {
      const query: ExecutionQuery = {
        status: filters.status,
        startedAfter: filters.startDate?.getTime(),
        startedBefore: filters.endDate?.getTime(),
        limit: 100,
        orderBy: 'started_at',
        orderDir: 'desc'
      };
      const result = await CloudService.getExecutionHistory(query);
      setExecutions(result);
    } finally {
      setLoading(false);
    }
  }, [filters]);

  useEffect(() => {
    fetch();
  }, [fetch]);

  return { executions, loading, refresh: fetch };
}
```

### Styling Guidelines

All styles MUST use design tokens:

```scss
// ExecutionItem.module.scss
.Item {
  display: flex;
  align-items: center;
  padding: var(--theme-spacing-3);
  background-color: var(--theme-color-bg-2);
  border-bottom: 1px solid var(--theme-color-border-default);
  cursor: pointer;

  &:hover {
    background-color: var(--theme-color-bg-3);
  }
}

.Name {
  color: var(--theme-color-fg-default);
  font-weight: 500;
}

.Time {
  color: var(--theme-color-fg-default-shy);
  font-size: 12px;
}

// Status colors
[data-status='success'] {
  color: var(--theme-color-success);
}

[data-status='error'] {
  color: var(--theme-color-error);
}
```

## Implementation Steps

### Step 1: Create Panel Structure (3h)

1. Create folder structure
2. Create ExecutionHistoryPanel component
3. Register panel in sidebar navigation
4. Basic layout and header

### Step 2: Implement Execution List (3h)

1. Create ExecutionList component
2. Create ExecutionItem component
3. Implement useExecutionHistory hook
4. Add loading/empty states

### Step 3: Implement Execution Detail (4h)

1. Create ExecutionDetail component
2. Create NodeStepList/NodeStepItem
3. Implement useExecutionDetail hook
4. Add JSON viewer for data display
5. Handle error display

### Step 4: Add Filters & Search (2h)

1. Create ExecutionFilters component
2. Status filter dropdown
3. Date range picker
4. Integration with list

### Step 5: Polish & Testing (3h)

1. Responsive styling
2. Keyboard navigation
3. Manual testing
4. Edge cases

## Testing Plan

### Manual Testing

- [ ] Panel appears in sidebar
- [ ] Executions load correctly
- [ ] Clicking execution shows detail
- [ ] Back button returns to list
- [ ] Filter by status works
- [ ] Filter by date works
- [ ] Node steps display correctly
- [ ] Input/output data renders
- [ ] Error display works
- [ ] Empty state shows correctly

### Automated Testing

- [ ] useExecutionHistory hook tests
- [ ] useExecutionDetail hook tests
- [ ] ExecutionItem renders correctly
- [ ] Filter state management

## Success Criteria

- [ ] Panel accessible from sidebar
- [ ] Execution list shows all executions
- [ ] Detail view shows full execution data
- [ ] Node steps show input/output data
- [ ] Filters work correctly
- [ ] All styles use design tokens
- [ ] No hardcoded colors
- [ ] Responsive at different panel widths

## Risks & Mitigations

| Risk                       | Mitigation                     |
| -------------------------- | ------------------------------ |
| Large execution lists slow | Virtual scrolling, pagination  |
| JSON viewer performance    | Lazy load, collapse by default |
| Missing CloudService API   | Coordinate with CF11-005       |

## References

- [UI Styling Guide](../../../reference/UI-STYLING-GUIDE.md)
- [CF11-004 Storage Schema](../CF11-004-execution-storage-schema/README.md)
- [CF11-005 Logger Integration](../CF11-005-execution-logger-integration/README.md)
- [GitHubPanel](../../../../packages/noodl-editor/src/editor/src/views/panels/GitHubPanel/) - Similar panel pattern

@@ -0,0 +1,429 @@

# CF11-007: Canvas Execution Overlay

## Metadata

| Field              | Value                                       |
| ------------------ | ------------------------------------------- |
| **ID**             | CF11-007                                    |
| **Phase**          | Phase 11                                    |
| **Series**         | 2 - Execution History                       |
| **Priority**       | 🟡 High                                     |
| **Difficulty**     | 🟡 Medium                                   |
| **Estimated Time** | 8-10 hours                                  |
| **Prerequisites**  | CF11-004, CF11-005, CF11-006                |
| **Branch**         | `feature/cf11-007-canvas-execution-overlay` |

## Objective

Create a canvas overlay that visualizes execution data directly on workflow nodes, allowing users to "pin" an execution to the canvas and see input/output data flowing through each node.

## Background

The Execution History Panel (CF11-006) shows execution data in a list format. But for debugging, users need to see this data **in context** - overlaid directly on the nodes in the canvas.

This is similar to n8n's execution visualization, where you can click on any past execution and see the data that flowed through each node, directly on the canvas.

This task builds on the existing HighlightOverlay pattern already in the codebase.

## Current State

- Execution data viewable in panel (CF11-006)
- No visualization on canvas
- Users must mentally map panel data to nodes

## Desired State

- "Pin to Canvas" button in Execution History Panel
- Overlay shows execution status on each node (green/red/gray)
- Clicking a node shows input/output data popup
- Timeline scrubber to step through execution
- Clear visual distinction from normal canvas view

## Scope

### In Scope

- [ ] ExecutionOverlay React component
- [ ] Node status badges (success/error/pending)
- [ ] Data popup on node click
- [ ] Timeline/step navigation
- [ ] Integration with ExecutionHistoryPanel
- [ ] "Unpin" to return to normal view

### Out of Scope

- Real-time streaming visualization
- Connection animation showing data flow
- Comparison between executions

## Technical Approach

### Using Existing Overlay Pattern

The codebase already has `HighlightOverlay` - we'll follow the same pattern:

```
packages/noodl-editor/src/editor/src/views/CanvasOverlays/
├── HighlightOverlay/                  # Existing - reference pattern
│   ├── HighlightOverlay.tsx
│   ├── HighlightedNode.tsx
│   └── ...
└── ExecutionOverlay/                  # New
    ├── index.ts
    ├── ExecutionOverlay.tsx
    ├── ExecutionOverlay.module.scss
    ├── ExecutionNodeBadge.tsx
    ├── ExecutionNodeBadge.module.scss
    ├── ExecutionDataPopup.tsx
    ├── ExecutionDataPopup.module.scss
    └── ExecutionTimeline.tsx
```
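
The timeline scrubber's core behavior, showing each node's latest step among those executed up to the selected index, is a plain filter over the step list. Isolating it as a pure helper (types trimmed to just the fields involved) keeps the overlay component's memoized map trivially testable:

```typescript
// Minimal slice of ExecutionStep needed for the timeline filter.
interface StepLike {
  nodeId: string;
  stepIndex: number;
}

// Returns the latest step per node among steps at or before currentStepIndex,
// i.e. the state of the canvas at that point in the timeline. Later steps for
// the same node overwrite earlier ones because steps arrive in index order.
function stepsUpTo<T extends StepLike>(steps: T[], currentStepIndex: number): Map<string, T> {
  const map = new Map<string, T>();
  for (const step of steps) {
    if (step.stepIndex <= currentStepIndex) {
      map.set(step.nodeId, step);
    }
  }
  return map;
}
```

The `useMemo` inside `ExecutionOverlay` below computes exactly this map inline; extracting it would let the scrubber logic be unit-tested without rendering the component.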
|
||||
|
||||
### Main Overlay Component

```tsx
// ExecutionOverlay.tsx

import { ExecutionStep, ExecutionWithSteps } from '@noodl-viewer-cloud/execution-history';
import React, { useMemo } from 'react';

import { ExecutionDataPopup } from './ExecutionDataPopup';
import { ExecutionNodeBadge } from './ExecutionNodeBadge';
import styles from './ExecutionOverlay.module.scss';
import { ExecutionTimeline } from './ExecutionTimeline';

interface Props {
  execution: ExecutionWithSteps;
  onClose: () => void;
}

export function ExecutionOverlay({ execution, onClose }: Props) {
  const [selectedNodeId, setSelectedNodeId] = React.useState<string | null>(null);
  const [currentStepIndex, setCurrentStepIndex] = React.useState<number>(execution.steps.length - 1);

  // For each node, keep its most recent step at or before the scrubber position
  const nodeStepMap = useMemo(() => {
    const map = new Map<string, ExecutionStep>();
    for (const step of execution.steps) {
      if (step.stepIndex <= currentStepIndex) {
        map.set(step.nodeId, step);
      }
    }
    return map;
  }, [execution.steps, currentStepIndex]);

  const selectedStep = selectedNodeId ? nodeStepMap.get(selectedNodeId) : null;

  return (
    <div className={styles.Overlay}>
      {/* Header bar */}
      <div className={styles.Header}>
        <span className={styles.Title}>Execution: {execution.workflowName}</span>
        <span className={styles.Status} data-status={execution.status}>
          {execution.status}
        </span>
        <button className={styles.CloseButton} onClick={onClose}>
          × Close
        </button>
      </div>

      {/* Node badges */}
      {Array.from(nodeStepMap.entries()).map(([nodeId, step]) => (
        <ExecutionNodeBadge
          key={nodeId}
          nodeId={nodeId}
          step={step}
          onClick={() => setSelectedNodeId(nodeId)}
          selected={nodeId === selectedNodeId}
        />
      ))}

      {/* Data popup for selected node */}
      {selectedStep && <ExecutionDataPopup step={selectedStep} onClose={() => setSelectedNodeId(null)} />}

      {/* Timeline scrubber */}
      <ExecutionTimeline steps={execution.steps} currentIndex={currentStepIndex} onIndexChange={setCurrentStepIndex} />
    </div>
  );
}
```

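The `nodeStepMap` logic is the core of the time-travel behavior: each node shows its most recent step at or before the scrubber position. It can be sketched and tested independently of React; `StepLike` here is a minimal local stand-in for the real `ExecutionStep` type:

```typescript
// Minimal stand-in for the ExecutionStep shape used by the overlay
interface StepLike {
  stepIndex: number;
  nodeId: string;
  status: 'success' | 'error' | 'running';
}

// Returns, for each node, its most recent step at or before `currentStepIndex`.
// A node executed more than once (e.g. inside a ForEach) keeps only the latest
// step that has already "happened" at the scrubber position.
function buildNodeStepMap(steps: StepLike[], currentStepIndex: number): Map<string, StepLike> {
  const map = new Map<string, StepLike>();
  for (const step of steps) {
    if (step.stepIndex <= currentStepIndex) {
      map.set(step.nodeId, step); // later iterations overwrite earlier ones
    }
  }
  return map;
}
```

Scrubbing backwards simply lowers `currentStepIndex`: badges for nodes not yet reached at that point disappear, and re-executed nodes revert to their earlier step.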
### Node Badge Component

```tsx
// ExecutionNodeBadge.tsx

import { useCanvasNodePosition } from '@noodl-hooks/useCanvasNodePosition';
import { ExecutionStep } from '@noodl-viewer-cloud/execution-history';
import React from 'react';

import styles from './ExecutionNodeBadge.module.scss';

// Assumed helper - move to a shared util if the popup also needs it
function formatDuration(ms: number): string {
  return ms < 1000 ? `${ms}ms` : `${(ms / 1000).toFixed(1)}s`;
}

interface Props {
  nodeId: string;
  step: ExecutionStep;
  onClick: () => void;
  selected: boolean;
}

export function ExecutionNodeBadge({ nodeId, step, onClick, selected }: Props) {
  const position = useCanvasNodePosition(nodeId);

  if (!position) return null;

  const statusIcon = step.status === 'success' ? '✓' : step.status === 'error' ? '✗' : '⋯';

  return (
    <div
      className={styles.Badge}
      data-status={step.status}
      data-selected={selected}
      style={{
        left: position.x + position.width + 4,
        top: position.y - 8
      }}
      onClick={onClick}
    >
      <span className={styles.Icon}>{statusIcon}</span>
      <span className={styles.Duration}>{formatDuration(step.durationMs)}</span>
    </div>
  );
}
```

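The badge sits just right of the node; the risks table calls for "smart positioning" so the data popup stays in the viewport. A minimal clamping sketch (these names are illustrative, not existing editor APIs):

```typescript
interface Rect {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Clamp a popup's top-left corner so the whole popup stays inside the viewport.
// Hypothetical helper - the editor may already have an equivalent utility.
function clampToViewport(
  desired: { x: number; y: number },
  popup: { width: number; height: number },
  viewport: Rect
): { x: number; y: number } {
  const x = Math.min(Math.max(desired.x, viewport.x), viewport.x + viewport.width - popup.width);
  const y = Math.min(Math.max(desired.y, viewport.y), viewport.y + viewport.height - popup.height);
  return { x, y };
}
```

The same approach works for the badge itself when a node sits at the right edge of the canvas.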
### Data Popup Component

```tsx
// ExecutionDataPopup.tsx

import { ExecutionStep } from '@noodl-viewer-cloud/execution-history';
import React from 'react';

import { JSONViewer } from '@noodl-core-ui/components/json-editor';

import styles from './ExecutionDataPopup.module.scss';

// Assumed helpers - in the real implementation, share these with ExecutionNodeBadge
function formatDuration(ms: number): string {
  return ms < 1000 ? `${ms}ms` : `${(ms / 1000).toFixed(1)}s`;
}

function formatTime(timestamp: string | number): string {
  return new Date(timestamp).toLocaleTimeString();
}

interface Props {
  step: ExecutionStep;
  onClose: () => void;
}

export function ExecutionDataPopup({ step, onClose }: Props) {
  return (
    <div className={styles.Popup}>
      <header className={styles.Header}>
        <h4>{step.nodeName || step.nodeType}</h4>
        <span className={styles.Status} data-status={step.status}>
          {step.status}
        </span>
        <button onClick={onClose}>×</button>
      </header>

      <div className={styles.Content}>
        {step.inputData && (
          <section className={styles.Section}>
            <h5>Input Data</h5>
            <JSONViewer data={step.inputData} />
          </section>
        )}

        {step.outputData && (
          <section className={styles.Section}>
            <h5>Output Data</h5>
            <JSONViewer data={step.outputData} />
          </section>
        )}

        {step.errorMessage && (
          <section className={styles.Error}>
            <h5>Error</h5>
            <pre>{step.errorMessage}</pre>
          </section>
        )}

        <section className={styles.Meta}>
          <div>Duration: {formatDuration(step.durationMs)}</div>
          <div>Started: {formatTime(step.startedAt)}</div>
        </section>
      </div>
    </div>
  );
}
```

### Timeline Scrubber

```tsx
// ExecutionTimeline.tsx

import { ExecutionStep } from '@noodl-viewer-cloud/execution-history';
import React from 'react';

import styles from './ExecutionTimeline.module.scss';

interface Props {
  steps: ExecutionStep[];
  currentIndex: number;
  onIndexChange: (index: number) => void;
}

export function ExecutionTimeline({ steps, currentIndex, onIndexChange }: Props) {
  return (
    <div className={styles.Timeline}>
      <button disabled={currentIndex <= 0} onClick={() => onIndexChange(currentIndex - 1)}>
        ← Prev
      </button>

      <input
        type="range"
        min={0}
        max={steps.length - 1}
        value={currentIndex}
        onChange={(e) => onIndexChange(Number(e.target.value))}
      />

      <span className={styles.Counter}>
        Step {currentIndex + 1} of {steps.length}
      </span>

      <button disabled={currentIndex >= steps.length - 1} onClick={() => onIndexChange(currentIndex + 1)}>
        Next →
      </button>
    </div>
  );
}
```

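The Prev/Next buttons and the keyboard shortcuts planned in Step 4 can share a single clamped navigation helper (a sketch, not existing code):

```typescript
// Clamped step navigation shared by Prev/Next buttons and arrow-key shortcuts.
// `delta` is -1/+1 for single steps; larger jumps (Home/End, Page keys) clamp
// the same way, so the caller never has to range-check the result.
function stepIndexAfterNavigation(current: number, delta: number, stepCount: number): number {
  if (stepCount <= 0) return 0;
  return Math.min(Math.max(current + delta, 0), stepCount - 1);
}
```

A keydown handler then reduces to mapping keys to deltas and calling `onIndexChange(stepIndexAfterNavigation(...))`.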
### Styling

```scss
// ExecutionNodeBadge.module.scss
.Badge {
  position: absolute;
  display: flex;
  align-items: center;
  gap: 4px;
  padding: 2px 6px;
  border-radius: 4px;
  font-size: 11px;
  cursor: pointer;
  z-index: 1000;

  &[data-status='success'] {
    background-color: var(--theme-color-success-bg);
    color: var(--theme-color-success);
  }

  &[data-status='error'] {
    background-color: var(--theme-color-error-bg);
    color: var(--theme-color-error);
  }

  &[data-status='running'] {
    background-color: var(--theme-color-bg-3);
    color: var(--theme-color-fg-default);
  }

  &[data-selected='true'] {
    outline: 2px solid var(--theme-color-primary);
  }
}
```

### Integration with ExecutionHistoryPanel

```tsx
// In ExecutionDetail.tsx, add handler:
const handlePinToCanvas = () => {
  // Dispatch event to show overlay
  EventDispatcher.instance.emit('execution:pinToCanvas', { executionId });
};

// In the main canvas view, listen:
useEventListener(EventDispatcher.instance, 'execution:pinToCanvas', ({ executionId }) => {
  setPinnedExecution(executionId);
});
```

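The emit/listen pair only works if both sides agree on the payload shape. A self-contained sketch of that contract (the real `EventDispatcher` API is assumed to be similar; this stand-in only illustrates the typing idea):

```typescript
// Event name → payload map shared by emitter and listeners.
type PinEvents = {
  'execution:pinToCanvas': { executionId: string };
  'execution:unpin': {};
};

// Minimal typed emitter sketching the pin-to-canvas handshake.
class TypedEmitter {
  private listeners = new Map<keyof PinEvents, Array<(p: any) => void>>();

  on<K extends keyof PinEvents>(event: K, fn: (payload: PinEvents[K]) => void): void {
    const arr = this.listeners.get(event) ?? [];
    arr.push(fn);
    this.listeners.set(event, arr);
  }

  emit<K extends keyof PinEvents>(event: K, payload: PinEvents[K]): void {
    for (const fn of this.listeners.get(event) ?? []) fn(payload);
  }
}
```

With this shape, emitting `'execution:pinToCanvas'` without an `executionId` is a compile error rather than a runtime surprise.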
## Implementation Steps

### Step 1: Create Overlay Structure (2h)

1. Create folder structure
2. Create ExecutionOverlay container
3. Add state management for pinned execution
4. Add integration point with canvas

### Step 2: Implement Node Badges (2h)

1. Create ExecutionNodeBadge component
2. Calculate positions using canvas coordinates
3. Apply status-based styling
4. Handle clicks

### Step 3: Implement Data Popup (2h)

1. Create ExecutionDataPopup component
2. Integrate JSON viewer
3. Position popup relative to node
4. Handle close

### Step 4: Add Timeline Navigation (1.5h)

1. Create ExecutionTimeline component
2. Implement step navigation logic
3. Build scrubber UI
4. Add keyboard shortcuts

### Step 5: Polish & Integration (2h)

1. Connect to ExecutionHistoryPanel
2. Add "Pin to Canvas" button
3. Add "Unpin" functionality
4. Cover edge cases and testing

## Testing Plan

### Manual Testing

- [ ] "Pin to Canvas" shows overlay
- [ ] Node badges appear at correct positions
- [ ] Badges show correct status colors
- [ ] Clicking badge shows data popup
- [ ] Popup displays input/output data
- [ ] Error nodes show error message
- [ ] Timeline scrubber works
- [ ] Step navigation updates badges
- [ ] Close button removes overlay
- [ ] Overlay survives pan/zoom

### Automated Testing

- [ ] ExecutionNodeBadge renders correctly
- [ ] Position calculations work
- [ ] Timeline navigation logic

## Success Criteria

- [ ] Pin/unpin execution to canvas works
- [ ] Node badges show execution status
- [ ] Clicking shows data popup
- [ ] Timeline allows stepping through execution
- [ ] Clear visual feedback for errors
- [ ] Overlay respects pan/zoom
- [ ] All styles use design tokens

## Risks & Mitigations

| Risk                         | Mitigation                               |
| ---------------------------- | ---------------------------------------- |
| Canvas coordinate complexity | Follow existing HighlightOverlay pattern |
| Performance with many nodes  | Virtualize badges, lazy load popups      |
| Data popup positioning       | Smart positioning to stay in viewport    |

## References

- [Canvas Overlay Architecture](../../../reference/CANVAS-OVERLAY-ARCHITECTURE.md)
- [Canvas Overlay Coordinates](../../../reference/CANVAS-OVERLAY-COORDINATES.md)
- [HighlightOverlay](../../../../packages/noodl-editor/src/editor/src/views/CanvasOverlays/HighlightOverlay/) - Pattern reference
- [CF11-006 Execution History Panel](../CF11-006-execution-history-panel/README.md)

193
dev-docs/tasks/phase-11-cloud-functions/FUTURE-INTEGRATIONS.md
Normal file
@@ -0,0 +1,193 @@
# Future: External Service Integrations

**Status:** Deferred
**Target Phase:** Phase 12 or later
**Dependencies:** Phase 11 Series 1-4 complete

---

## Overview

This document outlines the external service integrations that would transform OpenNoodl into a true n8n competitor. These are **deferred** from Phase 11 to keep the initial scope manageable.

Phase 11 focuses on the workflow engine foundation (execution history, deployment, monitoring). Once that foundation is solid, these integrations become the natural next step.

---

## Integration Categories

### Tier 1: Essential (Do First)

These integrations cover roughly 80% of workflow automation use cases:

| Integration       | Description                  | Complexity | Notes                             |
| ----------------- | ---------------------------- | ---------- | --------------------------------- |
| **HTTP Request**  | Generic REST API calls       | 🟢 Low     | Already exists, needs improvement |
| **Webhook**       | Receive HTTP requests        | 🟢 Low     | Already in Phase 5 TASK-007       |
| **Email (SMTP)**  | Send emails via SMTP         | 🟢 Low     | Simple protocol                   |
| **SendGrid**      | Transactional email          | 🟢 Low     | REST API                          |
| **Slack**         | Send messages, read channels | 🟡 Medium  | OAuth, webhooks                   |
| **Discord**       | Bot messages                 | 🟡 Medium  | Bot token auth                    |
| **Google Sheets** | Read/write spreadsheets      | 🟡 Medium  | OAuth2, complex API               |

### Tier 2: Popular (High Value)

| Integration  | Description             | Complexity | Notes           |
| ------------ | ----------------------- | ---------- | --------------- |
| **Stripe**   | Payments, subscriptions | 🟡 Medium  | Webhooks, REST  |
| **Airtable** | Database operations     | 🟡 Medium  | REST API        |
| **Notion**   | Pages, databases        | 🟡 Medium  | REST API        |
| **GitHub**   | Issues, PRs, webhooks   | 🟡 Medium  | REST + webhooks |
| **Twilio**   | SMS, voice              | 🟡 Medium  | REST API        |
| **AWS S3**   | File storage            | 🟡 Medium  | SDK integration |

### Tier 3: Specialized

| Integration         | Description        | Complexity | Notes               |
| ------------------- | ------------------ | ---------- | ------------------- |
| **Salesforce**      | CRM operations     | 🔴 High    | Complex OAuth, SOQL |
| **HubSpot**         | CRM, marketing     | 🟡 Medium  | REST API            |
| **Zendesk**         | Support tickets    | 🟡 Medium  | REST API            |
| **Shopify**         | E-commerce         | 🟡 Medium  | REST + webhooks     |
| **Zapier Webhooks** | Zapier integration | 🟢 Low     | Simple webhooks     |

---

## Architecture Pattern

All integrations should follow a consistent pattern:

### Node Structure

```typescript
// Each integration has:
// 1. Auth configuration node (one per project)
// 2. Action nodes (Send Message, Create Record, etc.)
// 3. Trigger nodes (On New Message, On Record Created, etc.)

// Example: Slack integration
// - Slack Auth (configure workspace)
// - Slack Send Message (action)
// - Slack Create Channel (action)
// - Slack On Message (trigger)
```

### Auth Pattern

```typescript
interface IntegrationAuth {
  type: 'api_key' | 'oauth2' | 'basic' | 'custom';
  credentials: Record<string, string>; // Encrypted at rest
  testConnection(): Promise<boolean>;
}
```

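A minimal sketch of an `api_key` implementation of this interface. The `ping` callback is injected so the sketch avoids naming any real endpoint; a concrete integration would replace it with an HTTP health check:

```typescript
interface IntegrationAuth {
  type: 'api_key' | 'oauth2' | 'basic' | 'custom';
  credentials: Record<string, string>; // Encrypted at rest
  testConnection(): Promise<boolean>;
}

// Hypothetical API-key auth: testConnection pings an endpoint with the key.
class ApiKeyAuth implements IntegrationAuth {
  type = 'api_key' as const;

  constructor(
    public credentials: Record<string, string>,
    private ping: (apiKey: string) => Promise<boolean> // injected so tests avoid real HTTP
  ) {}

  async testConnection(): Promise<boolean> {
    const key = this.credentials['apiKey'];
    if (!key) return false;
    try {
      return await this.ping(key);
    } catch {
      return false; // network/auth failures surface as "connection failed", not crashes
    }
  }
}
```

The "test connection before save" bullet below then reduces to calling `testConnection()` from the credential UI and blocking the save on `false`.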
### Credential Storage

- Credentials stored encrypted in SQLite
- Per-project credential scope
- UI for managing credentials
- Test connection before save
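
"Encrypted at rest" can be sketched with Node's built-in AES-256-GCM. Key management is deliberately out of scope here; a real implementation would derive the key from the OS keychain or similar rather than holding it in memory:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// Encrypt a credentials object with AES-256-GCM.
// Returns "iv.authTag.ciphertext", each part hex-encoded, for storage in SQLite.
function encryptCredentials(creds: Record<string, string>, key: Buffer): string {
  const iv = randomBytes(12); // fresh IV per encryption - never reuse with GCM
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ct = Buffer.concat([cipher.update(JSON.stringify(creds), 'utf8'), cipher.final()]);
  return [iv.toString('hex'), cipher.getAuthTag().toString('hex'), ct.toString('hex')].join('.');
}

function decryptCredentials(blob: string, key: Buffer): Record<string, string> {
  const [ivHex, tagHex, ctHex] = blob.split('.');
  const decipher = createDecipheriv('aes-256-gcm', key, Buffer.from(ivHex, 'hex'));
  decipher.setAuthTag(Buffer.from(tagHex, 'hex')); // GCM authenticates as well as encrypts
  const pt = Buffer.concat([decipher.update(Buffer.from(ctHex, 'hex')), decipher.final()]);
  return JSON.parse(pt.toString('utf8'));
}
```

Tampering with the stored blob makes `decipher.final()` throw, so corrupted or modified credentials fail loudly instead of decrypting to garbage.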

---

## MVP Integration: Slack

As a reference implementation, here's what a Slack integration would look like:

### Nodes

1. **Slack Auth** (config node)

   - OAuth2 flow or bot token
   - Test connection
   - Store credentials

2. **Slack Send Message** (action)

   - Channel selector
   - Message text (with variables)
   - Optional: blocks, attachments
   - Outputs: message ID, timestamp

3. **Slack On Message** (trigger)

   - Channel filter
   - User filter
   - Keyword filter
   - Outputs: message, user, channel, timestamp

### Implementation Estimate

| Component                      | Effort  |
| ------------------------------ | ------- |
| Auth flow & credential storage | 4h      |
| Send Message node              | 4h      |
| On Message trigger             | 6h      |
| Testing & polish               | 4h      |
| **Total**                      | **18h** |

---

## Integration Framework

Before building many integrations, create a framework:

### Integration Registry

```typescript
interface Integration {
  id: string;
  name: string;
  icon: string;
  category: 'communication' | 'database' | 'file_storage' | 'marketing' | 'payment' | 'custom';
  authType: 'api_key' | 'oauth2' | 'basic' | 'none';
  nodes: IntegrationNode[];
}

interface IntegrationNode {
  type: 'action' | 'trigger';
  name: string;
  description: string;
  inputs: NodeInput[];
  outputs: NodeOutput[];
}
```
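
A minimal in-memory registry over these interfaces, with the types redeclared in simplified form so the sketch runs standalone (the Slack entry is illustrative):

```typescript
// Simplified local redeclarations of the registry types above
interface IntegrationNode {
  type: 'action' | 'trigger';
  name: string;
}

interface Integration {
  id: string;
  name: string;
  category: string;
  nodes: IntegrationNode[];
}

class IntegrationRegistry {
  private byId = new Map<string, Integration>();

  register(integration: Integration): void {
    if (this.byId.has(integration.id)) {
      throw new Error(`duplicate integration: ${integration.id}`);
    }
    this.byId.set(integration.id, integration);
  }

  get(id: string): Integration | undefined {
    return this.byId.get(id);
  }

  // Trigger nodes drive the node-picker's "when this happens..." list
  triggers(id: string): IntegrationNode[] {
    return this.get(id)?.nodes.filter((n) => n.type === 'trigger') ?? [];
  }
}
```

The editor's node picker can then be populated per category from the registry, and community integrations become plain `register()` calls.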

### Integration Builder (Future)

Eventually, allow users to create custom integrations:

- Define auth requirements
- Build actions with HTTP requests
- Create triggers with webhooks/polling
- Share integrations via marketplace

---

## Recommended Implementation Order

1. **Framework** (8h) - Auth storage, credential UI, node patterns
2. **HTTP Request improvements** (4h) - Better auth, response parsing
3. **SendGrid** (6h) - Simple, high value
4. **Slack** (18h) - Most requested
5. **Stripe** (12h) - High business value
6. **Google Sheets** (16h) - Popular but complex OAuth

---

## References

- [n8n integrations](https://n8n.io/integrations/) - Feature reference
- [Zapier apps](https://zapier.com/apps) - Integration inspiration
- [Native BaaS Integrations](../../future-projects/NATIVE-BAAS-INTEGRATIONS.md) - Related concept

---

## Why Deferred?

1. **Foundation first** - Execution history matters more than additional integrations
2. **Scope creep** - Each integration is 8-20h of work
3. **HTTP covers most cases** - A generic HTTP Request node handles many APIs
4. **Community opportunity** - The integration framework enables community contributions

Once Phase 11 core is complete, integrations become the obvious next step.

284
dev-docs/tasks/phase-11-cloud-functions/README.md
Normal file
@@ -0,0 +1,284 @@
# Phase 11: Cloud Functions & Workflow Automation

**Status:** Planning
**Dependencies:** Phase 5 TASK-007 (Integrated Local Backend) - MUST BE COMPLETE
**Total Estimated Effort:** 10-12 weeks
**Strategic Goal:** Transform OpenNoodl into a viable workflow automation platform

---

## Executive Summary

Phase 11 extends the local backend infrastructure from Phase 5 TASK-007 to add workflow automation features that enable OpenNoodl to compete with tools like n8n. This phase focuses on **unique features not covered elsewhere** - execution history, cloud deployment, monitoring, advanced workflow nodes, and Python/AI runtime support.

> ⚠️ **Important:** This phase assumes Phase 5 TASK-007 is complete. That phase provides the foundational SQLite database, Express backend server, CloudRunner adaptation, and basic trigger nodes (Schedule, DB Change, Webhook).

---

## What This Phase Delivers

### 1. Advanced Workflow Nodes

Visual logic nodes that make complex workflows possible without code:

- IF/ELSE conditions with visual expression builder
- Switch nodes (multi-branch routing)
- For Each loops (array iteration)
- Merge/Split nodes (parallel execution)
- Error handling (try/catch, retry logic)
- Wait/Delay nodes

### 2. Execution History & Debugging

Complete visibility into workflow execution:

- Full execution log for every workflow run
- Input/output data captured for each node
- Timeline visualization
- Canvas overlay showing execution data
- Search and filter execution history

### 3. Cloud Deployment

One-click deployment to production:

- Docker container generation
- Fly.io, Railway, Render integrations
- Environment variable management
- SSL/domain configuration
- Rollback capability

### 4. Monitoring & Observability

Production-ready monitoring:

- Workflow performance metrics
- Error tracking and alerting
- Real-time execution feed
- Email/webhook notifications

### 5. Python Runtime & AI Nodes (Bonus)

AI-first workflow capabilities:

- Dual JavaScript/Python runtime
- Claude/OpenAI completion nodes
- LangGraph agent nodes
- Vector store integrations

---

## Phase Structure

| Series | Name                          | Duration | Priority     |
| ------ | ----------------------------- | -------- | ------------ |
| **1**  | Advanced Workflow Nodes       | 2 weeks  | High         |
| **2**  | Execution History & Debugging | 3 weeks  | **Critical** |
| **3**  | Cloud Deployment              | 3 weeks  | High         |
| **4**  | Monitoring & Observability    | 2 weeks  | Medium       |
| **5**  | Python Runtime & AI Nodes     | 4 weeks  | Medium       |

**Recommended Order:** Series 1 → 2 → 3 → 4 → 5

Series 2 (Execution History) is the highest priority as it enables debugging of workflows - critical for any production use.

---

## Recommended Task Execution Order

> ⚠️ **Critical:** To avoid rework, follow this sequencing.

### Step 1: Phase 5 TASK-007 (Foundation) — DO FIRST

| Sub-task  | Name                           | Hours  | Phase 11 Needs?                        |
| --------- | ------------------------------ | ------ | -------------------------------------- |
| TASK-007A | LocalSQL Adapter (SQLite)      | 16-20h | **YES** - CF11-004 reuses patterns     |
| TASK-007B | Backend Server (Express)       | 12-16h | **YES** - Execution APIs live here     |
| TASK-007C | Workflow Runtime (CloudRunner) | 12-16h | **YES** - All workflow nodes need this |
| TASK-007D | Launcher Integration           | 8-10h  | No - Can defer                         |
| TASK-007E | Migration/Export               | 8-10h  | No - Can defer                         |
| TASK-007F | Standalone Deployment          | 8-10h  | No - Can defer                         |

**Start with TASK-007A/B/C only** (~45h). This creates the foundation without doing unnecessary work.

### Step 2: Phase 11 Series 1 & 2 (Core Workflow Features)

Once TASK-007A/B/C are complete:

1. **CF11-001 → CF11-003** (Advanced Nodes) - 2 weeks
2. **CF11-004 → CF11-007** (Execution History) - 3 weeks ⭐ PRIORITY

### Step 3: Continue Either Phase

At this point, you can:

- Continue Phase 11 (Series 3-5: Deployment, Monitoring, AI)
- Return to Phase 5 (TASK-007D/E/F: Launcher, Migration, Deployment)

### Why This Order?

If CF11-004 (Execution Storage) is built **before** TASK-007A (SQLite Adapter):

- Two independent SQLite implementations would be created
- Later refactoring needed to harmonize patterns
- **~4-8 hours of preventable rework**

The CloudRunner (TASK-007C) must exist before any workflow nodes can be tested.

---

## Dependency Graph

```
Phase 5 TASK-007 (Local Backend)
│
├── SQLite Adapter ✓
├── Backend Server ✓
├── CloudRunner ✓
├── Basic Triggers ✓
│
▼
┌─────────────────────────────────────────────────────┐
│                      PHASE 11                       │
├─────────────────────────────────────────────────────┤
│                                                     │
│  Series 1: Advanced Nodes ─┬─► Series 2: Exec History
│                            │              │
│                            │              ▼
│                            └─► Series 3: Deployment
│                                           │
│                                           ▼
│                               Series 4: Monitoring
│                                           │
│                                           ▼
│                               Series 5: Python/AI
│                                                     │
└─────────────────────────────────────────────────────┘
```

---

## Task List

### Series 1: Advanced Workflow Nodes (2 weeks)

| Task     | Name                                    | Effort | Status      |
| -------- | --------------------------------------- | ------ | ----------- |
| CF11-001 | Logic Nodes (IF/Switch/ForEach/Merge)   | 12-16h | Not Started |
| CF11-002 | Error Handling Nodes (Try/Catch, Retry) | 8-10h  | Not Started |
| CF11-003 | Wait/Delay Nodes                        | 4-6h   | Not Started |

### Series 2: Execution History (3 weeks) ⭐ PRIORITY

| Task     | Name                         | Effort | Status      |
| -------- | ---------------------------- | ------ | ----------- |
| CF11-004 | Execution Storage Schema     | 8-10h  | Not Started |
| CF11-005 | Execution Logger Integration | 8-10h  | Not Started |
| CF11-006 | Execution History Panel UI   | 12-16h | Not Started |
| CF11-007 | Canvas Execution Overlay     | 8-10h  | Not Started |

### Series 3: Cloud Deployment (3 weeks)

| Task     | Name                        | Effort | Status      |
| -------- | --------------------------- | ------ | ----------- |
| CF11-008 | Docker Container Builder    | 10-12h | Not Started |
| CF11-009 | Fly.io Deployment Provider  | 8-10h  | Not Started |
| CF11-010 | Railway Deployment Provider | 6-8h   | Not Started |
| CF11-011 | Cloud Deploy Panel UI       | 10-12h | Not Started |

### Series 4: Monitoring & Observability (2 weeks)

| Task     | Name                      | Effort | Status      |
| -------- | ------------------------- | ------ | ----------- |
| CF11-012 | Metrics Collection System | 8-10h  | Not Started |
| CF11-013 | Monitoring Dashboard UI   | 12-16h | Not Started |
| CF11-014 | Alerting System           | 6-8h   | Not Started |

### Series 5: Python Runtime & AI Nodes (4 weeks)

| Task     | Name                  | Effort | Status      |
| -------- | --------------------- | ------ | ----------- |
| CF11-015 | Python Runtime Bridge | 12-16h | Not Started |
| CF11-016 | Python Core Nodes     | 10-12h | Not Started |
| CF11-017 | Claude/OpenAI Nodes   | 10-12h | Not Started |
| CF11-018 | LangGraph Agent Node  | 12-16h | Not Started |
| CF11-019 | Language Toggle UI    | 6-8h   | Not Started |

---

## What's NOT in This Phase

### Handled by Phase 5 TASK-007

- ❌ SQLite database adapter (TASK-007A)
- ❌ Express backend server (TASK-007B)
- ❌ CloudRunner adaptation (TASK-007C)
- ❌ Basic trigger nodes (Schedule, DB Change, Webhook)
- ❌ Schema management
- ❌ Launcher integration

### Deferred to Future Phase

- ❌ External integrations (Slack, SendGrid, Stripe, etc.) - See `FUTURE-INTEGRATIONS.md`
- ❌ Workflow marketplace/templates
- ❌ Multi-user collaboration
- ❌ Workflow versioning/Git integration
- ❌ Queue/job system

---

## Success Criteria

### Functional

- [ ] Can create IF/ELSE workflows with visual expression builder
- [ ] Can view complete execution history with node-by-node data
- [ ] Can debug failed workflows by pinning execution to canvas
- [ ] Can deploy workflows to Fly.io with one click
- [ ] Can monitor workflow performance in real-time
- [ ] Can create Python workflows for AI use cases
- [ ] Can use Claude/OpenAI APIs in visual workflows

### User Experience

- [ ] Creating a conditional workflow takes < 3 minutes
- [ ] Debugging failed workflows takes < 2 minutes
- [ ] Deploying to production takes < 5 minutes
- [ ] Setting up AI chat assistant takes < 10 minutes

### Technical

- [ ] Workflow execution overhead < 50ms
- [ ] Execution history queries < 100ms
- [ ] Real-time monitoring updates < 1 second latency
- [ ] Can handle 1000 concurrent workflow executions

---

## Risk Assessment

| Risk                             | Impact       | Mitigation                                        |
| -------------------------------- | ------------ | ------------------------------------------------- |
| Phase 5 TASK-007 not complete    | **BLOCKING** | Do not start Phase 11 until TASK-007 is done      |
| Python runtime complexity        | High         | Start with JS-only, add Python as separate series |
| Deployment platform variability  | Medium       | Focus on Fly.io first, add others incrementally   |
| Execution history storage growth | Medium       | Implement retention policies early                |

---

## References

- [Phase 5 TASK-007: Integrated Local Backend](../phase-5-multi-target-deployment/01-byob-backend/TASK-007-integrated-backend/README.md)
- [Cloud Functions Revival Plan (Original)](./cloud-functions-revival-plan.md)
- [Native BaaS Integrations](../../future-projects/NATIVE-BAAS-INTEGRATIONS.md)
- [Phase 10: AI-Powered Development](../phase-10-ai-powered-development/README.md)

---

## Changelog

| Date       | Change                                               |
| ---------- | ---------------------------------------------------- |
| 2026-01-15 | Restructured to remove overlap with Phase 5 TASK-007 |
| 2026-01-15 | Prioritized Execution History over Cloud Deployment  |
| 2026-01-15 | Moved integrations to future work                    |