Architecture Overview
TinySystems is built on a distributed, Kubernetes-native architecture that separates the control plane (Platform) from the execution plane (Modules in your clusters).
High-Level Architecture
+-------------------------------------------------------------------+
|                             PLATFORM                              |
| +-----------------+  +-----------------+  +---------------------+ |
| |   Web Editor    |  |   Manager API   |  |   Cluster Watcher   | |
| |                 |  |     (gRPC)      |  |                     | |
| | - Flow design   |  | - Flow CRUD     |  | - Sync node status  | |
| | - Debugging     |  | - Deployments   |  | - Watch TinyNodes   | |
| | - Monitoring    |  | - AI Assistant  |  | - Real-time updates | |
| +-----------------+  +-----------------+  +---------------------+ |
+-------------------------------------------------------------------+
                                 |
                    Kubernetes API (kubeconfig)
                                 |
          +----------------------+----------------------+
          v                      v                      v
+-------------------+  +-------------------+  +-------------------+
|     CLUSTER A     |  |     CLUSTER B     |  |     CLUSTER C     |
|                   |  |                   |  |                   |
| +---------------+ |  | +---------------+ |  | +---------------+ |
| | common-module | |  | | http-module   | |  | | custom-module | |
| | http-module   | |  | | db-module     | |  | |               | |
| +---------------+ |  | +---------------+ |  | +---------------+ |
|                   |  |                   |  |                   |
| TinyNode CRDs     |  | TinyNode CRDs     |  | TinyNode CRDs     |
| TinyModule CRDs   |  | TinyModule CRDs   |  | TinyModule CRDs   |
+-------------------+  +-------------------+  +-------------------+
Component Details
Platform (Control Plane)
The platform provides the user interface and orchestration:
| Component | Technology | Purpose |
|---|---|---|
| Web Editor | Vue.js | Visual flow design and debugging |
| Manager API | Go + gRPC | Flow management, deployments, auth |
| Cluster Watcher | Go | Real-time sync with Kubernetes clusters |
| Database | PostgreSQL | Flow definitions, revisions, metadata |
| Cache | Redis | Sessions, job queues, real-time events |
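To make the Cluster Watcher's role concrete, here is a minimal sketch of how such a watcher could subscribe to TinyNode events with client-go's dynamic informer. The group and version come from the CRD examples later on this page; the resource name tinynodes, the resync interval, and the handler body are illustrative assumptions, not the actual Cluster Watcher implementation.
```go
// Hypothetical sketch of a TinyNode watcher (not the real Cluster Watcher code).
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Connect with the same kubeconfig the platform uses to reach the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// TinyNode GVR, taken from the CRD examples below (resource name assumed).
	tinyNodes := schema.GroupVersionResource{
		Group:    "operator.tinysystems.io",
		Version:  "v1alpha1",
		Resource: "tinynodes",
	}

	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 30*time.Second)
	informer := factory.ForResource(tinyNodes).Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			node := newObj.(*unstructured.Unstructured)
			// A real watcher would push status changes (port schemas, errors)
			// to the editor in real time; here we just log the event.
			fmt.Println("TinyNode updated:", node.GetNamespace()+"/"+node.GetName())
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop
}
```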
Modules (Execution Plane)
Modules run in your Kubernetes clusters as operators:
+---------------------------------------------------------------+
|                        MODULE OPERATOR                        |
|                                                               |
| +-----------------+  +-----------------+  +-----------------+ |
| |   Controller    |  |    Scheduler    |  |   gRPC Server   | |
| |                 |  |                 |  |                 | |
| | - Watch CRDs    |  | - Route messages|  | - Cross-module  | |
| | - Reconcile     |  | - Manage runners|  |   communication | |
| | - Update status |  | - Handle errors |  |                 | |
| +-----------------+  +-----------------+  +-----------------+ |
|                                                               |
| +-----------------------------------------------------------+ |
| |                    COMPONENT REGISTRY                      | |
| | +---------+ +---------+ +---------+ +---------+ ...        | |
| | | Router  | | Split   | | Ticker  | | Debug   |            | |
| | +---------+ +---------+ +---------+ +---------+            | |
| +-----------------------------------------------------------+ |
+---------------------------------------------------------------+
Each module:
- Deploys as a Kubernetes Deployment (via Helm)
- Watches TinyNode CRDs that reference its components
- Executes component logic when nodes receive messages (see the sketch after this list)
- Updates TinyNode status with port schemas and errors
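To make "executes component logic" concrete, here is a rough sketch of what a component could look like. The Handle signature and OutputFunc type are assumptions made for illustration (the real module SDK interface is not shown in this document); the port names mirror the TinyNode example further down, and the output callback corresponds to step 6 of the message flow below.
```go
// Hypothetical component sketch: the operator invokes Handle when a node built
// from this component receives a message, and the component emits results
// through an output callback (step 6, output(port, data)).
package example

import (
	"context"
	"strings"
)

// OutputFunc mirrors the output(port, data) callback described in the message flow.
type OutputFunc func(port string, data map[string]interface{}) error

// UppercaseComponent is an invented component used purely for illustration.
type UppercaseComponent struct{}

// Handle processes a message arriving on one of the component's input ports.
func (c *UppercaseComponent) Handle(ctx context.Context, port string, data map[string]interface{}, output OutputFunc) error {
	if port != "input" {
		return nil // ignore ports this component does not declare
	}
	msg, _ := data["message"].(string)

	// Emit the transformed payload on an output port; the edges attached to
	// "out_success" decide which node (and module) receives it next.
	return output("out_success", map[string]interface{}{
		"message": strings.ToUpper(msg),
	})
}
```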
Message Flow
When a flow executes, messages traverse the system:
+-----------------------------------------------------------------+
|                     MESSAGE EXECUTION FLOW                      |
+-----------------------------------------------------------------+
1. TRIGGER
   +------------------+
   | External Request |  (HTTP, webhook, timer, manual)
   +--------+---------+
            |
            v
2. SIGNAL CREATION
   +------------------+
   |  TinySignal CRD  |  (Created in Kubernetes)
   +--------+---------+
            |
            v
3. CONTROLLER PROCESSING
   +------------------+
   | Signal Controller|  (Leader pod only)
   +--------+---------+
            |
            v
4. SCHEDULER ROUTING
   +------------------+
   | Scheduler.Handle |  (Find target runner)
   +--------+---------+
            |
            v
5. COMPONENT EXECUTION
   +------------------+
   | Component.Handle |  (Your component code)
   +--------+---------+
            |
            v
6. OUTPUT CALLBACK
   +------------------+
   | output(port,data)|  (Send to output port)
   +--------+---------+
            |
            v
7. EDGE EVALUATION
   +------------------+
   |  Expression Eval |  (Transform data via expressions)
   +--------+---------+
            |
            +--- Same module?      --> Go channel (fast)
            |
            +--- Different module? --> gRPC call
            |
            v
8. NEXT NODE
   +-------------------+
   | Repeat from step 4|
   +-------------------+
Custom Resource Definitions
TinySystems uses CRDs to represent runtime state:
TinyNode
Represents a node instance:
```yaml
apiVersion: operator.tinysystems.io/v1alpha1
kind: TinyNode
metadata:
  name: router-abc123
  labels:
    tiny.systems/flow-id: "flow-xyz"
spec:
  module: github.com/tiny-systems/common-module
  component: router
  version: "1.0.0"
  edges:
    - id: "edge-1"
      port: "out_success"
      to: "next-node"
      toPort: "input"
      configuration:
        context: "{{$.result}}"
status:
  moduleName: common-module
  component: router
  ports:
    - name: input
      schema: {...}
  metadata:
    custom-key: "custom-value"
```
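The operator's Controller turns a spec like this into a running node and reports port schemas back through status. Below is a rough controller-runtime sketch of that reconcile loop, kept self-contained with unstructured objects; the portSchemasFor helper and the exact status layout are assumptions, not the actual operator code.
```go
// Hypothetical reconcile sketch for TinyNode (not the actual operator code).
package sketch

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

var tinyNodeGVK = schema.GroupVersionKind{
	Group:   "operator.tinysystems.io",
	Version: "v1alpha1",
	Kind:    "TinyNode",
}

type TinyNodeReconciler struct {
	client.Client
}

func (r *TinyNodeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	node := &unstructured.Unstructured{}
	node.SetGroupVersionKind(tinyNodeGVK)
	if err := r.Get(ctx, req.NamespacedName, node); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Read which component this node is an instance of.
	component, _, _ := unstructured.NestedString(node.Object, "spec", "component")

	// Look up the component's port schemas (assumed helper) and publish them in
	// the status so the editor can render and validate connections.
	if err := unstructured.SetNestedSlice(node.Object, portSchemasFor(component), "status", "ports"); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, r.Status().Update(ctx, node)
}

// portSchemasFor is a placeholder; a real module derives schemas from its
// registered components.
func portSchemasFor(component string) []interface{} {
	return []interface{}{map[string]interface{}{"name": "input"}}
}
```
TinyModule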
Registers a module for discovery:
```yaml
apiVersion: operator.tinysystems.io/v1alpha1
kind: TinyModule
metadata:
  name: common-module-v1
status:
  address: "common-module-v1:50051"
  version: "1.0.0"
  components:
    - name: router
    - name: split
    - name: ticker
```
TinySignal
Triggers node execution:
```yaml
apiVersion: operator.tinysystems.io/v1alpha1
kind: TinySignal
metadata:
  name: trigger-abc
spec:
  node: router-abc123
  port: input
  data:
    message: "Hello World"
```
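A manual trigger (step 1 of the message flow) amounts to creating such an object. Here is a rough sketch with the dynamic client, reusing the field names from the example above; the plural resource name tinysignals and the target namespace are assumptions.
```go
// Hypothetical sketch: trigger a node by creating a TinySignal (field names
// follow the example above; the resource name "tinysignals" is an assumption).
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	signals := schema.GroupVersionResource{
		Group:    "operator.tinysystems.io",
		Version:  "v1alpha1",
		Resource: "tinysignals",
	}

	signal := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "operator.tinysystems.io/v1alpha1",
		"kind":       "TinySignal",
		"metadata":   map[string]interface{}{"name": "trigger-abc"},
		"spec": map[string]interface{}{
			"node": "router-abc123",
			"port": "input",
			"data": map[string]interface{}{"message": "Hello World"},
		},
	}}

	// Creating the object is the trigger; the module's signal controller picks
	// it up and routes the payload to the node's input port. Namespace assumed.
	_, err = client.Resource(signals).Namespace("default").Create(context.TODO(), signal, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```
Cross-Module Communication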
When nodes in different modules need to communicate:
+-----------------+          gRPC          +-----------------+
|  common-module  | <--------------------> |   http-module   |
|                 |                        |                 |
|   Router node   |   ModuleService.Send   |   Server node   |
|   sends to      | ---------------------> |   receives      |
|   http-server   |                        |   message       |
+-----------------+                        +-----------------+
         |                                          |
         +----- ClientPool manages connections -----+
- Modules discover each other via TinyModule CRDs
- ClientPool maintains gRPC connections
- Messages are serialized for cross-module calls
- Same-module communication uses Go channels (no serialization)
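The sketch below shows the shape of that routing decision. The Scheduler, ModuleClient, and map fields are illustrative stand-ins for the real scheduler and ClientPool, not their actual APIs.
```go
// Hypothetical routing sketch: deliver a message to the next node either
// in-process (same module) or over gRPC (different module).
package sketch

import "context"

// Message is a simplified envelope: target node, target port, payload.
type Message struct {
	Node    string
	Port    string
	Payload []byte
}

// ModuleClient stands in for the generated gRPC client of the module service.
type ModuleClient interface {
	Send(ctx context.Context, msg Message) error
}

// Scheduler holds local runners and a pool of gRPC clients to other modules.
type Scheduler struct {
	localRunners map[string]chan Message // node name -> runner inbox (same module)
	clients      map[string]ModuleClient // module address -> gRPC client (ClientPool)
	nodeModules  map[string]string       // node name -> module address, from the CRDs
}

// Route implements the step-7 decision: Go channel when the target node lives
// in this module, gRPC call otherwise.
func (s *Scheduler) Route(ctx context.Context, msg Message) error {
	if inbox, ok := s.localRunners[msg.Node]; ok {
		inbox <- msg // same module: no serialization, just a channel send
		return nil
	}
	addr := s.nodeModules[msg.Node]
	return s.clients[addr].Send(ctx, msg) // different module: serialized gRPC call
}
```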
Scalability
Each module supports horizontal scaling:
+-------------------------------------------------------------+
|                 SCALED MODULE (3 replicas)                  |
|                                                             |
|    +---------+          +---------+          +---------+    |
|    | LEADER  |          | READER  |          | READER  |    |
|    |         |          |         |          |         |    |
|    | Updates |          | Watches |          | Watches |    |
|    |   CRs   |          |   CRs   |          |   CRs   |    |
|    +----+----+          +----+----+          +----+----+    |
|         |                    |                    |         |
|         +--------------------+--------------------+         |
|                              |                              |
|                     Kubernetes Service                      |
|                      (load balancing)                       |
+-------------------------------------------------------------+
- Leader election via Kubernetes Leases (see the sketch after this list)
- Only leader updates CRD status
- All replicas handle incoming messages
- State shared via TinyNode metadata
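Here is a minimal sketch of wiring this split up with controller-runtime's Lease-based leader election; the lease name and namespace are placeholders, and the real operator's manager options are not shown in this document.
```go
// Hypothetical sketch: enable Lease-based leader election so that only one
// replica runs the controllers that write CRD status, while every replica
// keeps serving messages.
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:          true,
		LeaderElectionID:        "tiny-module-leader", // placeholder lease name
		LeaderElectionNamespace: "tiny-system",        // placeholder namespace
	})
	if err != nil {
		panic(err)
	}

	// Controllers registered on the manager (e.g. a TinyNode reconciler) only
	// run on the elected leader; the gRPC server for incoming messages would be
	// started outside the leader-gated runnables so all replicas handle traffic.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```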
Next Steps
- Quick Start - Create your first flow
- Developer Guide - Build custom modules