# Cross-Module Communication
When messages need to reach components in different modules, TinySystems uses gRPC for transport. This page covers how cross-module communication works.
## Overview
```
┌─────────────────────────────────────────────────────────────────────────────┐
│                         CROSS-MODULE COMMUNICATION                          │
└─────────────────────────────────────────────────────────────────────────────┘

common-module Pod                           http-module Pod
┌────────────────────────────┐               ┌────────────────────────────┐
│                            │               │                            │
│  ┌──────────────────────┐  │               │  ┌──────────────────────┐  │
│  │ Router Component     │  │               │  │ HTTP Server          │  │
│  │                      │  │               │  │                      │  │
│  │ output() ────────────│──│──────────────▶│──│▶ Handle()            │  │
│  │                      │  │      gRPC     │  │                      │  │
│  └──────────────────────┘  │               │  └──────────────────────┘  │
│             │              │               │                            │
│             ▼              │               │                            │
│  ┌──────────────────────┐  │               │  ┌──────────────────────┐  │
│  │ Scheduler            │  │               │  │ Scheduler            │  │
│  │                      │  │               │  │                      │  │
│  │ Not local?           │  │               │  │ Receives message     │  │
│  │ → ClientPool         │  │               │  │ → Route to instance  │  │
│  └──────────────────────┘  │               │  └──────────────────────┘  │
│             │              │               │             ▲              │
│             ▼              │               │             │              │
│  ┌──────────────────────┐  │               │  ┌──────────────────────┐  │
│  │ ClientPool           │  │               │  │ gRPC Server          │  │
│  │                      │  │               │  │                      │  │
│  │ http-module-v1 ──────│──│── grpc.Dial ──│──│▶ Listen :50051       │  │
│  └──────────────────────┘  │               │  └──────────────────────┘  │
│                            │               │                            │
└────────────────────────────┘               └────────────────────────────┘
```

## When Cross-Module Happens
The Scheduler determines if a message is local or remote:
```go
func (s *Scheduler) Handle(ctx context.Context, msg runner.Msg) error {
	// Only the node name is needed for routing; the port stays inside msg.To
	nodeName, _ := parseDestination(msg.To)

	// 1. Check local instances
	if instance, exists := s.instancesMap.Get(nodeName); exists {
		return instance.MsgHandler(ctx, msg)
	}

	// 2. Not local - find the module that owns this node
	moduleName := s.findModuleForNode(nodeName)
	if moduleName == "" {
		return fmt.Errorf("no module found for node: %s", nodeName)
	}

	// 3. Send via gRPC
	return s.sendViaGRPC(ctx, moduleName, msg)
}
```
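`Handle` relies on a `parseDestination` helper that isn't shown here. Given the `node-name.port-name` address format used in the message schema below, a minimal sketch might look like this (the last-dot convention is an assumption, not the actual TinySystems implementation):

```go
// Sketch: split "http-server-abc123.request" into node and port.
// Assumes the port is everything after the last "." - an assumption.
func parseDestination(to string) (nodeName, portName string) {
	idx := strings.LastIndex(to, ".")
	if idx < 0 {
		return to, "" // no port in the address
	}
	return to[:idx], to[idx+1:]
}
```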
## Module Resolution

### From TinyNode
Each TinyNode specifies its module:
```yaml
apiVersion: operator.tinysystems.io/v1alpha1
kind: TinyNode
metadata:
  name: http-server-abc123
spec:
  module: http-module-v1 # Module identifier
  component: github.com/tiny-systems/http-module/server
```
### Lookup Table

The Scheduler maintains a node-to-module mapping:
```go
type Scheduler struct {
	nodeModuleMap map[string]string // node-name → module-name
}

func (s *Scheduler) findModuleForNode(nodeName string) string {
	return s.nodeModuleMap[nodeName]
}
```

This map is updated when TinyNode CRs are reconciled.
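For illustration, a controller-runtime reconciler could keep the map current roughly like this (a sketch; `TinyNodeReconciler`, `UpdateNodeModule`, and `RemoveNode` are hypothetical names, not the actual operator code):

```go
// Sketch: refresh the node → module mapping whenever a TinyNode changes.
func (r *TinyNodeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var node v1alpha1.TinyNode
	if err := r.Get(ctx, req.NamespacedName, &node); err != nil {
		// TinyNode was deleted - forget its module assignment
		r.Scheduler.RemoveNode(req.Name)
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Record (or refresh) which module owns this node
	r.Scheduler.UpdateNodeModule(node.Name, node.Spec.Module)
	return ctrl.Result{}, nil
}
```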
## Sending Cross-Module Messages
### Serialize Message
```go
func (s *Scheduler) sendViaGRPC(ctx context.Context, moduleName string, msg runner.Msg) error {
	// Get connection from pool
	conn, ok := s.clientPool.Get(moduleName)
	if !ok {
		return fmt.Errorf("module not discovered: %s", moduleName)
	}

	// Create gRPC client
	client := pb.NewModuleServiceClient(conn)

	// Serialize message data
	data, err := serialize(msg.Data)
	if err != nil {
		return fmt.Errorf("serialization failed: %w", err)
	}

	// Send via gRPC
	resp, err := client.Send(ctx, &pb.Message{
		To:   msg.To,
		From: msg.From,
		Data: data,
	})
	if err != nil {
		return fmt.Errorf("gRPC send failed: %w", err)
	}
	if resp.Error != "" {
		return errors.New(resp.Error)
	}
	return nil
}
```
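`sendViaGRPC` delegates to a `serialize` helper (with a matching `deserialize` on the receiving side). Since the message format below allows JSON or msgpack payloads, a minimal JSON-based sketch could be:

```go
// Sketch: JSON-based (de)serialization for the Message.data payload.
// The real module may use msgpack instead; both fit the bytes field.
func serialize(v any) ([]byte, error) {
	return json.Marshal(v)
}

func deserialize(data []byte) (any, error) {
	var v any
	if err := json.Unmarshal(data, &v); err != nil {
		return nil, err
	}
	return v, nil
}
```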
### Message Format

```protobuf
message Message {
  string to = 1;                    // "node-name.port-name"
  string from = 2;                  // "source-node.source-port"
  bytes data = 3;                   // JSON or msgpack serialized
  map<string, string> metadata = 4; // Context metadata
}
```
## Receiving Cross-Module Messages

### gRPC Server Handler
```go
func (s *Server) Send(ctx context.Context, msg *pb.Message) (*pb.Response, error) {
	// Deserialize data
	data, err := deserialize(msg.Data)
	if err != nil {
		return &pb.Response{Error: err.Error()}, nil
	}

	// Propagate context metadata
	ctx = withMetadata(ctx, msg.Metadata)

	// Route through Scheduler
	err = s.scheduler.Handle(ctx, runner.Msg{
		To:   msg.To,
		From: msg.From,
		Data: data,
	})

	// Component errors travel in the response body, not as gRPC errors,
	// so the caller can distinguish transport failures from remote failures
	if err != nil {
		return &pb.Response{Error: err.Error()}, nil
	}
	return &pb.Response{}, nil
}
```
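The `withMetadata` helper isn't defined on this page. One plausible implementation (an assumption about how TinySystems propagates context) attaches the map as incoming gRPC metadata so downstream code can read it with `metadata.FromIncomingContext`:

```go
// Sketch: expose the message's metadata map to downstream handlers.
// Whether TinySystems uses gRPC metadata or a custom context key here
// is an assumption.
func withMetadata(ctx context.Context, md map[string]string) context.Context {
	return metadata.NewIncomingContext(ctx, metadata.New(md))
}
```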
## Blocking Across Modules

Cross-module calls are also blocking:
```
common-module                Kubernetes                 http-module
      │                       Network                        │
      │                           │                          │
      │ output("http-server.req") │                          │
      │ ══════════════════════════╪═══════════════════════════╪═══╗
      │                           │  gRPC Send                │   ║
      │                           │ ─────────────────────▶    │   ║
      │                           │                           │   ║
      │                           │          Handle()         │   ║
      │                           │             │             │   ║
      │                           │             ▼             │   ║
      │                           │        (processing)       │   ║
      │                           │             │             │   ║
      │                           │             ▼             │   ║
      │                           │           return          │   ║
      │                           │    ◀─────────────────     │   ║
      │ return                    │  gRPC Response            │   ║
      │ ◀═════════════════════════╪═══════════════════════════╪═══╝
      │                           │                           │
      ▼                           ▼                           ▼
```

## Error Handling
### Network Errors
```go
resp, err := client.Send(ctx, msg)
if err != nil {
	// Check for specific gRPC errors
	if status.Code(err) == codes.Unavailable {
		// Module is down - could retry
		return fmt.Errorf("module unavailable: %s", moduleName)
	}
	return err
}
```
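Because `codes.Unavailable` is usually transient (a Pod restarting, a rollout in progress), a caller might wrap `Send` in a small retry loop. A sketch, assuming a hypothetical `sendWithRetry` helper; TinySystems itself may or may not retry:

```go
// Sketch: retry only on Unavailable, with a short linear backoff.
func sendWithRetry(ctx context.Context, client pb.ModuleServiceClient, msg *pb.Message) (*pb.Response, error) {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err := client.Send(ctx, msg)
		if err == nil {
			return resp, nil
		}
		if status.Code(err) != codes.Unavailable {
			return nil, err // not retryable
		}
		lastErr = err
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(time.Duration(attempt) * 100 * time.Millisecond):
		}
	}
	return nil, fmt.Errorf("module still unavailable: %w", lastErr)
}
```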
### Remote Errors

```go
if resp.Error != "" {
	// Error from remote component
	return fmt.Errorf("remote error: %s", resp.Error)
}
```
### Timeout

```go
// Set timeout for cross-module calls
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()

resp, err := client.Send(ctx, msg)
if err != nil {
	if ctx.Err() == context.DeadlineExceeded {
		return fmt.Errorf("cross-module call timed out")
	}
	return err
}
```
## Monitoring

### Metrics
Track cross-module communication:
```go
var (
	crossModuleCalls = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "tinysystems_cross_module_calls_total",
		},
		[]string{"source_module", "target_module", "status"},
	)
	crossModuleLatency = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "tinysystems_cross_module_latency_seconds",
			Buckets: prometheus.DefBuckets,
		},
		[]string{"source_module", "target_module"},
	)
)
```
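Assuming the collectors above are registered (for example via `prometheus.MustRegister`), a hypothetical `observeCall` wrapper shows where `sendViaGRPC` might record them:

```go
// Sketch: record one cross-module call with its status and latency.
// observeCall and the label values are illustrative, not TinySystems API.
func observeCall(sourceModule, targetModule string, call func() error) error {
	start := time.Now()
	err := call()

	status := "ok"
	if err != nil {
		status = "error"
	}
	crossModuleCalls.WithLabelValues(sourceModule, targetModule, status).Inc()
	crossModuleLatency.WithLabelValues(sourceModule, targetModule).Observe(time.Since(start).Seconds())
	return err
}
```

A call site could then wrap the send, e.g. `observeCall(localModule, moduleName, func() error { return s.sendViaGRPC(ctx, moduleName, msg) })`, though where TinySystems actually hooks this in is an assumption.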
### Logging

```go
log.Info("cross-module message sent",
	"from", msg.From,
	"to", msg.To,
	"targetModule", moduleName,
	"durationMs", duration.Milliseconds(),
)
```
## Best Practices

### 1. Minimize Cross-Module Calls
Group related components in the same module when possible:
Good:

```
┌─────────────────────────────────────┐
│             http-module             │
│  ┌─────────┐      ┌─────────┐       │
│  │ Server  │─────▶│ Parser  │       │
│  └─────────┘      └─────────┘       │
│       │                             │
│       ▼                             │
│  ┌─────────┐                        │
│  │ Router  │──▶ (to other modules)  │
│  └─────────┘                        │
└─────────────────────────────────────┘
```

Avoid:

```
┌──────────┐  gRPC   ┌──────────┐  gRPC   ┌──────────┐
│ server-  │ ──────▶ │ parser-  │ ──────▶ │ router-  │
│ module   │         │ module   │         │ module   │
└──────────┘         └──────────┘         └──────────┘
```

### 2. Handle Failures Gracefully
```go
func (c *Client) Handle(ctx context.Context, output module.Handler, port string, msg any) error {
	result, err := c.callRemoteService(ctx, msg)
	if err != nil {
		// Route to error port
		return output(ctx, "error", ErrorOutput{
			Error:   err.Error(),
			Request: msg,
		})
	}
	return output(ctx, "success", result)
}
```

### 3. Set Appropriate Timeouts
```go
// Consider downstream processing time
ctx, cancel := context.WithTimeout(ctx, 60*time.Second)
defer cancel()
```

### 4. Monitor Latency
Track cross-module latency separately from internal routing:
```
┌────────────────────────────────────────────────────────┐
│             Cross-Module Latency Dashboard             │
│                                                        │
│  common-module → http-module:  avg 15ms, p99 45ms      │
│  http-module   → db-module:    avg 25ms, p99 80ms      │
│  common-module → api-module:   avg 10ms, p99 30ms      │
└────────────────────────────────────────────────────────┘
```

## Next Steps
- Client Pool - Connection management
- gRPC Fundamentals - gRPC details
- Module Discovery - How modules find each other