
Under the hood

Flow-based programming was invented in the early 1970s, and it fits naturally into the kind of event-driven systems we like to build. Each node in a graph follows the Unix philosophy: it does one specific job, and does it well.

Glossary

  • Node - an instance of a particular component on a flow.
  • Flow - a set of nodes connected to each other.
  • Edge - a connection between two nodes.
  • Component - a piece of code that serves a single purpose. A component has input and output ports.
  • Module - a program containing a set of components; modules can be deployed to the cluster.
  • Project - a set of flows. A project is connected to a single cluster.
  • Cluster - a Kubernetes cluster.

Here is the Go interface that each component must implement:

```go
// Component interface
type Component interface {
  // GetInfo provides basic information about the component: its name, description and tags
  GetInfo() ComponentInfo
  // Handle handles incoming messages; implement your own logic here
  Handle(ctx context.Context, output Handler, port string, message interface{}) error
  // Ports returns the list of the component's ports
  Ports() []NodePort
  // Instance creates a new instance with default settings
  Instance() Component
}
```

Where NodePort is:

```go
type NodePort struct {
  // Source is true if this is an input port
  Source        bool
  // Status: if true, information about this port is displayed on the node's dashboard
  Status        bool
  // Settings: if true, the structure of this port is presented as a settings form for the node
  Settings      bool
  // Position determines which side of the node this port is displayed on
  Position      Position
  // Name of the port, e.g. "req"
  Name          string
  // Human-readable name of the port, e.g. "Request"
  Label         string
  // Instance of a custom struct responsible for the data exchange
  Configuration interface{}
}
```

Each instance of a component (a node) is represented in the cluster as a custom resource of the tinynode CRD. We use the operator pattern to reconcile tinynodes in the cluster.

```go
// TinyNode CRD specification
type TinyNodeSpec struct {

	// Module name - container image repo + tag
	// +kubebuilder:validation:Required
	Module string `json:"module"`

	// Component name within the module
	// +kubebuilder:validation:Required
	Component string `json:"component"`

	// Port configurations
	// +kubebuilder:validation:Optional
	Ports []TinyNodePortConfig `json:"ports"`

	// Edges along which messages are sent next
	// +kubebuilder:validation:Optional
	Edges []TinyNodeEdge `json:"edges"`
}
```

Creating a new node on a flow creates a new custom resource in the cluster, which starts a new instance (a goroutine) inside the corresponding module's container, so a single container may host hundreds of nodes.
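A node on a flow therefore corresponds to a manifest roughly like the following sketch. The apiVersion, names, and image reference are hypothetical; only the spec fields mirror TinyNodeSpec above:

```yaml
apiVersion: operator.tinysystems.io/v1alpha1   # hypothetical group/version
kind: TinyNode
metadata:
  name: my-flow-echo-node
spec:
  # Module: container image repo + tag (illustrative value)
  module: registry.example.com/tinysystems/common-module:v1.0.0
  # Component name within the module
  component: echo
  ports: []
  edges: []
```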

under-the-hood-crd.svg


Instances of components within the same module communicate with each other over Go channels. Instances of components from different modules communicate using the gRPC protocol.

cross-module-communication.svg
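The in-module path can be sketched as two goroutines joined by a channel. This is a simplified illustration of the idea, not the actual Tiny Systems implementation; the `Message` type is hypothetical:

```go
package main

import "fmt"

// Message travelling along an edge of the flow (hypothetical type).
type Message struct {
	Port string
	Data interface{}
}

func main() {
	// Within a single module, an edge between two node instances can be
	// modelled as a Go channel.
	edge := make(chan Message)
	done := make(chan struct{})

	// Downstream node: receives messages on its input port.
	go func() {
		for m := range edge {
			fmt.Printf("received %v on port %q\n", m.Data, m.Port)
		}
		close(done)
	}()

	// Upstream node: emits a message along the edge.
	edge <- Message{Port: "in", Data: "hello"}
	close(edge)
	<-done
}
```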

We use Helm to install modules. With the help of the Tiny Systems Operator we can deploy any kind of module.