
Architecture

This section describes Memphis' architecture.

Connectivity Diagram

A Memphis deployment comprises three components:
1. UI - The dashboard of Memphis.
2. Broker - The messaging queue itself. The Memphis broker is a fork of NATS.io, an existing and battle-tested messaging queue, with Memphis-specific improvements and tunings.
3. MongoDB - Used only for UI state persistence, not for storing messages; it is not involved in data traffic or standard broker behavior and holds UI state and metadata only. It will be replaced in a coming version.

Consumers are pull-based; both the pull interval and the batch size are configurable. Each consumer will consume all of the messages residing inside a station. If a client requires horizontal scaling, with messages split across different members, the consumers must be created within the same consumer group (see the sketch below).
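
As an illustration, here is a minimal sketch of creating pull-based consumers in the same consumer group, assuming the Python SDK (memphis-py); the host, credentials, and station/consumer names are placeholders:

```python
import asyncio
from memphis import Memphis

async def main():
    memphis = Memphis()
    await memphis.connect(host="<memphis-host>",
                          username="<app-user>",
                          connection_token="<token>")

    # Two consumers sharing one consumer group: the station's messages
    # are split across the group's members, giving horizontal scale.
    for i in range(2):
        await memphis.consumer(
            station_name="<station-name>",
            consumer_name=f"worker-{i}",
            consumer_group="workers",  # same group name on every member
            pull_interval_ms=1000,     # configurable pull interval
            batch_size=10,             # configurable batch size
        )

asyncio.run(main())
```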

Cluster-mode component diagram (for production)

Full Kubernetes-based layout.

Ordering

Ordering is guaranteed only while working with a single consumer group.

Mirroring

Memphis is designed to run as a distributed cluster for a highly available and scalable system. Raft, the consensus algorithm responsible for atomicity within Memphis, does not require a witness or a standalone quorum, unlike Apache ZooKeeper, which is widely used by other projects such as Kafka. Raft is also equivalent to Paxos in fault tolerance and performance.
To ensure data consistency and zero message loss across complete broker restarts, Memphis brokers should run on different nodes, and Memphis tries to place them that way automatically. To comply with Raft's quorum requirement of ⌊cluster size / 2⌋ + 1, three Memphis brokers are deployed in a Kubernetes environment; three is the minimum number of brokers needed to tolerate at least one node failure (see the sketch below).
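
To make the quorum arithmetic concrete, here is a short plain-Python sketch of the majority rule described above:

```python
def raft_quorum(cluster_size: int) -> int:
    # Raft requires a strict majority of voters: floor(n / 2) + 1.
    return cluster_size // 2 + 1

for brokers in (1, 3, 5):
    quorum = raft_quorum(brokers)
    print(f"{brokers} brokers -> quorum of {quorum}, "
          f"tolerates {brokers - quorum} failure(s)")
# 1 brokers -> quorum of 1, tolerates 0 failure(s)
# 3 brokers -> quorum of 2, tolerates 1 failure(s)
# 5 brokers -> quorum of 3, tolerates 2 failure(s)
```

With three brokers, any two form a quorum, so the cluster keeps operating through a single node failure; with one broker, any failure halts it.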

Internal Protocol

Memphis forked and modified NATS as its core queue.
The NATS streaming protocol sits atop the core NATS protocol and uses Google's Protocol Buffers. Protocol buffer messages are marshaled into bytes and published as Memphis messages on the specific station.
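
To illustrate the marshaling step, here is a hedged Python sketch using the standard protobuf runtime; `pub_msg_pb2` and `PubMsg` are hypothetical stand-ins for the actual generated types of the streaming protocol:

```python
# Hypothetical module generated by protoc from the streaming protocol's
# .proto files; the real message types live in the NATS streaming sources.
from pub_msg_pb2 import PubMsg

msg = PubMsg(subject="<station-subject>", data=b"payload bytes")

# Marshal the protocol buffer message into bytes...
wire = msg.SerializeToString()

# ...which are what actually gets published on the station.
# A receiver parses the bytes back into a message:
parsed = PubMsg()
parsed.ParseFromString(wire)
```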

Deployment sequence

Requirements

Kubernetes

Minimum Requirements (No HA)

| Resource | Quantity |
| --- | --- |
| K8S Nodes | 1 |
| CPU | 2 CPU |
| Memory | 4GB RAM |
| Storage | 12GB PVC |

Recommended Requirements (HA)

| Resource | Minimum Quantity |
| --- | --- |
| K8S Nodes | 3 |
| CPU | 4 CPU |
| Memory | 8GB RAM |
| Storage | 12GB PVC per node |

Docker

Requirements (No HA)

| Resource | Quantity |
| --- | --- |
| OS | Mac / Windows / Linux |
| CPU | 1 CPU |
| Memory | 4GB |
| Storage | 6GB |

Delivery Guarantee

  • At least once
This is achieved by combining persistence of published messages to the station with consumer-side tracking of the delivery and acknowledgement of each individual message as clients receive and process it (see the sketch after this list).
  • Exactly once
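
The at-least-once contract shows up in client code as an explicit acknowledgement per message. A hedged sketch, again assuming the Python SDK (memphis-py) and a `consumer` created as in the sketch above; a message that is not acked in time is redelivered:

```python
async def msg_handler(msgs, error, context):
    if error:
        print(error)
        return
    for msg in msgs:
        handle(msg.get_data())  # handle() is application code (assumed)
        await msg.ack()         # without an ack, the broker redelivers

consumer.consume(msg_handler)
```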