examples/replication/README.md (+9 −6)

@@ -1,5 +1,7 @@
# Replication examples

+These examples demonstrate the configuration and operation of a replica network using SwarmNL, under both the Eventual Consistency and Strong Consistency models.
+
## Eventual consistency

To run this example, cd into the root of this directory and run the following commands in separate terminals to launch three nodes:
@@ -38,7 +40,7 @@ Then the third node:
repl Papayas
```

-Then in node 3, running the following command will return the values in its replication buffer (which contains data gotten from node 1 and 2):
+Then in node 3, running the following command will return the values in its replication buffer (which contains data received from nodes 1 and 2):

```bash
read
@@ -77,7 +79,7 @@ cargo run --features=second-node
And the third node:

```bash
-cargo run --features=first-node
+cargo run --features=third-node
```

Now, submit the following commands to replicate data from nodes in the network, starting with the first node:
@@ -122,7 +124,7 @@ Hit `Ctrl+D` to exit.

## Peer cloning

-In this example, we expect a node to clone the data in the buffer of the specified peer ID when it calls `clone`.
+In this example, we expect a node to clone the data in the buffer of the specified replica peer when it calls `clone`.

To run this example, cd into the root of this directory and launch the following commands in separate terminals:

@@ -139,7 +141,7 @@ cargo run --features=second-node
And the third node:

```bash
-cargo run --features=first-node
+cargo run --features=third-node
```

Now, submit the following commands to replicate data from nodes in the network, starting with the first node:
@@ -162,11 +164,12 @@ repl Papayas

Then in node 3, run the following command to clone node 2's buffer (by passing in node 2's peer ID):

…

-We expect node 2 to contain "Papayas" and "Apples" in its buffer which you can verify by submitting `read` to stdin from node 3's terminal to read its buffer content:
+We expect node 2 to contain "Papayas" and "Apples" in its buffer.
+
+This can be verified by submitting `read` to stdin from node 3's terminal to read its buffer content:
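
As a mental model of what `clone` does here, the sketch below merges a replica peer's buffer into the local one. This is not SwarmNL's actual API; the `BufferEntry` type and `clone_from_peer` function are assumed names, and the `BTreeSet` buffer follows the description in research.md further down:

```rust
use std::collections::BTreeSet;

/// Hypothetical buffer entry; deriving `Ord` sorts by Lamport clock
/// first. Not SwarmNL's real type.
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Debug)]
struct BufferEntry {
    lamport_clock: u64, // ordering key
    data: String,       // replicated payload
}

/// Merge a cloned copy of a replica peer's buffer into the local one.
/// `BTreeSet` deduplicates entries and keeps them in clock order.
fn clone_from_peer(local: &mut BTreeSet<BufferEntry>, cloned: BTreeSet<BufferEntry>) {
    local.extend(cloned);
}

fn main() {
    let mut node3 = BTreeSet::new();
    let node2 = BTreeSet::from([
        BufferEntry { lamport_clock: 1, data: "Apples".into() },
        BufferEntry { lamport_clock: 2, data: "Papayas".into() },
    ]);
    clone_from_peer(&mut node3, node2);
    assert_eq!(node3.len(), 2); // node 3 now holds Apples and Papayas
}
```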
examples/sharding/README.md (+8 −7)

@@ -19,7 +19,7 @@ cargo run --features=second-node
And the third node:

```bash
-cargo run --features=first-node
+cargo run --features=third-node
```

In the separate terminals where each node is running, submit the following commands:
@@ -34,21 +34,21 @@ shard mars mars_creatures.txt Inkls
```

According to the configuration in the example, nodes 1 and 2 belong to the shard with key "mars". Node 3 belongs to a separate shard with key "earth".
-To read the local data stored on node 1 (mars shard). Run the following command:
+To read the local data stored on node 1 ("mars" shard), run the following command from the first terminal:

```bash
read
```

-After that, we would query the "earth" shard for the data it holds. To do that, please run the following command:
+After that, we will query the "earth" shard for the data it holds. To do that, run the following command:

```bash
fetch earth earth_creatures.txt
```

Here, we are sending a data request to the sharded network, telling it to read the file "earth_creatures.txt" on the shard "earth".

-From node 3's terminal, you can also read what is stored in node 3 by submitting the `read` command. To request data stored in the "mars" shard, kindly run the following:
+From node 3's terminal, you can also read what is stored in node 3 by submitting the `read` command. To request data stored in the "mars" shard, run the following:

```bash
fetch mars mars_creatures.txt
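```

For intuition about how a `fetch` lands on the right nodes, here is a toy routing table mirroring this example's layout (nodes 1 and 2 in "mars", node 3 in "earth"). The peer names and functions are illustrative only, not SwarmNL's API:

```rust
use std::collections::HashMap;

/// Toy routing table: shard key -> peers holding that shard.
fn shard_table() -> HashMap<&'static str, Vec<&'static str>> {
    HashMap::from([
        ("mars", vec!["node-1", "node-2"]),
        ("earth", vec!["node-3"]),
    ])
}

/// Route a `fetch <shard> <file>` to some peer in the target shard.
fn route_fetch(table: &HashMap<&'static str, Vec<&'static str>>, shard: &str) -> Option<&'static str> {
    table.get(shard).and_then(|peers| peers.first().copied())
}

fn main() {
    let table = shard_table();
    // `fetch earth earth_creatures.txt` issued from node 1 or 2 must
    // leave the local "mars" shard and be answered by node 3.
    assert_eq!(route_fetch(&table, "earth"), Some("node-3"));
}
```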
@@ -57,7 +57,7 @@ fetch mars mars_creatures.txt
In node 2's terminal, you can also run the following:

```bash
-fetch earth earth_creatures.txt
+fetch earth earth_creatures.txt
```

## Run with Docker
@@ -114,9 +114,10 @@ To read data stored locally on a particular node, run the following command:
```bash
read
```
-Please note that once read is called, all the available data is removed from the replica buffer and consumed.
+Note that once `read` is called, all the available data is removed from local storage and consumed.

-TO fetch a song placed in a particular shard, please run the following:
+To fetch a "song" placed in a particular shard, run the following:
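
The consuming `read` described above can be pictured as a draining take on the node's local store. A minimal sketch with assumed types (SwarmNL's internals are not shown here):

```rust
use std::collections::BTreeSet;

/// Hypothetical local shard storage; the real node state is richer.
struct LocalStore {
    entries: BTreeSet<String>,
}

impl LocalStore {
    /// A consuming `read`: returns everything currently stored and
    /// leaves the store empty, matching the note above.
    fn read(&mut self) -> BTreeSet<String> {
        std::mem::take(&mut self.entries)
    }
}

fn main() {
    let mut store = LocalStore {
        entries: BTreeSet::from(["Inkls".to_string()]),
    };
    assert_eq!(store.read().len(), 1);
    // A second read returns nothing: the data was consumed.
    assert!(store.read().is_empty());
}
```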
research.md (+3 −3)

@@ -57,7 +57,7 @@ SwarmNL simplifies data replication across nodes, ensuring consistency and reliability
The raw data received from a replica peer. This field contains a `StringVector`, which is a vector of strings representing the replicated payload.

**`lamport_clock`**
-A critical synchronization and ordering primitive in distributed systems. The Lamport clock is used internally in the replication buffer queue to order messages and data across the replication network. The clock is incremented whenever a node receives a message or sends data for replication. Each node maintains its own Lamport clock, updating it with the highest value received in messages. The replication buffer is implemented as a `BTreeSet`, ordered by this clock.
+A critical synchronization and ordering primitive in distributed systems. The Lamport clock is used internally in the replication buffer queue to order messages and data across the replica network. The clock is incremented whenever a node receives a message or sends data for replication. Each node maintains its own Lamport clock, updating it with the highest value received in messages. The replication buffer is implemented as a `BTreeSet`, ordered by this clock.

```rust
/// Implement Ord.
…
```
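
The hunk cuts off at the start of that `Ord` implementation. As a self-contained sketch of the idea (field and type names are assumptions based on the descriptions above, not SwarmNL's exact code), ordering buffer entries by Lamport clock so a `BTreeSet` yields them in causal order might look like:

```rust
use std::cmp::Ordering;
use std::collections::BTreeSet;

/// Hypothetical buffer entry; field names follow the descriptions
/// above but are assumptions, not SwarmNL's exact definitions.
#[derive(PartialEq, Eq, Debug)]
struct ReplBufferData {
    data: Vec<String>,  // the `StringVector` payload
    lamport_clock: u64, // ordering key for the buffer
}

/// Order entries by Lamport clock so a `BTreeSet` yields them in
/// causal order; ties break on the payload to keep the order total.
impl Ord for ReplBufferData {
    fn cmp(&self, other: &Self) -> Ordering {
        self.lamport_clock
            .cmp(&other.lamport_clock)
            .then_with(|| self.data.cmp(&other.data))
    }
}

impl PartialOrd for ReplBufferData {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut buffer = BTreeSet::new();
    buffer.insert(ReplBufferData { data: vec!["Papayas".into()], lamport_clock: 2 });
    buffer.insert(ReplBufferData { data: vec!["Apples".into()], lamport_clock: 1 });
    // Iteration follows clock order: Apples (1), then Papayas (2).
    for entry in &buffer {
        println!("{} -> {:?}", entry.lamport_clock, entry.data);
    }
}
```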
@@ -142,7 +142,7 @@ Replication is governed by key primitives that define the behavior of individual
The interval (in seconds) between synchronization attempts for data in the buffer. This ensures efficient utilization of network resources while maintaining data freshness.

**`consistency_model`**
-Defines the level of consistency required for data replication and the behaviour to ensure it. This must be uniform across all nodes in the replication network to prevent inconsistent or undefined behavior.
+Defines the level of consistency required for data replication and the behavior required to ensure it. This must be uniform across all nodes in the replica network to prevent inconsistent or undefined behavior.

**`data_aging_period`**
The waiting period (in seconds) after data is saved into the buffer before it is eligible for synchronization. This allows for additional processing or validations if needed.
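
Taken together, these primitives suggest a configuration shape like the sketch below. Field and type names are assumptions for illustration (the sync-interval field's real name is not shown in this hunk), not SwarmNL's actual config API:

```rust
/// Illustrative consistency models named in these examples.
#[allow(dead_code)]
#[derive(Clone, Copy, Debug)]
enum ConsistencyModel {
    Eventual,
    Strong,
}

/// Hypothetical replication config mirroring the primitives above.
#[derive(Debug)]
struct ReplConfig {
    /// Interval (seconds) between synchronization attempts.
    sync_interval: u64,
    /// Must be uniform across all nodes in the replica network.
    consistency_model: ConsistencyModel,
    /// Waiting period (seconds) before buffered data may synchronize.
    data_aging_period: u64,
}

fn main() {
    let cfg = ReplConfig {
        sync_interval: 5,
        consistency_model: ConsistencyModel::Eventual,
        data_aging_period: 2,
    };
    println!("{cfg:?}");
}
```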
@@ -420,7 +420,7 @@ All nodes within a shard act as replicas of each other and synchronize their data

By combining replication and sharding, SwarmNL offers a scalable and fault-tolerant framework for managing decentralized networks while giving developers the freedom to design shard configurations that align with their use case.

-### **No Central point of failure**
+### No central point of failure

SwarmNL is designed with resilience and fault tolerance at its core, ensuring that the network has no single point of failure. This is achieved by eliminating reliance on coordinator-based algorithms or centralized decision-making mechanisms. Instead, SwarmNL leverages fully decentralized algorithms to handle all network operations, enhancing the robustness and scalability of the system.
0 commit comments