Commit f801b2e (merge commit, 2 parents: 8e4f4ca + 9049b7d)

23 files changed: +90 -70 lines changed

.gitignore (+4 -1)

```diff
@@ -14,4 +14,7 @@ Cargo.lock
 *.pdb
 
 # Ignore the client dir. It's for local experimental testing
-# client
+# client
+
+# Ignore VS Code settings
+.DS_Store
```

examples/http-client/README.md (+1 -1)

````diff
@@ -34,7 +34,7 @@ repl amazing
 ```
 
 When the replicated data arrives on a node and reaches the application layer, it is immediately sent to a remote server.
-You should see the message `Successfully sent POST request.` printed to the terminal afte this operation.
+You should see the message `Successfully sent POST request.` printed to the terminal after this operation.
 
 ## Run with Docker
````

examples/http-client/src/main.rs (+1 -1)

```diff
@@ -111,7 +111,7 @@ async fn run_node(
 	let mut node = setup_node(ports_1, &keypair[..], bootnodes).await;
 
 	// Join replica network
-	println!("Joining replication network...");
+	println!("Joining replica network...");
 	if let Ok(_) = node.join_repl_network(REPL_NETWORK_ID.into()).await {
 		println!("Replica network successfully joined");
 	} else {
```

examples/ipfs/README.md (+2 -2)

````diff
@@ -17,7 +17,7 @@ cargo run --features=second-node
 And the third node:
 
 ```bash
-cargo run --features=first-node
+cargo run --features=third-node
 ```
 
 In the separate terminals where each node is running, submit the following commands:
@@ -31,7 +31,7 @@ repl Oranges
 repl Papayas
 ```
 When the replicated data arrived on a node and reaches the application layer, it is immediately uploaded to the IPFS network for persistence.
-After the operation, should see a message printed to the stdout of nodes 1 and 2 such as:
+After the operation, you should see a message printed to the stdout of each node such as:
 
 ```bash
 File successfully uploaded to IPFS with hash: QmX7epzCn2jD8nPUDiehmZDQs69HxKYcmM
````

examples/ipfs/run_nodes.sh (+1 -5)

```diff
@@ -10,7 +10,7 @@ tmux select-layout tiled
 
 # Give the nodes some time to start
 echo "Waiting for all three nodes to start..."
-sleep 7
+sleep 60
 
 # Send commands to each pane
 # Pane 0 (first node)
@@ -25,9 +25,5 @@ sleep 2
 tmux send-keys -t rust-nodes:0.2 "repl Papayas" C-m
 sleep 2
 
-# Read and fetch commands
-tmux send-keys -t rust-nodes:0.2 "read" C-m
-tmux send-keys -t rust-nodes:0.1 "read" C-m
-
 # Attach to the session so you can observe the output
 tmux attach-session -t rust-nodes
```

examples/ipfs/src/main.rs (+1 -1)

```diff
@@ -113,7 +113,7 @@ async fn run_node(
 	let mut node = setup_node(ports_1, &keypair[..], bootnodes).await;
 
 	// Join replica network
-	println!("Joining replication network...");
+	println!("Joining replica network...");
 	if let Ok(_) = node.join_repl_network(REPL_NETWORK_ID.into()).await {
 		println!("Replica network successfully joined");
 	} else {
```

examples/replication/README.md (+9 -6)

````diff
@@ -1,5 +1,7 @@
 # Replication examples
 
+These examples demonstrate the configurations and operations of a replica network using SwarmNL using Eventual Consistency and Strong Consistency models.
+
 ## Eventual consistency
 
 To run this example, cd into the root of this directory and in separate terminals launch the following commands to launch three nodes:
@@ -38,7 +40,7 @@ Then the third node:
 repl Papayas
 ```
 
-Then in node 3, running the following command will return the values in its replication buffer (which contains data gotten from node 1 and 2):
+Then in node 3, running the following command will return the values in its replication buffer (which contains data received from node 1 and 2):
 
 ```bash
 read
@@ -77,7 +79,7 @@ cargo run --features=second-node
 And the third node:
 
 ```bash
-cargo run --features=first-node
+cargo run --features=third-node
 ```
 
 Now, submit the following commands to replicate data from nodes in the network, starting with the first node:
@@ -122,7 +124,7 @@ Hit `Ctrl+D` to exit.
 
 ## Peer cloning
 
-In this example, we expect a node to clone the data in the buffer of the specified peer ID when it calls `clone`.
+In this example, we expect a node to clone the data in the buffer of the specified replica peer when it calls `clone`.
 
 To run this example, cd into the root of this directory and in separate terminals launch the following commands:
 
@@ -139,7 +141,7 @@ cargo run --features=second-node
 And the third node:
 
 ```bash
-cargo run --features=first-node
+cargo run --features=third-node
 ```
 
 Now, submit the following commands to replicate data from nodes in the network, starting with the first node:
@@ -162,11 +164,12 @@ repl Papayas
 
 Then in node 3, run the following command to clone node 2's buffer (by passing in node 2's peer ID):
 
-```clone
+```bash
 clone 12D3KooWFPuUnCFtbhtWPQk1HSGEiDxzVrQAxYZW5zuv2kGrsam4
 ```
 
-We expect node 2 to contain "Papayas" and "Apples" in its buffer which you can verify by submitting `read` to stdin from node 3's terminal to read it's buffer content:
+We expect node 2 to contain "Papayas" and "Apples" in its buffer.
+This can be verified by submitting `read` to stdin from node 3's terminal to read it's buffer content:
 
 ```bash
 read
````
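The `read` command in these examples drains and returns the node's replica buffer. A minimal stand-alone sketch of that consume-on-read behaviour (an illustrative type, not SwarmNL's internal buffer implementation):

```rust
use std::collections::VecDeque;

/// Illustrative replica buffer: `read_all` drains and returns everything,
/// mirroring the examples where consumed data leaves the buffer.
struct ReplicaBuffer {
    entries: VecDeque<String>,
}

impl ReplicaBuffer {
    fn new() -> Self {
        Self { entries: VecDeque::new() }
    }

    /// Store an incoming replicated payload.
    fn insert(&mut self, data: &str) {
        self.entries.push_back(data.to_string());
    }

    /// Consume the buffer, like the `read` command in the examples.
    fn read_all(&mut self) -> Vec<String> {
        self.entries.drain(..).collect()
    }
}

fn main() {
    let mut buf = ReplicaBuffer::new();
    buf.insert("Apples"); // replicated from node 1
    buf.insert("Oranges"); // replicated from node 2
    println!("{:?}", buf.read_all());
    // A second read finds nothing: the data was consumed.
    assert!(buf.read_all().is_empty());
}
```

The consume-on-read design means the application layer sees each replicated item exactly once.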

examples/replication/eventual-consistency/src/main.rs (+1 -1)

```diff
@@ -104,7 +104,7 @@ async fn run_node(
 	let mut node = setup_node(ports_1, &keypair[..], bootnodes).await;
 
 	// Join replica network
-	println!("Joining replication network");
+	println!("Joining replica network");
 	if let Ok(_) = node.join_repl_network(REPL_NETWORK_ID.into()).await {
 		println!("Replica network successfully joined");
 	} else {
```

examples/replication/peer-cloning/run_nodes.sh (+1 -1)

```diff
@@ -26,7 +26,7 @@ tmux send-keys -t rust-nodes:0.2 "repl Papayas" C-m
 sleep 2
 
 # Clone and read
-tmux send-keys -t rust-nodes:0.2 "clone 12D3KooWQDpMufFJytG2xQuz7JzfK2vBH2g3XXBJ9v2xY7SegRUk" C-m
+tmux send-keys -t rust-nodes:0.2 "clone 12D3KooWFPuUnCFtbhtWPQk1HSGEiDxzVrQAxYZW5zuv2kGrsam4" C-m
 
 sleep 4
```

examples/replication/peer-cloning/src/main.rs (+1 -1)

```diff
@@ -107,7 +107,7 @@ async fn run_node(
 	let mut node = setup_node(ports_1, &keypair[..], bootnodes).await;
 
 	// Join replica network
-	println!("Joining replication network");
+	println!("Joining replica network");
 	if let Ok(_) = node.join_repl_network(REPL_NETWORK_ID.into()).await {
 		println!("Replica network successfully joined");
 	} else {
```

examples/replication/strong-consistency/run_nodes.sh (+3 -3)

```diff
@@ -15,15 +15,15 @@ sleep 60
 # Send commands to each pane
 # Pane 0 (first node)
 tmux send-keys -t rust-nodes:0.0 "repl Apples" C-m
-sleep 2
+sleep 4
 
 # Pane 1 (second node)
 tmux send-keys -t rust-nodes:0.1 "repl Oranges" C-m
-sleep 2
+sleep 4
 
 # Pane 2 (third node)
 tmux send-keys -t rust-nodes:0.2 "repl Papayas" C-m
-sleep 2
+sleep 4
 
 # Read and fetch commands
 tmux send-keys -t rust-nodes:0.2 "read" C-m
```

examples/replication/strong-consistency/src/main.rs (+2 -2)

```diff
@@ -102,7 +102,7 @@ async fn run_node(
 	let mut node = setup_node(ports_1, &keypair[..], bootnodes).await;
 
 	// Join replica network
-	println!("Joining replication network");
+	println!("Joining replica network");
 	if let Ok(_) = node.join_repl_network(REPL_NETWORK_ID.into()).await {
 		println!("Replica network successfully joined");
 	} else {
@@ -155,7 +155,7 @@ async fn run_node(
 	// confirmations are complete
 	if let Some(repl_data) = node.consume_repl_data(REPL_NETWORK_ID).await {
 		println!(
-			"Data gotten from replica: {} ({} confirmations)",
+			"Data received from replica: {} ({} confirmations)",
 			repl_data.data[0],
 			repl_data.confirmations.unwrap()
 		);
```
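The strong-consistency example above only releases buffered data once confirmations are complete. A minimal sketch of that idea, counting peer acknowledgements before data is considered consistent (the type and method names here are illustrative, not SwarmNL's actual API):

```rust
use std::collections::HashSet;

/// A replicated entry awaiting peer confirmations (hypothetical structure).
struct PendingEntry {
    data: String,
    confirmed_by: HashSet<String>, // peer IDs that acknowledged the data
}

impl PendingEntry {
    fn new(data: &str) -> Self {
        Self { data: data.to_string(), confirmed_by: HashSet::new() }
    }

    /// Record an acknowledgement from a replica peer (idempotent).
    fn confirm(&mut self, peer_id: &str) {
        self.confirmed_by.insert(peer_id.to_string());
    }

    /// Under strong consistency, data is only released to the application
    /// once every other replica has confirmed receipt.
    fn is_consistent(&self, replica_count: usize) -> bool {
        // All peers except the originating node must confirm.
        self.confirmed_by.len() >= replica_count - 1
    }
}

fn main() {
    let mut entry = PendingEntry::new("Apples");
    entry.confirm("node-2");
    assert!(!entry.is_consistent(3)); // only 1 of 2 peers confirmed so far
    entry.confirm("node-3");
    assert!(entry.is_consistent(3));
    println!(
        "Data released from replica: {} ({} confirmations)",
        entry.data,
        entry.confirmed_by.len()
    );
}
```

This is why the `run_nodes.sh` change above lengthens the sleeps: each `repl` must gather acknowledgements from the other nodes before its data leaves the buffer.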

examples/sharding/README.md (+8 -7)

````diff
@@ -19,7 +19,7 @@ cargo run --features=second-node
 And the third node:
 
 ```bash
-cargo run --features=first-node
+cargo run --features=third-node
 ```
 
 In the separate terminals where each node is running, submit the following commands:
@@ -34,21 +34,21 @@ shard mars mars_creatures.txt Inkls
 ```
 
 According to the configuration in the example, node 1 and 2 belongs to the shard with key "mars". Node 3 beloings to a separate shard with key "earth".
-To read the local data stored on node 1 (mars shard). Run the following command:
+To read the local data stored on node 1 ("mars" shard), run the following command from the first terminal:
 
 ```bash
 read
 ```
 
-After that, we would query the "earth" shard for the data it holds. To do that, please run the following command:
+After that, we would query the "earth" shard for the data it holds. To do that, run the following command:
 
 ```bash
 fetch earth earth_creatures.txt
 ```
 
 Here, we are sending a data request to the sharded network, telling it to read the file "earth_creatures.txt" on the shard "earth".
 
-From node 3's terminal, you can also read what is stored in node 3 by submitting the `read` command. To request data stored in the "mars" shard, kindly run the following:
+From node 3's terminal, you can also read what is stored in node 3 by submitting the `read` command. To request data stored in the "mars" shard, run the following:
 
 ```bash
 fetch mars mars_creatures.txt
@@ -57,7 +57,7 @@ fetch mars mars_creatures.txt
 In node 2's terminal, you can also run the following:
 
 ```bash
-fetch earth earth_creatures.txt
+fetch earth earth_creatures.txt
 ```
 
 ## Run with Docker
@@ -114,9 +114,10 @@ To read data stored locally on a particular node, run the following command:
 ```bash
 read
 ```
-Please note that once read is called, all the available data is removed from the replica buffer and consumed.
 
-TO fetch a song placed in a particular shard, please run the following:
+Note that once `read` is called, all the available data is removed from local storage and consumed.
+
+To fetch a "song" placed in a particular shard, please run the following:
 
 ```bash
 # Run this in node 1's terminal
````
examples/sharding/hash-based/run_nodes.sh (+3 -2)

```diff
@@ -15,17 +15,18 @@ sleep 60
 # Send commands to each pane
 # Pane 0 (first node)
 tmux send-keys -t rust-nodes:0.0 "shard mars mars_creatures.txt Boggles" C-m
-sleep 1
+sleep 2
 
 # Pane 1 (second node)
 tmux send-keys -t rust-nodes:0.1 "shard earth earth_creatures.txt Unicorns" C-m
 sleep 2
 
 # Pane 2 (third node)
 tmux send-keys -t rust-nodes:0.2 "shard mars mars_creatures.txt Inkls" C-m
+sleep 2
 
 # Read and fetch commands
-tmux send-keys -t rust-nodes:0.2 "read mars_creatures.txt" C-m
+tmux send-keys -t rust-nodes:0.0 "read mars_creatures.txt" C-m
 tmux send-keys -t rust-nodes:0.2 "fetch mars mars_creatures.txt" C-m
 tmux send-keys -t rust-nodes:0.1 "fetch earth earth_creatures.txt" C-m
```

examples/sharding/range-based/run_nodes.sh (+5 -4)

```diff
@@ -15,15 +15,15 @@ sleep 60
 # Send commands to each pane
 # Pane 0 (first node)
 tmux send-keys -t rust-nodes:0.0 "shard 150 song --> Give It Away" C-m
-sleep 7
+sleep 2
 
 # Pane 1 (second node)
 tmux send-keys -t rust-nodes:0.1 "shard 250 song --> Under the Bridge" C-m
-sleep 7
+sleep 2
 
 # Pane 2 (third node)
 tmux send-keys -t rust-nodes:0.2 "shard 55 song --> I Could Have Lied" C-m
-sleep 7
+sleep 2
 
 tmux send-keys -t rust-nodes:0.0 "shard 210 song --> Castles Made of Sand" C-m
 sleep 2
@@ -34,7 +34,8 @@ sleep 2
 
 # Read and fetch commands
 tmux send-keys -t rust-nodes:0.2 "read" C-m
-tmux send-keys -t rust-nodes:0.1 "fetch 150 song" C-m
+tmux send-keys -t rust-nodes:0.1 "fetch 210 song" C-m
+tmux send-keys -t rust-nodes:0.0 "fetch 50 song" C-m
 
 # Attach to the session so you can observe the output
 tmux attach-session -t rust-nodes
```

research.md (+3 -3)

````diff
@@ -57,7 +57,7 @@ SwarmNL simplifies data replication across nodes, ensuring consistency and relia
 The raw data received from a replica peer. This field contains a `StringVector`, which is a vector of strings representing the replicated payload.
 
 - **`lamport_clock`**
-A critical synchronization and ordering primitive in distributed systems. The Lamport clock is used internally in the replication buffer queue to order messages and data across the replication network. The clock is incremented whenever a node receives a message or sends data for replication. Each node maintains its own Lamport clock, updating it with the highest value received in messages. The replication buffer is implemented as a `BTreeSet`, ordered by this clock.
+A critical synchronization and ordering primitive in distributed systems. The Lamport clock is used internally in the replication buffer queue to order messages and data across the replica network. The clock is incremented whenever a node receives a message or sends data for replication. Each node maintains its own Lamport clock, updating it with the highest value received in messages. The replication buffer is implemented as a `BTreeSet`, ordered by this clock.
 
 ```rust
 /// Implement Ord.
@@ -142,7 +142,7 @@ Replication is governed by key primitives that define the behavior of individual
 The interval (in seconds) between synchronization attempts for data in the buffer. This ensures efficient utilization of network resources while maintaining data freshness.
 
 - **`consistency_model`**
-Defines the level of consistency required for data replication and the behaviour to ensure it. This must be uniform across all nodes in the replication network to prevent inconsistent or undefined behavior.
+Defines the level of consistency required for data replication and the behaviour to ensure it. This must be uniform across all nodes in the replica network to prevent inconsistent or undefined behavior.
 
 - **`data_aging_period`**
 The waiting period (in seconds) after data is saved into the buffer before it is eligible for synchronization. This allows for additional processing or validations if needed.
@@ -420,7 +420,7 @@ All nodes within a shard act as replicas of each other and synchronize their dat
 
 By combining replication and sharding, SwarmNL offers a scalable and fault-tolerant framework for managing decentralized networks while giving developers the freedom to design shard configurations that align with their use case.
 
-### **No Central point of failure**
+### No central point of failure
 
 SwarmNL is designed with resilience and fault tolerance at its core, ensuring that the network has no single point of failure. This is achieved by eliminating reliance on coordinator-based algorithms or centralized decision-making mechanisms. Instead, SwarmNL leverages fully decentralized algorithms to handle all network operations, enhancing the robustness and scalability of the system.
````
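The `lamport_clock` description above (a clock bumped on every send/receive, updated to the highest value seen, with the buffer kept as a `BTreeSet` ordered by it) can be sketched self-containedly. Field and function names here are illustrative, not SwarmNL's exact types:

```rust
use std::collections::BTreeSet;

/// A buffered replica entry ordered by its Lamport clock.
/// `lamport_clock` is the first field, so the derived `Ord` sorts by it.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct ReplBufferData {
    lamport_clock: u64,
    data: String,
}

/// Lamport clock update rule on receive: jump past both the local
/// clock and the clock carried by the incoming message.
fn on_receive(local_clock: u64, msg_clock: u64) -> u64 {
    local_clock.max(msg_clock) + 1
}

fn main() {
    let mut clock = 0u64;
    let mut buffer: BTreeSet<ReplBufferData> = BTreeSet::new();

    // Receive two messages out of order; the BTreeSet keeps them sorted.
    clock = on_receive(clock, 5);
    buffer.insert(ReplBufferData { lamport_clock: 5, data: "Oranges".into() });
    clock = on_receive(clock, 2);
    buffer.insert(ReplBufferData { lamport_clock: 2, data: "Apples".into() });

    let ordered: Vec<&str> = buffer.iter().map(|e| e.data.as_str()).collect();
    assert_eq!(ordered, ["Apples", "Oranges"]); // sorted by clock, not arrival
    assert_eq!(clock, 7); // max(6, 2) + 1 after the second receive
}
```

Putting the clock first in the struct is what makes the derived lexicographic `Ord` equivalent to "ordered by this clock", so the `BTreeSet` yields entries in causal order regardless of arrival order.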

swarm-nl/doc/core/Replication.md (+2 -2)

```diff
@@ -4,6 +4,6 @@ After configuring your node for replication, you can participate in replication
 
 - [`Core::replicate`]: Replicates data across replica nodes on the network.
 - [`Core::replicate_buffer`]: Clone a remote node's replica buffer.
-- [`Core::join_repl_network`]: Join a replication network.
-- [`Core::leave_repl_network`]: Exit a replication network.
+- [`Core::join_repl_network`]: Join a replica network.
+- [`Core::leave_repl_network`]: Exit a replica network.
 - Etc.
```

swarm-nl/src/core/replication.rs (+1 -1)

```diff
@@ -738,7 +738,7 @@ impl ReplicaBufferQueue {
 			peer: replica_node,
 		};
 
-		// Try to query the replica node and insert data gotten into buffer
+		// Try to query the replica node and insert data received into buffer
 		let mut queue = self.queue.lock().await;
 		match queue.get_mut(&repl_network) {
 			Some(local_state) => {
```

swarm-nl/src/core/sharding.rs (+1 -1)

```diff
@@ -82,7 +82,7 @@ where
 		// Free `Core`
 		drop(shard_state);
 
-		// Join the shard network (as a replication network)
+		// Join the shard network (as a replica network)
 		let _ = core.join_repl_network(shard_id.to_string()).await;
 
 		// Inform the entire network about our decision
```

swarm-nl/src/core/tests/layer_communication.rs (+2 -2)

```diff
@@ -133,7 +133,7 @@ fn echo_for_node1_query_network() {
 		.await
 	{
 		if let AppResponse::Echo(echoed_response) = result {
-			// Assert that what was sent was gotten back
+			// Assert that what was sent was received
 			assert_eq!(echo_string, echoed_response);
 		}
 	}
@@ -159,7 +159,7 @@ fn echo_for_node1_send_and_receive() {
 		.await
 	{
 		if let AppResponse::Echo(echoed_response) = result {
-			// Assert that what was sent was gotten back
+			// Assert that what was sent was received
 			assert_eq!(echo_string, echoed_response);
 		}
 	}
```
