
feat: show banner #47

Merged: 2 commits, Sep 4, 2023

Changes from 1 commit
README.md (4 changes: 2 additions & 2 deletions)

@@ -90,8 +90,8 @@ The indexer supports the following command-line arguments for configuring the in

- `-f, --from-slot <FROM_SLOT>`: It allows you to specify the starting slot for indexing, ignoring the default behavior of starting from the latest slot stored in the database.

-- `-n, --num-threads <NUM_THREADS>`: It allows you to specify the number of threads that will be utilized to parallelize the indexing process. If the argument is not provided, the number of cores of the machine will be used.
-- `-s, --slots-per-save <SLOTS_PER_SAVE>`: It allows you to specify the number of slots to be processed before saving the latest slot in the database.
+- `-n, --num-threads <NUM_THREADS>`: It allows you to specify the number of threads that will be utilized to parallelize the indexing process. Default: the number of CPU cores.
+- `-s, --slots-per-save <SLOTS_PER_SAVE>`: It allows you to specify the number of slots to be processed before saving the latest slot in the database. Default: 1000

### Example usage

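The `-n, --num-threads` bullet in the README hunk above states that, when the flag is omitted, the indexer falls back to the number of CPU cores. Below is a minimal standalone sketch of that kind of fallback using only the Rust standard library; `resolve_num_threads` is a hypothetical helper, and the actual indexer may resolve the value differently.

```rust
use std::thread::available_parallelism;

// Hypothetical helper mirroring the documented "-n defaults to the CPU core count" behavior.
fn resolve_num_threads(num_threads: Option<u32>) -> u32 {
    num_threads.unwrap_or_else(|| {
        available_parallelism()
            .map(|n| n.get() as u32)
            // Fall back to a single thread if the core count cannot be queried.
            .unwrap_or(1)
    })
}

fn main() {
    println!("explicit -n 8 -> {}", resolve_num_threads(Some(8)));
    println!("flag omitted  -> {}", resolve_num_threads(None));
}
```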
src/args.rs (4 changes: 2 additions & 2 deletions)

@@ -13,6 +13,6 @@ pub struct Args {
    pub num_threads: Option<u32>,

    /// Amount of slots to be processed before saving latest slot in the database
-    #[arg(short, long)]
-    pub slots_per_save: Option<u32>,
+    #[arg(short, long, default_value_t = 1000)]
+    pub slots_per_save: u32,
}
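The `src/args.rs` hunk above replaces `Option<u32>` (plus a manual fallback elsewhere) with clap's `default_value_t`, so the parsed field is a plain `u32` that is 1000 unless `-s`/`--slots-per-save` is passed. A small self-contained sketch of that attribute, assuming a clap v4 derive setup; `DemoArgs` is a hypothetical stand-in for the real `Args` struct.

```rust
use clap::Parser;

#[derive(Parser, Debug)]
struct DemoArgs {
    /// Amount of slots to be processed before saving latest slot in the database
    #[arg(short, long, default_value_t = 1000)]
    slots_per_save: u32,
}

fn main() {
    // Passing the flag overrides the default.
    let overridden = DemoArgs::parse_from(["demo", "-s", "250"]);
    assert_eq!(overridden.slots_per_save, 250);

    // Omitting it yields the declared default of 1000.
    let defaulted = DemoArgs::parse_from(["demo"]);
    assert_eq!(defaulted.slots_per_save, 1000);
}
```

Declaring the default in the derive keeps the CLI help text, the README, and the runtime behavior in one place, which is what lets `run` drop its `unwrap_or(1000)` call in the next file.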
src/indexer.rs (30 changes: 28 additions & 2 deletions)

@@ -17,13 +17,39 @@ use crate::{
    utils::exp_backoff::get_exp_backoff_config,
};

+pub fn print_banner(args: &Args, env: &Environment) {
+    let num_threads = args.num_threads.unwrap_or_default();
+    let sentry_dsn = env.sentry_dsn.clone();
+    println!(" ____  _       _                         ");
+    println!("| __ )| | ___ | |__  ___  ___ __ _ _ __  ");
+    println!("|  _ \\| |/ _ \\| '_ \\/ __|/ __/ _` | '_ \\ ");
+    println!("| |_) | | (_) | |_) \\__ \\ (_| (_| | | | |");
+    println!("|____/|_|\\___/|_.__/|___/\\___\\__,_|_| |_|");
+    println!("");
+    println!("Blobscan indexer (EIP-4844 blob indexer) - blobscan.com");
+    println!("=======================================================");
+    if num_threads == 0 {
+        println!("Number of threads: auto");
+    } else {
+        println!("Number of threads: {}", num_threads);
+    }
+    println!("Slot chunk size: {}", args.slots_per_save);
+    println!("Blobscan API endpoint: {}", env.blobscan_api_endpoint);
+    println!("CL endpoint: {}", env.beacon_node_endpoint);
+    println!("EL endpoint: {}", env.execution_node_endpoint);
+    println!("Sentry DSN: {}", sentry_dsn.unwrap_or_default());
+    println!("");
+}

pub async fn run(env: Environment) -> Result<()> {
    let args = Args::parse();

-    let max_slot_per_save = args.slots_per_save.unwrap_or(1000);
    let slots_processor_config = args
        .num_threads
        .map(|threads_length| SlotsProcessorConfig { threads_length });

+    print_banner(&args, &env);
+
    let context = match Context::try_new(ContextConfig::from(env)) {
        Ok(c) => c,
        Err(error) => {
@@ -100,7 +126,7 @@ pub async fn run(env: Environment) -> Result<()> {
    );

    while unprocessed_slots > 0 {
-        let slots_chunk = min(unprocessed_slots, max_slot_per_save);
+        let slots_chunk = min(unprocessed_slots, args.slots_per_save);
        let chunk_initial_slot = current_slot;
        let chunk_final_slot = current_slot + slots_chunk;

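The second `src/indexer.rs` hunk swaps the locally computed `max_slot_per_save` for `args.slots_per_save` inside the chunking loop. Below is a standalone sketch of that loop's shape, with hypothetical slot numbers and a `println!` standing in for the real slot processing and database save.

```rust
use std::cmp::min;

// Sketch of the chunking pattern: process at most `slots_per_save` slots,
// then persist progress, until no unprocessed slots remain.
fn process_in_chunks(mut current_slot: u32, latest_slot: u32, slots_per_save: u32) {
    let mut unprocessed_slots = latest_slot.saturating_sub(current_slot);

    while unprocessed_slots > 0 {
        let slots_chunk = min(unprocessed_slots, slots_per_save);
        let chunk_initial_slot = current_slot;
        let chunk_final_slot = current_slot + slots_chunk;

        // The real indexer processes the chunk and saves `chunk_final_slot` here.
        println!("processing slots {chunk_initial_slot}..{chunk_final_slot}");

        current_slot = chunk_final_slot;
        unprocessed_slots -= slots_chunk;
    }
}

fn main() {
    // With the new default of 1000, 2500 pending slots become chunks of 1000, 1000 and 500.
    process_in_chunks(7_000_000, 7_002_500, 1000);
}
```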