
How did you compute a metric like images/second? #9

Open

skalyan opened this issue Mar 10, 2019 · 1 comment

Comments


skalyan commented Mar 10, 2019

I am trying to use a standardized metric such as images/sec to compare relative training speeds across frameworks (e.g. PyTorch and TF). Have you computed such a metric? If not, what do you think of this approach?

I modified run.py to compute an images_per_sec rate; the modified lines are marked with "[KAL]".

```python
def train(model, loader, epoch, optimizer, criterion, device, dtype, batch_size, log_interval, scheduler):
    model.train()
    correct1, correct5 = 0, 0
    batch_time = AverageMeter()
    images_per_sec = AverageMeter()
    for batch_idx, (data, target) in enumerate(tqdm(loader)):
        if isinstance(scheduler, CyclicLR):
            scheduler.batch_step()
        data, target = data.to(device=device, dtype=dtype), target.to(device=device)

        # [KAL] Take timestamp (after the host-to-device transfer, so data
        # loading time is excluded from the measurement)
        end = time.time()
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        corr = correct(output, target, topk=(1, 5))
        correct1 += corr[0]
        correct5 += corr[1]

        # [KAL] compute processing time for the batch (measure once and
        # reuse it, so both meters see the same elapsed value)
        elapsed = time.time() - end
        batch_time.update(elapsed)

        # [KAL] Based on batch size, calculate the images/sec rate
        images_per_sec.update(batch_size / elapsed)
```
Randl (Owner) commented Mar 10, 2019

If you run the code, you'll see a tqdm progress bar that shows the average time per batch, elapsed time, and approximate time to finish. A second progress bar shows the same for epochs.
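Since tqdm reports throughput in iterations (i.e. batches) per second, images/sec follows directly by multiplying that rate by the batch size. A small sketch of the conversion (the rate and batch size below are illustrative, not measured):

```python
def batches_per_sec_to_images_per_sec(batches_per_sec, batch_size):
    """Convert tqdm's displayed it/s rate (batches/sec) to images/sec."""
    return batches_per_sec * batch_size

# e.g. a progress bar showing 3.50 it/s with batch size 256:
rate = batches_per_sec_to_images_per_sec(3.50, 256)  # 896.0 images/sec
```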

However, you might prefer to use a distributed training framework for PyTorch, which is supposed to provide better performance even on a single machine.
