
How to disable printings about GPU/TPU #3431


Closed
7rick03ligh7 opened this issue Sep 9, 2020 · 13 comments
Labels
question Further information is requested

Comments

@7rick03ligh7

How can I disable these printouts?

[screenshot of the printed GPU/TPU availability messages]

7rick03ligh7 added the question label on Sep 9, 2020
@github-actions
Contributor

github-actions bot commented Sep 9, 2020

Hi! Thanks for your contribution, great first issue!

@SkafteNicki
Member

@rohitgr7
Contributor

Check #2757.

@7rick03ligh7
Author

@rohitgr7
Thanks

These two lines helped me

import logging
logging.getLogger('lightning').setLevel(0)
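
A note for later readers: 0 is logging.NOTSET, so this makes the 'lightning' logger fall back to the root logger's level (WARNING by default), which is presumably why it happened to hide these INFO-level messages. A sketch with an explicit level instead, assuming the PL versions of that era where the base logger was named 'lightning':

import logging

# WARNING keeps warnings and errors visible while dropping the INFO-level device banner
logging.getLogger('lightning').setLevel(logging.WARNING)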

@goldmyu

goldmyu commented Feb 21, 2021

This is still an issue today, especially when working with multiple GPUs and num_workers > 0...
The amount of log output is very high and redundant.

@LingxiaoShawn

This is still an issue. It can actually be fixed easily; I hope someone does.

@max0x7ba

Another way to disable these messages:

import logging
import pytorch_lightning.utilities.distributed

pytorch_lightning.utilities.distributed.log.setLevel(logging.ERROR)
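
An equivalent sketch that reaches the same logger by name rather than through the module's log attribute (assuming that logger is created with logging.getLogger(__name__), as is conventional), which avoids importing the submodule just to get at its logger:

import logging

logging.getLogger("pytorch_lightning.utilities.distributed").setLevel(logging.ERROR)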

@0xangelo

0xangelo commented Sep 9, 2021

It seems that the base logger for lightning has changed since @7rick03ligh7's last comment. The following does the job now:

import logging

logging.getLogger("pytorch_lightning").setLevel(logging.WARNING)

@Hockey86

As of lightning v1.9.4:

import logging

logging.getLogger("lightning.pytorch.utilities.rank_zero").setLevel(logging.WARNING)

disables the following output:

GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs

and

logging.getLogger("lightning.pytorch.accelerators.cuda").setLevel(logging.WARNING)

disables the following output:

LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
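
Putting those two together, a minimal sketch for lightning 1.9.x, using the logger names from the comment above; the levels are set before the Trainer is created and fit so the messages are already filtered when they are emitted (the Trainer arguments are just placeholders):

import logging

# silence the "GPU/TPU/IPU/HPU available" banner
logging.getLogger("lightning.pytorch.utilities.rank_zero").setLevel(logging.WARNING)
# silence the "LOCAL_RANK: ... CUDA_VISIBLE_DEVICES: ..." line
logging.getLogger("lightning.pytorch.accelerators.cuda").setLevel(logging.WARNING)

from lightning.pytorch import Trainer
trainer = Trainer()  # pass your usual arguments here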

@pfk-beta

> @rohitgr7 Thanks
>
> These two lines helped me
>
> import logging
> logging.getLogger('lightning').setLevel(0)

Let's remove all warnings, and let's use some meaningless value 0.

@youurayy

As of PL 2.2.4:

import logging

logger = logging.getLogger('pytorch_lightning.utilities.rank_zero')
logger.setLevel(logging.ERROR)
# ... your code here ...
logger.setLevel(logging.INFO)

or

logger = logging.getLogger('pytorch_lightning.utilities.rank_zero')

class IgnorePLFilter(logging.Filter):
    def filter(self, record):
        return 'available:' not in record.getMessage()

logger.addFilter(IgnorePLFilter())
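
If the intent of the first variant is to silence the messages only around a particular block of code, a small context manager can restore whatever level was previously set instead of hardcoding INFO afterwards. This is only a sketch; quiet_logger is a name introduced here, not a PL API:

import contextlib
import logging

@contextlib.contextmanager
def quiet_logger(name='pytorch_lightning.utilities.rank_zero', level=logging.ERROR):
    # temporarily raise the logger's level, then restore the previous one
    logger = logging.getLogger(name)
    previous = logger.level
    logger.setLevel(level)
    try:
        yield logger
    finally:
        logger.setLevel(previous)

# usage:
# with quiet_logger():
#     trainer = Trainer(...)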

@G1nkgo7

G1nkgo7 commented Jan 15, 2025

Customize the keywords according to your needs.

import logging
class IgnorePLFilter(logging.Filter):
    def filter(self, record):
        keywords = ['available:', 'CUDA', 'LOCAL_RANK:']
        return not any(keyword in record.getMessage() for keyword in keywords)
    
logging.getLogger('pytorch_lightning.utilities.rank_zero').addFilter(IgnorePLFilter())
logging.getLogger('pytorch_lightning.accelerators.cuda').addFilter(IgnorePLFilter())

@JackCaster
Contributor

With lightning 2.5.0 I had to do:

import logging

def device_info_filter(record):
    return "PU available: " not in record.getMessage()

logging.getLogger("lightning.pytorch.utilities.rank_zero").addFilter(device_info_filter)

modified from #13378 (comment)
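
If the filter should only apply temporarily, it can be removed again with removeFilter. A short usage sketch, assuming the device_info_filter function, the logger name, and the import logging from the comment above:

pl_logger = logging.getLogger("lightning.pytorch.utilities.rank_zero")
pl_logger.addFilter(device_info_filter)
# ... create and fit the Trainer here ...
pl_logger.removeFilter(device_info_filter)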
