Description
Expected Behaviour
I would expect that the logging levels for code in the aws_lambda_powertools
library can be adjusted independently of the main application code.
For example, if I explicitly configure the aws_lambda_powertools
logger to the INFO level, I wouldn't get messages from the library more verbose than INFO (e.g. no DEBUG messages):
logging.getLogger('aws_lambda_powertools').setLevel(logging.INFO)
I would also expect the name attribute of the log messages to be the module name, not the service name.
Current Behaviour
Currently, copy_config_to_registered_loggers uses the passed-in source logger (the logger configured for the top-level application) to log its own configuration messages. So if my source logger is set to DEBUG, I get a stream of debug messages from this code:
source_logger.debug(f"Logger {logger} reconfigured to use {source_handler}")
e.g.
{"level":"DEBUG","location":"_configure_logger:102","message":"Logger <Logger concurrent (WARNING)> reconfigured to use <StreamHandler <_io.FileIO name=6 mode='rb+' closefd=True> (NOTSET)>","timestamp":"2024-07-08T19:55:26.427-0400","service":"publisher","name":"publisher"}
{"level":"DEBUG","location":"_configure_logger:102","message":"Logger <Logger asyncio (WARNING)> reconfigured to use <StreamHandler <_io.FileIO name=6 mode='rb+' closefd=True> (NOTSET)>","timestamp":"2024-07-08T19:55:26.427-0400","service":"publisher","name":"publisher"}
This is problematic because:
a) I have no way to adjust the verbosity of these messages short of raising my entire application's log level.
b) The log messages appear to originate from my application rather than from the powertools modules.
Ideally, the module should configure a regular Python logger and use that for its own internal logging.
Code snippet
See above
Possible Solution
No response
Steps to Reproduce
See above
Powertools for AWS Lambda (Python) version
latest
AWS Lambda function runtime
3.11
Packaging format used
PyPI
Debugging logs
No response