The `Trainer` class features an extendable logging mechanism that can be used to log metrics to various backends. On this page, you will learn how to set up logging to the console via the `Logging` class, as well as how to add `Logger` classes to the `Trainer`. `Logger` classes hook into the `Trainer` via its `Callback` mechanism.
The `Trainer` exposes a `logger` attribute which returns a Python logger object that can be used to log messages to the console at different levels.
For example,
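as a rough sketch (assuming `trainer.logger` behaves like a standard Python `logging.Logger`, and that `trainer` has already been constructed):

```python
# Hypothetical sketch: assumes trainer.logger is a standard logging.Logger.
trainer.logger.debug("This only appears if the log level is DEBUG or lower")
trainer.logger.info("Starting training run")
trainer.logger.warning("Learning rate is unusually high")
```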
The `logger` can be configured by passing a `Logging` object to the `Trainer`'s constructor.
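A hedged sketch of what this might look like (the `model` and `logging` keyword arguments are assumptions about the constructor, not confirmed names):

```python
# Hypothetical sketch: constructor keyword names are assumptions.
trainer = Trainer(
    model=model,
    logging=Logging(log_steps=10),
)
```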
By default, the `logger` is configured to print `INFO` messages to the console.
See Control Logging Frequency for an explanation of the `log_steps` argument.
Another way to add logging to the `Trainer` is to construct and pass in `Logger` subclasses.
Included out of the box are:

- `ProgressLogger`: logs progress metrics to the console.
- `TensorBoardLogger`: logs metrics to a TensorBoard event file.
`Logger` subclasses can be constructed and passed into the trainer via the `loggers` argument.
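For example (a sketch; the constructor arguments shown, such as `log_dir` and `model`, are assumptions):

```python
# Hypothetical sketch: constructor arguments are assumptions.
loggers = [
    ProgressLogger(),
    TensorBoardLogger(log_dir="./runs"),
]
trainer = Trainer(model=model, loggers=loggers)
```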
Call `trainer.log_metrics` to log a metric to all loggers.
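A hedged sketch, assuming `log_metrics` accepts a dictionary mapping metric names to values:

```python
# Hypothetical sketch: assumes log_metrics takes a dict of name -> value,
# and that `loss` has been computed earlier in the training loop.
trainer.log_metrics({"loss": loss})
```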
Here, `loss` is being logged to the `TensorBoardLogger` at the current global step.
The trainer also provides a `name_scope` mechanism for logging, which is intended to group related logs together.
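For example, a hedged sketch assuming `name_scope` is a context manager on the trainer that prefixes metric names with the given scope:

```python
# Hypothetical sketch: assumes name_scope is a context manager that prefixes
# metric names; `loss` and `accuracy` are values computed in the training loop.
with trainer.name_scope("train"):
    trainer.log_metrics({"loss": loss, "accuracy": accuracy})
```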
The metrics above will be logged as `train/loss` and `train/accuracy`.
The logging frequency can be controlled by passing `log_steps` to the `Logging` class.
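For example (a sketch; the `logging` keyword, `train_step` helper, and `dataloader` are placeholders used only for illustration):

```python
# Hypothetical sketch: log_metrics is called every step, but with
# Logging(log_steps=10) only every 10th call produces a log entry.
trainer = Trainer(model=model, logging=Logging(log_steps=10))

for batch in dataloader:
    loss = train_step(batch)             # placeholder training step
    trainer.log_metrics({"loss": loss})  # called every step
```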
Even though `log_metrics` is called every step, the metric is only actually logged every 10 steps.
To query whether or not the current step is a log step, you can call `trainer.is_log_step`.
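This can be used to skip expensive metric computations on steps where nothing would be logged anyway (a sketch; whether `is_log_step` is a property or a method, and the `evaluate_accuracy` helper, are assumptions):

```python
# Hypothetical sketch: only compute an expensive metric when it will be logged.
if trainer.is_log_step:
    accuracy = evaluate_accuracy(model, val_batch)  # placeholder helper
    trainer.log_metrics({"accuracy": accuracy})
```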
Given how the `Logger` class works and how it is integrated into the `Trainer` class, it is fairly straightforward to write your own custom loggers.
To write your own custom Logger class, all you need to do is inherit from the base `Logger` class and override the following methods:
- `log_metrics`: Logs the provided metrics at the provided step.
- `flush`: Flushes the logs.
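For example, a simple file-based logger might look like the following sketch (the method signatures are assumptions based on the descriptions above):

```python
# Hypothetical sketch: signatures are assumed from the method descriptions above.
class FileLogger(Logger):
    def __init__(self, path):
        self.file = open(path, "w")

    def log_metrics(self, metrics, step):
        # Write each metric as "step name value" on its own line.
        for name, value in metrics.items():
            self.file.write(f"{step} {name} {value}\n")

    def flush(self):
        self.file.flush()
```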
The custom logger can then be passed to the `Trainer` as follows:
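(As above, the `loggers` and `model` keyword arguments are assumptions about the constructor.)

```python
# Hypothetical sketch: pass the custom logger alongside any built-in loggers.
trainer = Trainer(
    model=model,
    loggers=[FileLogger("metrics.log")],
)
```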