# logr
Logr is a fully asynchronous, contextual logger for Go.

It is very much inspired by Logrus but addresses two issues:

1. Logr is fully asynchronous, meaning that all formatting and writing is done in the background. Latency-sensitive applications benefit from not waiting for logging to complete.
2. Logr provides custom filters, which offer more flexibility than the fixed Trace, Debug, Info… levels. If you need to temporarily increase logging verbosity while tracking down a problem, custom filters let you avoid the fire-hose that typically comes with Debug or Trace.
## Concepts
| entity | description |
| ------ | ----------- |
| Logr | Engine instance, typically instantiated once; used to configure logging.<br>`lgr := &Logr{}` |
| Logger | Provides contextual logging via fields; lightweight, can be created once and accessed globally, or created on demand.<br>`logger := lgr.NewLogger()`<br>`logger2 := logger.WithField("user", "Sam")` |
| Target | A destination for log items such as console, file, database, or just about anything that can be written to. Each target has its own filter/level and formatter, and any number of targets can be added to a Logr. Targets for syslog and any `io.Writer` are built-in, and it is easy to create your own. You can also use any Logrus hooks via a simple adapter. |
| Filter | Determines which logging calls get written versus filtered out. Also determines which logging calls generate a stack trace.<br>`filter := &logr.StdFilter{Lvl: logr.Warn, Stacktrace: logr.Fatal}` |
| Formatter | Formats the output. Logr includes built-in formatters for JSON and plain text with delimiters. It is easy to create your own formatters, or you can use any Logrus formatters via a simple adapter.<br>`formatter := &format.Plain{Delim: " \| "}` |
## Usage
```go
// Create Logr instance.
lgr := &logr.Logr{}

// Create a filter and formatter. Both can be shared by multiple
// targets.
filter := &logr.StdFilter{Lvl: logr.Warn, Stacktrace: logr.Error}
formatter := &format.Plain{Delim: " | "}

// WriterTarget outputs to any io.Writer
t := target.NewWriterTarget(filter, formatter, os.Stdout, 1000)
lgr.AddTarget(t)

// One or more Loggers can be created, shared, used concurrently,
// or created on demand.
logger := lgr.NewLogger().WithField("user", "Sarah")

// Now we can log to the target(s).
logger.Debug("login attempt")
logger.Error("login failed")

// Ensure targets are drained before application exit.
lgr.Shutdown()
```
## Fields
Fields allow for contextual logging, meaning information can be added to log statements without changing the statements themselves. Information can be shared across multiple logging statements thus allowing log analysis tools to group them.
Fields are added via Loggers:
```go
lgr := &Logr{}
// ... add targets ...

logger := lgr.NewLogger().WithFields(logr.Fields{
	"user": user,
	"role": role})

logger.Info("login attempt")
// ... later ...
logger.Info("login successful")
```
`Logger.WithFields` can be used to create additional Loggers that add more fields.

Logr fields are inspired by and work the same as Logrus fields.
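For example, a service might keep one shared base logger and derive more specific loggers from it. This is only a sketch; the field names and values below are illustrative, not part of Logr:

```go
// Shared base logger for the whole application.
baseLogger := lgr.NewLogger().WithFields(logr.Fields{"app": "billing"})

// A derived logger carries the base fields plus its own.
requestLogger := baseLogger.WithFields(logr.Fields{"requestID": "7f3a"})
requestLogger.Info("processing request") // logs both app and requestID fields
```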
## Filters
Logr supports the traditional seven log levels via `logr.StdFilter`: Panic, Fatal, Error, Warning, Info, Debug, and Trace.
```go
// When added to a target, this filter will only allow
// log statements with level severity Warn or higher.
// It will also generate stack traces for Error or higher.
filter := &logr.StdFilter{Lvl: logr.Warn, Stacktrace: logr.Error}
```
Logr also supports custom filters (`logr.CustomFilter`), which allow fine-grained inclusion of log items without turning on the fire-hose.
```go
// create custom levels; use IDs > 10.
LoginLevel := logr.Level{ID: 100, Name: "login ", Stacktrace: false}
LogoutLevel := logr.Level{ID: 101, Name: "logout", Stacktrace: false}

lgr := &logr.Logr{}

// create a custom filter with custom levels.
filter := &logr.CustomFilter{}
filter.Add(LoginLevel, LogoutLevel)

formatter := &format.Plain{Delim: " | "}
tgr := target.NewWriterTarget(filter, formatter, os.Stdout, 1000)
lgr.AddTarget(tgr)

logger := lgr.NewLogger().WithFields(logr.Fields{"user": "Bob", "role": "admin"})

logger.Log(LoginLevel, "this item will get logged")
logger.Debug("won't be logged since Debug wasn't added to custom filter")
```
Both filter types allow you to determine which levels require a stack trace to be output. Note that generating stack traces cannot happen fully asynchronously and thus adds latency to the calling goroutine.
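For example, extending the custom filter above, a level opts in to stack traces by setting `Stacktrace` to true. `AuditLevel` here is an illustrative name, not a built-in level:

```go
// Illustrative custom level that also captures a stack trace when logged.
AuditLevel := logr.Level{ID: 102, Name: "audit", Stacktrace: true}
filter.Add(AuditLevel)

logger.Log(AuditLevel, "permissions changed") // record includes a stack trace
```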
## Targets
There are built-in targets for outputting to syslog, file, or any `io.Writer`. More will be added.

You can use any Logrus hooks via a simple adapter.

You can create your own target by implementing the Target interface. An easier method is to use the `logr.Basic` type target and build your functionality on that. Basic handles all the queuing and other plumbing so you only need to implement two methods.

Example target that outputs to `io.Writer`:
```go
type Writer struct {
	logr.Basic
	out io.Writer
}

func NewWriterTarget(filter logr.Filter, formatter logr.Formatter, out io.Writer, maxQueue int) *Writer {
	w := &Writer{out: out}
	w.Basic.Start(w, w, filter, formatter, maxQueue)
	return w
}

// Write will always be called by a single goroutine, so no locking needed.
// Just convert a log record to a []byte using the formatter and output the
// bytes to your sink.
func (w *Writer) Write(rec *logr.LogRec) error {
	_, stacktrace := w.IsLevelEnabled(rec.Level())

	// take a buffer from the pool to avoid allocations or just allocate a new one.
	buf := rec.Logger().Logr().BorrowBuffer()
	defer rec.Logger().Logr().ReleaseBuffer(buf)

	buf, err := w.Formatter().Format(rec, stacktrace, buf)
	if err != nil {
		return err
	}
	_, err = w.out.Write(buf.Bytes())
	return err
}
```
## Formatters
Logr has two built-in formatters, one for JSON and the other plain, delimited text.
You can use any Logrus formatters via a simple adapter.
You can create your own formatter by implementing the Formatter interface:
```go
Format(rec *LogRec, stacktrace bool, buf *bytes.Buffer) (*bytes.Buffer, error)
```
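As a rough sketch, a minimal plain-text formatter could look like the following. It assumes `LogRec` exposes a `Msg()` accessor alongside `Level()` (check `logrec.go` for the actual accessors) and it ignores fields and stack traces:

```go
type Minimal struct{}

func (m *Minimal) Format(rec *logr.LogRec, stacktrace bool, buf *bytes.Buffer) (*bytes.Buffer, error) {
	if buf == nil {
		buf = &bytes.Buffer{}
	}
	// Msg() is assumed here; a complete formatter would also write the
	// fields and, when stacktrace is true, the captured stack.
	fmt.Fprintf(buf, "%s %s\n", rec.Level().Name, rec.Msg())
	return buf, nil
}
```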
## Handlers
When creating the Logr instance, you can add several handlers that get called when exceptional events occur:
### Logr.OnLoggerError(err error)
Called any time an internal logging error occurs. For example, this can happen when a target cannot connect to its data sink.
It may be tempting to log this error; however, there is a danger that logging it will simply generate another error, and so on. If you must log it, use a target and custom level specifically for this event and ensure it cannot generate more errors.
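A minimal sketch, assuming the handler is assigned as a field on the Logr instance, is to write the error somewhere that cannot feed back into Logr, such as standard error:

```go
lgr.OnLoggerError = func(err error) {
	// Write directly to stderr so a failing target cannot trigger
	// yet another logging error.
	fmt.Fprintf(os.Stderr, "logr error: %v\n", err)
}
```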
### Logr.OnQueueFull func(rec *LogRec, maxQueueSize int) bool
Called on an attempt to add a log record to a full Logr queue. This generally means the Logr maximum queue size is too small, or at least one target is very slow. Logr maximum queue size can be changed before adding any targets via:
```go
lgr := logr.Logr{MaxQueueSize: 10000}
```
Returning true will drop the log record. False will block until the log record can be added, which creates a natural throttle at the expense of latency for the calling goroutine. The default is to block.
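For example, a latency-sensitive service might prefer to count and drop records rather than block. This sketch assumes the handler is a field on the Logr instance and uses `sync/atomic` for the counter; `droppedRecords` is an illustrative name:

```go
var droppedRecords int64

lgr.OnQueueFull = func(rec *logr.LogRec, maxQueueSize int) bool {
	atomic.AddInt64(&droppedRecords, 1)
	return true // drop the record instead of blocking the caller
}
```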
### Logr.OnTargetQueueFull func(target Target, rec *LogRec, maxQueueSize int) bool
Called on an attempt to add a log record to a full target queue. This generally means your target’s max queue size is too small, or the target is very slow to output.
As with the Logr queue, returning true will drop the log record. False will block until the log record can be added, which creates a natural throttle at the expense of latency for the calling goroutine. The default is to block.
### Logr.OnExit func(code int) and Logr.OnPanic func(err interface{})
`OnExit` and `OnPanic` are called when the `Logger.FatalXXX` and `Logger.PanicXXX` functions are called, respectively.

In both cases the default behavior is to shut down gracefully, draining all targets, and then calling `os.Exit` or `panic` respectively.
When adding your own handlers, be sure to call `Logr.Shutdown` before exiting the application to avoid losing log records.
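A sketch of a custom `OnExit` handler that follows this advice, assuming `OnExit` is assignable as a field on the Logr instance:

```go
lgr.OnExit = func(code int) {
	// Drain all targets so queued records are not lost, then exit
	// as the default handler would.
	lgr.Shutdown()
	os.Exit(code)
}
```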