When it comes to figuring out what is going on these days, you have many options and many companies evangelizing their own category: observability, monitoring, events, traces, logs. But at the end of the day, we need to know what our system is doing.

I always start with structured logs. I am sure I am not saying anything new to you: structured logs are just logs with a format that can be parsed. The most famous one is probably the Apache log format:

[Fri Sep 09 10:42:29.902022 2011] [core:error] [pid 35708:tid 4328636416] [client 72.15.99.187] File does not exist: /usr/local/apache2/htdocs/favicon.ico
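Parsing a line like that is doable, but someone has to know the layout and write the parser. Here is a rough sketch in Go, using a regular expression that mirrors the layout of the example above (the pattern is my own, not an official Apache specification):

```go
package main

import (
	"fmt"
	"regexp"
)

// A rough pattern for the error line above: timestamp, module:level,
// pid/tid, client address, then the free-text message.
var apacheError = regexp.MustCompile(
	`^\[([^\]]+)\] \[([^:]+):([^\]]+)\] \[pid (\d+):tid (\d+)\] \[client ([^\]]+)\] (.*)$`,
)

func main() {
	line := `[Fri Sep 09 10:42:29.902022 2011] [core:error] [pid 35708:tid 4328636416] [client 72.15.99.187] File does not exist: /usr/local/apache2/htdocs/favicon.ico`

	m := apacheError.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("line did not match")
		return
	}
	fmt.Println("time:   ", m[1])
	fmt.Println("level:  ", m[3])
	fmt.Println("client: ", m[6])
	fmt.Println("message:", m[7])
}
```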

In order to search and aggregate on log lines, the database where we store them needs to tokenize and index each line. With structured logs, in practice, we do the tokenization upfront.
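With a structured logging library, the fields come out already separated into key/value pairs, so the storage layer has far less guessing to do. A minimal sketch using Go's standard log/slog JSON handler; the field names here are illustrative, not a standard schema:

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// The JSON handler writes one parseable object per log line,
	// so every field is already keyed before it reaches storage.
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	// Illustrative fields, mirroring the Apache error line above.
	logger.Error("file does not exist",
		"client", "72.15.99.187",
		"path", "/usr/local/apache2/htdocs/favicon.ico",
		"pid", 35708,
	)
	// Output (one line):
	// {"time":"...","level":"ERROR","msg":"file does not exist","client":"72.15.99.187","path":"/usr/local/apache2/htdocs/favicon.ico","pid":35708}
}
```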

Why?

I think structured logs are a good starting point because we are used to adding logs while developing, and at the beginning that is all we do: we write and modify code. So logs as a transport method for monitoring or observability feel comfortable.

When should you stop?

Never!

I am joking!

The breaking point for logging is usually scalability, because logs are expensive to store and index. That said, as you can see from how I wrote that down, the problem is with storing and indexing, not with your application per se. Often it is a budget issue on the storage side, not an issue with the transport layer; it is a challenge of how and where you store your logs.

You can mitigate these risks with a solid retention strategy, and you should maintain and take care of your log ingestion pipeline.
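The details depend entirely on your storage backend, but conceptually a retention strategy can start as a simple filter stage in the ingestion pipeline, before anything reaches the expensive index. A hypothetical sketch; the Entry type, the levels, and the 30-day window are illustrative assumptions, not from any specific tool:

```go
package main

import (
	"fmt"
	"time"
)

// Entry is a minimal, hypothetical representation of an ingested log line.
type Entry struct {
	Time  time.Time
	Level string
	Msg   string
}

// keep decides whether an entry is worth storing and indexing:
// drop anything older than the retention window, and drop debug
// noise so it never reaches storage at all.
func keep(e Entry, now time.Time, retention time.Duration) bool {
	if now.Sub(e.Time) > retention {
		return false
	}
	return e.Level != "DEBUG"
}

func main() {
	now := time.Now()
	entries := []Entry{
		{Time: now.Add(-1 * time.Hour), Level: "ERROR", Msg: "file does not exist"},
		{Time: now.Add(-40 * 24 * time.Hour), Level: "INFO", Msg: "older than retention"},
		{Time: now.Add(-2 * time.Hour), Level: "DEBUG", Msg: "debug noise"},
	}

	for _, e := range entries {
		if keep(e, now, 30*24*time.Hour) {
			fmt.Println("index:", e.Msg)
		} else {
			fmt.Println("drop: ", e.Msg)
		}
	}
}
```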