(Team Loggers) Detecting anomalies in the platform logs

Within the Onesait Platform Revolution initiative, the Stranger Team, made up of Adrián, Visquel and Juan and mentored by Irene, chose to work on detecting anomalies in the platform logs.

The challenge:

The challenge consists of detecting anomalies in the logs by analyzing both the volume of events and their syntax, making it possible to understand, quickly and automatically, what is happening in the environment. Finding the anomalous events buried in the logs is generally difficult, so a service is needed that filters out the events that are genuinely anomalous, letting an analyst see at a glance what is going on.

We could have used a traditional approach, which requires knowing the syntax of the most anomalous events in advance, but we chose instead to use unsupervised machine learning and artificial intelligence techniques to simplify the whole process. Besides finding anomalies in the syntax of events, the service looks for anomalies in the volumetry per severity and module, according to their distribution over time. A dedicated section is also available for events whose severity equals ERROR.


The solution is organized into three notebooks:

  • Loggers_parser: performs a first exploration of the environment; that is, it collects the available logs, analyzes their volumetry and creates the models.
  • Loggers_quantile: builds the model for detecting volumetry anomalies, based on statistics of severity over time.
  • Loggers_realtime: real-time execution. It contains a function that monitors the logs, reading newly appended lines and following log rotation, as well as the functions that detect anomalies by volumetry and by syntax. It is also responsible for filtering events with severity ERROR.
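The log-following behavior of Loggers_realtime can be sketched in Python. This is a minimal illustration, not the team's actual code: the function name `follow`, the polling interval and the `seek_end` option are our own choices, and rotation is detected by comparing inodes, which is POSIX-specific.

```python
import os
import time

def follow(path, poll=1.0, seek_end=True):
    """Yield lines appended to `path`, reopening it when the log rotates.

    With seek_end=True the generator starts at the current end of the file,
    like `tail -f`; rotation is detected by comparing the inode of the open
    file with the inode currently at `path`.
    """
    f = open(path)
    if seek_end:
        f.seek(0, os.SEEK_END)
    inode = os.fstat(f.fileno()).st_ino
    try:
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
                continue
            try:
                if os.stat(path).st_ino != inode:   # the log was rotated
                    f.close()
                    f = open(path)                  # reopen the new file
                    inode = os.fstat(f.fileno()).st_ino
                    continue
            except FileNotFoundError:
                pass                # rotation in progress; retry shortly
            time.sleep(poll)
    finally:
        f.close()
```

Each yielded line would then be fed to the syntax and volumetry detectors, and filtered separately when its severity is ERROR.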

– For the detection of anomalies by syntax, we use an Autoencoder, an artificial neural network trained to reconstruct its input.
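To illustrate the idea (this is not the team's actual model), an autoencoder is trained on feature vectors extracted from normal log lines; a line whose reconstruction error is well above the errors seen during training is flagged as a syntactic anomaly. The sketch below uses a tiny linear autoencoder in NumPy on synthetic feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for vectorized log lines: "normal" lines vary along a
# single direction in feature space, plus a little noise.
direction = np.array([1.0, 2.0, 0.5, 1.5])
X = rng.normal(size=(200, 1)) @ direction[None, :] + 0.05 * rng.normal(size=(200, 4))

# Tiny linear autoencoder (4 -> 1 -> 4) trained by plain gradient descent.
W1 = rng.normal(scale=0.1, size=(4, 1))   # encoder
W2 = rng.normal(scale=0.1, size=(1, 4))   # decoder
lr = 0.01
for _ in range(2000):
    H = X @ W1                       # encode
    E = H @ W2 - X                   # reconstruction residual
    gW2 = (H.T @ E) / len(X)
    gW1 = (X.T @ (E @ W2.T)) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

def score(x):
    """Anomaly score: squared reconstruction error of one feature vector."""
    return float(np.sum(((x @ W1) @ W2 - x) ** 2))

# Threshold: well above the reconstruction errors on normal training data.
train_scores = np.sum((X @ W1 @ W2 - X) ** 2, axis=1)
threshold = train_scores.mean() + 3 * train_scores.std()
```

A vector that lies off the learned subspace, such as `np.array([1.0, -2.0, 3.0, 0.0])`, scores far above the threshold and would be flagged; in practice the real feature extraction and network architecture are of course richer than this sketch.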

– For detection based on volumetry, we use an unsupervised machine learning algorithm based on sliding time windows, which detects whether the volume of a given severity at a given time is abnormally high.
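The volumetry check can be illustrated as follows. This is a simplified sketch, not the team's exact algorithm, and the function and field names are our own: events are bucketed into fixed time windows per severity, and a window is flagged when its count exceeds a high quantile of that severity's historical counts.

```python
from collections import defaultdict

import numpy as np

def detect_volume_anomalies(events, window_s=60, q=0.95):
    """Return (severity, window, count) triples whose event count exceeds
    the q-quantile of the historical counts for that severity.

    `events` is an iterable of (epoch_seconds, severity) pairs.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for ts, severity in events:
        counts[severity][int(ts // window_s)] += 1

    anomalies = []
    for severity, per_window in counts.items():
        threshold = np.quantile(list(per_window.values()), q)
        for window, count in per_window.items():
            if count > threshold:
                anomalies.append((severity, window, count))
    return anomalies
```

For instance, ten one-minute windows with two ERROR events each plus one window with twenty would flag only the spike.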


The results are stored in three ontologies:

  • loggers_severity: stores data about the volumetry of the events.
  • loggers_anomalies: stores the anomalies found.
  • loggers_errors: stores the events with severity ERROR.
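To make the three ontologies concrete, here is a hypothetical shape for the documents each one might hold. The field names are illustrative assumptions on our part, not the team's actual JSON schema (which is published in the repository):

```python
# Illustrative document shapes only; the real schemas may differ.
severity_doc = {            # loggers_severity: volumetry per window
    "timestamp": "2022-05-10T12:00:00Z",
    "module": "control-panel",
    "severity": "ERROR",
    "count": 42,
}
anomaly_doc = {             # loggers_anomalies: one detected anomaly
    "timestamp": "2022-05-10T12:00:00Z",
    "type": "syntax",       # or "volumetry"
    "line": "NullPointerException at ...",
    "score": 0.97,
}
error_doc = {               # loggers_errors: one ERROR-severity event
    "timestamp": "2022-05-10T12:00:00Z",
    "module": "control-panel",
    "line": "NullPointerException at ...",
}
```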

Dashboard: Volume analysis

Platform components used:

  • Notebooks: used for the development and execution of algorithms.
  • Dashboards: used to represent the data stored in the ontologies.
  • Semantic Models: used to store the results of algorithms.
  • Digital Brokers: used to carry out queries and insertions in ontologies from the notebooks.

What was achieved:

  • GitHub Repository: where you can find the code of the web project, the JSON schema of the ontologies, and the notebooks and control panel exported as JSON.
  • Website Project:

  • Notebooks:



