
Machine learning: How algorithms are trained at KSB

Is it possible to collect operating data from pumps and other machines, then process it using smart algorithms to reliably detect operating anomalies and make valid predictions about problems that may arise in the future? By harnessing machine learning, KSB is opening the door to a fascinating research field. Learn more about the current state of technology, the methodologies KSB is using to explore the topic and the opportunities offered by machine learning.

Detect machine damage before it happens

It started with a low humming sound, masked by the noise of the running motor. A short time later, slight vibrations developed in the system, which also went unnoticed – until the moment the bearing suddenly failed and the entire pumping station went down. The result: a complete system shutdown, followed by a frantic rush to procure spare parts and difficulties getting them installed quickly. An expensive incident, but one which will soon be preventable!

In practice, pumps are often operated continuously under highly transient conditions, and even beyond their operating limits, due to incorrect sizing or changing operating conditions – with well-known negative consequences: reduced efficiency, increased wear, shorter service life and a higher risk of unforeseeable premature failures. Unfortunately, operators rarely carry out regular monitoring or a precise analysis of the operating data.

But what if there was a monitoring and diagnosis system that could closely monitor your pumps’ operating status, evaluate their service life, detect anomalies and predict failures at an early stage? This would ensure that the pump sets could be operated safely and reliably under all operating conditions and within the limits of the defined operating range. It would minimise the risk of unscheduled downtimes and enable individual pump maintenance schedules to be entirely based on what is actually needed – making a huge contribution towards greater efficiency and saving resources!

Reliably detecting and evaluating anomalies

Such a monitoring and diagnosis system is based on two pillars: data acquisition using sensors and intelligent processing of this data – and that is where the magic happens.

The magic word here is ‘anomaly detection’ – a well-known research task in the field of machine learning. Besides intrusion and fraud detection, predictive maintenance of technical systems is one of the main applications of anomaly detection. Typical areas of application include predicting, diagnosing and classifying any problems that occur as well as analysing wear and remaining service life.

Various system parameters are used to do this, such as temperature, pressure, voltage, rotational speed, vibrations and acoustic signals. For pumps, the first concepts have been published for detecting cavitation and clogging as well as for using vibration data to predict service life. However, to date, only comparatively simple statistical methods have been used to diagnose the causes of damage.
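As a minimal illustration of how such parameters can feed anomaly detection, the sketch below flags vibration samples that deviate strongly from the sample mean (a simple z-score test). This is a toy stand-in for trained models, and all numeric values are invented:

```python
import statistics

def detect_anomalies(readings, z_threshold=2.0):
    """Return indices of readings far from the sample mean (z-score test)."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, x in enumerate(readings)
            if stdev > 0 and abs(x - mean) / stdev > z_threshold]

# Vibration velocity samples in mm/s with one obvious outlier
vibration = [2.1, 2.0, 2.2, 2.1, 9.8, 2.0, 2.1]
print(detect_anomalies(vibration))  # → [4]
```

Statistical tests like this are what "comparatively simple statistical methods" refers to; the work packages below describe how to go beyond them.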

How do you turn simple data into useful information?

Or to put it another way: how can anomalies be detected and analysed so that predictions can be made? Answering this question requires a sophisticated methodology and an ambitious work plan. To enable such an application and generate real added value, the development of algorithms and mathematical models must proceed in parallel with work on data strategy and infrastructure. During this development process, the individual work packages are clearly defined:

  1. Creation of the basic overall system architecture
  2. Development of a data platform
  3. Detection of fault statuses (anomalies)
  4. Classification of fault statuses
  5. Prediction of fault statuses

Due to dependencies within these work packages, an iterative approach is essential. This is especially the case for developing the data strategy and the necessary infrastructure.


Reading pump data using KSB Guard

Unscheduled pump downtimes can be really expensive. However, with the KSB Guard monitoring service, such failures are far less likely.

Work package 1: Creation of the basic overall system architecture

First, a detailed architecture must be designed for the overall system, and the interfaces between its subsystems (cloud and local) must be described. For the local systems, the interfaces between sensors, pre-processing and the algorithms need to be defined. Besides the cloud interface, the further processing of the information in the cloud and the path back to the sensor must also be specified. The type, frequency and formats of the input and results data need to be defined to allow further analyses and visualisations, because many relationships can only be identified by visualising multiple data streams in correlation.
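To make the interface questions concrete, a sensor-to-cloud message could be sketched as a small typed record. The field names, units and values here are purely illustrative assumptions, not KSB's actual interface:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    """Hypothetical message format for the sensor-to-cloud interface."""
    pump_id: str
    timestamp: str          # ISO 8601, UTC
    vibration_mm_s: float   # vibration velocity
    temperature_c: float    # bearing temperature

reading = SensorReading("pump-001", "2024-05-01T12:00:00Z", 2.3, 41.5)
payload = json.dumps(asdict(reading))  # serialised for transmission to the cloud
```

Fixing such a schema up front settles the type and format questions early, and the same record structure can carry results back from the cloud towards the sensor side.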

Work package 2: Development of a data platform

The development of a data platform is closely linked with the development of the infrastructure. Data from different sources (e.g. from test stands, the KSB Guard monitoring solution, static information, meta or weather data) are collected in a cloud-based ‘data lake’. To store and retrieve this data efficiently, the data formats must be carefully evaluated and selected. And to meet the requirements of a defined user pool, user roles, permissions and account security facilities must be created and defined.

Setting up and maintaining a development infrastructure (DEV), test infrastructure (QA) and a production infrastructure are also extremely important. Software and new features that are rolled out in these systems are subject to automated testing and versioning. Furthermore, a tool must be developed to enable the data measured on the machines to be enriched with meta-information (e.g. pump status and operating conditions). This information significantly enhances the data and lays the foundation for training the algorithms.
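The enrichment step can be pictured as merging a raw measurement with static meta-information. The `enrich` helper and all field names below are hypothetical, not KSB's actual tooling:

```python
def enrich(measurement, metadata):
    """Attach meta-information (e.g. pump status, operating conditions)
    to a raw measurement record, without mutating the original."""
    enriched = dict(measurement)
    enriched["meta"] = dict(metadata)
    return enriched

raw = {"pump_id": "pump-001", "vibration_mm_s": 2.3}
labelled = enrich(raw, {"status": "commissioned", "fluid": "water"})
```

Records annotated this way are what make supervised training of the algorithms possible in the later work packages.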

Work package 3: Detection of fault statuses (anomalies)

In this work package, system models are trained to individually detect deviations from normal operation, using algorithms from the field of machine learning. Limit value algorithms are also involved; to develop these, KSB draws on its expert knowledge as well as relevant standards (e.g. DIN ISO 10816). Fixed or flexible limits are defined to indicate anomalies. Detected anomalies are annotated according to the degree of deviation from normal operating conditions, their duration and their urgency.

To ensure resource efficiency in the long term, the focus is on the suitability and precision of the algorithms or limit value implementations and their continuous optimisation. The quantity and quality of the input data required for anomaly detection is optimised through selection, combination and adaptation.
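The interplay of fixed and flexible limits can be sketched as follows. The fixed limit is an invented illustrative value, not one quoted from any standard, and the adaptive limit is a toy stand-in for a per-machine baseline learned from normal operation:

```python
from collections import deque

FIXED_LIMIT = 4.5  # mm/s — illustrative value only

def flexible_limit(history, factor=1.5):
    """Adaptive limit: a multiple of the recent average."""
    return factor * (sum(history) / len(history))

window = deque(maxlen=5)  # rolling history of recent samples
anomalies = []
samples = [2.0, 2.1, 2.0, 2.2, 2.1, 5.0, 2.1]
for t, v in enumerate(samples):
    if len(window) == window.maxlen:
        # the stricter of the fixed and the adaptive limit applies
        limit = min(FIXED_LIMIT, flexible_limit(window))
        if v > limit:
            anomalies.append(t)  # time index of the violation
    window.append(v)
```

The single spike at index 5 is flagged, while the steady readings around it pass both limits.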

Work package 4: Classification of fault statuses

The monitored pumps’ anomalies are classified using the implemented machine learning algorithms. To train the algorithms, real, simulated or generated data from abnormally operating pump sets is required. Existing expert knowledge is also used here. The developed algorithms are documented and implemented in the infrastructure.
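A classification step of this kind can be illustrated with a toy nearest-centroid classifier. The fault labels, feature choices and numbers below are invented stand-ins for the real or simulated training data mentioned above:

```python
import math

# Invented labelled training data: (vibration mm/s, temperature °C)
TRAIN = {
    "bearing_wear": [(6.0, 55.0), (6.5, 58.0)],
    "cavitation":   [(4.0, 42.0), (4.5, 40.0)],
}

# One centroid per fault class
CENTROIDS = {
    label: tuple(sum(dim) / len(dim) for dim in zip(*points))
    for label, points in TRAIN.items()
}

def classify(sample):
    """Assign a fault class by nearest centroid in feature space."""
    return min(CENTROIDS, key=lambda lbl: math.dist(sample, CENTROIDS[lbl]))

print(classify((6.2, 56.0)))  # → bearing_wear
```

Production systems would use richer features and trained models, but the principle — mapping a sensor feature vector to a fault class learned from labelled examples — is the same.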

Work package 5: Prediction of fault statuses

The goal is to be able to make forecasts (predictions) about anomalies based on current operating parameters, allowing precautions to be taken before an unscheduled downtime occurs. Rather than detecting anomalies when they actually happen, these algorithms are designed to predict when an anomaly is highly likely to occur. Long-term trends are modelled using expert knowledge and are independently learned by machine learning algorithms. To evaluate these prediction algorithms, real or simulated anomaly data from pump sets are needed.
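A toy stand-in for such a prediction is a least-squares trend line extrapolated to a limit value; the measurement series and the limit below are invented:

```python
def predict_crossing(times, values, limit):
    """Fit a least-squares line to (times, values) and return the
    extrapolated time at which the trend reaches `limit`, or None
    if the trend is flat or falling."""
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    if slope <= 0:
        return None
    intercept = mv - slope * mt
    return (limit - intercept) / slope

# Vibration drifting upward: when will it reach 4.5 mm/s?
t_hit = predict_crossing([0, 1, 2, 3], [2.0, 2.2, 2.4, 2.6], 4.5)  # → 12.5
```

The learned long-term models described above replace this naive linear trend, but the output is the same kind of answer: an estimated time window in which an anomaly becomes likely, early enough to schedule maintenance.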

On the best path to predictive maintenance: Visualisation of vibration data detected by KSB Guard

Summary and conclusion

Only reliable automatic anomaly detection and prediction make true predictive maintenance possible – with all of its potential benefits. But the path there still involves large areas of unknown terrain and a field of research which is currently being explored. The KSB Guard monitoring service makes KSB one of the pioneers in this field. The company is working hard to advance the topic of predictive maintenance for its customers. Especially in light of the digitalisation of large areas of industry and endeavours to implement the principles of Industry 4.0, intelligent and beneficial handling of data is becoming ever more important. With significantly growing data volumes, it is becoming increasingly difficult to extract valuable information using simple data analyses. Machine learning helps to filter out crucial information, recognise patterns and make predictions. This supports companies in making the right decisions and significantly improves their processes and efficiency.
