TALK

Deploying AI for Near Real-Time Manufacturing Decisions

Wednesday 14th

17:05 | 17:45

Theatre 25

Technology

Keywords defining the session:

- Predictive Maintenance

- Machine Learning

Takeaway points of the session:

- Common challenges for building and deploying machine learning models in manufacturing are described.

- A full lambda architecture is presented with workflows designed for domain experts.

Description:

A full workflow for deploying machine learning models that predict equipment failures in a manufacturing environment is presented. This includes model development and validation on large historical data sets, along with additional considerations for applying the model to new, incoming data. The architectural design of a cloud-based platform that operationalizes these advanced analytics using a typical lambda architecture, combining batch and stream processing, is discussed in detail. As these architectures and software stacks mature in areas like manufacturing, it is increasingly important to bring engineers and domain experts into this workflow. This talk also highlights the value of exposing the functionality through API layers designed to make these systems more secure and extensible and to better support manufacturing decisions.

There are unique challenges associated with building and deploying machine learning models for predictive maintenance. First, you must have failure data to train a good model. In many cases there is plenty of historical data to work with, but not always. Equipment failures can be expensive to introduce for the sake of building a data set, and the failure would need to be repeated enough times to collect data you can be confident in. In these cases, simulation can be used to create synthetic data sets that cover a variety of failure conditions. Model development and validation on large amounts of simulated data are discussed. Data preprocessing, analysis, and modeling are performed at scale in the batch layer on a distributed storage system and leverage Apache Spark.
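
To make the batch layer concrete, the sketch below shows what such preprocessing and model training might look like in PySpark. The storage path, column names, windowing, and choice of classifier are illustrative assumptions, not the implementation presented in the talk, which performs this step with MATLAB on top of Spark.

```python
# Hypothetical batch-layer sketch: read historical/simulated sensor data from
# distributed storage, build condition-indicator features, and train a
# failure classifier at scale with Spark.
from pyspark.sql import SparkSession, functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml import Pipeline

spark = SparkSession.builder.appName("pdm-batch-layer").getOrCreate()

# Hypothetical path and schema for the historical + simulated data set.
raw = spark.read.parquet("s3://example-bucket/sensor-history/")

# Aggregate per asset and hour into simple condition indicators; a window is
# labeled 1 if any failure was recorded in it.
features = (
    raw.dropna(subset=["vibration", "temperature", "pressure"])
       .groupBy("asset_id", F.window("timestamp", "1 hour"))
       .agg(
           F.mean("vibration").alias("vib_mean"),
           F.stddev("vibration").alias("vib_std"),
           F.max("temperature").alias("temp_max"),
           F.mean("pressure").alias("press_mean"),
           F.max("failure_label").cast("double").alias("label"),
       )
)

pipeline = Pipeline(stages=[
    VectorAssembler(
        inputCols=["vib_mean", "vib_std", "temp_max", "press_mean"],
        outputCol="features",
    ),
    RandomForestClassifier(labelCol="label", featuresCol="features"),
])

# Hold out data for validation before the model is promoted to the speed layer.
train, test = features.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
model.transform(test).groupBy("label", "prediction").count().show()
```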

The value of simulation data is further showcased with a real-world digital twin built to scale across the cloud platform for diagnostics and prognostics of the asset. This provides a faithful representation of the asset deployed in the field, enabling experimentation and informed decision-making at any time.
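
The talk does not walk through the simulation model itself; the sketch below simply illustrates the idea of sweeping a deliberately simple, hypothetical degradation model over a range of wear rates and loads to produce a labeled synthetic data set. The model, parameter ranges, and failure threshold are all assumptions for illustration.

```python
# Hypothetical sketch of building a synthetic training set from a simulation
# model swept over a variety of operating and failure conditions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def simulate_run(wear_rate: float, load: float, hours: int = 500) -> pd.DataFrame:
    """Simulate one run of the asset; label hours where accumulated wear
    exceeds a (hypothetical) failure threshold."""
    t = np.arange(hours)
    wear = wear_rate * t
    vibration = 1.0 + wear + rng.normal(0.0, 0.05, hours)
    temperature = 60.0 + 20.0 * load + rng.normal(0.0, 1.0, hours)
    return pd.DataFrame({
        "hour": t,
        "vibration": vibration,
        "temperature": temperature,
        "failure_label": (wear > 0.8).astype(int),
    })

# Sweep wear rates and loads to cover healthy runs and several failure modes.
runs = []
for run_id, (wear_rate, load) in enumerate(
    (w, l) for w in np.linspace(0.0005, 0.005, 10) for l in (0.5, 0.8, 1.0)
):
    df = simulate_run(wear_rate, load)
    df["asset_id"] = f"sim-{run_id:03d}"
    runs.append(df)

synthetic = pd.concat(runs, ignore_index=True)
print(synthetic["failure_label"].mean())  # fraction of hours labeled as failed
```
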
For real-time prediction, analytics are executed on streams of data using Apache Kafka-based data pipelines. Machine learning inference models are executed with low latency and high performance to enrich incoming streams of data as the data itself is persisted on a distributed storage system.
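
As an illustration of this speed layer, the sketch below uses the kafka-python client to consume raw sensor records, score them, and publish the enriched records to a downstream topic. The topic names, message schema, and placeholder scoring function are assumptions; in the talk the actual inference models are developed in MATLAB.

```python
# Hypothetical speed-layer sketch: score each incoming sensor record and
# publish the enriched record for the serving layer.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "sensor-readings",                        # hypothetical input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def score(record: dict) -> float:
    """Placeholder for the deployed failure-prediction model."""
    return min(1.0, 0.1 * record.get("vibration", 0.0))

for message in consumer:
    record = message.value
    record["failure_probability"] = score(record)
    producer.send("sensor-readings-scored", record)  # hypothetical output topic
```
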
This introduces another common challenge: these systems involve data from many different sensors that collect and report data at different times. The data must be time-aligned before many calculations can be applied, which can make it difficult to design a streaming architecture. These challenges are addressed through a simple stream processing framework that incorporates time-windowing and handles out-of-order data with Apache Kafka. The sensor data are then easily synchronized in MATLAB for further signal processing and passed to the machine learning model.
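
The talk performs this synchronization in MATLAB; as a rough illustration of the idea, the pandas sketch below takes out-of-order records from two sensors reporting at different rates, sorts them, and resamples them onto a common one-second time base within a window. The sensor names and values are made up.

```python
# Hypothetical sketch of time-aligning out-of-order, multi-rate sensor data
# inside one time window.
import pandas as pd

records = pd.DataFrame([
    {"sensor": "vibration",   "timestamp": "2024-01-01 00:00:03", "value": 1.31},
    {"sensor": "temperature", "timestamp": "2024-01-01 00:00:00", "value": 71.2},
    {"sensor": "vibration",   "timestamp": "2024-01-01 00:00:01", "value": 1.27},
    {"sensor": "temperature", "timestamp": "2024-01-01 00:00:10", "value": 71.9},
    {"sensor": "vibration",   "timestamp": "2024-01-01 00:00:02", "value": 1.29},
])
records["timestamp"] = pd.to_datetime(records["timestamp"])

# Sort to repair out-of-order arrival, pivot to one column per sensor, then
# resample onto a common 1-second time base and interpolate the gaps.
aligned = (
    records.sort_values("timestamp")
           .pivot_table(index="timestamp", columns="sensor", values="value")
           .resample("1s").mean()
           .interpolate(method="time")
)
print(aligned)  # one row per second, one column per sensor, ready for modeling
```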

By bringing analytics to the speed, batch, and serving layers together, a full lambda architecture is described and demonstrated for this problem. The workflows that operate these stacks are tuned to the needs of domain experts, providing streamlined capabilities to users of this infrastructure with tooling that ranges from textual modeling languages such as MATLAB and graphical user interfaces to public cloud services (AWS, Azure) and model-based design platforms such as Simulink.

In conclusion, the ability to scale advanced analytics workflows is becoming increasingly critical as software technology stacks mature in areas like manufacturing. Exposing the functionality of such a platform to engineers and end users alike is achieved via API layers designed to make such systems secure and extensible. This composability of the architectural design increases flexibility for end users and provides a future-proof platform in a continuously evolving landscape.
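
As a minimal illustration of such an API layer, the Flask sketch below exposes a single prediction endpoint guarded by an API key. The route, payload, and key check are hypothetical placeholders for the secured, extensible interfaces described in the talk, not its actual API.

```python
# Hypothetical API-layer sketch exposing the deployed model to end users.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
API_KEYS = {"example-key"}  # stand-in for a real authentication mechanism

def predict_failure(payload: dict) -> float:
    """Placeholder for a call into the serving layer / deployed model."""
    return min(1.0, 0.1 * payload.get("vibration", 0.0))

@app.route("/v1/assets/<asset_id>/failure-risk", methods=["POST"])
def failure_risk(asset_id: str):
    # Reject callers without a valid key; extensible to other auth schemes.
    if request.headers.get("X-Api-Key") not in API_KEYS:
        abort(401)
    payload = request.get_json(force=True)
    return jsonify({
        "asset_id": asset_id,
        "failure_probability": predict_failure(payload),
    })

if __name__ == "__main__":
    app.run(port=8080)
```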

MEDIA

Keynote