
Is Engineering Going To Be Displaced By Predictive Analytics?

The new trend is predictive analytics for industrial machinery. Unfortunately, it tends not to consider engineering disciplines that have been dealing with this topic for decades, such as reliability engineering.

Some of my industry colleagues remind me every now and then of the high volume of contacts they receive regarding predictive analytics tools and how these could give their business an edge. That could well be the case, but it needs to be executed properly and based on the right assumptions. How do you find the needle in the haystack that fits your case?

Humans are the scarce resource

At a minimum, we should avoid the common pitfalls seen in early experiences. For example, have a clear scope regarding what to do about:

  • Data collection strategies
  • Data dredging
  • Single points of failure
  • Security & privacy
  • Intellectual property rights (of the data collected)
  • Vendor lock-in

I’d like to briefly talk about the data collection strategy.

Data collection strategy

The data collection strategy needs to be defined. One approach is to work backwards from the problem we are trying to solve or improve: identify its needs, define the ideal measurement requirements, and then check them against the measurements actually available, considering variables such as sample size, resolution and range.
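As a rough illustration of working backwards, here is a minimal sketch in Python. The names, fields and numbers are my own hypothetical choices, not a prescribed format; the point is simply to compare the ideal measurement requirement against what a sensor already installed can offer.

```python
from dataclasses import dataclass


@dataclass
class MeasurementSpec:
    """What the problem ideally needs, or what a sensor actually offers."""
    sample_rate_hz: float   # samples per second
    resolution_bits: int    # ADC resolution
    range_g: float          # measurement range, e.g. in g for an accelerometer


def meets_requirement(required: MeasurementSpec, available: MeasurementSpec) -> bool:
    """The available measurement is acceptable only if it is at least as
    capable as the ideal requirement on every variable we care about."""
    return (available.sample_rate_hz >= required.sample_rate_hz
            and available.resolution_bits >= required.resolution_bits
            and available.range_g >= required.range_g)


# Hypothetical example: bearing-fault detection vs. the sensor already installed.
required = MeasurementSpec(sample_rate_hz=1000, resolution_bits=16, range_g=50)
available = MeasurementSpec(sample_rate_hz=100, resolution_bits=24, range_g=50)
print(meets_requirement(required, available))  # False: the sample rate is the gap
```

A check like this makes the gap explicit: either a new sensor is justified by the problem, or the problem statement has to be adjusted to the data that can realistically be collected.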

One important decision, regarding where and how to gather and process the data, is whether it will be done on premises or in the cloud. In this regard, a new term is emerging, edge computing, which, as I will argue, reflects some long-standing engineering practices.

Edge computing

The idea behind edge computing is to bring the “intelligence” of the cloud to the edge of the network, that is, to execute some logic locally (on premises) instead of in the cloud. This might be due to technical, economic or political constraints; for example, network bandwidth may make it impossible to send all the raw data to the cloud.
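A back-of-envelope calculation, with illustrative numbers of my own choosing, shows why streaming everything raw is often impractical for a machine instrumented with a handful of high-rate sensors:

```python
# Illustrative numbers only: 7 high-rate sensors at 1,000 samples/s,
# 4 bytes per sample, on a single machine.
sensors = 7
samples_per_second = 1_000
bytes_per_sample = 4
seconds_per_day = 86_400

raw_bytes_per_day = sensors * samples_per_second * bytes_per_sample * seconds_per_day
print(f"Raw data per machine per day: {raw_bytes_per_day / 1e9:.1f} GB")  # ~2.4 GB

# If the edge only ships, say, 8 summary values per sensor every 10 minutes,
# the upstream traffic shrinks by several orders of magnitude.
summary_bytes_per_day = sensors * (seconds_per_day // 600) * 8 * bytes_per_sample
print(f"Summarised data per machine per day: {summary_bytes_per_day / 1e3:.1f} kB")  # ~32 kB
```

Multiply the raw figure by a whole fleet of machines and the case for processing locally and sending back only a reduction becomes obvious.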

“Edge computing” system from Gram & Juhl

Edge computing has been done for almost a decade in vibration monitoring systems for wind turbines, such as those from Gram & Juhl. This is driven by the massive amount of data that the accelerometers (typically seven) collect, from 10 to 1,000 samples per second per sensor. The important aspect here is that the measurements to monitor are deterministic: the system knows where to look for certain cases and can decide what to process locally and what to send back to the cloud (normally a simplification of a local computation). This is a process designed, monitored and executed by an engineer, who works backwards, in a deterministic way, from the problems they want to deal with and then configures the system accordingly.
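To make the “simplification of a local computation” concrete, here is a minimal sketch of an edge node reducing a block of raw vibration samples to a few indicators before anything is sent upstream. This is my own illustration, not Gram & Juhl’s actual pipeline; the indicator names and block sizes are assumptions.

```python
import numpy as np


def reduce_vibration(signal: np.ndarray, sample_rate_hz: float) -> dict:
    """Reduce a block of raw accelerometer samples to a few indicators
    chosen in advance by an engineer (the 'deterministic' part)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    return {
        "rms": float(np.sqrt(np.mean(signal ** 2))),                    # overall vibration level
        "peak": float(np.max(np.abs(signal))),                          # largest excursion
        "dominant_freq_hz": float(freqs[np.argmax(spectrum[1:]) + 1]),  # strongest component, skipping DC
    }


# Edge loop sketch: process locally, ship only the summary to the cloud.
rng = np.random.default_rng(0)
one_second_block = rng.normal(size=1000)  # stand-in for 1 s of raw samples at 1 kHz
summary = reduce_vibration(one_second_block, sample_rate_hz=1000)
print(summary)  # a handful of numbers instead of 1,000 raw samples
```

Which indicators to compute, for which failure modes, and at what thresholds to raise an alarm is exactly the engineering configuration work described above; the code itself is the easy part.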

Summing up

I believe the bottleneck here is the engineering knowledge needed to configure those measurements, understand the trade-offs and consequences, and operate such a system. In other words, I would argue that the human is the scarce resource, not the edge computing capability. If so, shouldn’t we talk about the specific engineering case we are trying to solve?