
Using R and H2O to identify product anomalies during the manufacturing process.

Note: this article is kept for reference; for an improved version, go to: http://laranikalranalytics.blogspot.com/2021/03/updated-using-r-and-h2o-to-identify.html



Introduction:

We will identify anomalous products on the production line using measurements from testing stations and a deep learning model. Anomalous products are not failures; they are products close to the measurement limits. By flagging them, we can display warnings before the process starts producing failed products, so the stations can get maintenance in time.

Before starting, we need the following software installed and working:

- R language installed.
- All the R packages mentioned in the R sources (on my GitHub).
- Testing station data; I suggest going station by station.
- The H2O open-source framework (a quick setup sketch follows this list).
- Java 8 (for H2O). OpenJDK: https://github.com/ojdkbuild/contrib_jdk8u-ci/releases
- RStudio.
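
If you do not have H2O yet, here is a minimal setup sketch. It installs the CRAN build of the h2o package (which may lag the latest H2O release; H2O.ai also publishes newer builds on its own repository) and verifies that the Java setup works:

# Minimal H2O setup check, assuming Java 8 is installed and on the PATH.
install.packages( "h2o" )

library( h2o )
h2o.init()          # Starts a local H2O cluster, verifying the Java setup.
h2o.clusterInfo()   # Prints cluster version, memory and number of nodes.
h2o.shutdown( prompt = FALSE )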

Get your data.

About the data: since I cannot use my real data, for this article I am using the SECOM Data Set from the UCI Machine Learning Repository.

How many records?
Training data set: in my real project I use 100 thousand test-passed records, which is around a month of production data.
Testing data set: I use the last 24 hours of testing station data.
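
If you want to fetch the SECOM file from R, a small sketch follows; the URL is assumed from the UCI repository's usual layout, so adjust it if the dataset page has moved:

# Download the SECOM data file if it is not already in the working directory.
# NOTE: the URL is an assumption based on the UCI repository layout.
secomUrl = "https://archive.ics.uci.edu/ml/machine-learning-databases/secom/secom.data"
if( !file.exists( "secom.data" ) ){
  download.file( secomUrl, destfile = "secom.data" )
}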


Let the fun begin.

Deep Learning Model Creation And Testing.



# Load libraries
library( h2o )

h2o.init( nthreads = -1, max_mem_size = "5G", port = 6666 )

h2o.removeAll() ## Remove any existing data from the h2o cluster before we load ours.

# Reading SECOM data file
allData = read.csv( "secom.data", sep = " ", header = FALSE, encoding = "UTF-8" )

# Fixing the data set: there are a lot of NaN records.
if( dim( na.omit( allData ) )[1] == 0 ){
  for( colNum in 1:dim( allData )[2] ){

    # Get the valid ( non-NaN ) values from the current column.
    ValidColumnValues = allData[, colNum][ !is.nan( allData[, colNum] ) ]

    # Check each value in the current column.
    for( rowNum in 1:dim( allData )[1] ){

      cat( "Processing row:", rowNum, ", Column:", colNum, "Data:", allData[rowNum, colNum], "\n" )

      if( is.nan( allData[rowNum, colNum] ) ) {

        # Replace the NaN with a random valid value from the same column.
        # sample() draws uniformly from the whole vector, unlike the original
        # floor( runif( ... ) ), which could never pick the last element.
        allData[rowNum, colNum] = sample( ValidColumnValues, 1 )
      }
    }
  }
}
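
The nested loops above are easy to follow but slow on a wide data set like SECOM. A faster, equivalent sketch imputes each column in one vectorized step (same logic, just restructured):

# Vectorized per-column imputation: for each column, find the NaN positions
# and fill all of them at once with random draws from the valid values.
for( colNum in 1:ncol( allData ) ){
  missingMask = is.nan( allData[, colNum] )
  if( any( missingMask ) ){
    validValues = allData[, colNum][ !missingMask ]
    allData[ missingMask, colNum ] =
      validValues[ sample.int( length( validValues ), sum( missingMask ), replace = TRUE ) ]
  }
}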

# Splitting all data: the first 90% for training and the remaining 10% for testing our model.
trainingData = allData[1:floor(dim(allData)[1]*.9),]
testingData = allData[(floor(dim(allData)[1]*.9)+1):dim(allData)[1],]
# Convert the training dataset to H2O format.
trainingData_hex = as.h2o( trainingData, destination_frame = "train_hex" )

# Set the input variables
featureNames = colnames(trainingData_hex)

# Creating the first model version.
trainingModel = h2o.deeplearning( x = featureNames
                                  , training_frame = trainingData_hex
                                  , model_id = "Station1DeepLearningModel"
                                  , activation = "Tanh"
                                  , autoencoder = TRUE
                                  , reproducible = TRUE
                                  , l1 = 1e-5
                                  , ignore_const_cols = FALSE
                                  , seed = 1234
                                  , hidden = c( 400, 200, 400 ), epochs = 50 )


# Getting the anomalies from the training data to set the minimum MSE ( Mean Squared Error )
# value before flagging a record as an anomaly.
trainMSE = as.data.frame( h2o.anomaly( trainingModel
                                       , trainingData_hex
                                       , per_feature = FALSE ) )

# Check the first 30 trainMSE records, sorted in descending order, to see our outliers
head( sort( trainMSE$Reconstruction.MSE , decreasing = TRUE ), 30)
# 0.020288603 0.017976305 0.012772556 0.011556780 0.010143009 0.009524983 0.007363854
# 0.005889714 0.005604329 0.005189614 0.005185285 0.005118595 0.004639442 0.004497609
# 0.004438342 0.004419993 0.004298936 0.003961503 0.003651326 0.003426971 0.003367108
# 0.003169319 0.002901914 0.002852006 0.002772110 0.002765924 0.002754586 0.002748887
# 0.002619872 0.002474702

# Plotting the reconstruction errors of our training data, to get a graphical
# view of them.

plot( sort( trainMSE$Reconstruction.MSE ), main = 'Reconstruction Error', ylab = "MSE Value." )

( Chart: the sorted reconstruction error ( MSE ) values of the training data. )
# Looking at the chart and the first 30 descending-sorted MSE records, we can pick .01
# as our minimum MSE before flagging a record as an anomaly, because we see just a few
# records with two decimals greater than zero, and we can treat those as outliers.
# This threshold is something you must decide for your own data.
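
If you prefer a data-driven starting point over eyeballing the chart, one possible sketch is to take a high quantile of the training reconstruction error; the 0.99 level below is an assumption to tune, not part of the original method:

# A possible data-driven threshold: a high quantile of the training MSE.
# The probs = 0.99 level is an assumed starting point; tune it per station.
mseThreshold = quantile( trainMSE$Reconstruction.MSE, probs = 0.99 )
cat( "Suggested MSE threshold:", mseThreshold, "\n" )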

# Updating the trainingData data set, keeping only records with reconstruction error < .01
trainingDataNew = trainingData[ trainMSE$Reconstruction.MSE < .01, ]

h2o.removeAll() ## Remove the data from the h2o cluster in preparation for our final model.

# Convert our new training data frame to H2O format.
trainingDataNew_hex = as.h2o( trainingDataNew, destination_frame = "train_hex" )

# Creating the final model.
trainingModelNew = h2o.deeplearning( x = featureNames
                                     , training_frame = trainingDataNew_hex
                                     , model_id = "Station1DeepLearningModel"
                                     , activation = "Tanh"
                                     , autoencoder = TRUE
                                     , reproducible = TRUE
                                     , l1 = 1e-5
                                     , ignore_const_cols = FALSE
                                     , seed = 1234
                                     , hidden = c( 400, 200, 400 ), epochs = 50 )

################################
# Check our testing data for anomalies.
################################

# Convert our testing data frame to H2O format.
testingDataH2O = as.h2o( testingData, destination_frame = "test_hex" )

# Getting anomalies found in testing data.
testMSE = as.data.frame( h2o.anomaly( trainingModelNew
                                      , testingDataH2O
                                      , per_feature = FALSE ) )

# Binding our data.
testingData = cbind( MSE = testMSE$Reconstruction.MSE , testingData )

# Filter testing data using the MSE value set as minimum.
anomalies = testingData[ testingData$MSE >= .01,  ]

# When anomalies are detected, send a notice to the maintenance area.
if( dim( anomalies )[1] > 0 ){
  cat( "Anomalies detected in the sample data, station needs maintenance.\n" )
}
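
In production you would replace the cat() call with a real notification. A minimal sketch is below; the report file path is a made-up placeholder, and in practice this would be an e-mail, a ticket, or whatever your maintenance area uses:

# A minimal notification sketch: write the anomalous records to a CSV report.
# The path is a hypothetical placeholder; adapt it to your alerting mechanism.
if( dim( anomalies )[1] > 0 ){
  reportFile = file.path( tempdir(), "station1_anomalies.csv" )
  write.csv( anomalies, reportFile, row.names = FALSE )
  cat( "Anomaly report written to:", reportFile, "\n" )
}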


Here is the code on GitHub:
https://github.com/LaranIkal/ProductAnomaliesDetection


Enjoy it!

Carlos Kassab
