Getting data from BMA220 with Jetson Nano

In last week’s blog post, we worked on connecting an accelerometer (the BMA220) to the Jetson Nano through I2C. This is one step toward our goal of eventually creating a predictive maintenance model that can accept inputs of various types. In this post we will continue down this path by reading the x, y, and z data from our accelerometer with Python.

Using SMBus:

To communicate with our sensor, we will be using SMBus (System Management Bus), a two-wire protocol built on top of I2C; the Python smbus library exposes it for exactly this kind of device access. The documentation can be found here. To use SMBus in our Python file, first run pip install smbus, then add the following code to the top:

from smbus import SMBus
import time #this allows us to use time-related functions we will need later.
  • Getting the BMA220’s address

To communicate with the accelerometer, we need to specify its address and which bus it is on. We can use the i2cdetect command below to scan a specific bus and find the accelerometer’s address.

i2cdetect -y -r 1 #scans bus number 1

You should get an output similar to this:

This tells us that the accelerometer is located on bus 1, at the address 0x0a. We can take these pieces of data and create two variables in a python file:

i2cbus = SMBus(1)  # Create a new I2C bus
i2caddress = 0x0a  # Address of BMA220
  • Reading data output

We will next use SMBus to read the x, y, and z data from the corresponding register address. Below is a global memory map that shows the BMA220’s I2C register addresses and their functions. The full documentation of the BMA220 can be found here.

The useful addresses for us here are 0x4, 0x6, and 0x8. These correspond to the x, y, and z data, respectively. 

To read these addresses we will use the read_byte_data function (which takes an I2C address and the register to read) inside a loop that runs forever. We will read the x, y, and z data, assign each to a variable, and print the three values once every second. This is the code to implement this:

while True:
    xdata = i2cbus.read_byte_data(i2caddress, 0x4)  # read the raw x data
    ydata = i2cbus.read_byte_data(i2caddress, 0x6)  # read the raw y data
    zdata = i2cbus.read_byte_data(i2caddress, 0x8)  # read the raw z data
    print(xdata, ydata, zdata)  # print the x, y, and z values
    time.sleep(1)  # wait one second before the next reading

After running the python file, you should get an output similar to this:

If you physically move the accelerometer, you can see the values change as they are updated every second. 
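Note that read_byte_data returns the raw register contents. According to the BMA220 datasheet (and the register notes in the Eclipse UPM repository mentioned below), the acceleration value is a 6-bit two's-complement number stored in the upper bits of each data register, so a small helper along these lines can turn the raw bytes into signed readings. Treat this as a sketch rather than a verified driver:

def to_signed(raw):
    value = raw >> 2   # keep the upper six bits, which hold the reading
    if value > 31:     # 6-bit two's complement: 32 to 63 are negative values
        value -= 64
    return value

# inside the loop, for example:
# print(to_signed(xdata), to_signed(ydata), to_signed(zdata))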

Potential problems:

  • Strange addresses on buses

If you run i2cdetect on bus 0 or bus 2, you may see addresses there even though you have no other I2C devices connected to those buses. These are internal I2C devices that the Jetson Nano uses for its own communication. This was confusing for me when locating my I2C device, as I wasn’t sure whether it was on the bus I thought it was. A full discussion of the problem can be found here.

Other useful resources:

  • Eclipse UPM sensor repository

There is a repository created by the Eclipse Foundation that provides drivers for a variety of sensors, including the BMA220. Personally, I could not figure out how to use it, but there is valuable information about the BMA220’s registers in the /src/bma220 folder, specifically in the .hpp file. The project can be found here.

  • I2C guide

This blog provides a good overview of I2C and a brief tutorial for using an I2C device with a Raspberry Pi. The code given can be used on the Jetson Nano with only a few small tweaks.

  • Nvidia Developer Forums

If you run into an unusual problem using I2C with the Jetson Nano, chances are someone else has had the same problem and posted about it on the Nvidia developer forums, found here.

Next week, we will take a look at setting up an experiment to use the accelerometer(s) to get useful data. 

I2C Input on Jetson Nano

In last week’s blog post, we finished creating our first machine learning model using a public repository dataset. While this is a major step towards our end goal, it is missing a critical piece: being able to train and test the model on our own data. The goal is for the model to eventually be applicable to many different types of data and to switch between use cases easily.

Collecting our own data:

To use our own data on the model, we first need to collect some. In this case we will be using an accelerometer that outputs x, y, and z data: the SEN0168 board, which carries a BMA220 chip. This accelerometer connects to the Jetson Nano over I2C (Inter-Integrated Circuit).

  • What is I2C?

I2C is pronounced not “eye-two-C” but “I-squared-C”, and stands for Inter-Integrated Circuit. It is a serial communication bus that is widely used for connecting integrated circuits (found on all kinds of sensors, motors, etc.) to processors and microcontrollers. Because every device on the bus shares the same two signal lines and is identified by its own address, you can connect many I2C devices without needing extra pins on your microcontroller. I2C even supports multiple controllers on the same bus, so more than one controller can communicate with all the peripheral devices.

  • Connecting to the Jetson

The Jetson Nano comes with a 40-pin header. The pins’ functions are shown in the image below.

For the I2C connection, we only need four pins: 3.3 V power (pin 1), SDA data (pin 3), SCL clock (pin 5), and GND ground (pin 6). These pins on the Jetson Nano will be wired to the corresponding pins on the BMA220 board.

Checking the connection:

If the BMA220 has been properly connected, it should light up. We can now check for the signal on the jetson nano. 

Open the command line and run “i2cdetect -y -r <bus number>”. I am connected to I2C bus 1, and if you followed the same pin setup described above, you should be as well.

We can see that our I2C device is detected at address 0x0a (decimal 10). If you have devices connected to other buses, you can check those as well by changing the bus number in the i2cdetect command.
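If you would rather confirm the connection from Python, a minimal sketch like the one below (assuming bus 1 and address 0x0a, as in my setup) simply asks the device to acknowledge a read:

from smbus import SMBus

bus = SMBus(1)           # I2C bus 1
try:
    bus.read_byte(0x0a)  # ask the device at 0x0a to acknowledge a read
    print("BMA220 responded at 0x0a")
except OSError:
    print("No response at 0x0a - check the wiring")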

We now have an I2C device properly connected to our Jetson Nano. In the next blog post, we will cover any necessary configuration and write some code to actually access the accelerometer’s data.

The Roadmap to Model Development Part 3

In last week’s blog post, we finished up the third step (Data Preprocessing for Prediction) and began creating our first models in the final step (Model Development). Today, we will create four more models, and test each of them to compare their accuracy and runtimes.

Creating the models

Last blog post, we created only the Logistic Regression and K-Nearest Neighbors models. Today we will add the following four: Support Vector Machine, Decision Tree, Naive Bayes, and Random Forest. We will use the scikit-learn library for all of these.

  • Support Vector Machine

In this step, we create the Support Vector Machine model and name it svc. We then append its full name to our ‘classifier’ array and the instance to our ‘imported_as’ array. The Support Vector Machine model works by finding the hyperplane that separates the classes with the largest possible margin.
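A minimal sketch of what that code could look like, assuming the ‘classifier’ and ‘imported_as’ lists were created as described in last week’s post:

from sklearn.svm import SVC

svc = SVC()  # create the Support Vector Machine model
classifier.append('Support Vector Machine')  # full name of the model
imported_as.append(svc)  # the model instance itself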

  • Decision Tree

Here is the code to create the Decision Tree model. It also appends the name and instance to the ‘classifier’ and ‘imported_as’ arrays, respectively. The Decision Tree model works by building a tree of branches, where each branch asks a binary question that leads to further branches until a prediction is reached.
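A sketch of this step (the variable name dt is my own choice):

from sklearn.tree import DecisionTreeClassifier

dt = DecisionTreeClassifier()  # create the Decision Tree model
classifier.append('Decision Tree')
imported_as.append(dt)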

  • Naive Bayes

In this code block, we create the Naive Bayes model and append its name and instance to the ‘classifier’ and ‘imported_as’ arrays. Naive Bayes performs classification by applying Bayes’ theorem under the assumption that all features are independent of each other.
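A sketch of this step, using the Gaussian variant of Naive Bayes from scikit-learn:

from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()  # create the Naive Bayes model
classifier.append('Naive Bayes')
imported_as.append(nb)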

  • Random Forest Classifier

This is the code to create the Random Forest Classifier model. We then append it to the ‘classifier’ array and the ‘imported_as’ array. The Random Forest Classifier works by creating multiple decision trees and combining their predictions (typically by majority vote) before making a decision.
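And a sketch of the Random Forest step:

from sklearn.ensemble import RandomForestClassifier

rfc = RandomForestClassifier()  # create the Random Forest Classifier model
classifier.append('Random Forest')
imported_as.append(rfc)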

Testing and measuring accuracy

We can now create our main class, which we will use to build an object holding all the models so we can easily compare their accuracies.

We create a class called ‘Modeling’ with two functions, ‘fit’ and ‘results’. These can be called on a ‘Modeling’ object to fit the models and to get the results, respectively. Next, we make an array that holds all the models we want to test. We could reuse the ‘imported_as’ array, but for simplicity we will make a new array named ‘models_to_test’.
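The exact implementation is up to you; here is a minimal sketch of a ‘Modeling’ class that matches the description above, along with the ‘models_to_test’ array (it assumes the six model instances are named lr, knn, svc, dt, nb, and rfc):

import time
import pandas as pd

class Modeling:
    def __init__(self, x_train, y_train, x_test, y_test, models):
        self.x_train, self.y_train = x_train, y_train
        self.x_test, self.y_test = x_test, y_test
        self.models = models

    def fit(self):
        self.names, self.accuracies, self.runtimes = [], [], []
        for model in self.models:
            start = time.time()
            model.fit(self.x_train, self.y_train)  # train the model
            self.runtimes.append(time.time() - start)  # how long fitting took
            self.accuracies.append(model.score(self.x_test, self.y_test))  # accuracy on the test data
            self.names.append(type(model).__name__)
            print(type(model).__name__, 'has been fit')

    def results(self):
        return pd.DataFrame({'Model': self.names,
                             'Accuracy': self.accuracies,
                             'Runtime (s)': self.runtimes})

models_to_test = [lr, knn, svc, dt, nb, rfc]  # all six models created so far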

Finally, we can create an object of the ‘Modeling’ class and use our ‘fit’ and ‘results’ functions on it. We will use our previously made arrays of x and y training data, x and y testing data, and our ‘models_to_test’ array as the arguments. 
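A sketch of that call, using the class and arrays above (the object name ‘classification’ is just an example):

classification = Modeling(x_train, y_train, x_test, y_test, models_to_test)
classification.fit()  # fits every model and prints a confirmation for each
print(classification.results())  # DataFrame of accuracy and runtime per model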

If it runs properly, it should print “<name of model> has been fit” for each of the models and return a pandas DataFrame that holds the accuracy and runtime of each model.

The Random Forest Classifier seems to be the most accurate. If you want to run the comparison multiple times to see whether the RFC model is ever beaten, you can use a simple for loop to record the top model as many times as you’d like.
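One way to do that, as a rough sketch, is to refit the models repeatedly and record which one comes out on top each time:

top_models = []
for run in range(10):  # repeat the comparison ten times
    classification = Modeling(x_train, y_train, x_test, y_test, models_to_test)
    classification.fit()
    res = classification.results()
    best = res.sort_values('Accuracy', ascending=False).iloc[0]['Model']
    top_models.append(best)
print(top_models)  # the winning model from each run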

You can run it as many times as you like, and you’ll see that the Random Forest Classifier is always the most accurate. However, this comes at the price of a significantly higher runtime than the other models (around 0.4 seconds versus 0.01 and lower). This is why we trained multiple models: so we could compare which one delivers enough accuracy while still being appropriate for the hardware it will run on. From these models, you can choose the one that best fits your use case.

What’s next?

We now have a few accurate models to choose from for the purpose of predictive maintenance. However, the data used to train and test them is not our own. This project is meant to be easily adaptable to different sets of data, so next we will collect our own data for predictive maintenance and attempt to employ the model(s) on that dataset.

The Roadmap to Model Development Continued

Last week we took a look at the first two major steps in the process of model development, Data Preprocessing and Exploratory Data Analysis. In today’s article we will continue on the same roadmap, looking at two more steps that are arguably more interesting. These include the final data preprocessing to prep for model development, and developing the first few models.

Data Preprocessing for Prediction

What is the difference between the first data preprocessing step and the second? In the first step, you will remember, all we did was drop unwanted features; that was solely to prepare the data for Exploratory Data Analysis. In the second preprocessing step, we do everything else needed before model development.

  1. Encoding categorical features

Label encoding means converting categorical labels into numeric form so the model can read them. We will use it on our data: scikit-learn’s encoder, through its “.fit” method, finds the unique values, assigns each one an integer, and returns the encoded labels.

This is what the code looks like:
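A minimal sketch of this step, assuming the DataFrame is named df and the categorical column is called 'Type' (adjust both to your dataset):

from sklearn.preprocessing import LabelEncoder

encoder = LabelEncoder()
df['Type'] = encoder.fit_transform(df['Type'])  # .fit finds the unique labels, .transform maps each to an integer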

  2. Splitting Test and Training Data

Every machine learning model needs training and test data to learn and to be evaluated. In this step, we split our dataset of ten thousand rows into those two parts. We use the train_test_split function from scikit-learn and “.drop” from the pandas library to get four datasets: x_train, x_test, y_train, and y_test.

This is what the code looks like:
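A sketch of the split, assuming the target column is named 'Failure Type' (the column names depend on your dataset):

from sklearn.model_selection import train_test_split

x = df.drop(columns=['Failure Type'])  # everything except the target
y = df['Failure Type']                 # the target we want to predict
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)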

  3. Feature Scaling

Feature Scaling is a technique to normalize/standardize the independent features present in the data in a fixed range. It is done to handle highly varying magnitudes or values/units. Without feature scaling, a machine learning algorithm will weigh greater values higher and almost disregard smaller values, not taking into account the units. Note that we only do this to the x datasets, because the output does not need to be normalized.

This is what the code looks like:
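A sketch of the scaling step, using scikit-learn’s StandardScaler (other scalers work the same way); note that it is fit on the training data only:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)  # learn the scaling from the training data
x_test = scaler.transform(x_test)        # apply the same scaling to the test data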

Here is an image that shows how the distribution of the data is changed after feature scaling. 

Model Development

This is one of the final and most important steps in any model’s development: the actual creation of the model itself. For the best possible accuracy, we will create multiple models and compare them to choose the best one. In this week’s post we will look at a couple, and next week we will look at the rest. First, we create a place to store our models’ names and their instances.

Here is the code I used to do this:
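A minimal version of this is just two parallel lists, one for the model names and one for the model instances:

classifier = []    # full names of the models
imported_as = []   # the model instances themselves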

  • The Logistic Regression Model

The Logistic Regression model is a supervised learning algorithm that investigates the relationship between a dependent variable and the independent variables and produces a binary result (fail or not fail). This is what we use to predict failure.

Here is the code for the logistic regression model, and for appending it to our arrays.
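A sketch of that code, assuming the ‘classifier’ and ‘imported_as’ lists shown above:

from sklearn.linear_model import LogisticRegression

lr = LogisticRegression()  # create the Logistic Regression model
classifier.append('Logistic Regression')
imported_as.append(lr)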

  • K- Nearest Neighbors Model

K-Nearest Neighbors is also a supervised machine learning algorithm which is mostly used to solve classification problems. It stores all the available cases and classifies new cases based on similarity. “K” is the number of nearest neighbors.

This image shows how k-nearest neighbors uses the location of data distribution to predict.

This is the code to implement the KNN model and append it:
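A sketch of the KNN step (K = 5 here is simply scikit-learn’s default):

from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5)  # K = 5 nearest neighbors
classifier.append('K-Nearest Neighbors')
imported_as.append(knn)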

In next week’s blog post, we will finish the final step by creating more models, including Support Vector Machine, Random Forest, Naive Bayes, and Decision Tree. We will then test the accuracy of all of them and choose the most accurate one.

The Roadmap to Model Development

Last week, we looked at how Machine Learning can be useful in industry. Today we will explore the path to machine learning model development and the individual steps that go into that. The above roadmap outlines the four major steps in yellow on the left, and the smaller substeps on the right.

What is Data Preprocessing?

Data Preprocessing is one of the most important steps in tackling a machine learning problem. Depending on your dataset, you may have problems like missing values, useless features, and other types of noise. In the data preprocessing step, you focus on removing features that hold useless data, and addressing the missing values. We will use the Pandas library to work with our data. This is what our raw, unprocessed data looks like.

We can see that there are some features that would provide nothing of value to the machine learning model. These are features like the UDI and Product ID. They are just labels used to name each entry, so they are not actually useful. After removing them, our data is much more model-ready.

In our specific dataset, there were no missing or null values. In the real world this is usually not the case, and you should remove the features (columns) with null values entirely.
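A minimal sketch of this preprocessing, assuming the dataset has been saved to a CSV file (the file name here is just a placeholder):

import pandas as pd

df = pd.read_csv('predictive_maintenance.csv')  # placeholder file name
df = df.drop(columns=['UDI', 'Product ID'])     # drop the label-only features
print(df.isnull().sum())                        # count missing values per column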

What is Exploratory Data Analysis?

Exploratory Data Analysis is a key step that involves the initial investigating of your dataset to find anything unusual or helpful. For example, when performing EDA, you may see that Feature X correlates directly with the output data. This can be helpful when selecting a model, and the inputs that go into the model. Exploratory Data Analysis involves a lot of charting and graphing to gain a better understanding of the summary of your dataset.

Using the .describe() function from the Pandas library is a quick way to view summary statistics for your data. You can see the mean, standard deviation, min, max, and percentiles for each numerical feature.

Another important part of EDA is analyzing the skewness of your data. Using .skew() from the Pandas library, we can see just how skewed each feature is. If the skewness is between -0.5 and 0.5, the data is almost symmetrical. If it is between -1 and -0.5 (negatively skewed) or between 0.5 and 1 (positively skewed), the data is slightly skewed. If it is lower than -1 or greater than 1, the data is extremely skewed.
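Both checks are one-liners on the DataFrame:

print(df.describe())               # mean, std, min, max, and percentiles per numerical feature
print(df.skew(numeric_only=True))  # skewness of each numerical feature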

Using Plotly, you can make many different graphs to view your data.

You can mix and match features to see if there is correlation. Here we compare air temperature and failure type.
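For example, a box plot of air temperature grouped by failure type could be drawn with Plotly Express like this (the column names follow the dataset used here and may differ in yours):

import plotly.express as px

fig = px.box(df, x='Failure Type', y='Air temperature [K]')  # air temperature per failure type
fig.show()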

Keep in mind that Exploratory Data Analysis is meant to be just that: exploratory. There is no need to compare every single feature; this is only your initial exploration. EDA is not only useful to you, it is also good practice, since other engineers can look at your notebooks and easily understand them as they read along.

In the next blog post we will move on to the third step in our Roadmap to Model Development, which includes more data preprocessing. Then, we will finally train and test our model(s).