Getting data from BMA220 with Jetson Nano

In last week’s blog post, we connected an accelerometer (the BMA220) to the Jetson Nano over I2C. This is one step towards our goal of eventually creating a predictive maintenance model that can accept many different types of input. In this post we will continue down that path by reading the x, y, and z data from our accelerometer with Python.

Using SMBus:

To communicate with our sensor, we will be using SMBus, or System Management Bus, a two-wire bus derived from and largely compatible with I2C. The documentation can be found here. To use SMBus in our Python file, first pip install smbus, then add the following code to the top:

from smbus import SMBus
import time  # for the time.sleep() call we will need later
  • Getting the BMA220’s address

To communicate with the accelerometer, we will need to specify its address and the bus it is on. We can use the i2cdetect command below to scan a specific bus and find the accelerometer’s address.

i2cdetect -y -r 1  # scans bus number 1

You should get an output similar to this:
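
(For reference, on a bus with only the BMA220 attached, the grid should look roughly like this, with the device showing up as 0a:)

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- 0a -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --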

This tells us that the accelerometer is located on bus 1, at address 0x0a. We can take these two pieces of information and create two variables in a Python file:

i2cbus = SMBus(1)  # Create a new I2C bus
i2caddress = 0x0a  # Address of BMA220
  • Reading data output

We will next use SMBus to read the x, y, and z data from the corresponding register address. Below is a global memory map that shows the BMA220’s I2C register addresses and their functions. The full documentation of the BMA220 can be found here.

The useful addresses for us here are 0x04, 0x06, and 0x08. These correspond to the x, y, and z data, respectively.

To read from these addresses we will use the read_byte_data function (which takes an I2C address and a register to read) inside a loop that runs forever. We will read the x, y, and z data into variables and print the three values once every second. This is the code to implement this:

while True:
    xdata = i2cbus.read_byte_data(i2caddress, 0x04)  # read the x data register
    ydata = i2cbus.read_byte_data(i2caddress, 0x06)  # read the y data register
    zdata = i2cbus.read_byte_data(i2caddress, 0x08)  # read the z data register
    print(xdata, ydata, zdata)  # print the x, y, and z values
    time.sleep(1)  # wait one second between readings
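
One caveat: read_byte_data returns the raw register byte. According to the BMA220 datasheet, each axis register stores a 6-bit two’s-complement reading in its upper six bits, so a small conversion step is needed to get signed values. A minimal sketch (the 0.0625 g-per-count scale assumes the chip’s default ±2 g range):

def raw_to_g(raw_byte):
    counts = raw_byte >> 2  # the 6-bit reading sits in bits 7:2
    if counts > 31:  # convert 6-bit two's complement to signed (-32..31)
        counts -= 64
    return counts * 0.0625  # about 0.0625 g per count at the default ±2 g range

Printing raw_to_g(xdata), raw_to_g(ydata), and raw_to_g(zdata) would then give readings in g instead of raw counts.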

After running the Python file, you should get an output similar to this:

If you physically move the accelerometer, you will see the values change as they update every second.

Potential problems:

  • Strange addresses on buses

If you run i2cdetect on bus 0 or bus 2, you may see addresses there even though you don’t have any other I2C devices connected to that bus. These are internal I2C devices that the Jetson Nano uses for its own communication. This confused me when I was locating my device, as I wasn’t sure it was on the bus I thought it was. A full discussion of the problem can be found here.

Other useful resources:

  • Eclipse UPM sensor repository

There is a repository created by the Eclipse Foundation that provides drivers for a variety of sensors, including the BMA220. Personally, I could not figure out how to use it, but the /src/bma220 folder contains valuable information about the BMA220’s registers, specifically in the .hpp file. The project can be found here.

  • I2C guide

This blog provides a good overview of I2C and a brief tutorial for using an I2C device with a Raspberry Pi. The code given can be used on the Jetson Nano with only a few small tweaks.

  • Nvidia Developer Forums

If you have a unique problem using I2C with the Jetson Nano, chances are someone else has had the same problem and posted about it on the Nvidia Developer Forums, found here.

Next week, we will take a look at setting up an experiment to use the accelerometer(s) to get useful data. 

I2C Input on Jetson Nano

In last week’s blog post, we finished creating our first machine learning model using a public repository dataset. While this is a major step towards our end goal, it is missing a critical piece: the ability to train and test the model on our own data. The goal is for the model to eventually be applicable to many different types of data, and to switch easily between use cases.

Collecting our own data:

To use our own data on the model, we first need to collect some. In this case we will be using an accelerometer that outputs x, y, and z data: the SEN0168, which is built around a BMA220 chip. This accelerometer uses I2C, an inter-integrated circuit interface, to connect to the Jetson Nano.

  • What is I2C?

I2C is pronounced not eye-two-C but I-squared-C, and stands for “inter-integrated circuit”. I2C is a single-ended serial communication bus that is widely used for connecting integrated circuits (which come on all kinds of sensors, motors, and other peripherals) to processors and microcontrollers. Because every device shares the same two wires, you can connect many I2C devices regardless of how many pins your microcontroller has; the practical limit is the 7-bit address space. I2C even supports connecting multiple microcontrollers, allowing more than one controller to communicate with all peripheral devices on the bus.
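
As a quick illustration of that shared-bus scheme, here is a hypothetical smbus snippet (the same Python library we use later in this series) that probes every legal 7-bit address on a bus, much as the i2cdetect tool does:

from smbus import SMBus

bus = SMBus(1)  # bus 1 on the Jetson Nano
found = []
for address in range(0x03, 0x78):  # the usable 7-bit address range
    try:
        bus.read_byte(address)  # a device that acknowledges the read is present
        found.append(hex(address))
    except OSError:
        pass  # no device answered at this address
print("devices found:", found)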

  • Connecting to the Jetson

The Jetson Nano comes with a 40-pin header. The pins’ functions are shown in the image below.

For the I2C connection, we will only need 4 pins: 3.3 V power (pin 1), SDA, the data line (pin 3), SCL, the clock line (pin 5), and GND, ground (pin 6). These pins on the Jetson Nano will be plugged into the corresponding pins on the BMA220.

Checking the connection:

If the BMA220 has been properly connected, it should light up. We can now check for the signal on the Jetson Nano.

Open the command line and run:

i2cdetect -y -r <bus number>

I am connected to I2C bus 1, and if you followed the same pin setup described above you should be as well.

We can see that we are detecting our I2C device at 0x0a, which is address 10 in decimal. If you connect devices to other buses, you can check those as well by changing the bus number in the i2cdetect command.

We now have an I2C device properly connected to our Jetson Nano. In the next blog post, we will cover any necessary configuration and write some code to actually access the accelerometer’s data.

Roadblocks to the first Deployment (K3s)

Last week, we looked at setting up a Kubernetes cluster on three Jetson Nanos, to prepare them for application deployment. That’s what we tackled this week, and in today’s blog post we will look at the obstacles we’ve encountered.

For these examples, we have been trying to deploy a test application using the instructions from the Rancher docs. This uses a pre-built container provided by Rancher. To recreate my deployment, do the following.

  1. Create a file called testdeploy.yaml and paste the following inside:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite
  labels:
    app: mysite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysite
  template:
    metadata:
      labels:
        app: mysite
    spec:
      containers:
        - name: mysite
          image: kellygriffin/hello:v1
          ports:
            - containerPort: 80
  2. Once you’ve saved the file, use the following command to run it.
kubectl apply -f testdeploy.yaml
  3. View your deployment using:
kubectl get pods

The CrashLoopBackOff error:

If you are running the above steps on a Jetson Nano cluster as well, you may see the common CrashLoopBackOff error.

NAME                      READY   STATUS             RESTARTS      AGE
mysite-57b5b46f97-rfcgx   0/1     CrashLoopBackOff   5 (39s ago)   3m30s

This error is well documented for Kubernetes in general, and is often caused by insufficient resources, attempts to access a locked file, or a locked database. In the case of the Jetson Nano cluster, however, I believe the cause is the ARM64 architecture the Nano runs on (and shares with the new M1 MacBook Pro). Many pre-made containers are built only for x86_64, or even 32-bit x86, making them incompatible with ARM64.
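
If you want to confirm an architecture mismatch rather than guess, the pod’s logs usually give it away. A quick check (substitute your own pod name from kubectl get pods):

kubectl logs <pod-name>
# an image built for the wrong architecture typically dies with a message
# like "exec format error"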

To test this theory, I tried to deploy the Nginx service on the cluster. Nginx should work, since the official image is also built for ARM64. I ran the following to create a deployment of Nginx using the Nginx image.

kubectl create deployment nginx --image=nginx

After running kubectl get pods, we can see that the Nginx service is running fine on the cluster, while the testdeploy continues to restart over and over after crashing.

NAME                      READY   STATUS             RESTARTS        AGE
mysite-57b5b46f97-rfcgx   0/1     CrashLoopBackOff   7 (2m31s ago)   13m
nginx-85b98978db-gc6ms    1/1     Running            0               20s

Many pre-made containers available for testing are built for architectures other than ARM64, which is what caused the frustrating CrashLoopBackOff error here. As long as your Nginx deployment works, your cluster itself is healthy; getting other images to deploy properly will just take some learning and trial and error.

The ImagePullBackOff error:

The ImagePullBackOff error is a very finicky error. I encountered it the first few times trying to deploy the Nginx pod using the steps outlined above. 

NAME                      READY   STATUS             RESTARTS      AGE
mysite-57b5b46f97-rfcgx   0/1     ImagePullBackOff   0 (9s ago)   4m30s

At first I had no idea how to get around it, and I assumed it was a problem with the registry the image was being pulled from; maybe the image was locked or corrupted, I thought. But after retrying multiple times, I let it run for eight or so minutes. It retried the pull several times, and then it simply worked. If you’re encountering this error, I recommend letting it run for at least ten minutes before trying something else.
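
While you wait, kubectl can show you what the retries are doing. A quick way to watch the pull attempts (again, substitute your own pod name):

kubectl describe pod <pod-name>
# the Events section at the bottom lists each pull attempt and the reason
# it backed off, which helps distinguish a slow registry from a genuinely
# broken image reference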

Those are a few of the more problematic errors I encountered on the way to the first deployment. The next step will be to deploy a machine learning model on the cluster. This will require a lot more containerization work, and we will explore that, along with the creation of the model itself, next week.

K3S Cluster on Jetson Nano

Jetson Nano Development Board

Setting up a Kubernetes Cluster on Jetson Nano (with k3s)

The Jetson Nano is an easily accessible yet powerful single-board computer built to deploy machine learning applications and more. Kubernetes is the most popular orchestration system for managing and automating application deployment through a Kubernetes cluster. K3s is a lightweight Kubernetes distribution, well suited to small devices like the Nano.

This week we look at setting up a Kubernetes cluster on two Jetson Nanos, although you can do it with as many worker Nanos as you’d like. It can be tricky, especially since few guides cover the Jetson Nano’s unique architecture specifically. Although there are many other guides out there, this one is written for the Nano and will address the issues specific to it.

What We Will Use:

  • 2 fresh Jetson Nanos running Ubuntu 18.04, with JetPack SDK 4.5 installed.

Preliminary Steps:

The first thing we need to decide is which Jetson will be our master node, and which one(s) will be our worker nodes. The master node is the Nano that you will deploy the cluster from, and the worker node(s) will join the cluster. Name them accordingly; one way to do this is shown below. I have named mine master and node1.
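
One way to set those names, assuming a stock Ubuntu install, is hostnamectl (reboot afterwards so the new name is picked up everywhere):

sudo hostnamectl set-hostname master   # run this on the master nano
sudo hostnamectl set-hostname node1    # run this on the worker nano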

Then, use SSH so you can work on all the Nanos from one machine:

ssh user@<target-ip-address>

Then log in as you normally would on the Nano you are SSHing into.

We will need curl, so use this to install it on all Nanos:

sudo apt-get install curl

To make things easier, I recommend running “sudo su” to avoid having to type sudo before everything.

1. Installing Master Node:

We will now configure the Master Node. On your master nano only, run:

curl -sfL https://get.k3s.io | sh -s - --no-deploy traefik --write-kubeconfig-mode 644 --node-name k3s-master-01

This installs and starts k3s, deploys a cluster, and sets this node as the master.
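
Note that the --no-deploy flag was deprecated in later k3s releases; if the installer complains about it, the equivalent on newer versions should be:

curl -sfL https://get.k3s.io | sh -s - --disable traefik --write-kubeconfig-mode 644 --node-name k3s-master-01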

You can view that your master node is online by running:

kubectl get nodes

You should see that “k3s-master-01” is the only node in the cluster.

For the next step, which is installing the worker nodes, we will need the master node’s token. To get it, run this:

cat /var/lib/rancher/k3s/server/node-token

And copy the token for the next step.

2. Installing Worker Node(s):

We will now configure the Worker Nodes. Do this on all the worker nodes you have.

curl -sfL https://get.k3s.io | K3S_NODE_NAME=k3s-worker-01 K3S_URL=https://<IP>:6443 K3S_TOKEN=<TOKEN> sh -

Replace <IP> and <TOKEN> with the master node’s IP address (you can get this by running ifconfig) and the token you saved in the previous step.

Now, when you run “kubectl get nodes” on the master node, you can see that the worker has joined.

3. Bringing up the dashboard

At this point you’re pretty much done: your cluster is up, and you can begin deploying containers. I will now show you how to bring up the dashboard to view all your containers once they are deployed.

First, run this on the master node. This will deploy the Kubernetes dashboard.

GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
sudo k3s kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

Now we have to create a few files:

  • dashboard.admin-user.yml (run vim dashboard.admin-user.yml), press i to enter insert mode, and paste the following.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

Press esc then :x to exit the vim editor and save.

  • dashboard.admin-user-role.yml (run vim dashboard.admin-user-role.yml), press i to enter insert mode, and paste the following.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Save the file the same way as the previous one.

Now we will deploy the admin-user configuration. Run:

k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml

Now we will retrieve the token needed to log in to the dashboard from a local web browser. Run:

k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep '^token'

And keep note of the very long token.
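
If the describe command comes back empty, note that newer Kubernetes releases (1.24 and up) no longer create service account token secrets automatically; in that case you should be able to mint a token directly:

k3s kubectl -n kubernetes-dashboard create token admin-user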

Now we will create a secure channel to the cluster. To do this, run:

k3s kubectl proxy

You should see:

Starting to serve on 127.0.0.1:8001

This means the proxy is serving the cluster locally at 127.0.0.1, on port 8001. You will find your dashboard at this link:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

It will prompt you for the token we copied in the previous step. Paste it in, and you will have access to the Kubernetes dashboard.

And you’re done! Once you deploy containerized apps, you will be able to see and manage them in the dashboard.

Other useful commands

To shut off your cluster, run:

k3s-killall.sh

To delete your dashboard, run:

sudo k3s kubectl delete ns kubernetes-dashboard
sudo k3s kubectl delete clusterrolebinding kubernetes-dashboard
sudo k3s kubectl delete clusterrole kubernetes-dashboard

To restart the cluster later, run:

sudo systemctl restart k3s
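
On the worker nodes the k3s service is installed under a different name, so there the restart command is:

sudo systemctl restart k3s-agent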

In next week’s blog post, we will look at containerizing apps and deploying them. We will also look at managing them between nodes, and using the dashboard more.