IoT and the Intelligent Edge

In previous posts, I have discussed using a Raspberry Pi device connected to Microsoft Azure IoT. While this gives you a good example of how smart devices can communicate with the cloud and deliver useful information, it really just treats the devices as communication tools, and doesn’t take advantage of any of the processing power that exists within the Raspberry Pi. When you think of a complete “Internet of Things” (IoT), you can picture the “thing” as a device that bridges the gap between on-premises activity and the cloud. At Microsoft, we call this the “Intelligent Edge”, and we see this as a major opportunity to bring the power of the cloud to the world of IoT.

image

The Intelligent Edge

The heart of the Intelligent Edge is the new Azure IoT Edge service. In short, this service is built for those who want to push some of the cloud processing power down to the edge devices, so that they can independently analyze data (think about deploying an AI model to the edge and allowing the device to perform a portion of the work instead of having to rely on sending everything to the cloud and waiting for the results).

image

The IoT Edge service consists of cloud components, such as Azure IoT Hub, Azure Stream Analytics, and other Azure Services. The IoT Edge also includes a runtime service that is deployed to the device. This service is based on Docker, and is designed to simplify the deployment and management of code modules that are deployed to the device, as well as facilitate communication both with the cloud, and with any downstream devices that might be part of the environment (Think of the Edge device as being a controller that monitors the local environment, performs analysis there, and then reports status upstream to the cloud). The overall architecture of a connected IoT Edge device looks like this:

IoT Edge runtime sends insights and reporting to IoT Hub

The IoT Edge runtime supports both Windows and Linux devices, and simplifies the deployment and management of code modules.

Next Steps

This post has been an introduction to the Intelligent Edge and is really meant to provide an overview of the term and the services necessary to implement it. In follow-on posts on this topic, I will walk through how to configure a Raspberry Pi device to become an Intelligent Edge device and how to deploy intelligent modules to it.

Flight Tracking, the Raspberry Pi, and the Internet of Things

As part of my day-to-day job responsibilities, I look for ways to help my customers understand how new technologies and methodologies can be applied to their business processes. One major component of this is incorporating the vast amount of volatile data that they generate into useful business information. This can be an interesting conversation, because when many people hear the phrase, “Internet of Things” or “IoT”, they immediately dismiss it as not something that would help them, since they aren’t in manufacturing, or don’t deal with “sensors”. Over the last year, I’ve found myself struggling somewhat to come up with an IoT story that a majority of people could relate to and yet would still be relevant to their business operations. What follows is a breakdown of such a story, and how I decided to tackle the problem at hand.

The Problem

I live in a relatively rural area northeast of Phoenix, Arizona called the Rio Verde Foothills. It’s an area that used to be a working cattle ranch and was turned into home sites with acreage many years ago. The area is popular with “horse people”, and there are many world-class horse ranches out here, along with people who have decided to move to a more rural area and enjoy a bit of a slower pace while staying relatively close to a major city and all the amenities it offers. The area is roughly 30 square miles, and if you were to view it on an aerial map it would look like this:

image

An aerial view of the Rio Verde Foothills

We are about 10 miles away from the Scottsdale Airport, and about 15 miles from the Phoenix Deer Valley Airport, both of which have a healthy pilot-training industry. This means that our community is an ideal practice area and is used fairly extensively for flight-training operations. The US Federal Aviation Administration (FAA) is very specific in many rules, but one rule that is often open to interpretation is 14 CFR 91.119, the Code of Federal Regulations rule that details the minimum safe altitudes at which aircraft may be operated. Generally speaking, aircraft must remain 1000’ above the ground in “congested areas” and 500’ in “other than congested” areas, unless it’s a “sparsely populated” area, in which case the pilot must maintain 500’ separation from any person, vessel, vehicle or structure. The FAA has never published an official definition of “congested area” or “sparsely populated” area, so each violation of this rule is judged on a case-by-case basis. During flight training operations, it’s very common to perform “engine out” maneuvers, where you simulate an engine failure and practice setting up for a safe landing; this involves gliding to the chosen area and then, when the instructor is satisfied that the student has successfully performed the task, applying full power and climbing back to altitude. Typically, this results in a large burst of engine noise relatively close to the ground. This is a concern for people who have horses and don’t want them “spooked”, and is also a concern for people who moved to the area expecting it to be quiet and peaceful. (Personal note: I’m a pilot and I love the engine noise, so I don’t fall into the “concerned” category.)

As our community grows, and as the flight schools become more active, we see more and more complaints about engine noise and flight operations. Since I consider myself a Data Scientist, I figured it would be an interesting endeavor to collect real data about flight operations in our area and provide a data-driven analysis of the actual traffic above our community.

The Mechanics

In order to be certified for flight, the FAA requires certain equipment to be onboard aircraft (there are exceptions to the following, but for the purposes of this article they are irrelevant). One of the requirements is that the aircraft have a transponder. Simply put, the transponder is a device that allows air traffic control (ATC) to query the airplane and learn things about it. Currently, in order to operate in controlled airspace (in the United States, airspace is broken down into several classes, and each class has specific requirements for how you operate within it; for example, Class C airspace extends from the surface to 4000’ above an airport within a 5-nautical-mile radius, and from 1200’ to 4000’ within a 10-nautical-mile radius – usually depicted on a chart by a cyan arc)

image

A portion of the Phoenix Sectional Chart, showing the airspace designations

aircraft are required to have what is known as a Mode C transponder, which transmits the altitude of the aircraft along with other flight information to ATC. This transmission occurs at 1090 MHz and is not encrypted. The transmission does not include any location information, but ATC can correlate radar returns with transponder information in order to get an accurate position and altitude for an aircraft.

As part of the FAA’s Next Generation Airspace initiative, a new technology known as Automatic Dependent Surveillance – Broadcast, or ADS-B, will be required on all aircraft operating in controlled airspace. ADS-B is already being used by many European countries, and the infrastructure is in place throughout the United States. ADS-B essentially extends existing transponder functions by including GPS location information, and it is already installed on many aircraft (basically all commercial airliners are equipped with it, and most general aviation aircraft with “modern” avionics have it as well). ADS-B uses the same unencrypted 1090 MHz signal, so it is relatively easy to capture with inexpensive radio receiver technology.

The advent of ADS-B technology has afforded an opportunity for companies that provide flight-tracking information, such as FlightAware and Flightradar24.

image

image

Examples of FlightAware (top) and flightradar24 (bottom)

These companies collect the ADS-B information transmitted by aircraft and provide extremely accurate flight-tracking information. If you haven’t used their sites or apps yet, do yourself a favor and check them out. They offer a free service, but they also have commercial tiers that provide ad-free browsing and other features that are only available to paid accounts.

One challenge that these companies (and others like them) face is that not all areas are covered by government-sponsored ADS-B receivers, meaning that there can be large gaps in their flight-tracking coverage.

In order to solve the coverage problem, these companies have made it very easy for hobbyists and the general public to participate in the ADS-B system by providing easy-to-use solutions that allow for collection of ADS-B data and transmission to these sites. The added advantage is that when there are multiple ADS-B receivers in a given area, they can use a technique known as “multilateration” (MLAT) to pinpoint the location and altitude of aircraft that are using non-ADS-B-equipped transponders. Basically, anyone with the desire and a little technical ability can construct an ADS-B receiver that transmits information to the sites, enhancing their coverage and MLAT accuracy. In return for doing this, the sites offer you a free membership (which, in the case of Flightradar24, is worth about US$800 per year and removes all advertising from their app and website).

In any given 24-hour period, there are just over 2000 aircraft within range of my ADS-B receiver, and those aircraft report about 500,000 positions. That is a fair amount of data which, if harnessed for more than just tracking individual flights, could be used for all sorts of analytics, including the ability to actually quantify the amount of air traffic in our area, along with altitude, speed, and so on. When collected and analyzed using a tool such as Microsoft Power BI, this data can prove to be very useful.

image

An example of analytics possible with the ADS-B data and Power BI

This is where IoT can prove to be a very useful tool to answer questions outside of the typical manufacturing or sensor-driven use case.

For the remainder of this post, I’ll describe how to build an ADS-B receiver for under US$80 using the popular Raspberry Pi computing platform and will discuss how to connect it to FlightAware and flightradar24. In follow-on posts, I’ll describe how to feed the data into Microsoft Azure IoT Suite, and finally will describe how to analyze historical information with Power BI.

Procuring the Hardware

There are many different ways to build an ADS-B receiver, several of which are probably more effective than what I’m going to detail (If you really want to get serious about this, take a look at the Virtual Radar Server project for example), but the way shown here results in very little expense and is relatively easy to do, even if you don’t consider yourself a hard-core geek. The shopping list (along with links to the item on Amazon.com) for this project is as follows:

  • Raspberry Pi 3 Model B – The Raspberry Pi 2 can also be used, but the 3 is faster and has integrated WiFi. I used a starter kit from Vilros that I purchased on Amazon as it had the case and integrated heat sinks.
  • A fast SDHC Card at least 8GB in size – You don’t want to skimp on this, as the faster the card, the better MLAT along with the local web interface will work. I used this one from SanDisk.
  • A USB ADS-B Receiver – FlightAware makes a nice one that includes an integrated filter for under US$20.
  • A 1090 Mhz antenna – There are several of these on the market, but the simple version to start with can be found here. This one, when placed in a window, will receive ADS-B signals from up to 50 miles away. Once you decide to get more serious about collecting the data, you can use a more effective antenna which can reach out to more than 250 miles, but will need to be externally-mounted.

Once you have the hardware, you will need to download and install an operating system to run the Raspberry Pi. You can follow the directions here to download, install and configure Raspbian Jessie (I use the desktop version, but the Lite version will work as well).

If you need more detail on setting up the Raspberry Pi, you can follow the steps (through step 3) from this earlier blog post that I wrote on the topic.

Installing the FlightAware Software

After you have installed and configured the operating system, and plugged in the FlightAware receiver with antenna, you will want to ensure that everything is up to date with the latest patches and repository information. To do so, connect to the Pi (either via SSH from another machine, or open a terminal session on the desktop of the Pi) and issue the sudo apt-get update and sudo apt-get upgrade commands.

image

apt-get update

image

apt-get upgrade

These steps were documented using the following version of Raspbian Jessie:

image

Once this is complete, you will install the PiAware application, which collects data from the ADS-B receiver and transmits it to FlightAware. There are very detailed instructions listed here, but the following steps will work just fine:

  • Install the FlightAware repository so that the package installer can find the piaware source.
wget http://flightaware.com/adsb/piaware/files/packages/pool/piaware/p/piaware-support/piaware-repository_3.5.1_all.deb
sudo dpkg -i piaware-repository_3.5.1_all.deb

image

    • Update the repositories
sudo apt-get update

image

    • Install the dump1090-fa software (this is the software that decodes the 1090 MHz signal into the digital ADS-B information needed to transmit to FlightAware).
sudo apt-get install dump1090-fa

image

    • Install the FlightAware piaware software.
sudo apt-get install piaware

image

    • Configure piaware to support updates
sudo piaware-config allow-auto-updates yes
sudo piaware-config allow-manual-updates yes

image

    • Reboot the Pi to allow the configuration to take effect.
sudo reboot now

Viewing the Results

Once the Pi has rebooted, open a browser (either on another machine, or directly on the desktop of the Pi) and browse to <ip address of Pi>:8080 (for example, if the IP address of your Pi is 192.168.1.10, browse to http://192.168.1.10:8080). This will open the PiAware Skyview web application. If everything is working, and if there are aircraft nearby that are transmitting ADS-B signals, you should see them represented on your screen, along with a link asking you to connect to FlightAware and claim your receiver. Click the link and claim your receiver (if you don’t already have a FlightAware account, use the link to register for one). Once you claim your receiver, your account will be upgraded to an Enterprise premium account, which is normally worth over US$1000 per year.

image

The Skyview application showing aircraft detected.

Once you have claimed your receiver, you can verify functionality by checking the piaware log file, located at /var/log/piaware.log. Use the following command: sudo cat /var/log/piaware.log to view the entire log, or sudo tail /var/log/piaware.log to view just the end of the file.

image

Conclusion

In this post, we have discussed the use of ADS-B signals to collect information on nearby aircraft, and have demonstrated how to build an ADS-B receiver that will transmit the information to FlightAware.

Future posts in this series will discuss how to extend this solution to other sites, as well as collecting the information via Microsoft Azure IoT Suite to make it available for historical analysis.

The IoT Journey — Visualizing IoT Streaming Data with Power BI

In my previous posts in this series (see posts one, two, three, four, five and six) I discussed the construction of a system to collect sensor data from a device and send that data to the Microsoft Azure IoT Suite. The final step in this journey is to build an application that will use the data that we’re sending to the cloud. There are many approaches to building the visualization layer (for a complete discussion of this topic, see the IoT Suite Remote Monitoring solution here: https://www.microsoft.com/en-us/server-cloud/remotemonitoring/Index.html ), but I wanted to incorporate the use of Microsoft Power BI to demonstrate a Software as a Service (SaaS) approach to visualizing the output of the sensor platform.

Obviously this is overkill for temperature and humidity data from a single sensor, but imagine having a worldwide network of these sensors reporting data. The beauty of the cloud and the SaaS platform for visualization is that there is a virtually unlimited amount of capacity available to you, with very little work on the front end to build the solution.

The first step in this process is to obtain and provision a Microsoft Power BI subscription if you don’t already have one. Power BI is available in a free tier that will work for this example, so you do not need to purchase a subscription in order to build the solution.

Step One – Obtain a Power BI Subscription

Sign up for Microsoft Power BI at www.powerbi.com and select the Get Started now button. Follow the instructions on the screen to sign up. Note that you must use a corporate email address (not an @gmail, @outlook or @hotmail address). If you want, you can sign up for a 30-day trial of Office 365, or sign up for a $5 per month plan and then use that address as your Power BI login. The good news there is that after the 30-day trial, Power BI will continue to function normally.  Once you’ve signed up for Power BI and logged in, you’ll see the following screen:

image

Once Power BI is successfully provisioned, the next step is to configure the Azure IoT Hub to send data to Power BI so that we can add it to a dataset.

Step Two – Send IoT Data to Power BI

One of the services available in the Azure IoT Suite is Azure Stream Analytics (ASA). ASA is a fully-managed cloud service that enables real-time processing of streaming data. It is a very powerful service, and when coupled with Azure Event Hubs, can scale to millions of devices.

For the purpose of this post, we will use ASA to receive data from the IoT Hub that we created in the previous post, and then output the data to a dataset that is read by Power BI to build a report and dashboard representing the data being sent by our sensors. As mentioned earlier, this is overkill for a single sensor, but it will give you an idea of how simple building this solution is, and of course it can be easily scaled to a virtually unlimited number of sensors.

As you recall, in the last post we created an IoT Hub (in my case it was named FedIoTHubDemo) and we configured it to receive data from our Raspberry Pi device. Now we will use that hub to send data to ASA so that it can be viewed in Power BI. You will need to open the Azure Management Portal and navigate to the IoT Hub that you created in the previous post.

image

Make sure that you make note of the name and region where the resources are located.

To connect ASA to the IoT Hub, we will perform the following steps:

  • Create a new Stream Analytics Job
  • Configure an Input to ASA
  • Write a Query using the Azure Stream Analytics Query Language
  • Configure an Output to Power BI (as of the time of this post, this is not supported in the new management portal; we will need to use the legacy portal to perform this step)
  • Start the Job

In the Azure Management Portal, select New in the upper left, select Everything in the Marketplace pane, and then type Stream Analytics Job in the search box:

image

Then select Stream Analytics Job from the search results:

image

 

Then select Create to open the ASA blade:

image

Give the job a name, select the resource group that you used for the Hub and then select a location for the resources. Then select Create to create the Job. This will start the deployment process and you will receive a message in the portal when the deployment is completed:

image

Once the deployment is complete, click on the job to open the dashboard and settings blade:

image

Note that there are no inputs or outputs configured. For our example, we will configure an Input that uses the IoT Hub that we previously created.

Click on the cloud icon in the essentials section of the pane to open the Quick Start Wizard:

image

Then click Add inputs. Give the input a name, and then select IoT Hub as the source. Make sure the drop-downs are filled with the appropriate names from the IoT Hub that you created earlier. Use the Shared Access Key for the iothubowner policy that you copied to the text file in the last post (or copy it again from the IoT Hub that you created).

image

Once all of the fields are filled out, click Create to create the Input (don’t worry about the Consumer Group field; it is named $default unless you chose a different name previously). The system will create the input and then test it to ensure that it is properly configured.

Once the input is created, click on Step 2 of the quick start wizard, Develop a Query. This will open the query pane with a sample query:

image

The ASA Query Language is very similar to T-SQL, with some extensions specifically for streaming data. In our scenario, the message that is sent from the Raspberry Pi to the Azure IoT Hub is very basic, consisting of 7 fields (deviceID, temperatureF, temperatureC, humidity, latitude, longitude and timestamp).

This message is sent to Azure approximately every 3 seconds. In our case, we will want to create a query that collects the data in the stream, groups it appropriately and then assigns a time window to the collection times so that we know what the groupings refer to. The most appropriate groupings are deviceID, latitude & longitude, and the time window is controlled by the timestamp value. In theory this will be every 3 seconds, but there is no guarantee that the Pi will send the data on that schedule, so we will create a Tumbling Window to represent a 5 second interval. (in production we would likely change this to have a wider window, as we have no driving need to see the temperature every 5 seconds). The resulting query will look like this:

image
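In case the screenshot is hard to make out, a sketch of a query along these lines is shown below. The input and output aliases in brackets are placeholders (they must match the input you created above and the output you will create in a moment), and the field names are whatever your device actually sends:

SELECT
    deviceid,
    latitude,
    longitude,
    AVG(temperaturef) AS avgtemperaturef,
    AVG(temperaturec) AS avgtemperaturec,
    AVG(humidity) AS avghumidity,
    System.Timestamp AS windowend
INTO
    [PowerBIOutput]
FROM
    [IoTHubInput]
GROUP BY
    deviceid, latitude, longitude, TumblingWindow(second, 5)

The TumblingWindow(second, 5) clause implements the 5-second window described above; every field in the SELECT list that is not aggregated must also appear in the GROUP BY clause.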

Click Save to save the query. Make sure that the FROM table you reference in the query is the same name that you gave the Input earlier. Typically you would test the query at this time, but currently testing is not supported in the Azure Management Portal. (It should be available soon, and I’ll update this post when it is)

image

Once the query is saved, you will need to launch the legacy Azure Management Portal (http://manage.windowsazure.com ) as the current portal does not support creating a Power BI output sink for ASA.

 image

Launch the legacy portal and navigate to the Stream Analytics Job that you created.

image

Select the OUTPUTS tab, and add a new output. Select Power BI and then click the next arrow:

image

Then click the Authorize Now button and sign in using the credentials that you used to provision your Power BI environment. This will bring you to the configuration dialog for the output:

image

Note that the values you use for the DataSet and Table will appear in Power BI, so choose a name that is meaningful. Click the OK CheckMark to create and test the output.

image

Now that the output has been configured, you can exit the legacy portal and return to the new portal where you will see the output you just created listed in the outputs section of the job:

image

Next, in the ASA Job blade, click the Start button, select Now, and click the Start command. This will start the ASA Job, but will take several minutes to complete.

Once the job is started, you will see a message in the portal:

image

Now that the job has been started, we will need to start the sensor application on the Raspberry Pi and start sending data to Azure.

Step Three – Send the Data to Azure

This is the easiest step, since we’ve already built and tested the application. Follow the steps from the previous post to use an SSH client to connect to your Raspberry Pi and start the DHT11Test application to start sending data to Azure:

image

Let the application run for a few minutes before proceeding to the next step.

Step Four – Build the Power BI Dashboard

After the application has been running for a while, launch Power BI and expand the workspace on the left side of the screen (note the arrow icon at the bottom left).

image

You will see the DataSet name that you provided above in the Datasets section. Click the dataset to open the designer. Note the table name that you specified earlier, and the fields from the Raspberry Pi application:

image

For the purposes of this post, we will build a simple report and dashboard to visualize our environmental data. Click on the deviceid, temperaturef, temperaturec and timestamp fields. Note that you will have a table displayed with each of these values:

image

Note that the numeric values are summarized at the bottom of the table. Since we do not want this, choose each of the numeric fields in the Values section and select Don’t Summarize in the drop down list. Your table should look like this:

image

Since we want to build a nice dashboard instead of a table of values, let’s switch to a gauge to display each of the values. In the Visualizations pane, select the Gauge icon. Drag the temperaturec field to the Values box, and in the drop-down, select Average. Leave the other textboxes blank. Resize the gauge to make it look nice. Your canvas should now look like this:

image

Now click on a blank area in the report canvas, and then select the humidity field. Click on the Gauge icon in visualizations, and repeat the above step to visualize the average humidity. Repeat for the temperaturef field as well.

image

Now click on a blank area under the gauges, and select the timestamp field. In the visualizations select a line chart, and drag the temperaturef field to the values and the timestamp field to the axis. Resize the graph and then click on an open area in the canvas. Repeat these steps choosing the humidity field, and then resize the graph. It should look like this:

image

Now, click a blank area under the line graphs and select the Map visualization. Drag the devicelatitude field to the latitude box, the devicelongitude field to the longitude box, and the temperaturef field to the size box. Resize the graph to fit the canvas. It should look like this:

image

You have now built a report, using data from the sensors on the Raspberry Pi and streamed to Power BI via Azure Streaming Analytics!

On the report canvas, choose File and then Save and then give the report a name. Once the report is saved, click the Pin icon on each graph to pin to the dashboard (the first pin will require you to name the dashboard). Once everything is pinned, select the dashboard in the left side and resize the elements as desired.

Now you have a dashboard connected to the report. Notice that when you click on a dashboard element, it drills down to the report page that contains the visualization.

Spend some time experimenting with various visualizations, and explore the power of Power BI.

Conclusion

In this post we completed the journey by connecting the data collected from sensors on our Raspberry Pi to Power BI by using the Azure IoT Suite.

This showcases how powerful the Azure IoT Suite can be, while it remains relatively easy to develop solutions that use it.

If you purchased the sensor kit from Sunfounder that I mentioned in earlier posts, you’ll find that you have several additional sensors to play with that can be added to your Raspberry Pi environment and connected to Power BI.

The IoT Journey: Connecting to the Cloud

In the previous posts in this series (see posts one, two, three, four and five) we’ve walked through designing and building an application that reads a temperature and humidity sensor connected to a Raspberry Pi. In this post, we’ll create the cloud-based components necessary to receive the information from the Pi, and we’ll modify our application to transmit the data to the Azure IoT Suite.

The Azure IoT Suite is a comprehensive set of cloud services and Application Programming Interfaces that enable you to construct a highly-scalable Internet of YOUR Things.

image

The use-cases for the Azure IoT Suite are basically limited only by your imagination. Many organizations are turning to IoT technologies to gain more and better insight into how their products and services are being used, along with creating tools and applications to make the lives of their customers better. (While there is a huge debate about the privacy concerns, one great example of this in my mind is how the OnStar service works; I have OnStar activated in my vehicle, and once per month I receive an email that gives me diagnostic information, such as tire pressure, oil life, upcoming maintenance, mileage, etc. I also have the ability to use the service to locate my vehicle in a parking lot, or to start it remotely. This is all made possible by the fact that my vehicle is “connected” to the cloud.)

The first step in connecting the Raspberry Pi to Azure IoT Suite is to provision an instance of the IoT suite in an Azure account. If you do not already have an Azure account, you can sign up for a free account here: https://azure.microsoft.com/en-us/free/

The free account will give you a $200 credit for one month that allows you to use any of the resources available in Microsoft Azure, and after the month if you choose not to pay for a subscription, you can still use free services including the IoT Suite. (Details are available at the link above)

Once you have an Azure account setup, you are ready to provision an instance of the IoT Suite.

Step One – Provision the IoT Suite

This is strangely the easiest part of the whole journey, even though the technology behind the scene is fairly complex. Browse to the Azure Management Portal (http://portal.azure.com ) and select the New item in the upper-left, then select Internet of Things, and then select IoT Hub:

image

This will open the IoT Hub Configuration blade. You will need to give the hub a name, select a pricing level (the Free tier will work just fine for our purposes here) and then provide a name for a resource group that will be a container to hold all of the services that comprise the IoT Hub. Then select a location close to you:

 

image

Once you’ve selected the required elements, click the Create button to create the IoT Hub. This will take a few minutes to complete. During the process you can click the bell icon on the management portal to receive status updates of the provisioning process:

image

Once the deployment completes, you will see the following status message and you should also see a new icon in your management dashboard that represents the IoT Hub that you created.

image

Click on the new icon on the management portal (if the icon does not appear, use the browse option on the left and then choose all/IoT Hubs and you will see it listed) to open the management blade for the new IoT Hub that you created:

image

Once the site is provisioned, you will need to obtain the connection string and authorization key in order to allow client applications to send data to the hub.

Step Two – Provision Your Device

The Azure IoT Suite is designed from the ground up with security in mind. Nothing can be sent to, or received from, the IoT environment that you’ve provisioned without proper credentials. In our case, we simply want to connect a single device (our Raspberry Pi sensor platform) and send data to the hub. This will involve provisioning a device and obtaining the appropriate connection string / shared access key for the device.

For the purposes of this tutorial, we’re going to take a simple path and not configure access roles or develop a full-featured application to manage the provisioning of devices on the hub (at the time of this writing, there is no mechanism in the management portal to manually provision devices in the Azure IoT Hub; devices must be provisioned programmatically through the IoT Hub APIs).

To provision a device, we will need to create a simple application. In order to build this application, we need the following information from the hub we just created:

  • Host Name
  • Connection String
  • Shared Access Signature that allows provisioning of devices

These values can be found in the Azure Management Portal. From the management blade that you opened above, click the Hostname value that is listed in the upper-center of the management blade (in the example above, the value is FedIoTHubDemo.azure-devices.net ) and then copy the value to the clipboard. Save this value to a text file (open Notepad and paste the value) as you will need to retrieve it later. Next click on Shared access policies in the settings blade to open the policies, and then select the iothubowner policy:

image

Copy the Primary Key and Connection string – primary key to the text file you created above. You will need these as well. Note that we are using the owner key, which gives us full access to the IoT Hub environment. In a production application we would not use the owner key here, but would rather create appropriate policies for the device and then use those keys. Since this is a simple “getting started” tutorial, we are using the simple path to test the application.  I highly recommend that you read the IoT Hub Developer Guide to understand the ramifications of using the owner key before attempting to build a production application using the Azure IoT Hub.

The process to provision a new device in the Azure IoT Hub is:

  1. Connect to the hub using an appropriate Shared Access Signature
  2. Read the device registry to ensure that the device isn’t already provisioned
  3. Add a new device entry to the registry
  4. Obtain the new device Shared Access Signature

Typically the code to accomplish the above would be built into a client application that executes on the device. To simplify matters, we will build a separate application to register the device and will copy the Shared Access Signature into the application that we’ve previously developed on the Raspberry Pi.

To provision the device, start Visual Studio and create a new Windows Console application named ProvisionPiHub:

image

Once the solution is created, open the NuGet Package Manager (Project/Manage NuGet Packages) and then select the Browse tab. Type Microsoft.Azure.Devices into the search box, and then select the Microsoft.Azure.Devices package. Click Install, and then accept the license agreement when asked. This will add the necessary components to the project to connect to Azure IoT Hub.

image

Once the package is installed, open the Program.cs file and add using statements for Microsoft.Azure.Devices and Microsoft.Azure.Devices.Common.Exceptions to the top of the file.

image

This will add references to the Azure IoT SDK.  Next you will add static fields to the Program class that represent the RegistryManager as well as the connection string that you copied earlier as follows:

image

 

Next you will want to add an async method to register your device as follows (make sure you choose an appropriate name for your device):

image

Now in the Main method, add code to invoke the Registry Manager as well as the method above:

image
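Putting the pieces from the screenshots together, a minimal sketch of the whole ProvisionPiHub program might look like the following. The connection string and device name are placeholders; substitute the values you saved to your text file and whatever name you want for your device:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Common.Exceptions;

namespace ProvisionPiHub
{
    class Program
    {
        // Placeholder: paste the iothubowner connection string you saved earlier
        static string connectionString = "HostName=FedIoTHubDemo.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<your key>";
        static RegistryManager registryManager;

        static void Main(string[] args)
        {
            registryManager = RegistryManager.CreateFromConnectionString(connectionString);
            AddDeviceAsync().Wait();
            Console.ReadLine();
        }

        private static async Task AddDeviceAsync()
        {
            string deviceId = "myRaspberryPi"; // placeholder device name
            Device device;
            try
            {
                // Add a new entry to the device registry
                device = await registryManager.AddDeviceAsync(new Device(deviceId));
            }
            catch (DeviceAlreadyExistsException)
            {
                // The device is already provisioned; just read it back
                device = await registryManager.GetDeviceAsync(deviceId);
            }
            // This key is the device's shared access key - save it for the sensor application
            Console.WriteLine("Generated device key: {0}", device.Authentication.SymmetricKey.PrimaryKey);
        }
    }
}

Running the app adds the device to the registry (or retrieves it if it already exists) and prints the device key that the next step needs.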

Run the program and note the generated Shared Access Signature that is displayed. Mark and copy the generated signature, and paste it into the text file you created earlier so that you have it saved (to mark text in the console window, click the icon in the upper-left corner of the window and select Mark, use the mouse to highlight the device key, and then press Enter to copy it to the clipboard). Once the device key is copied, you can press Enter to exit the application.

image

 

If for some reason you aren’t able to copy the key from the console application, you can refresh the Management Portal, and then select the devices blade and select the new device, and copy the Primary key from the details pane.

image

Now that you have created the device and copied the Shared Access Signature, you are ready to extend the application that was created in the last post to send the sensor data to the Azure IoT Hub.

Step Three – Extend the Sensor Application

Now that the device has been registered, we can extend the application that we developed in the previous post in this series to send sensor data to the cloud.

Since the application has already been created to collect the sensor data that we want to use, we will simply extend the application to transmit the data to Azure as well as writing it to the console. The process to communicate with Azure is relatively simple:

  • Create an instance of the Azure Device Client class
  • Use the appropriate Shared Access Signature to connect to the Azure IoT Hub
  • Create a telemetry data point, using data from the sensors
  • Add the telemetry to a message, and serialize it to a JSON message
  • Add the message to the Client class, and transmit to Azure

Remember that this is a simple tutorial, so there is no real exception handling or retry logic involved. For production applications, be sure you understand transient fault handling, as you will encounter transient faults.

To extend the DHT11Test application, open the solution in Visual Studio, and go to the NuGet package manager (Project / Manage NuGet Packages) and install the Microsoft.Azure.Devices and Microsoft.Azure.Devices.Client packages. Since we will be executing this application on the Raspberry Pi with Mono, we will also want to add the Mono.Security package. Once these packages are added, open the Program.cs file and add using statements for Microsoft.Azure.Devices.Client and Newtonsoft.Json.

image

Then, add static fields to the Program class to represent your device and client. Note that part of the telemetry payload will include a location for the device. Since we do not have GPS enabled on our device, we manually look up our geolocation and add it. For the ConnectionString and HubURI, make sure you use the values that you saved earlier, not the values that are present in the device information.

image

Then, in the main method, add a line to instantiate the device client. Add the code after you have started the DHT11 sensor.

image

Then, create an async method to send Device to Cloud messages. This will be called every time the DHT11 sensor returns data. We will also write the message to the console so that we can see what is being transmitted.

image

Then, in the DHT11NewData event handler, call the SendDeviceToCloudMessagesAsync method and pass the DHT11 sensor data:

image
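Taken together, the additions described in the screenshots above might look roughly like the following sketch. The hub URI, device ID, device key, field names, and coordinates are placeholders; substitute the values you saved earlier:

// Additional using statements needed at the top of Program.cs:
//   using System.Text;
//   using Microsoft.Azure.Devices.Client;
//   using Newtonsoft.Json;

// Fields added to the Program class (all values here are placeholders)
static string iotHubUri = "FedIoTHubDemo.azure-devices.net";
static string deviceId = "myRaspberryPi";
static string deviceKey = "<device primary key from the provisioning step>";
static double deviceLatitude = 33.7;    // manually looked-up location
static double deviceLongitude = -111.7;
static DeviceClient deviceClient;

// Added to Main, after the DHT11 sensor has been started:
//   deviceClient = DeviceClient.Create(iotHubUri,
//       new DeviceAuthenticationWithRegistrySymmetricKey(deviceId, deviceKey));

// Called from the DHT11NewData event handler with the latest reading
private static async void SendDeviceToCloudMessagesAsync(double tempC, double tempF, double humidity)
{
    // Create a telemetry data point from the sensor values
    var telemetryDataPoint = new
    {
        deviceID = deviceId,
        temperatureF = tempF,
        temperatureC = tempC,
        humidity = humidity,
        latitude = deviceLatitude,
        longitude = deviceLongitude,
        timestamp = DateTime.UtcNow
    };

    // Serialize the data point to JSON and wrap it in a message
    var messageString = JsonConvert.SerializeObject(telemetryDataPoint);
    var message = new Message(Encoding.UTF8.GetBytes(messageString));

    // Transmit to the Azure IoT Hub and echo to the console
    await deviceClient.SendEventAsync(message);
    Console.WriteLine("Sent: " + messageString);
}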

This will ensure that messages are sent when the DHT11 sensor reports new data (which happens every 3 seconds in our example). Build the application and repair any errors that might have cropped up. Pay attention to the NuGet packages and make sure that you have all of the appropriate packages added.

Now that the application has been extended, you will need to deploy the application to the Raspberry Pi.

Step Four – Deploy the New Application

In earlier examples, deployment has been relatively easy because the libraries that we have used have been for the most part already present on the Raspberry Pi. In this case, however, there are several .dlls that we will have to deploy as part of our application. If you examine the output folder for the build, you’ll notice that there are many files that have been generated.

image

We will need to copy the .exe, along with all .dll files and the .xml files as well to the Raspberry Pi.

Use whatever copy program that you’ve used in previous examples (I use FileZilla) to copy the files to a folder on the Raspberry Pi. I made a new folder to hold the new version of the application, but it is entirely up to you how you want to store the program files on the Pi.

image

Once the application is deployed, you will need to ensure that the Pi is properly configured for https communication. Some versions of the OS have incomplete certificates configured for https communication, so it’s important to ensure that the Pi is ready.

Use an SSH client to connect to the Pi, and then execute the following command:

image

This will download and install the latest root certificates into the local client store. You will need to do this twice, once as a normal user, and once using sudo to ensure both certificate stores are updated.

Now that the application is deployed and the certificate store updated, execute the application (don’t forget to execute with sudo) and watch the messages transmitted to Azure.

image

This will send a telemetry payload message containing the temperature in Fahrenheit and Celsius, as well as humidity and lat/long every 3 seconds. Leave the app running, and switch back to the Azure Portal and notice the Usage counter in the diagnostics now shows data being received by your device.

Conclusion

Congrats! You’ve now built a sensor platform that collects data from a sensor and transmits it to the cloud every 3 seconds (if you let the app run long enough, you will eventually run into an unhandled exception due to the nature of cloud communications. As mentioned above, we did not build in any retry logic or exception handling, so this will happen at some point)

In the next post of this series, we will take the final step on this journey and connect Microsoft Power BI to the IoT hub so that we can visualize the results of our sensor platform.

The IoT Journey: Working With Specialized Sensors

In previous posts in this series (see posts one, two, three and four) we’ve discussed the Raspberry Pi and its Linux-based operating system, along with the GPIO interface and using Visual Studio to develop applications in C# that execute on Linux using the Mono framework. Generally speaking, this works well when the sensors you’re working with do not require critical timing or other “close to the hardware” code. Some sensors, however, require timing precision that just isn’t available in the Mono framework on Linux. An example is the DHT-11 Temperature and Humidity Sensor. This sensor is extremely popular because it’s very lightweight and simple from the hardware perspective, and requires only one channel to deliver both temperature and humidity readings. The DHT-11 can also be daisy-chained to provide a relatively wide range of coverage.

Unfortunately this simplicity from a hardware perspective comes at an expense: in order to interact with the DHT-11 and read the data that it provides, your code must be extremely time-sensitive and able to precisely measure the digital signal state within just a few microseconds. If you look at the DHT-11 datasheet, you’ll find the following diagram:

image

This diagram explains how to read the DHT-11. Basically the process is:

  1. Send a “Start” signal to the DHT-11 (set the pin to LOW for at least 18 milliseconds and then set it HIGH for 20 to 40 microseconds)
  2. Wait for the DHT-11 to respond with a LOW signal for 80 microseconds followed by a HIGH signal for 80 microseconds
  3. Wait for the DHT-11 to send a “Start to transmit” signal (LOW for 50 microseconds)
  4. Time the HIGH pulse from the DHT-11
    1. If the signal width is between 26 and 28 microseconds, record a 0 for that bit
    2. If the signal width is 70 microseconds, record a 1 for that bit
  5. Repeat steps 3 and 4 until 40 bits of data have been transmitted (2 bytes for Temperature, 2 bytes for Humidity and 1 byte for the Checksum)
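For what it’s worth, the checksum in step 5 is simply the low byte of the sum of the four data bytes, so the verification logic (shown here as a small C# sketch, independent of the timing problem discussed next) amounts to:

// Sketch of the checksum verification for the 40 bits (5 bytes) read from a DHT-11.
// data[0..3] are the four data bytes (humidity and temperature); data[4] is the checksum.
static bool ChecksumIsValid(byte[] data)
{
    // The checksum byte is the low 8 bits of the sum of the four data bytes
    byte expected = (byte)(data[0] + data[1] + data[2] + data[3]);
    return expected == data[4];
}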

(I did warn in earlier posts that this series was going to get extremely geeky, but don’t worry, this is about as deep as it gets!)

Unfortunately my coding skills with C# and the Mono framework could not coax the DHT-11 into sending me any useful data. I believe that the timing is so critical that I could not get the managed code to respond with enough resolution to properly detect and read the signal transitions. (This is a major challenge when using PWM on a single signal without a buffer to communicate)

Fortunately though, we have access to C and C++ on the Raspberry Pi which will allow us to write code that will respond quickly enough to detect the transitions, and we can wrap that code in a library so that the majority of our code can still be written in C#. Before we get to that though, we will need to connect the DHT-11 to the Raspberry Pi.

If you are using the Sensor Kit from Sunfounder as discussed in the second post in this series, you will have an already-mounted DHT-11 sensor with 3 pins. If not, there are several places where you can buy one, however if you’re going to play with IoT technologies and sensors, the best deal is the Sunfounder kit.

image

Step One – Wire the Circuit

For this exercise, we will use the pushbutton circuit that we created in the last exercise with the RGB LED, as we will use the LED to report temperature and humidity status. On the DHT-11, the signal pin is the one closest to the S (far-left on the picture above). Connect this pin to GPIO 16 (physical pin 36) on the Pi, and then connect the middle pin (Vcc) to 3.3v on the breadboard, and then connect the last pin to ground on the breadboard. If you’re as messy with breadboard wiring as I am, your result will look like this:

20160501_005403774_iOS

We won’t use the pushbutton for this experiment, but no need to remove it from the breadboard. Now that the circuit is wired, we will need to code the solution.

Step Two – Prepare the Raspberry Pi

In order for this solution to work properly, we will need to develop some code in C++ that will interact directly with the DHT-11 sensor. You can do this in Visual Studio on Windows, however you will not be able to compile or debug the code there. In my experience, when developing new low-level code, it’s best to do it on the platform where the code will be executed. For this reason, we will want to use an IDE on the Pi directly. There are many developer GUIs available on the Pi, but the one that I’ve found easiest to use and does what I need is called Geany. In order to install and configure Geany, open an SSH session to your Pi and use apt-get install to install Geany. (Don’t forget to use sudo):

image

Answer Yes at the prompt. This will take several minutes to complete. Once Geany is installed, you will want to use VNC to connect to the Pi’s GUI environment. (See post one in this series for instructions on how to setup the VNC server – or you can follow the instructions here: http://www.howtogeek.com/141157/how-to-configure-your-raspberry-pi-for-remote-shell-desktop-and-file-transfer/all/)

image

Once you are connected to the remote desktop session and have started Geany (it will be available in the Programming menu), you are ready to begin coding the library necessary to communicate with the DHT-11 sensor.

Step Three – Develop the Sensor Library

The process outlined here will be the same for any time-sensitive code that you develop to interact with a sensor or peripheral device on the Pi. As it turns out the DHT-11, while seemingly complicated, is relatively straightforward once you understand the timing of the pulse and how to deal with it in code. If we were going to simply develop the entire program in C or C++, we wouldn’t need to create a library but would just write the code directly. Since we’re going to be wrapping this code with C#, we’ll create a shared library in C++ that will then be used by our main program.

Since we are creating a library, we will need to define an interface that the code will use to interact with the library. We will do this by creating a header file. In my case I will be calling my library DHT11Library, so the header file will be called DHT11Library.h. In Geany, create a new C source file by selecting New with Template from the File menu and then selecting main.c as the template. Save the new file as DHT11Library.h and then add the following interface definition (I will add the full code to both files at the end of the article):

image

Once the interface is defined, create a new file using the main.c template and name it DHT11Library.cpp. In this file we will add the includes that we need (don’t forget to include the DHT11Library.h file that we created previously), as well as set up some global variables. Because we eventually want to use the DHT11 library alongside other sensors that have their own libraries, we will need a way to tell the WiringPi library that it has already been initialized. We will do that by calling the InitDHT method with a boolean that indicates whether you have already initialized the library. Because the WiringPi library is mostly static, we can only initialize WiringPi once. The DHT11Library that we create here will allow multiple instances to be instantiated, so we will track the status of the library using a global variable called isinit. This looks like:

image

Next you will create the InitDHT method, where you will initialize the WiringPi library (if needed) and set the GPIO pin that the DHT11 is connected to. You will also test to see whether the WiringPi library has already been initialized:

image

Next we will create methods to retrieve the temperature and humidity from the DHT11 registers. The DHT11 was designed with four one-byte registers that hold the values. For temperature, the first byte contains the integer portion and the second byte contains the decimal portion. However, as it turns out, while the DHT11 was designed this way, it doesn’t actually report decimal values (those registers are always zero). I decided to write the code as if the registers would be populated. These methods look like:

image

Next we will create the method that actually reads data from the DHT11 registers. As discussed above, the DHT11 relies entirely on the timing of the pulse width in order to communicate data over the output back to the GPIO interface. For this method, we will need to set up a series of variables to use temporarily while everything is pulled together, and then send the start signal and wait for the sensor to respond:

image

Then, we verify that the checksum byte matches the result we expect and if so, return a valid read.

image

Once the values have been read and populated into the global variables that represent the byte registers, the code execution is complete.

Now that the code is complete, we will need to compile this into a shared library. The easiest way to do this is via the command line. Use the gcc command (and since you’re going to be creating a shared library, don’t forget sudo) as follows:

image

This will create the shared library on the Pi that will be available to the C# wrapper that we’ll create next.

Step Four – Develop the C# Wrapper Class

Now we will use Visual Studio on Windows in order to develop a wrapper class for the DHT-11 that will be used in our main program. Basically this class will follow the design principles that were laid out in post two in this series where we discussed the WiringPi wrapper class. The difference in this case is that we’ll develop this class on our own as opposed to using someone else’s work. Essentially what we will do is create a stand-alone class that will act as an entry point to the shared libDHT11.so that we just created. This class will simply mirror the interface that we specified, and will then implement the methods to read the temperature and humidity. We will also create a timer that will read the sensor on a regular basis.

Open Visual Studio and create a new Windows C# Console application (Since we will be implementing the new class, we will just do it all in a single solution – for production purposes, you’ll likely want to create this class standalone to keep the namespace separate). Name the application DHT11Test.

image

Once the solution has been created, add a new class named DHT11Library.

image

Once the new class has been added to the solution, we will want to add the environment information necessary to instantiate the DHT11Library. At a high level, the new class will consist of the following:

  • An embedded class to hold the temperature and humidity values. We will use this as the basis for an event that will let us know when the DHT11 has reported new data. Since the actual reading of data from the sensor is a time-sensitive process, we will need a place to store the data when it arrives, even if we’re not quite ready to use it.
  • A wrapper for the C++ library that we created (effectively the interface)
  • Global variables that represent the pin that the sensor is connected to, the desired delay between reads, data from the sensor and whether the GPIO interface is already initialized
  • A timer that will initiate the read process from the sensor
  • A delegate that will allow us to asynchronously read the sensor
  • A constructor for the class
  • Start and Stop methods for the timer
  • A method to return the sensor data

In the namespace declaration of the new class you added, create the class that will hold the temperature and humidity values. These values will be float types (although we could get away with integer values due to the resolution of the sensor). They will need constructors as well. You also need to add using statements for System.Timers and System.Runtime.InteropServices.

image

Then you will need to add the wrapper as well as the private variables for the class:

image

Once this is complete you will need to create the delegate for the async call as well as the event to read data. You also need to create the constructor for the class along with the stub for the elapsed timer:

image

Next, you will need to implement the elapsed handler for the timer:

image

Then we need to create the start and stop methods for the timer, along with the method to read from the sensor.

image
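Assembled into a single file, a sketch of such a wrapper class might look like the following. The native entry-point names and signatures in the DllImport declarations are assumptions for this sketch (they must match whatever you exported from DHT11Library.h), while the constructor, Start/Stop methods, and EventNewData event mirror how the class is used by the test program in the next step:

using System;
using System.Runtime.InteropServices;
using System.Timers;

namespace DHT11Test
{
    // Event payload holding a single reading from the sensor
    public class DHT11Data : EventArgs
    {
        public float Temperature { get; private set; }
        public float Humidity { get; private set; }

        public DHT11Data(float temperature, float humidity)
        {
            Temperature = temperature;
            Humidity = humidity;
        }
    }

    public class DHT11Library
    {
        // P/Invoke wrapper around the shared library built on the Pi.
        // The entry-point names and signatures below are assumptions; they must
        // match the interface you defined in DHT11Library.h.
        [DllImport("libDHT11.so")]
        private static extern void InitDHT(int pin, bool wiringPiInitialized);

        [DllImport("libDHT11.so")]
        private static extern int ReadDHT();          // returns non-zero on a valid read

        [DllImport("libDHT11.so")]
        private static extern float GetTemperature();

        [DllImport("libDHT11.so")]
        private static extern float GetHumidity();

        // State for this instance
        private readonly int pin;                     // physical pin the sensor is wired to
        private readonly Timer readTimer;             // initiates each read cycle
        private DHT11Data lastReading;                // most recent data from the sensor

        // Delegate used to read the sensor asynchronously from the timer callback
        private delegate DHT11Data ReadSensorDelegate();

        // Raised whenever the sensor reports new data
        public event EventHandler<DHT11Data> EventNewData;

        public DHT11Library(int pin, bool wiringPiInitialized, int delaySeconds)
        {
            this.pin = pin;
            InitDHT(pin, wiringPiInitialized);
            readTimer = new Timer(delaySeconds * 1000);
            readTimer.Elapsed += ReadTimer_Elapsed;
        }

        public void Start() { readTimer.Start(); }
        public void Stop() { readTimer.Stop(); }

        // Returns the most recent successful reading (may be null before the first read)
        public DHT11Data GetSensorData() { return lastReading; }

        private void ReadTimer_Elapsed(object sender, ElapsedEventArgs e)
        {
            // Kick off the time-sensitive native read without blocking the timer thread
            var reader = new ReadSensorDelegate(ReadSensor);
            reader.BeginInvoke(ReadCompleted, reader);
        }

        private DHT11Data ReadSensor()
        {
            if (ReadDHT() == 0)
            {
                return null; // checksum failed or no response; skip this cycle
            }
            return new DHT11Data(GetTemperature(), GetHumidity());
        }

        private void ReadCompleted(IAsyncResult result)
        {
            var reader = (ReadSensorDelegate)result.AsyncState;
            DHT11Data data = reader.EndInvoke(result);
            if (data != null)
            {
                lastReading = data;
                EventNewData?.Invoke(this, data); // notify listeners of the new reading
            }
        }
    }
}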

Once this is complete, we can save the class file and move to the program that we will use to instantiate the class.

Step Five – Develop the Application to Read from the Sensor

Finally, we can get to the actual implementation of all the code. As mentioned above, we will use the RGB LED that we worked with in the last post in this series, along with the sensor, to provide a visual status of the operation. The flow of the program will be:

  1. Initialize the GPIO Ports on the Raspberry Pi and setup Pulse Width Modulation for the RGB LED
  2. Turn the RGB LED Red
  3. Instantiate the DHT11 Class
  4. Turn the RGB LED Green
  5. Setup the event handler for new data
  6. Start the timer
  7. Read the Sensor and convert temperature to Fahrenheit
  8. Turn the RGB LED to Blue during the read operation (and delay for 1/2 second so you can witness the change)
  9. Turn the RGB LED Green on successful read, or Red if there is a problem
  10. Write the temp and humidity to the console
  11. Check for a keypress, and if detected, close down the sensor and exit

The code for the above flow looks like this:

using System;
using WiringPi;

namespace DHT11Test
{
    class Program
    {
        const int DHTPin = 36;      // DHT signal on GPIO 16 (physical pin 36)
        const int Delay = 3;        // 3-second delay between readings
        const int LedPinRed = 16;   // Red pin of RGB LED
        const int LedPinGreen = 18; // Green pin of RGB LED
        const int LedPinBlue = 22;  // Blue pin of RGB LED

        static void Main(string[] args)
        {
            if (Init.WiringPiSetupPhys() != -1) // Use physical pin numbering instead of the confusing WiringPi numbering
            {
                GPIO.SoftPwm.Create(LedPinRed, 0, 100); // Set up the Pulse-Width-Modulation library and initialize the RGB LED pins
                GPIO.SoftPwm.Create(LedPinGreen, 0, 100);
                GPIO.SoftPwm.Create(LedPinBlue, 0, 100);

                SetLedColor(255, 0, 0); // Set the RGB LED to red and turn it on

                Console.WriteLine("GPIO Initialized");
                Console.WriteLine("Initializing DHT11 Sensor!");
                DHT11Library DHT = new DHT11Library(DHTPin, true, Delay); // Instantiate the DHT11 class and tell it that the WiringPi library is already initialized
                Console.WriteLine("Sensor Initialized on Pin: " + DHTPin.ToString());
                Console.WriteLine("By default, readings occur every 3 seconds");
                DHT.EventNewData += DHT_EventNewData;
                DHT.Start();
                SetLedColor(0, 255, 0); // Set the RGB LED to green
                Console.WriteLine("Press any key to exit");
                Console.ReadKey();
                DHT.Stop();                    // Stop the sensor readings
                GPIO.SoftPwm.Stop(LedPinRed);  // Shut down the PWM
                GPIO.SoftPwm.Stop(LedPinGreen);
                GPIO.SoftPwm.Stop(LedPinBlue);
            }
        }

        private static void DHT_EventNewData(object sender, DHT11Data e)
        {
            SetLedColor(0, 0, 255); // Turn the RGB LED blue during the read
            double TempF = e.Temperature * 1.8 + 32;
            Console.WriteLine("Temperature: " + e.Temperature + " Degrees (C), " + TempF + " Degrees (F), Humidity: " + e.Humidity + "%");
            System.Threading.Thread.Sleep(500); // Delay for 1/2 second so you can see the LED change color
            SetLedColor(0, 255, 0); // Turn the RGB LED back to green
        }

        private static void SetLedColor(int r, int g, int b)
        {
            GPIO.SoftPwm.Write(LedPinRed, r);
            GPIO.SoftPwm.Write(LedPinGreen, g);
            GPIO.SoftPwm.Write(LedPinBlue, b);
        }
    }
}

Once this is complete, build the solution and prepare to copy it to the Pi.

Step Six – Deploying and Testing the Application

As we have done in previous posts, you will want to create a deployment folder on your Pi and copy the executable and DLL to the Pi. I use FileZilla for the copy operation. You will need to copy the libDHT11.so file that you created on the Pi into the destination folder as well. For simplicity's sake, I use FileZilla to copy that file from the Pi into the project folder on my development machine so that I always have it available when I deploy, but you can also simply copy the file locally on the Pi. You need a total of 3 files on the Pi in order to execute this application.

image

Once the code is deployed, execute using the sudo command:

image
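(Assuming your project was named DHT11Test as in the listing above, the command will be something like sudo mono DHT11Test.exe, run from the folder that contains the executable, WiringPi.dll, and libDHT11.so.)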

You should see the following. Note that the temperature readings do not always appear at 3-second intervals. Sometimes the timing-sensitive call to the DHT does not retrieve a valid reading, so you may wait a cycle or two before data is written; that is simply a consequence of polling the sensor on a fixed 3-second delay. In a true production application, we likely would not be reading the temperature every 3 seconds anyway.

image

Congrats! You have just developed and deployed an application that uses PWM to communicate with a sensor to retrieve data.

In the next post in this series, we will begin the process of communicating the results of this data to the Azure IoT Suite so that we can use it.

The IoT Journey: Additional Peripheral Protocols

In the last post of this series, I discussed digital signaling and hardware interrupts. The purpose of these posts is to educate on the basics of IoT technologies so that you will have a foundation to work from in order to build new IoT projects. In this post, we will discuss the concept of Pulse-Width Modulation (PWM) and how to use PWM to drive a tri-color LED with code on the Raspberry Pi.

Pulse Width Modulation

When controlling physical “things” from electronics, you have to be able to do more than simply power a device on and off, or react to the press of a button. The challenge is that digital signals are either “on” or “off”, and buttons are either pressed or not. When working with electronic circuitry, one technique used to encode a message into a digital signal is called Pulse Width Modulation, or PWM. In essence, you define the “width” of the “on” portion of the signal (the amount of time that the signal is on) to represent a value, and you define the period over which the total signal is read in order to recover that value.

Probably the most common use of PWM in electronics is controlling the speed of a motor. Instead of using an analog circuit and adjustable resistor (which can generate a lot of heat and wasted energy), PWM is used to control the “duty cycle” of the motor (the amount of time that full power is applied versus the amount of time power is off). A simplistic example: if we wanted to run a motor at 50% of its maximum speed, we could set the duty cycle to 50%, meaning that the PWM signal would be on 50% of the time and off 50% of the time.

There are many other examples of using PWM in electronics, and one of the easiest to work with is the multi-color LED light. A multi-color LED, or RGB LED, has a common cathode (or anode, depending on type) and then 3 inputs representing each of the primary colors. By using PWM and controlling the amount of time power is applied to each of the primary color inputs, we can create virtually unlimited output colors on the LED.

The Raspberry Pi 3 has 2 available PWM hardware outputs, but one of those is shared with the audio out, so in practice there is only a single channel available. Given this, projects that require more than a single PWM channel (such as the RGB LED example above) utilize a software library that “bit bangs” the output of a GPIO pin to create the PWM signal. This works well for LEDs and servo motors, where the use of the PWM signal is limited, but it's much harder to use when controlling a motor that requires a more constant PWM signal (this is because for the duration of the PWM signal, the CPU is effectively latched performing that task).
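As a concrete illustration of the duty-cycle idea, here is a minimal sketch using the same WiringPi.Net SoftPwm calls that appear in the code later in this post. The pin number is just an example, and it assumes the wrapper library has already been built as described in the earlier posts in this series:

using System.Threading;
using WiringPi;

class PwmDutyCycleDemo
{
    static void Main()
    {
        const int pin = 16; // example physical pin

        if (Init.WiringPiSetupPhys() == -1)
            return; // GPIO initialization failed

        // A range of 100 means the value written behaves like a percentage of the duty cycle
        GPIO.SoftPwm.Create(pin, 0, 100);

        GPIO.SoftPwm.Write(pin, 50); // on roughly 50% of the time, off 50% of the time

        Thread.Sleep(5000);          // hold the output for a few seconds so it can be observed
        GPIO.SoftPwm.Stop(pin);
    }
}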

For the purposes of this example, we’ll control an RGB LED from the Raspberry Pi, using the SoftPWM capabilities exposed in the WiringPi library that we’ve used in previous examples…

Step One – Wire the Circuit

Since we will require 3 GPIO channels, we’ll choose GPIO Pins 23, 24 and 25 (physical pins 16, 18 and 22) since they are convenient and aren’t used in the ISR solution that we created in the previous post. The Red pin will connect to GPIO 23, the Green pin to GPIO 24 and the Blue pin to GPIO 25. Connect the ground (-) pin to GND on the breadboard.

If you purchased the Sunfounder Sensor kit mentioned in previous posts, you’ll have an RGB LED that is already mounted to a circuit board with headers, otherwise you’ll need to obtain one and connect wires to the leads in order to plug it into the breadboard properly. The LED assembly looks like this:

20160428_164001105_iOS

 

The breadboard configuration will look like this:

20160428_165115470_iOS

Once the circuit is wired, we’re ready to extend the previous code solution to include the RGB LED.

Step Two – Develop the Solution

For the sake of simplicity, we’ll simply extend the previous button solution that we created and add the RGB LED. We will write code that will toggle the state of the LED between Red, Green and Blue as the button is pressed.

The flow of the program is basically:

  • Initialize the SoftPWM library and connect each of the GPIO outputs to the appropriate color pin on the LED
  • Initialize the ISR and set the callback method used
  • Ensure that the RGB LED is off
  • Wait for either a button press or a keyboard press
    • If a button press is detected, toggle the single red LED, then check the current state of the RGB LED and advance it to the next color
    • If a keyboard press is detected, close the PWM library and exit the program

Open the ISR solution in Visual Studio that we previously created (See the previous post in this series) and add the supporting constants:

image

Then add code to initialize the SoftPWM Library. We will use the default of 100 microseconds for the pulse width to balance CPU utilization with PWM functionality.

image

Then add a method to the class that sets the RGB LED saturation values:

image

Once that is done, add a method that detects the current state of the RGB LED and toggles it appropriately. (In this example we’re just using Red, Green and Blue as colors, however you can mix/match the RGB values to create any color combination that you want).

image

Finally, add code to the ISR method that invokes the SetRGB() method.

image

This will ensure that the LED switches through the RGB colors (or colors that you define) with a button press. Keep in mind that we are not debouncing the switch, so you will likely see some spurious color changes when you press the button.

The complete program.cs code is as follows:

using System;
using System.Threading.Tasks;
using WiringPi;

namespace ISR
{
    class Program
    {
        const int redLedPin = 29; // GPIO 5, physical pin 29
        const int buttonPin = 31; // GPIO 6, physical pin 31

        const int rgbLedR = 16;   // GPIO 23, physical pin 16
        const int rgbLedG = 18;   // GPIO 24, physical pin 18
        const int rgbLedB = 22;   // GPIO 25, physical pin 22

        static string LedColor = "None"; // Tracks which color the RGB LED is currently showing

        // The color saturation value is how "on" that particular color is. So when the red value is 255 and green and blue are 0, the
        // LED will be Red. If the red value is 255 and the green value is 255, then the resulting color will be Red + Green, or Yellow.
        // If Red is 0 and Green and Blue are both 255, then the resulting color will be Blue + Green, or Cyan.
        // See the RGB color model article on Wikipedia for more info:
        // https://en.wikipedia.org/wiki/RGB_color_model

        static void buttonPress()
        {
            if (GPIO.digitalRead(redLedPin) == (int)GPIO.GPIOpinvalue.Low) // Check to see if the LED is on
                GPIO.digitalWrite(redLedPin, (int)GPIO.GPIOpinvalue.High); // If so, turn it off
            else
                GPIO.digitalWrite(redLedPin, (int)GPIO.GPIOpinvalue.Low);  // Otherwise turn it on

            SetRGB(); // Light the RGB LED with the appropriate color
        }

        static void Main(string[] args)
        {
            Console.WriteLine("Initializing GPIO Interface");
            if (Init.WiringPiSetupPhys() < 0) // Initialize the GPIO interface with physical pin numbering
            {
                throw new Exception("Unable to Initialize GPIO Interface"); // Any value less than 0 represents a failure
            }
            GPIO.pinMode(redLedPin, (int)GPIO.GPIOpinmode.Output);     // Tell the Pi we will be writing to the GPIO
            GPIO.digitalWrite(redLedPin, (int)GPIO.GPIOpinvalue.Low);  // Turn the LED on

            GPIO.SoftPwm.Create(rgbLedR, 0, 100); // Set up the RGB LED pins as software PWM output with a range of 100
            GPIO.SoftPwm.Create(rgbLedG, 0, 100); // This is software-based PWM since the Pi has only a single usable PWM hardware channel
            GPIO.SoftPwm.Create(rgbLedB, 0, 100);

            Console.WriteLine("Initializing Interrupt Service Routine");
            // We will fire the interrupt on the falling edge of the button press. Note that we are not "debouncing" the switch, so we will likely
            // see some extra button presses during operation
            if (GPIO.PiThreadInterrupts.wiringPiISR(buttonPin, (int)GPIO.PiThreadInterrupts.InterruptLevels.INT_EDGE_FALLING, buttonPress) < 0) // Initialize the interrupt and set the callback to our method above
            {
                throw new Exception("Unable to Initialize ISR");
            }
            Console.WriteLine("Press the button to toggle LED or press any key (and then press button) to exit");
            Console.ReadKey(); // Wait for a key to be pressed
            GPIO.digitalWrite(redLedPin, (int)GPIO.GPIOpinvalue.High); // Turn off the static LED

            GPIO.SoftPwm.Stop(rgbLedR); // Turn off the RGB LED
            GPIO.SoftPwm.Stop(rgbLedG);
            GPIO.SoftPwm.Stop(rgbLedB);
        }

        private static void SetRGB()
        {
            switch (LedColor) // Remember that we aren't debouncing the switch, so the color changes may not be as elegant as the code would suggest
            {
                case "None": // This will be the value when we first execute the code and the button is pressed
                    SetLedColor(255, 0, 0); // Turn the LED on Red
                    LedColor = "Red";       // Remember which color is showing
                    break;
                case "Red": // This will happen after the second button press
                    SetLedColor(0, 255, 0); // Set the color to Green
                    LedColor = "Green";
                    break;
                case "Green": // This will happen after the third button press
                    SetLedColor(0, 0, 255); // Set the color to Blue
                    LedColor = "Blue";
                    break;
                case "Blue": // This will happen after the fourth button press
                    SetLedColor(255, 0, 0); // Set the color back to Red
                    LedColor = "Red";
                    break;
                default: // We should never reach this point, but if we do, handle it
                    LedColor = "None";
                    break;
            }
        }

        private static void SetLedColor(int r, int g, int b) // Pulse the RGB pins with the appropriate saturation values
        {
            GPIO.SoftPwm.Write(rgbLedR, r);
            GPIO.SoftPwm.Write(rgbLedG, g);
            GPIO.SoftPwm.Write(rgbLedB, b);
        }
    }
}

Now that the code is developed, build it and fix any errors that you find. Once it is built without error, we can deploy it to the Pi and test.

Step Three – Deploy and Execute the Code

Using a file transfer program (as mentioned before, I use FileZilla) connect to the Pi and copy the ISR.exe and WiringPi.dll files to the Pi. Overwrite the files that you previously copied.

image

Once the files are copied, you can execute the ISR.exe program by typing the command sudo mono ISR.exe:

image

Press the button several times and then note that the RGB LED switches through the colors.

20160428_174018231_iOS
20160428_174024557_iOS
20160428_174031228_iOS

Congrats! You have now developed and deployed a solution that utilizes Pulse Width Modulation to control an LED!

The IoT Journey : Interacting with the Physical World

In the first two posts of this series (See Post One and Post Two) I detailed the process of configuring a Raspberry Pi 3 to enable development and testing of Internet of Things (IoT) projects. The first post covered the basics of the Raspberry Pi, and the second detailed the basics of the GPIO interface and demonstrated how to turn an LED on and off with code. In this post, we will extend the project that we created earlier and add interaction with the physical world through the use of a mechanical button press.

When interacting with sensors and devices, one of the more important things to understand is how interrupts work, and how you code interrupt service routines.

Understanding Hardware Interrupts

This is a topic that you could spend many hours on and still not understand all of the nuances, yet it's very important to understand the basics of how interrupts work if you're going to develop IoT applications that utilize sensors or other electronic components that work with the Raspberry Pi.

If you have an electronic circuit that contains a pushbutton switch, and you want the activity of the switch to cause action in your program, you need to write code that will read the state of the switch and then react based on that state. There are many different ways that you can approach this problem, but probably the two most common are polling and interrupt-driven. In a polling solution, you simply write code that reads the state of the switch and call it on a regular basis. This works well if you can anticipate the pressing of the switch (i.e., the flow of your application is such that the user would press the switch at known points in the code), or if you use an asynchronous method to create a timer that reads the switch at defined intervals (although if the switch is pressed and released between those intervals, the press would go unnoticed).
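For comparison with the interrupt approach used later in this post, a polling version might look something like the sketch below. It uses the same WiringPi.Net calls as the full listing in Step Two; the poll interval is arbitrary, and the Input value of the GPIOpinmode enum is an assumption about the wrapper:

using System;
using System.Threading;
using WiringPi;

class PollingDemo
{
    const int buttonPin = 31; // GPIO 6, physical pin 31 (same pin used below)

    static void Main()
    {
        if (Init.WiringPiSetupPhys() < 0)
            return; // GPIO initialization failed

        GPIO.pinMode(buttonPin, (int)GPIO.GPIOpinmode.Input); // We will be reading from this pin

        int lastState = (int)GPIO.GPIOpinvalue.High;
        while (!Console.KeyAvailable)
        {
            int state = GPIO.digitalRead(buttonPin);
            if (state == (int)GPIO.GPIOpinvalue.Low && lastState == (int)GPIO.GPIOpinvalue.High)
                Console.WriteLine("Button pressed"); // falling edge detected by polling
            lastState = state;
            Thread.Sleep(50); // a press shorter than this interval can be missed entirely
        }
    }
}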

An interrupt is a signal to the processor to tell it to stop whatever it’s currently doing and do something else. In the switch example above, if we connected the switch to a GPIO and then defined that pin as an interrupt, whenever the pin detected a change in state (if the button were pressed, for example), it would activate the interrupt, and our code would simply have to handle that interrupt.

Because the Raspberry Pi GPIO interface is a digital interface, we have to understand the concept of Signal Edge when working with interrupts. Digital signals are either on or off and are represented by a square wave:

image

The normal HIGH (or current is not flowing) signal on the Raspberry Pi GPIO is 3.3v. This is why, in the previous post, we set the pin to LOW in order to light the LED. If you look at the above chart, you will notice that there are transitions between 0v and 3.3v represented at the edges of the square wave. For example, if you look at 10 on the x axis, you'll see the transition from 3.3v to 0v. This is known as a Falling Edge. Conversely, if you look at 15, you'll notice the transition from 0v to 3.3v. This is known as the Rising Edge. If we construct our circuit such that the button press allows current to flow, then we will want to ensure the interrupt is generated only on the Falling Edge, otherwise we would trigger two interrupts for every button press. The simplified circuit diagram would look like this:

image
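In code, this choice shows up as the interrupt level passed when the interrupt is registered; in Step Two below we pass INT_EDGE_FALLING so the handler fires only when the pin transitions from 3.3v to 0v (the underlying wiringPi library also defines INT_EDGE_RISING and INT_EDGE_BOTH for the other cases).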

When the button is pressed, the GPIO pin is “Pulled Low”, meaning that current will flow between the pin and ground. One issue that might arise in this configuration is the fact that the mechanical switch isn't precise and can generate false signals as the contacts make or break during the push or release of the switch. This “bouncing” effect can generate multiple signal transitions and must be considered when coding routines that deal with switch presses. The technique for dealing with it is known as de-bouncing. (As this series of posts is meant to be instructional and not production-quality, we will not de-bounce our switch inputs.)
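Although we will not de-bounce the switch in this series, a common software approach is simply to ignore transitions that arrive within a few milliseconds of the previous one. A minimal sketch of that idea, written as members of the Program class used later in this post (the 200 ms threshold is an arbitrary choice):

// Simple time-based software de-bounce (illustrative only)
static DateTime lastPress = DateTime.MinValue;

static void buttonPress()
{
    DateTime now = DateTime.Now;
    if ((now - lastPress).TotalMilliseconds < 200)
        return; // ignore transitions that arrive too soon after the last one
    lastPress = now;

    // ... handle the real button press here ...
}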

With this basic understanding out of the way, let’s move on to creating a simple circuit and writing some code to react to the button press.

Step One – Construct the Simple Circuit

In the previous post, we created a simple circuit that included an LED and a resistor that connected to the GPIO interface and allowed us to control the LED in code. The circuit diagram looks like:

image

and the resulting breadboard configuration looks like:

This circuit allowed current to flow through the LED when the GPIO pin was set (or “pulled”) LOW. We wrote code that simply set the GPIO interface to LOW whenever a key was pressed on the keyboard.

For this example, we will add the switch to the breadboard and connect one side to ground, and the other side to GPIO 6 (physical pin 31). The circuit will look like this:

image

And the resulting breadboard configuration will be (ignore the additional LEDs and Resistors at the top, we will discuss those later):

20160424_173812624_iOS

(note that even though the same breadboard is used, the switch circuit is not connected to the LED in any way)

Once the circuit is constructed, we’re ready to move on to coding the solution.

Step Two – Develop the Code

For this exercise, we will create a new solution in Visual Studio that uses interrupts to detect when the button is pressed. We will then toggle the LED every time the button is pressed.  In Visual Studio, create a new C# Windows Console application named ISR.

image

Once the solution is created, add a reference to WiringPi.dll (Because you have previously referenced this .dll, it should be present in the Reference Manager dialog and you should just have to select it).

image

Once the reference is complete, add a using statement for the WiringPi library:

image

The basic flow of the application we are about to create is:

  1. Initialize the GPIO Library and tell it how we will reference the GPIO pins
  2. Instruct the GPIO that we will be writing to the pin with the LED connected
  3. Turn the LED On
  4. Initialize the WiringPi Interrupt Service Routine and tell it which signal edge will trigger an interrupt
  5. Wait for a button press, and if one is detected toggle the state of the LED
  6. Check if a key is pressed, and if so, exit the program

The completed code, with comments is below:

using System;
using System.Threading.Tasks;
using WiringPi;

namespace ISR
{
    class Program
    {
        const int redLedPin = 29; // GPIO 5, physical pin 29
        const int buttonPin = 31; // GPIO 6, physical pin 31

        static void buttonPress()
        {
            if (GPIO.digitalRead(redLedPin) == (int)GPIO.GPIOpinvalue.Low) // Check to see if the LED is on
                GPIO.digitalWrite(redLedPin, (int)GPIO.GPIOpinvalue.High); // If so, turn it off
            else
                GPIO.digitalWrite(redLedPin, (int)GPIO.GPIOpinvalue.Low);  // Otherwise turn it on
        }

        static void Main(string[] args)
        {
            Console.WriteLine("Initializing GPIO Interface");
            if (Init.WiringPiSetupPhys() < 0) // Initialize the GPIO interface with physical pin numbering
            {
                throw new Exception("Unable to Initialize GPIO Interface"); // Any value less than 0 represents a failure
            }
            GPIO.pinMode(redLedPin, (int)GPIO.GPIOpinmode.Output);     // Tell the Pi we will be writing to the GPIO
            GPIO.digitalWrite(redLedPin, (int)GPIO.GPIOpinvalue.Low);  // Turn LED on
            Console.WriteLine("Initializing Interrupt Service Routine");
            // We will fire the interrupt on the falling edge of the button press. Note that we are not "debouncing" the switch, so we will likely
            // see some extra button presses during operation
            if (GPIO.PiThreadInterrupts.wiringPiISR(buttonPin, (int)GPIO.PiThreadInterrupts.InterruptLevels.INT_EDGE_FALLING, buttonPress) < 0) // Initialize the interrupt and set the callback to our method above
            {
                throw new Exception("Unable to Initialize ISR");
            }
            Console.WriteLine("Press the button to toggle LED or press any key (and then press button) to exit");
            Console.ReadKey(); // Wait for a key to be pressed
            GPIO.digitalWrite(redLedPin, (int)GPIO.GPIOpinvalue.High);
        }
    }
}

 

Once you have the application coded, build it and prepare to deploy it to the Raspberry Pi.

Deploying and Executing the ISR Application

Use a file transfer program (I use FileZilla) to copy the ISR.exe and WiringPi.dll files to a folder on the Raspberry Pi. (I used ~/DevOps/ISR)

image

 

Once the files are copied, you can execute the application with the command sudo mono ISR.exe:

image

The LED will light, and when you press the button it should toggle state. You will likely very quickly see the result of our not de-bouncing the switch, as some presses will likely result in multiple toggles of the LED. There are many patterns for de-bouncing in software, but the most reliable solutions are hardware-based.  You will also note that when you press a key, the application does not exit until you press the button as well. This is a byproduct of the ISR Routine needing to execute in order to capture the keypress.

Congrats! You’ve now developed an application that bridges the gap between code and the physical world. This is an extremely important concept for developing any IoT Application.

In the next post in this series, we will discuss the concept of digital signaling and will develop an application that interacts with an RGB LED that requires Pulse Width Modulation in order to work properly.

The IoT Journey: Introducing the Raspberry Pi Expansion Capabilities

In my previous post in this series, I introduced the Raspberry Pi3 and provided the steps necessary to build and configure it to run C# applications that were developed using Visual Studio on Windows. Obviously you need to know the basics in order to proceed to more IoT-relevant projects, so if you’re just getting started in the IoT world with the Raspberry Pi, please make sure you understand the concepts discussed in the first post.

In any IoT project, you’ll likely want some external sensor capability beyond what is provided by the Raspberry Pi itself. Examples of relevant sensors could be temperature and humidity sensors, barometric sensors, altimeter sensors, etc.. The list is limited only by your creativity and imagination.

The Raspberry Pi is very versatile in that it has expansion capabilities that allow us to both read data and write data using several common interface protocols.

In this post, I’ll discuss the Raspberry Pi’s expansion header and how to write data as well as read data from externally-connected “things”. Admittedly this post is going to enter into “uber geek” territory quickly, and if you’re the type of person that is easily intimidated by “raw electronics”, I’d suggest to you that even though it might look intimidating and complicated, working with electronic components and circuitry is not all that difficult and can be extremely rewarding, so please, read on and give it a go. You won’t be disappointed.

Step One – Obtaining the Necessary Components

When I was much younger, one of my favorite things to do was to grab my allowance savings and free battery card (ah, the memories) and run down to the local Radio Shack store to see what cool things I could afford. Although the stores have declined in popularity and capability somewhat, they have seen a resurgence due to the popularity of the “maker” community, and many stores do stock the components necessary to experiment with IoT. With that said, I have found that there are some very comprehensive kits available from Sunfounder that put all of the parts into a single kit. While there are many different kits available, I found that for my experimentation the following two kits were the most useful:

Super Starter Kit – This kit has pretty much everything you’ll want, although it is a little light on some of the more useful sensors.

Sensor Kit – This kit makes up for the missing sensors above and then some.

For the purposes of this post, you’ll need the following components (which are available in the Super Starter Kit):

  • Breadboard – The breadboard allows you to place the components together in a circuit without the need to solder anything
  • Raspberry Pi GPIO Expansion Board – This simply plugs into the breadboard and makes it easy to connect the components to the Pi
  • Expansion Cable – The Raspberry Pi expansion header is 40 pins and the cable will connect the header on the Pi to the Breadboard
  • 220 Ohm resistor – You’ll need this to ensure the circuit we build is properly configured
  • LED – We will write code to turn the LED light on and off
  • Switch – We will use code to read the state of the switch and take action

20160421_165941558_iOS

Once you have the components, you'll want to connect everything to the expansion header on the Raspberry Pi. Pay close attention to the pin layout – if you use the rainbow cable from the Sunfounder kit mentioned above, you'll want to connect it with the black wire (pin 1) closest to the end of the Raspberry Pi that contains the SD card. This can be confusing because the cable will fit in either direction, but it's extremely important to connect the correct pin numbers to the breadboard; otherwise you could find yourself applying the wrong signal or voltage to a sensor, at best destroying your sensor and at worst destroying the Pi….

20160421_175849764_iOS

Just make sure that the cable is correctly connected, and you’ll be OK.

Step Two – Prepare the Raspberry Pi

Once you’ve obtained the components and have plugged the expansion cable into the Pi, we need to make sure we have the appropriate development libraries installed in order to take advantage of the capabilities of the Pi.

The primary expansion capability on the Raspberry Pi is accessed through a series of General Purpose Input-Output (GPIO) pins. In short, the GPIO interface allows you to connect external devices directly to the Raspberry Pi and then interact with those devices. The “General Purpose” nature of this interface is exactly what makes the Raspberry Pi so popular with “makers”, and is what allows us to connect so many different devices to the Pi. That is the good news; the bad news, of course, is that because it is “General Purpose”, we need to write all of the code necessary to interact with the device. Fortunately, there are a number of very good libraries that already exist in the public domain that are designed to do exactly that.

Arguably the most popular and useful library is called Wiring Pi and is made available by Gordon Henderson. Gordon has made his library available using various methods and he is very good about reacting quickly to community feedback and requests for enhancement. This really does demonstrate the power of the Open Source community and is one reason that I’ve chosen to use open source tools for all of my IoT work.

The easiest way to install the WiringPi library is to use git (a tool that allows you to interact directly with source code repositories) and install it directly on the Pi. To do this, you'll first need to make sure you have the current version of the git command installed. Connect to your Pi via SSH (see the first post in this series for a refresher on how to do this) and then run the following command: sudo apt-get install git-core . (Git is usually preinstalled on the Raspbian image, but you want the latest version, which this command will install.)

image

Once Git is installed, change directories to the DevOps folder that you created for the Hello application (created in the first post in this series) and then execute the following command: git clone git://git.drogon.net/wiringPi . This will clone the wiringPi source code onto your Pi.

image

Once the source code is installed, you'll need to compile the code that will be used by your applications. Fortunately, Gordon makes this simple and has provided a build script to do all of this. Change directories to the wiringPi directory and execute the script by typing ./build .

image

You are now ready to start developing applications for the Pi that interact directly with the GPIO interface.

Step Three – Preparing your Development Environment

Now that your Pi is configured and ready to receive your IoT applications, you need to prepare your development environment to ensure that you can properly develop code that will run on the Pi and interact with the wiringPi library. This can get tricky, because we're going to be developing our code in C# on Windows, but the wiringPi library is written in C for Linux. The first step in preparing for this is to create shared libraries from the code we just compiled on the Pi. For our purposes, there are 3 specific libraries that we will create. Change to the wiringPi folder (inside the wiringPi folder that you created) and execute the following commands to create the libraries:

  • cc -shared wiringPi.o -o libwiringPi.so – This command uses the compiler to create the basic library; this is the core library that we will use
  • cc -shared wiringPiI2C.o -o libwiringPiI2C.so – This command creates the library specific to the I2C protocol – more on this in later posts
  • cc -shared wiringPiSPI.o -o libwiringPiSPI.so – This command creates the library specific to the SPI protocol – more on this in later posts

image

The next step in utilizing the wiringPi library is to build a C# wrapper that will encapsulate the functionality of the C libraries and allow you to use the C-on-Linux library as just another reference in your project. This is done in C# / Visual Studio by using an External Reference in a code file which tells the C# code where to “enter” the C code and what the specific interface looks like. As you might imagine, it can get fairly complex trying to read through the C source code to locate the routines that you want to use and then generate the appropriate wrapper. Fortunately for us, a gentleman by the name of Daniel Riches has taken the time to do just that and made his work available via open source. (yet another reason to love the open source community!)
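To give a sense of what the wrapper does, here is a trimmed sketch of how a C# class can expose a few of the native wiringPi functions via DllImport. The real WiringPi.Net wrapper is far more complete, splits these across classes such as Init and GPIO (as you'll see in the code later in this series), and may declare them differently; this fragment just illustrates the mechanism:

using System.Runtime.InteropServices;

namespace WiringPiSketch
{
    public static class NativeWiringPi
    {
        // Each extern declaration tells the runtime which shared library to load
        // and which exported C function to bind to.
        [DllImport("libwiringPi.so", EntryPoint = "wiringPiSetupPhys")]
        public static extern int WiringPiSetupPhys();

        [DllImport("libwiringPi.so", EntryPoint = "pinMode")]
        public static extern void PinMode(int pin, int mode);

        [DllImport("libwiringPi.so", EntryPoint = "digitalWrite")]
        public static extern void DigitalWrite(int pin, int value);

        [DllImport("libwiringPi.so", EntryPoint = "digitalRead")]
        public static extern int DigitalRead(int pin);
    }
}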

In order to take advantage of the work Daniel has done, you’ll need to clone his git repository on your Windows development environment. You should already have git for windows installed, but if not, download it and install it, and then start the command line utility (which is PowerShell-based), switch to the root of the folder that you want to install the library in, and then issue the command git clone https://github.com/danriches/WiringPi.Net.git

image

This will install the source code and test harness for the WiringPi.Net library, which you will use for all of the code that you develop moving forward.

Once the library is installed, open Visual Studio and load the WiringPi.sln file that is included in the library. Right-click on the WiringPi project and then select Build.

image

This will create a .dll that you will reference in all of your other projects. Alternatively you can simply add the WiringPi code directly to each of your new solutions moving forward, but I personally find it easier to just add the .dll reference. (note, the default is to build the .dll in Debug mode, which is fine for our experiments, but if you are going to build production applications you’ll probably want to build it in Release mode)

Once your development environment is setup, you’re ready to develop  the “Hello World” equivalent in the IoT world…

Step Four – Create a Simple IoT Application

In any IoT application, the basic components are a device or sensor that collects information and a repository for that information. For the purposes of this post, we will focus on the sensor aspect of IoT. In most cases, the sensor that you would work with in a real-world application would collect data in an analog form, and then convert that data to a digital signal in order to communicate it to the host (in our case via the GPIO pins on the Raspberry Pi). The process of converting the analog information into a digital signal is non-trivial, so when you are beginning to learn to develop IoT applications, you want to start with the most basic tasks and work your way up to the more complex ones, learning along the way. One of the most basic tasks that you can perform is using code to light an LED. (It may sound trivial, but there is a lot to be learned from even this simple task.)

For this step, you’ll need an LED (any color will do), a 220 ohm resistor (color code Red,Red,Black) and the wires necessary to connect them on the breadboard.

Wire the Breadboard

If you are using the same GPIO expansion header that I am from the Sunfounder kit, this will be relatively easy for you since the GPIO signals are marked on the header and it’s easy to understand what you are connecting to. If you aren’t using that, you will definitely want to pay attention to the Raspberry Pi pinout diagram below:

RP2_Pinout

There is a fair amount of confusion around how the pins are referenced by both documentation and code, and this pinout diagram is one of the best available that shows the translation between the physical pin number, and the pin number as referenced by code. Due to the fact that hardware does occasionally change, most software libraries will reference the pin by function as opposed to location.

In our case, we are going to use GPIO 5 (physical pin 29) to drive the LED, and will use the 3.3v power (physical pins 1 or 17) as our power source. On the breadboard, plug the LED into an open space, and then connect the shorter leg to GPIO 5 and the longer leg to the resistor. (This is very important: the LED has two legs, one longer than the other, and it must be installed in the correct direction to work.) Connect the other side of the resistor to the 3.3v supply. The circuit might look like this:

20160422_013956739_iOS

Note that I am using a header board that automatically connects the power to the edges of the breadboard, so I simply ran the 3.3v from that instead of to pin 1 or 17.

Once the circuit is complete, you can build a very simple application to turn the light on and off.

Developing the Hello World Application

In Visual Studio, create a new C# Windows Console Application and name it PiLED.

image

Once Visual Studio creates the project, right-click on the references node and select Add Reference. Select Browse, and browse to the location where you compiled the wiringPi library earlier. Select wiringPi.dll and then Add it to the project references.

image

This will ensure that the appropriate references are present for the code we’re about to write.

The basic process that we need to follow in order to instruct the Raspberry Pi to turn the LED on and off is:

  1. Initialize the GPIO Interface and tell it how we will reference the pins.
  2. Tell the GPIO Interface that we are planning to write to it
  3. Instruct the appropriate GPIO pin to turn LOW (which enables current to flow from the 3.3v power through the resistor and LED, thus lighting the LED)
  4. Pause (so that we can witness the LED in an ON state)
  5. Instruct the appropriate GPIO pin to turn HIGH (and thus disable current flow through the LED)
  6. Wait for a key press
  7. Exit the program

The completed code (in the file program.cs in your project) looks like this:

using System;
using WiringPi;

namespace PiLED
{
    class Program
    {
        // ENUM to represent the various pin mode and pin value settings
        public enum PinMode
        {
            HIGH = 1,
            LOW = 0,
            READ = 0,
            WRITE = 1
        }

        const int RedLedPin = 29; // LED is on GPIO 5 (physical pin 29)

        // This is a console application with no GUI interface. Everything that happens will be in the shell
        static void Main(string[] args)
        {
            Console.WriteLine("Initializing GPIO Interface"); // Tell the user that we are attempting to start the GPIO
            if (Init.WiringPiSetupPhys() != -1) // WiringPiSetupPhys returns -1 on failure; calling it in this fashion
            // ensures that the GPIO interface is initialized and ready to work. We will use physical pin numbers
            {
                GPIO.pinMode(RedLedPin, (int)PinMode.WRITE);     // Set the mode of the GPIO pin to WRITE (the method requires an integer, so cast it)
                GPIO.digitalWrite(RedLedPin, (int)PinMode.HIGH); // Ensure that the LED is OFF
                Console.WriteLine("GPIO Initialization Complete");
                Console.WriteLine("Press Any Key to Turn LED On");
                Console.ReadKey(); // Pause and wait for the user to press a key
                GPIO.digitalWrite(RedLedPin, (int)PinMode.LOW);  // Turn the LED on
                Console.WriteLine("Led should be on");
                Console.WriteLine("Press Any Key to turn the LED Off and Exit");
                Console.ReadKey();
                GPIO.digitalWrite(RedLedPin, (int)PinMode.HIGH); // Turn the LED off
            }
            else
            {
                Console.WriteLine("GPIO Init Failed!"); // If we reach this point, the GPIO interface did not successfully initialize
            }
        }
    }
}

 

Deploying the HelloWorld Application

Once you have the code in Visual Studio, compile it by using the Build option. Once the code is compiled, you will need to use a file transfer program to copy the .exe as well as the wiringPi.dll file to a directory on the Pi. (I use FileZilla, and created a new folder in the ~/DevOps folder called PiLED.)

image

Once the files are copied, you will need to execute the application on the Raspberry Pi.

Executing the HelloWorld Application

Connect to the Raspberry Pi and then change to the directory where you copied the files (in my case ~/DevOps/PiLED) and execute the application by using the mono command. In this case, since we are interfacing directly with the hardware, we must run the application as root by using the sudo command. The command is sudo mono PiLED.exe

image

Once you start the application, you will see the “Initializing” message. Press a key and the LED will turn on. Press another key and the LED will turn off and the program will exit.

Congratulations! You’ve just written your first application that will become the foundation for further IoT application development.

Future posts in this series will build on what you’ve created here and will demonstrate some of the other protocols as well as how to send data “to the cloud” to truly experience IoT development.

The IoT Journey : Getting Started with the Raspberry Pi 3

If you are involved with “Big Data”, “Advanced Analytics” or “Cloud Computing”, you’ve likely heard all the hype around the “Internet of Things” or “IoT”. It used to be that IoT meant things like the connected refrigerator, or networked thermostats. Now it seems like IoT is being applied to just about anything that can connect and share information, be it “wearables” that track fitness information, RFID tags that track objects, or more complex ideas like Smart Meters and connected buildings. In short, IoT is currently in the midst of a major hype cycle, and as such everyone seems to be talking about it, or wondering what it’s all about.

Simply put, IoT at its core is a connected device (it doesn't have to be connected to the Internet) that shares some data about itself somewhere with something.

One of the most talked about devices over the last couple of years has been the credit-card sized computer, Raspberry Pi. The Raspberry Pi was originally designed to be used in the classroom to teach kids about programming and electronics, but due to its capability (there are those who use the Pi as their primary computer!) and price (you can buy a new Raspberry Pi for $35 in the US), an entire community of hobbyists and professionals use the Raspberry Pi for their work and play.

I have been working with a Raspberry Pi2 for the last year, but I had never gotten the WiFi adapter that I purchased to work properly, so I was really excited to hear that not only was the Raspberry Pi3 faster, it also had onboard WiFi. I want to utilize WiFi so that the device can be portable and used wherever I might be working.

If you want to learn how to develop IoT applications, there is no better place to start than with a Raspberry Pi and the Pi community. Be warned though, this community will suck you in, and you will find yourself immersed in a world of code, wires, sensors and frameworks before you know it!

The First Steps

One of the major announcements that Microsoft made with Windows 10 is that there is a version that will run on the Raspberry Pi. This version of Windows is called “Windows IoT Core” and is designed to be the OS layer for IoT devices. If you are a developer that focuses on Windows technologies, this is a natural platform for you to gravitate towards. Of course the “New Microsoft” embraces Open Source platforms as well as our own, so I thought it would be interesting to see how far I could extend my Windows development skills into the IoT world using the popular open source Linux Operating System. This post marks the beginning of that journey…

Step One – Obtain the Hardware

There are many places that you can buy a Raspberry Pi3, including Amazon, Fry’s Electronics (in the US) and RS-Online (in the UK). For this project, I decided to buy the kit offered by the Microsoft Store (I was already there, and had other things to buy, so why not?). The specific items I purchased are:

  • Raspberry Pi3 – This kit comes complete with an SD-Card with the NOOBS installer already configured. In my case though, the SD Card was NOT a class 10 card, meaning it did not actually work properly with Windows. It was fine for Linux, which is what I ended up using for reasons stated above, but it is something to look out for. The Microsoft store has subsequently fixed that issue so any new orders will have the correct card.
  • Raspberry Enclosure – I wanted to make sure that the little computer was properly protected and looked good, so I decided to buy the official enclosure. There are plenty of different cases available for the Pi, but this is the “official” one with the proper logo.
  • Power Adapter – This is an important piece! It’s important to note that power is supplied to the Raspberry Pi via the ubiquitous micro-USB port. Most of us have tons of these just laying around. I wanted to make sure though that I had a proper adapter that supplied the full 2.5A that the Pi will demand.

Once I returned home and unpacked everything, it all looked like this:

 

20160418_175839687_iOS

Once assembled (a very simple process: just drop the board onto the posts in the case bottom, and then snap the rest of the pieces together around it), it looks like this:

20160418_175940941_iOS

Once everything is assembled, you simply plug the SD card into the Pi, then attach a USB keyboard and mouse along with an HDMI cable to connect to the TV/monitor, and you're ready to go. You should also plug in a regular network cable if you can; it will give the setup utility the ability to download the latest OS.

20160418_181545787_iOS

There is no power switch on the Pi, so to boot it you simply plug the power adapter into your AC power.

Step Two – Boot and Configure

If you purchased a version of the Raspberry Pi with NOOBs included on the SD-card, you simply have to insert the SD card and boot. If not, you’ll need to first download a copy of the NOOBS installer and follow the instructions on how to set it up. It really is a simple process (you just format a new SD card and then copy the contents of the NOOBS archive onto the card), so nothing to be concerned about. Once the Pi boots, you’ll see the configuration page:

20160418_182055424_iOS

Once NOOBS has started and displayed the available OS images to install, make sure you select the appropriate region and keyboard settings at the bottom of the page, and then select the Raspbian image (should be the first one on the menu) and then select Install. This will start the installer (which will take about 10-15 minutes).

20160418_182209714_iOS

Once the installer completes, the Pi will reboot and then start Raspbian Linux and will then automatically login as the user “Pi”.

20160418_185014682_iOS

Once the Pi has restarted properly and Raspbian is up and running, you can configure the WiFi connection by clicking the network icon (upper-right of the screen, just to the left of the speaker icon) and joining an available WiFi network.

image

Once you are connected to the WiFi network, you’re ready to configure the remote access capabilities of the Pi, as well as changing the password of the Pi user.

Step Three – Configure for Remote Access

If you are a Windows-only person and haven’t spent much time using Linux, this is where things are going to get a little confusing. If you’re used to the Linux world, then most of what I put here is going to be extremely basic for you.

Linux was designed first and foremost to be a Server Operating System platform, and as such much of the work that you will do will be from a command line shell. The shell is accessed via the Terminal program, which is started by double-clicking on the icon (It looks like a monitor and is located to the right of the menu button in the GUI).

image

Once in the shell, execute the command (by the way, in Linux it’s important to note that case matters when typing commands. “Sudo” and “sudo” for example are not equal) sudo raspi-config which will start the configuration manager. The “sudo” command basically tells the OS that you want to execute the command as an administrator (“root” in linux terms).

image

Use the arrow keys to navigate to Advanced Options, and then select the SSH option. Enable the SSH Server. (SSH is the remote access tool used to enable a secure shell from one computer to another)

image

Once the SSH Server is enabled, you will also want to change the time zone (located in the “Internationalisation” settings) as well as the password for the Pi user. You can select the Finish option at this point and then type “exit” to close the terminal program / shell.

Note: There is a very good guide located here that explains how to enable the remote GUI access as well, along with instructions on how to obtain and download appropriate programs for Windows to access the Pi remotely. For me, I don’t use the GUI that often, but when I do I use a program called Ultra VNC Viewer which is very lightweight and simple. For my remote shell, I’m using the Insider Preview of Windows 10, which includes the Bash Shell. For those not in the insider program, you can use the PuTTY tool mentioned in the article or any remote SSH tool. For file transfer, I’ve found that FileZilla is probably the easiest to use.

Configuring SSH to use Certificate Authentication

I hate having to type passwords, and tend to do the extra up-front work to enable certificate authentication instead of passwords on any remote machine that I'm working with. If you don't mind typing passwords, you can skip this section; otherwise, start your preferred shell on the machine you will be connecting from and execute the ssh-keygen command. Do not enter a passphrase for the key. This will create a new key pair and install it into the ~/.ssh folder.

image

Once the key is generated, execute the ssh-copy-id command, using the IP address of the Raspberry Pi (either WiFi or Cabled, depending on which method you’re using to connect) as the destination and the user “pi” as the user. You will be prompted for the password of the Pi user, but after that you will not be prompted again.

image

Once this is done you are ready to test the ssh command to see if you can connect without password authentication. Type ssh pi@<ip address or name of remote machine> to connect:

image

Congrats! You now have a remote connection to the Raspberry Pi without using a password.

 

Step Four – Preparing the Development and Test Environment

Once remote access is configured and working, you are ready to prepare the Pi for IoT development and testing. I am a big fan of Microsoft Visual Studio (which you can download for free), and since most of the development work that I do is related to the various demos and proof-of-concept projects that I build for customer presentations, I didn't really want to learn a new environment to play with the Raspberry Pi. Plus, I thought it would be an interesting test to continue to write code in C# on Windows, then deploy and execute that code on the Raspberry Pi. As you will see, this turns out to be an almost trivial task (for simple applications; as later posts in this series will show, it does present some serious challenges as well).

The first step to enabling the execution of C# code on the Raspberry Pi is to download and install the Mono framework. The Mono project is an open source implementation of the Microsoft .NET Framework that allows developers to easily build cross-platform applications. It is very easy to install on the Pi using the built-in Linux package commands.

To install the Mono framework on the Raspberry Pi, first update all of the application repositories by using the apt-get update command. (remember to execute the command as root by using the sudo command)

image

Once the update is complete (depending on the version of Raspbian and the speed of your Internet connection, it can take as little as a minute or as long as 10 minutes) you can then install the complete Mono framework by executing the apt-get install mono-complete command. (again, don’t forget to run it as root by using the sudo command)

image

Once Mono is installed (This will likely take several minutes to complete) you are ready to develop and deploy your first simple application.

Step Five – Hello Raspberry Pi!

No “how to” article would be complete without a “Hello World” application, and I certainly don’t want to disappoint. To start, on your Windows PC, launch Visual Studio and create a new C# Windows Console Application. Title it “PiHelloWorld”.

image

Then in the Program.cs file, add the following code. Note that you are targeting any CPU and using the .NET 4.5 framework.

image
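The screenshot doesn't reproduce well here; the program itself is just the standard console template plus a single WriteLine, roughly as follows (the exact message in the original may differ):

using System;

namespace PiHelloWorld
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World from the Raspberry Pi!");
            Console.ReadKey(); // keep the console open until a key is pressed
        }
    }
}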

Then once you are happy with the code, select Build to build the application. Once the application builds without errors, copy the PiHelloWorld.exe file to the Raspberry Pi using a file transfer utility as discussed above. (I use FileZilla)

image

Once the file is copied, switch back to the Raspberry Pi and execute the code with the mono command. Remember that Linux is case-sensitive!

image

This will execute the application and prove that the app is actually running on Linux on the Pi, even though it was developed on Windows in C#.

Conclusion

This blog post details the start of a journey, and explains how to get a Raspberry Pi3 ready to develop and deploy IoT applications written in C#. Following posts in this series will explore some of the sensors available that can be connected to the expansion port of the Pi, and will also explain the process of connecting the Pi to Microsoft Azure and the Microsoft IoT Suite of tools available in Azure.