Tuesday, June 02, 2015

Awesome free services from Sense Tecnic: FRED and WoTKit

When Mike from Sense Tecnic announced FRED on the node-red Google group I jumped at the opportunity to try it right away: and I'm happy I did because it is great! Not only that, but the entire team behind it is awesome: the few issues people noticed were resolved in a matter of minutes. Of course, having played with node-red for a while now, as you can see in other posts, I have it installed on all my machines, including my Raspberry Pi, and I even tried it with Docker. But having an always-on node-red instance that I can go to whenever I want to try stuff and learn about node-red is simply awesome; in a matter of days I created a flow that gets my owntracks coords via mqtt, another that monitors a feed of newly released blues albums and highlights the ones I am interested in, one that shows me the Top 10 blues and jazz tracks from iTunes, and others.

One thing that may keep some from using FRED is the fact that, being a hosted environment, there are nodes that are not supported, like the Raspberry Pi or Arduino related ones. But Sense Tecnic has a solution for this as well: WoTKit. Using this platform one can publish sensor data, aggregate it and display it on dashboards. I have to admit that I've seen WoTKit before but I never sat down to read all the docs. However, when Mike mentioned it to me a couple days ago, I decided to give it a try. Same as with FRED, all I can say is that it is great! Not only has a lot of work gone into this platform, the team behind it is also really responsive and helpful. At first I started to go through the docs and, while they are a great read, I was stumbling a bit. There are tons of examples on the site for Python and curl but I wanted to get into it faster, preferably using node-red (of course, via FRED), and couldn't find anything. This was solved right away when Roberto from the same team sent me a link to an article exactly about this: node-red and WoTKit. The integration is perfect: as mentioned in the article, WoTKit expects a JSON structure of name:value pairs for sensor data, which is very easy to produce from node-red/FRED.

I won't go into much detail; please read the article I mentioned and all will be clear. Just to show how easy it is, though, let's go back to my owntracks flow. From owntracks, I get a message payload containing: lat, lon, tst (timestamp in seconds) and a few other fields. WoTKit expects the coordinates in fields named lat and lng, and also accepts an optional millisecond timestamp as a long in a field named timestamp. Given the original payload, all I had to do was add 2 more properties to it, like:

var payload = msg.payload;               // owntracks payload: lat, lon, tst, ...
payload.lng = payload.lon;               // WoTKit wants lng, owntracks sends lon
payload.timestamp = payload.tst * 1000;  // seconds to milliseconds
return msg;


then added a WoTKit output node to the flow, created the sensor on the WoTKit platform and, in a matter of minutes, I had a dashboard showing me the trail of the last 20 coordinates posted by owntracks. Simply beautiful!

Another example: I love Scrabble and I play quite a lot on my phone using various apps. One of these is WordFeud, for which the great people at Feudia are organizing tournaments. One of the things Feudia keeps track of is the players' ratings and rankings. When I first started using Feudia I decided to keep track of my rating, but I couldn't find a way to see the history, so once in a while I was saving the values in a Google spreadsheet with a couple of charts; FRED and WoTKit gave me an idea of how I could do this automatically.

I created a sensor with 2 fields, rank and rating, then created a simple flow using an http request node to read the Hall Of Fame page and an html node to extract the info from it. One of the reasons I went this route is that for a while now I have wanted to figure out how the html node works; it took me a while, but after several tries I was able to select the div enclosing the ranking table and extract all the values in its children into an array that looks like this:

[ "#", "1", "\n \n \t\t", "Deuxchevauxtje", "1956,80", "", "#", "2", "\n \n \t\t", "nte...", "1936,37", "", "#", "3", "\n \n \t\t", "Pix...", "1930,94", "", "#", "4", "\n \n \t\t", "por...", "1922,31", ""....]

In this array, I look for my username and then get the rank from the item 2 positions before it and the rating from the item 1 position after it (since only the first 200 players are ranked, if for some reason I drop below 200 and no longer find my username in the list, I simply won't post anything to WoTKit), as you can see in my simple flow below:

[{"id":"b622d78d.49dd28","type":"wotkit-credentials","nickname":"Default","url":"http://wotkit.sensetecnic.com"},{"id":"67bc4260.9843bc","type":"html","name":"","tag":"div.rnk-tm>","ret":"text","as":"single","x":323,"y":265,"z":"8e66ad17.71995","wires":[["e616f09.f19e91"]]},{"id":"2bc264d6.d43d9c","type":"inject","name":"once a day","topic":"","payload":"","payloadType":"date","repeat":"86400","crontab":"","once":false,"x":103,"y":30.090909004211426,"z":"8e66ad17.71995","wires":[["f5e6e137.0a192"]]},{"id":"f5e6e137.0a192","type":"http request","name":"feudia halloffame","method":"GET","ret":"txt","url":"http://www.feudia.com/wordfeud/halloffame.html","x":191,"y":149,"z":"8e66ad17.71995","wires":[["67bc4260.9843bc"]]},{"id":"e616f09.f19e91","type":"function","name":"extract user info","func":"var allInfo = msg.payload;\nvar payload = {};\npayload.found = false;\nfor (i = 0; i < allInfo.length; i++) {\n if(allInfo[i] === 'merlin13') {\n payload.found = true;\n payload.username = \"merlin13\";\n payload.rank = Number(allInfo[i-2]);\n payload.rating = Number(allInfo[i+1].replace(',','.'));\n }\n}\nmsg.payload = payload;\nmsg.headers = {\n \"Content-Type\":\"application/json\"\n};\nreturn msg;","outputs":1,"valid":true,"x":500,"y":187,"z":"8e66ad17.71995","wires":[["902fe43e.6fd018"]]},{"id":"b664ddf4.499b2","type":"wotkit out","name":"feudia sensor","sensor":"claudiuo.feudia","login":"b622d78d.49dd28","x":769,"y":210,"z":"8e66ad17.71995","wires":[]},{"id":"902fe43e.6fd018","type":"switch","name":"","property":"payload.found","rules":[{"t":"true"},{"t":"else"}],"checkall":"true","outputs":2,"x":578,"y":296,"z":"8e66ad17.71995","wires":[["b664ddf4.499b2","33f1575a.cc0ea8"],["33f1575a.cc0ea8"]]},{"id":"33f1575a.cc0ea8","type":"debug","name":"","active":true,"console":"false","complete":"false","x":759,"y":336,"z":"8e66ad17.71995","wires":[]}]

Added the WoTKit output node and all was done! Well, almost. The payload looked great in debug:

{ "found": true, "username": "merlin13", "rank": 150, "rating": 1691.32 }

but no matter what I tried I was getting errors from WoTKit when posting the data. Finally, I decided to change the debug node to print the entire message, not just the payload, and noticed the content-type was wrong:

{ "topic": "", "payload": { "username": "merlin13", "rank": 150, "rating": 1691.32 }, "statusCode": 200, "headers": {..."content-type": "text/html; charset=UTF-8" } }

So I added a header to my message as seen in the flow above:

msg.headers = {"Content-Type":"application/json"}

and everything worked like magic. Now my sensor gets data once a day and I also have a dashboard with the 2 charts, identical to the original ones in my Google spreadsheet. Disregarding the content-type issue, which was my fault, everything was done in less than an hour and now I have a dashboard that updates automatically, so I no longer have to gather data manually every now and then.

Of course, as you can see, my "sensor" is not really a sensor in the true sense of the word, but the concept is the same: instead of using owntracks or scraping the Feudia page, I could just as well have a sensor hooked up to my Raspberry Pi or an Arduino and send its data to WoTKit directly, or to FRED for further processing and then on to WoTKit. That way the gap between a physical device and a hosted IoT service is closed, with a hosted node-red instance in the middle. I don't know if I can convey how awesome this is!

If you want to see the beauty of FRED and WoTKit in action, you should definitely try these services: as mentioned in the title, not only are they awesome, they are also free. Huge thanks to the Sense Tecnic team, especially to Mike and Roberto, for all their awesome work, for offering these great services for free and for being so patient, helpful and responsive!

Friday, May 15, 2015

Johnny-five and node-red in a Docker container

Picking up where I left off in my previous post, I tried to get the simple Blink 2 flow (bottom of the page) working inside a Docker container. In the end I did, but I had to get over a couple of issues first.

On my first tries, nothing happened: every time node-red started, I would see "looking for connected device..." in the console and that was it. Seeing that johnny-five would not connect and getting errors when running the flow (cannot read property 'type' of null), I exited the container without stopping it as described in my previous post (Ctrl+PQ followed by Ctrl+C and docker exec -it mynodered /bin/bash), and tried to run the johnny-five board.js example app in node_modules/node-red/eg with:

node board.js

Again, "looking for connected device..." was displayed and nothing else happened no matter how long I waited. Wondering what could be wrong, I checked /sys/class/tty and indeed there was no ttyUSB0 there; I remembered then the --privileged docker run option mentioned in my previous post and in the Docker docs so I restarted the container using this command:

docker run -it -p 1880:1880 --privileged -v ~/my-node-red:/root/.node-red --name mynodered --rm claudiuo/node-red

Unfortunately this didn't change anything; the same message showed up when both node-red and board.js started. ttyUSB0 was now present, so I knew I was on the right track, but I still had no idea how to make things work. In a last attempt, I decided to modify board.js and specify the port explicitly as var board = new five.Board({ port: "/dev/ttyUSB0" }); as mentioned somewhere in the johnny-five docs, and this time board.js connected to my Arduino and the LED on pin 13 started blinking. This was an awesome moment!
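
For reference, a minimal board.js-style sketch with the port named explicitly looks roughly like this (the port path and pin are the ones from my setup, so treat it as an illustration rather than a drop-in file):

var five = require("johnny-five");

// auto-detection did not find the board inside the container,
// so point johnny-five at the serial port explicitly
var board = new five.Board({ port: "/dev/ttyUSB0" });

board.on("ready", function() {
    var led = new five.Led(13); // on-board LED on pin 13
    led.blink(500);             // toggle every 500 ms
});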

Next step was to modify settings.js and do the same thing, changing the default global context entry:

j5board:require("johnny-five").Board({repl:false})
to:
j5board:require("johnny-five").Board({port: "/dev/ttyUSB0", repl:false})

and after restarting the Docker container, this time I saw johnny-five connecting and the blink flow worked right away. I can't say I like this solution very much, because ttyUSB0 may change on a different machine (or maybe even if I plug my Arduino into a different USB port), but the fact that it works is awesome. (As a side note, not setting the port explicitly in settings.js works great with node-red outside Docker; I am not sure why that is the case.) Now I need to take the next step and figure out how to use callbacks in a flow (callbacks are key to some of johnny-five's functionality, but luckily the awesome node-red team added support for them in node-red 0.10.6 as described here).

One cool thing worth mentioning is that, while I was searching the web for solutions to my issues, I found out that a book about johnny-five was published just a few days ago, on May 8: Make: JavaScript Robotics: Building NodeBots with Johnny-Five, Raspberry Pi, Arduino, and BeagleBone - I'm sure it is great and I will be getting it very soon.

Wednesday, May 13, 2015

Node-red and Docker update

This is sort of a follow up to my earlier post about running node-red inside a Docker container. I say "sort of" because that post was about a Docker container for Raspberry Pi; this one is about a container to run on my laptop. A lot of the stuff here applies to Raspberry Pi directly, some needs changes (like the base image, for example). Since that post I learned some things that make building and customizing a Docker container for node-red a lot easier, all thanks to one of the main contributors to node-red, Dave C-J.

All this started because I wanted to build a Docker container with custom packages installed (in this case johnny-five), and I wanted to start from theceejay's master node-red image, thinking I would create my own Dockerfile to add the new packages and build a custom image. While looking on Docker hub I noticed theceejay's write-up for his other node-red image, which is really awesome: it explains things I didn't know about (like using package.json along with the Dockerfile), which makes installing new npm packages very easy; it also talks about overriding the settings.js file when building the custom image instead of at runtime, as I was doing previously. The write-up links to a github repo which I forked and cloned on my local machine, then modified a little bit by adding johnny-five to package.json. I then built my own custom node-red Docker image (basically identical to theceejay's with the addition of the johnny-five package) and started the container, mapping a local directory to /root/.node-red so I get a copy of the flow.json and all flows saved in the library:

docker build -t claudiuo/node-red .
docker run -it -p 1880:1880 -v ~/my-node-red:/root/.node-red --name mynodered --rm claudiuo/node-red

Once the container started, Ctrl+PQ followed by Ctrl+C exits the container without shutting it down, after which I was able to connect to it using:

docker exec -it mynodered /bin/bash

and confirmed johnny-five was installed. What's more, looking at the node_modules/node-red/settings.js file, I noticed that jfive and j5board entries were already added to the global context, commented out. This was a surprise to me: I had no idea the latest version of node-red comes with these entries already included; this is really cool.

In my previous post, I mentioned that I was also placing a settings.js in my local dir to change the name of the flow file: this is not needed because package.json specifies flow.json as the filename. However, the write-up also mentions: "This also copies any files in the same directory as the Dockerfile to the /usr/src/app directory in the container… this means you can also add other node_modules or pre-configured libraries - or indeed overwrite the node_modules/node-red/settings.js file if you wish." So I made a copy of the settings.js from the dir mentioned above, uncommented the 2 johnny-five entries in the global context, placed the file alongside the Dockerfile as seen in my repo clone and rebuilt the image. I started the container again, checked, and indeed the new settings.js was in place.
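
For reference, the uncommented part of the global context section in settings.js ends up looking roughly like this (a sketch using the entry names from my file; yours may differ):

functionGlobalContext: {
    jfive: require("johnny-five"),
    j5board: require("johnny-five").Board({repl:false})
}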

I was thinking at some point of publishing my new custom node-red image to Docker hub, but since building it from scratch is as easy as cloning the repo and running docker build, I won't do it. In fact, rebuilding the image takes care of upgrading node-red as well, so I'd rather not publish an image which will be out of date when the next release comes out.

Now I need to figure out how to use johnny-five with node-red and build some cool little robot; at this point I have no idea how but I will start with the notes in the second half of this page and go from there. Until then, again, big thanks to Dave C-J for all his awesome work and help.

Friday, May 01, 2015

HC Bluetooth module - quick notes

A while ago I bought a cheap Bluetooth module off ebay and for some reason I always thought it was an HC-05 module: not only is this what the ebay page said, but also, a long time ago when I tried Amarino on my phone, it connected to the Bluetooth module and its name was HC-05. It works great with Amarino and my custom sketches, but only at 9600, which seems to be the default (most people on the web say the default is a higher baud rate, but I discovered this was the case for me by trying different settings in a test sketch). Firmata, however, needs 57600 (in my opinion, both the BT module and the Firmata sketch set at 9600 should work fine, but I tried and for some reason it doesn't).

Thinking I have an HC-05, I tried to change its default speed to 57600 by following this instructables - my module didn't have a wire on the KEY pin so I soldered one. However, after all connections were made and I ran the sketch, I noticed that no commands worked, with one exception, "AT+NAME=MYBLUE" - it was the only command that received a response from the module, "OKsetname". To check if something changed, I paired the module with my phone and to my surprise the name was "=MYBLUE", not just MYBLUE, so something was not quite right.

Digging more on the web, I found another instructables. The module I have doesn't have the same markings on the back (mine says V1.04) and it has 6 pins (only 4 with a connector soldered) so it's not really the same thing; however, the name AT command there is "AT+NAMExxxx", which is exactly what happens in my case (where = becomes part of the name). Unfortunately, again the only command that works is the name one; none of the others get an answer, so I am really at a loss as to what to do to change the baud rate of my module.

I think for now I will leave it at 9600 - it works with Amarino and it works with my custom sketches, so at least it is not totally unusable. I would like to use it with Firmata so I can do some Scratch for Arduino or Snap4Arduino or Johnny-Five without having the Arduino connected to the laptop, but maybe I'll just buy another BT module, one that I may be luckier with and be able to change its speed.

[Update] I tried again: changed the StandardFirmata sketch to 9600, uploaded it to my SparkFun RedBoard (UNO compatible), installed my custom Bluetooth and RGB LED shield, and tried Arduino Commander again - this time it connected and the RGB LED works great, both as digital and analog (PWM) output! I don't know what is different from last time, I am puzzled but happy. My Windows machine was able to connect to the module; hopefully my Linux Mint box and Raspberry Pi will work as well so I can be on to the next step, probably a little Johnny-Five robot.

Wednesday, April 29, 2015

Cloud9, resin.io, Cylon.js - all coming together

As I mentioned in my previous post, I am really happy I discovered Cylon.js and was able to make basic stuff work. This is all cool, but I wanted to be able to interact with my robot over the net, so I thought it was time to try the API plugins the framework offers. To make things more fun and learn more in the process, I decided to use resin.io for deployment: this way I can update the code and test changes without being close to my Raspberry Pi all the time. I knew it was possible, but I had never tried to have a git project with multiple remotes; this is the perfect time for me to learn how this works since resin.io works by pushing code to the resin remote, but I also want to be able to push changes to github. And because I don't want to be tied to my local machine, I decided to use Cloud9 for this project and push the code from there directly to both resin and github - which works great as you'll see below. By the way, Cloud9 is similar to Codenvy, but the support for node.js is better (at least from what I know at this time) and having access to the entire VM and the command line makes it awesome; it is like working on a local machine, but a lot better since it is in the cloud and accessible via a browser from anywhere.

This post is not really about the code itself: it is a work in progress that can be seen in my repo; instead, this post is about all of the tools coming together with a special nod to resin.io.

To start, I read a lot of the Cylon.js docs and was able to put together a test robot without an actual device (using loopback instead), to which I plan to send commands using one of the API examples on the site; as a side note, the robot code only has generic commands like cmd1, cmd2 and so on instead of commands like toggle and turnOn, because this setup lets me change the actual code a command executes while a client may never need to change. Going back to the API idea, I decided to start with the simplest API plugin (HTTP) even if there are no examples for it on the site. Unfortunately, because I want to access my RasPi from outside my network, I don't know the IP (which will be assigned dynamically by resin) and the HTTP API needs to be configured with an IP; I am pretty sure there are solutions for this, but instead of digging more, I decided to try the MQTT API, which is tied only to a broker and doesn't need a fixed IP. The client code is also very simple at this time, but I hope it will evolve as I find some time; in the end, though, I plan to issue the API commands via node-red, which integrates very easily with MQTT.
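
To give an idea of the shape of this, here is a minimal sketch of a robot with a loopback connection and the MQTT API plugin; the broker URL and the command bodies are placeholders, not my actual code (that is in the repo):

var Cylon = require("cylon");

// expose the robot's commands over MQTT instead of HTTP,
// so only a broker is needed - no fixed IP
Cylon.api("mqtt", {
    broker: "mqtt://test.mosquitto.org"
});

Cylon.robot({
    name: "testbot",

    connections: {
        loopback: { adaptor: "loopback" }
    },

    devices: {
        ping: { driver: "ping" }
    },

    // generic command names, so the client never needs to change
    commands: function() {
        return { cmd1: this.cmd1, cmd2: this.cmd2 };
    },

    cmd1: function() { console.log("cmd1 called"); },
    cmd2: function() { console.log("cmd2 called"); },

    work: function() {
        // nothing to do here; everything happens when a command arrives
    }
});

Cylon.start();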

It was very easy to start with Cloud9: I connected it to my github account, then created a new node.js workspace; there are plenty of docs on the site. And since Cloud9 gives access to the underlying OS, it was also easy to install libusb-dev (needed for Digispark as mentioned in my previous post) and all the node modules I need to start with; here are the commands for reference (the last module is only needed for the client, and I used the --save option so all the modules are registered automatically in package.json):

sudo apt-get install libusb-dev
npm install cylon cylon-digispark cylon-api-mqtt mqtt --save


Next thing was to add resin.io as a secondary remote which was pretty easy:

git remote add resin git@git.resin.io:username/application_name.git

Then everything works as normal: git add/commit/push. The only special thing I needed to do was figure out how to install libusb-dev in the resin image. After some searching on the web, I found out I can add a "preinstall" script to package.json. This was easy, but it took me quite a while to figure out how to install this library because the only one found by apt-get was libusb-0.1-4 and not libusb-dev, which I needed. After a lot of fiddling, I asked in the resin.io forum and the answer was quite simple: run apt-get update before installing libusb-dev, as seen in the current package.json. A new push to the resin remote built the image without errors this time. Great!
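
For reference, the preinstall hook in package.json is a one-liner along these lines (a sketch, with the rest of the file omitted):

{
  "scripts": {
    "preinstall": "apt-get update && apt-get install -y libusb-dev"
  }
}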

The coolest thing is that when I built this image my Pi was offline but as soon as I plugged it in hours later, the new image was updated automatically - I know this is documented but it was so neat to see it working. This is so awesome! The resin.io team really thought of everything and I can't say how happy I am to be using their service. The small complaints I had in my original post are really minor, resin.io is really a great way to update your Pi code remotely. Again, big thanks to the entire team!

Hopefully now that all pieces are in place, I will find some time to write a robot that actually does something, and command it via MQTT from node-red. Soon...

Wednesday, April 22, 2015

Cylon.js - an amazing robot and IoT framework

A few days ago, on a blog I follow, I noticed an article about the release of Cylon.js 1.0. I had never heard about Cylon.js before, but the article sounded very interesting, mentioning robots and IoT, javascript and also support for 35 platforms, so I decided to check it out. I am really happy I did: I have to say from the start that it is an amazing framework with a great design and tons of supported platforms and drivers, which make it really useful for tons of things - not just robots as the name implies, but basically anything related to physical computing and the Internet of Things. It makes it incredibly easy to command robots and devices, and the API plugins it already comes with (http, mqtt and socket.io) make it really easy to connect and interact with these devices online. Really great!

Like I said, there are tons of platforms supported (basically anything I can think of is already supported), but since I happened to have a Digispark with an RGB LED shield handy from when I last played with it and node-red, I decided to give it a try. It would probably have been easier to start with an Arduino and avoid a few hiccups, but in the end I am very happy I gave it a try because it worked really well.

The Digispark documentation is really good, but since I ran into a couple of stumbling blocks on my Linux Mint machine (quickly clarified on the IRC chat by a very helpful user) I decided to quickly document the steps here; maybe they'll help somebody some day.

As mentioned in the Ubuntu section of the Digispark docs, the first thing to do is install the cylon-digispark npm module. The next commands use "gort" and, while this may not be an issue for anybody else, it was for me; I am not familiar with it and apt-get didn't find it, so I stumbled a bit with the next step. However, when I asked about it on the chat channel I got a reply right away, saying I need to download it from here. The same user also mentioned that after I install it, I should run

gort digispark set-udev-rules

which was a great pointer because the docs were not very clear about what to run next (this one or upload), so this helped me a lot. The next command in the docs, though, is

gort digispark upload

which didn't work for me no matter what I tried. In the end I looked at the output of the command and decided to try instead

gort digispark install

and this worked right away. Then I cd'ed to the examples dir in the cylon-digispark module and the first example I tried, blink, worked like a charm. After trying most of the examples, all I can say is that Cylon.js is indeed awesome and in the end pretty easy to get going, with just a couple of sticking points, mostly due to my lack of Linux experience, I'm sure.

A big thank you to the Hybrid Group team behind this great project!

Friday, April 10, 2015

Codenvy and Heroku integration: simply beautiful!

Reading through the Codenvy docs I noticed Heroku being mentioned in the PaaS Deployment section and, since I deployed a Java app there a while back, I decided to give it a try. The most interesting idea was the fact that I can copy the app directly from Heroku to Codenvy in just a couple of steps, as described in this page; the really cool thing is that I deployed this app a long time ago and I don't even have my source code anymore - I know I can clone the app at any time to get it back, but doing it this way, I can have the app ready for more development, no need to set up the project again locally in Eclipse. The steps I mentioned were:
  • create an SSH connection between Codenvy and Heroku: just generate a new key for Heroku, copy it and manually save it to my Heroku account;
  • import the existing application: copy its Git URL, then in the Codenvy workspace, File > Import from Location and paste this URL.
That's it: it can't be easier than this! What's even better is that having imported the app, all project Git history and settings are saved, so there is no need to add Heroku as Git remote – it is already there.

After I imported the app I tried to run it on Codenvy using the Jetty + Java runner but it didn't work. In the end this issue wasn't a problem with Codenvy but with the pom.xml in my project; I am just mentioning it here in case someone else runs into this issue.

When trying to run the app I noticed the runner was creating an application.jar which was deployed under /home/user/jetty9/webapps/ROOT, which is the correct location; but a jar is not a webapp, and indeed invoking my servlet in the browser didn't work. After trying a lot of things and changing project settings, I took a better look at the pom.xml file and noticed packaging was set to jar; I changed it to war and this time the webapp was deployed correctly and it worked right away, like magic. The main problem seems to be the fact that I created my app originally using the heroku-cli tools, which generated a pom.xml file with packaging=jar; things have now changed and the new pom.xml file used by default (as seen in this repo) doesn't specify packaging anymore. I know this should mean the default of "jar" is used, but it makes a big difference on Codenvy: with no packaging specified the webapp deploys correctly on Codenvy (and it also deploys correctly on Heroku, as I later tried). So if you have an older Java app created from the Heroku template, remove the packaging directive and it will all work.
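
In other words, the fix boiled down to one line in pom.xml (or removing it entirely, as the newer template does):

<!-- before: generated by the old heroku-cli template -->
<packaging>jar</packaging>

<!-- after: deploys as a webapp on Codenvy (and still works on Heroku) -->
<packaging>war</packaging>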

After all this was fixed, deploying the modified app to Heroku was a breeze: just git add/commit/push. I then started the app on Heroku and it worked great. Love it! Thanks again to the Codenvy team for all the awesome work they do!

Thursday, April 02, 2015

Weaved: the perfect tool to access my remote Raspberry Pi

I already mentioned Weaved in passing in a previous post but the latest version is so awesome that I thought it deserves its own article. As noted in my update to that post, after updating to 1.2.8, I was able to setup a TCP service on port 1880 (node-red default editor port), connect to it and from the first try node-red editor worked as expected. I am so happy I didn't give up and I tried again.

And today I had another chance to see the amazing power of Weaved: I had my Raspberry Pi A (so no wired connection available, only a wifi dongle) with me at work, plugged it in and it connected right away to the guest wireless network. At least, I thought it connected, because I've done this before and it worked without issues. But the app on my phone I normally use to find IPs, Fing, was not able to see it at all. I know that since the last time I connected this Pi to the guest wireless the settings have been changed: not sure how, I have no network skills, but I know most services are now blocked (I assume the discovery service Fing uses, if there is anything like this, is blocked as well). I almost gave up, but then I remembered I had the SSH service from Weaved installed on the SD card, so I decided to give it a try: I logged in to my account and indeed the Pi was reported online; I got the connect info and putty connected right away (also, the My Devices list showed the real IP, so I was able to check that it was indeed an IP that was not showing up in the Fing scan results). How awesome is this: a device not visible and not accessible even by another device on the same network was accessed through Weaved without issues! Really amazing!

As far as Weaved pricing goes, the last info I've seen, which is supposed to be valid after the beta program ends, was something like this (these terms may change since they are not published on the website right now):
- Personal plan - FREE: 2 devices, up to 5 services, 300 generic notifications/month, mobile apps (iOS already out, Android in beta), no Pro features;
- Maker plan - $25/year: 5 devices, unlimited services, 1500 custom notifications/month, mobile apps, Pro features (longer connection times, device sharing, more storage);
- Maker Pro plan - $99/year: 25 devices, unlimited services, unlimited custom notifications, mobile apps and libraries, Pro features.

The free plan is enough for me personally but if I will decide to upgrade it really won't be an issue to pay a bit over $2 a month for all the added features. There are IoT related services out there charging way more for a lot less.

Weaved does more than just allow connections to remote Raspberry Pis (and recently, BeagleBones and even Intel Edison boards), just read some of the articles on their blog and you'll see what I mean. But for me and probably others as well, Weaved is going to be the main way of accessing a Raspberry Pi remotely, which is amazing in itself.

I hope all this will convince anyone who reads this to give Weaved a try. As for me I owe huge thanks to the Weaved team for all their great work!

Wednesday, March 18, 2015

Docker + node-red = awesome!

When I first heard about Docker a few weeks ago, I realized how cool it was so I started reading about it right away. Because I kept talking about it, I got tasked at work to look into how we can use it and a couple weeks later, I was deploying 2 linked containers for my team that definitely made our development easier, even if all we are using are database containers, at least for now.

But my first thought when reading about Docker was how could I use it on my Raspberry Pi so I don't keep mixing stuff on the same SD card (which sometimes is not a very good idea, like when I messed up my node-red because I installed an IDE that used an older version of node.js). I know most of the software packages can co-exist without issues but I like to keep things separate so I have a bunch of SD cards now, one for Java projects, one for node-red and a couple more. Docker seems to be the answer to this - at least for my Raspberry Pi B, the As I have are a bit too constrained for Docker but they are dedicated to other projects anyway.

So, I started looking around and the first site that popped up was the excellent resin.io blog, specifically this article. It sounded awesome but it required Arch Linux, which I am not familiar with, so I decided to wait a bit. As I was researching Docker for work I happened to find a new blog article at hypriot.com that talked about a new Docker compatible image created by this awesome team. This was so great that I immediately cleaned up an SD card and installed this image. As advertised, it worked from the first try: I can't tell you how happy I was to see Docker running on my Pi. And these guys didn't stop at creating the main SD card image, they also published several Docker images made for Raspberry Pi - like I said, an awesome team. Thank you so much for all you do!

I started playing right away with Docker and couldn't wait to come back to it the next day. To my disappointment though, after I restarted my Pi, I kept getting errors no matter what docker command I tried. Given my lack of experience, I thought I broke something (because I also noticed that after changing the password, I started to get a warning every time I used sudo, but it turns out this was easily fixed according to this post by adding 127.0.0.1 black-pearl to /etc/hosts), but after quite a lot of digging, I found a post mentioning how to restart the docker daemon - very simple, and in hindsight I realize that I should've thought of it:

sudo /etc/init.d/docker start

Now that all was well, I started to work on what I really wanted to do from the start: create a node-red image, because there wasn't one when I started looking into Docker. Of course, there are several node-red images now, including this one, and since Dave C-J is one of the creators of node-red I trust his image the most; but this image is not for Raspberry Pi. I started to work on my own image and was able to create something fast, but after that I spent a few long hours trying to make the rpi-gpio nodes work, without success. In the end, I published my image on Docker Hub, but the fact that the rpi-gpio nodes didn't work was bugging me, so I ended up deleting it; I kept the Dockerfile in this gist so I can redo it at any time if I ever feel the need. Which I don't think will happen, because this morning, doing yet another search on Docker Hub for "rpi nodered", luck was on my side and I found this image from nieleyde; there is no Dockerfile but I pulled the image immediately and it works great! Thank you so much, nieleyde!

Very important to note in the docker run command provided by nieleyde is the --privileged option (some notes here). When I first started the container, I noticed in the log that the userDir is /root/.node-red; I want to have access to the flows files and also to be able to install more nodes easily without messing with the original image, so I start the container with a volume option (as detailed in the "Overriding Dockerfile image defaults" section of this article):

docker run -it -p 1880:1880 --rm --privileged -v /home/pi/.node-red:/root/.node-red nieleyde/rpi-nodered

This way, everything that happens in the real /root/.node-red user directory is mirrored in my /home/pi/.node-red dir and the other way around, so the flows files, new nodes and library files are shared between these directories. I am not sure if this is the best way, but it works for me (well, I still need to check the newly added nodes part, but the flows file works as expected, so I hope new nodes will as well; settings.js also works fine, as I will mention later).

The second thing I did to make things easier: the flows file by default is named flows_<machine_name>.json, for example flows_519c0741e1f0.json. The problem is that the machine name is the actual container short ID and it changes every time the container restarts, so the previous flows are not accessible anymore (the file is still present but is not read because the name doesn't match the machine name anymore). I tried naming the container when running it using the --name option, but the name is not used for the flows file, only the container ID is used. To fix this, now that I have access to the user directory via the volume option, I placed a settings.js file in /home/pi/.node-red that changes the flows file name to flows.json. And it worked as I hoped it would: my file overwrites the settings.js file in the node-red install, as described here. Now each time I restart the container the flows file is the same, so all my saved flows start immediately; this can easily be seen in the node-red logs: Flows file : /root/.node-red/flows.json.
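
For reference, the only property that needs to change in that settings.js is the flow file name; the relevant bit looks roughly like this (everything else in the stock file stays as-is):

module.exports = {
    // ... all the other stock settings, unchanged ...

    // fixed flow file name, independent of the container ID
    flowFile: 'flows.json'
};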

In conclusion, Docker is really awesome and due to teams like hypriot and users like nieleyde Docker on Raspberry Pi and node-red in Docker are great to use! Thanks to everyone for all the great work!

Thursday, March 12, 2015

Reformat Raspberry Pi SD cards

If you are using Windows and ever wanted to write a new image on an SD card previously used with Raspberry Pi, you probably noticed the card looks much smaller than it really is, only a few tens of MB; if I understand correctly this is because we only see the size of the boot partition and not the other Linux partition. When I first ran into this issue, I reformatted the SD card on my Linux Mint machine which worked quite well. Second time though I was away from home and had to use a Windows 7 machine. After some digging on the web I found out I can use diskpart which comes with Windows and works quite well, but there are several steps that need to be done:

C:\temp>diskpart
DISKPART> list disk

This will list all your drives, including the SD card; you need to be very careful to select the SD card and not your hard-drive, usually it is easy to recognize the SD card because its size is only a few GB (depending on the card you use) as compared to the HDD which is usually much larger.

DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> list part
......... list of partitions .........
DISKPART> select part 1
Partition 1 is now the selected partition.
DISKPART> delete part
DiskPart successfully deleted the selected partition.

Now you have to repeat the last 2 steps (select/delete) for as many partitions as you have, the default is 2 partitions so normally you have to do this only once more. After the last partition is deleted, you create a primary one and exit:

DISKPART> create part pri
DiskPart succeeded in creating the specified partition.
DISKPART> exit

Finally, remove the card and re-insert it, and Windows will prompt you to format it; no need to do a full format, a quick format works great. This process works very well for me, I've done it a lot of times, but it is quite involved.

Last night I ran into another great post on the excellent Raspberry Pi Spy website about how to format Raspberry Pi SD cards using SD Formatter. I won't detail the steps here, the article I mentioned is really good and I do want to thank Matt for such a great post!

Tuesday, March 10, 2015

node-red is best for... everything

Like I said in a previous post, node-red is becoming for me more and more the first choice for all kinds of projects I'm doing. I can definitely write code for all these little things but every time I start a project I ask myself first if it can be done in node-red.

Case in point: yesterday I remembered the Digisparks I got a while back from kickstarter and decided to play with them. As you may know, a Digispark is a tiny Arduino-like device, not 100% compatible (because it uses the ATtiny85 controller unlike Arduino's ATmega168/328) but plenty powerful; one of mine has an RGB shield. When I first got it from the kickstarter project I downloaded the example code from github and after quite a lot of fiddling I got it to work, both on my Linux Mint laptop and my Raspberry Pi. But that was a long time ago, so now that I wanted to play with it a bit more I decided to see if I could make it work with node-red. My first thought was that I could probably use the exec node and issue the same DigiRGB.py command I did last time.

But a quick search pointed me to the digirgb node. I quickly installed it but got make errors related to node-hid. After quite a lot of time spent on the web trying to figure out what might be wrong with my environment, and after installing quite a few extra libraries and packages I found mentioned here and there (like libssl-dev and build-essential), I did what I should've done from the start: read the error message more carefully; this is how I noticed it said libusb.h was missing. I tried:

$ sudo apt-get install libusb-1.0-0

but it was already up to date. Next I tried:

$ sudo apt-get install libusb-1.0-0-dev

and to my surprise this time npm install finished without errors. I connected my Digispark with the RGB LED shield and checked that it was visible: $ lsusb -> shows Bus 001 Device 005: ID 16c0:05df VOTI

Restarted node-red and the digiRGB node was right there. A quick test with an inject node sending a "100,100,100" string turned on the LED from the first try. I know by now I should not be amazed any more that node-red is so great but I still am, every time - it is simply awesome!

Monday, March 02, 2015

node-red static directory

This weekend I updated to node-red 0.10.4, which has a major change: the userDir where all the user files are stored is now outside of the node-red install dir, by default being the $HOME/.node-red dir (you can override it with the --userDir option). The steps to upgrade are documented really well here. Another big change is that updating using git pull is not recommended any more; instead, the preferred way is

    sudo npm install -g node-red

This installs node-red in /usr/local/lib/node_modules/node-red, which is accessible to the root user but should really not be used to store user files. And there is really no reason to do so: the extra nodes can be installed directly in the userDir location, the flows and .config.js are saved there as well, settings.js is read from this directory if present, and flows exported to the library are stored there too. There is only one thing that seems like it needs to go in the node-red install dir: static files.

For example, in my case, I have a flow that responds to URLs like /books/:type/:topic/:genre/:num and, in order not to have to fill in the type, topic, genre and num params every time, I created a simple HTML page saved in /public/books/index.html that sends various values for these params. Now that node-red is separating the user content from the actual node-red content, I would like this page to be in userDir as well. So I started going through the configuration docs trying to figure out how to do it. At first glance, at least the way I read those docs, the only way to do so is to use the httpStatic property; the problem is the docs say "When this property is used, httpAdminRoot must also be used to make editor UI available at a path other than /." I tried to change both httpStatic and httpAdminRoot and couldn't find a good solution until I decided to leave httpAdminRoot unchanged and just set httpStatic to /home/pi/.node-red/public - and it worked. So now I have index.html in .node-red/public/books that loads in the browser as expected at http://<node-red ip>:1880/books/ while the actual <node-red install dir>/public dir is unchanged and the editor still works at http://<node-red ip>:1880.
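
For reference, this amounts to a single entry in settings.js, something like this (httpAdminRoot is simply not set, so the editor stays at /):

module.exports = {
    // ... other settings unchanged ...

    // serve static files from the userDir instead of the install dir
    httpStatic: '/home/pi/.node-red/public'
};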

I thought this was a problem in the docs but according to Nick the docs are just trying to prevent possible problems so this worked all along. In any case, this is not a big deal in itself but it took me a while to figure it out so I want to document it, to remember it later.

Sunday, February 15, 2015

node-red update, new rpi-gpio node is great

A few days ago a new version of node-red was released: 0.10.1, details on the node-red blog. Tons of new features are in this release as detailed in the article I just mentioned but for me one stands out: the new rpi-gpio node.

A couple months ago I tried to use a PIR sensor with node-red on my Raspberry Pi, but using interrupts didn't work as expected, as I mentioned here. The new rpi-gpio node was completely rewritten by Dave C-J as detailed in this thread in the node-red Google group. As mentioned here and also in the node-red release notes, the new node uses the built-in RPi.GPIO python library (part of the newer Raspbian distributions) instead of the wiringpi library; to make sure you have the necessary files do:

sudo apt-get update
sudo apt-get install python-dev python-rpi.gpio
~/node-red/nodes/core/hardware/nrgpio ver 0

Last command should reply 0.5.8 (or better) which is the version of the RPi.GPIO library.

I installed the new node-red version and followed the steps mentioned above (even if I may not have had to since I am using the latest Raspbian, released last year in December) and the last command returned 0.5.9. I connected the PIR sensor to the Pi as mentioned in this ModMyPi article: VCC to +5V [Pin 2], GND to GND [Pin 6] and OUT to GPIO 7 [Pin 26]. Added a new rpi-gpio in node to the editor, configured it to use pin 26, deployed and it worked from the first try. Simply awesome! Even more awesome: in the new node-red version the node status option is on by default so I didn't even need to add a debug node: I can see the rpi-gpio node's status reporting 1 as soon as something moves in front of the PIR sensor and 0 when it resets.

One great thing about the PIR sensor I am not sure I ever mentioned is this: even when the motion sensor is powered with 5V, the output voltage on the data pin is 3.3V (high) and 0V (low); I found this info in several places, in the ModMyPi article mentioned above, on the Learn Adafruit website and a couple more places like this instructable. This makes it perfect to use with the Raspberry Pi without any worries about the voltage applied to the data pin.

To end this short post, there is something I wanted to mention for a while: even if I keep finding new frameworks and services out there (for example, a couple days ago I discovered Lelylan and OpenHAB is on my list of things to study deeper), node-red is the service I keep coming back to every time I need to write an app on my Pi, GPIO related or not: it may not have the fancy charts other services have and it may not have the rule engine others do, but nothing beats node-red when you need to come up with real functionality fast, to connect services easily without having to write new code from scratch each time. I really love it! Give it a try - you will love it, too.

Thursday, January 22, 2015

IBM Bluemix and node-red

I heard about Bluemix a few months back, but only a few days ago, after watching a couple of YouTube videos about node-red running in Bluemix, did I decide to give it a try. There is a 30 day free trial after which payment for some services is required; however, there is a free allowance after the trial of 375 GB-hours free for each IBM-provided runtime and 375 GB-hours free for all other runtimes combined. According to the pricing page, GB-hours = GB of RAM per app x number of app instances x total hours running, so 375 free GB-hours per month basically means one app using up to 512 MB of RAM (or more apps with a total of 512 MB of RAM for all of them) running non-stop, which is pretty cool.
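
To put a number on that: 0.5 GB x 1 instance x 744 hours (a full 31-day month) = 372 GB-hours, which just fits under the 375 GB-hours free allowance.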

Starting with the node-red boilerplate was a bit bumpy at first: the videos I watched showed different steps, but the service appears to have changed a little since then; currently the steps are: after login, click Create an app, select Web, click on Browse Sample Apps and then Explore Samples, choose Node-RED Starter and finally name your app and click Create. In the end though, creating my first app was easy, and using this boilerplate gives you the node.js SDK with node-red preinstalled along with 2 services: Monitoring and Analytics, which is free not just during the trial period but after that as well, and Cloudant NoSQL DB, which I believe also has a free plan after the trial period - I need to look into it if I decide to use it after the trial. Also, this app comes with some pretty cool nodes, like ibmiot, designed to connect the app with IBM's Internet of Things module, and others.

The coolest thing with this setup is that now I have a node-red app running in the cloud, complete with access to the editor, so my Raspberry Pi can take a break from running non-stop and there is no more need for port-forwarding to access my Raspberry Pi node-red from outside my home network. I tried doing the same thing using Weaved but without luck: I was able to use Weaved to connect to port 1880 and the node-red editor came up without issues; however, saving flows didn't work - I guess the underlying code didn't like this setup. Another option to run node-red in the cloud is documented by Chris Mobberley on his awesome Hardware_Hacks blog, but I never got around to trying it. I assume running node-red in the cloud should also be possible using Heroku, for example, since Heroku is great for hosting node.js apps, but again, I never tried.

I have now a flow running non-stop in Bluemix (that I will go over in a future post) and all I can say is that I am very happy I gave Bluemix a try and I am sure I will continue using it beyond the free trial. If you haven't used it, it is definitely worth checking out. Big thanks to IBM for providing such a great service, to Nicholas O'Leary and Dave Conway-Jones and others who I believe are responsible for the node-red boilerplate and provide great support in the Bluemix forum, and also to everyone else that steered me to Bluemix through their videos and comments.

[Update] Since Weaved released a new version recently, I decided to give it another try. I uninstalled the previous version 1.2.5 and installed the new 1.2.8, setting up a TCP service on port 1880 (node-red default editor port), connected to it and this time the flows saved and worked as expected. This is really awesome! Huge thanks to the Weaved team for all their great work!