Showing posts with label cloud. Show all posts

Tuesday, June 02, 2015

Awesome free services from Sense Tecnic: FRED and WoTKit

When Mike from Sense Tecnic announced FRED on the node-red Google group, I jumped at the opportunity to try it, and I'm happy I did because it is great! Not only that, the entire team behind it is awesome: the few issues people noticed were resolved in a matter of minutes. Having played with node-red for a while now, as you can see in other posts, I have it installed on all my machines, including my Raspberry Pi, and I even tried it with Docker. But having an always-on node-red instance where I can go to try stuff and learn about node-red is simply awesome. In a matter of days I created a flow that gets my owntracks coordinates via MQTT, another that monitors a feed of newly released blues albums and highlights the ones I am interested in, one that shows me the Top 10 blues and jazz tracks from iTunes, and others.

One thing that may keep some from using FRED is that, being a hosted environment, there are nodes it cannot support, like the Raspberry Pi or Arduino related ones. But Sense Tecnic has a solution for this as well: WoTKit. Using this platform one can publish sensor data, aggregate it and display it on dashboards. I have to admit that I'd seen WoTKit before, but I never sat down to read all the docs. However, when Mike mentioned it to me a couple days ago, I decided to give it a try. Same as with FRED, all I can say is that it is great! A lot of work has gone into this platform, and what is really great is how responsive and helpful the team is. At first I started to go through the docs and, while a great read, I was stumbling a bit. There are tons of examples on the site for Python and curl, but I wanted to get into it faster, preferably using node-red (via FRED, of course), and couldn't find anything. This was solved right away when Roberto from the same team sent me a link to an article about exactly this: node-red and WoTKit. The integration is perfect: as mentioned in the article, WoTKit expects a JSON structure of name:value pairs for sensor data, which is very easy to produce from node-red/FRED.

I won't go into much detail; please read the article I mentioned and all will be clear. Just to see how easy it is, though, let's go back to my owntracks flow. From owntracks, I get a message payload containing lat, lon, tst (a timestamp in seconds) and a few other fields. WoTKit expects the coordinates in fields named lat and lng, plus an optional millisecond timestamp as a long in a field named timestamp. Given the original payload, all I had to do was add two more properties to it, like:

// WoTKit wants lng (not lon) and a millisecond timestamp
payload.lng = payload.lon;
payload.timestamp = payload.tst * 1000;


then I added a WoTKit output node to the flow, created the sensor on the WoTKit platform and, in a matter of minutes, I had a dashboard showing me the trail of the last 20 coordinates posted by owntracks. Simply beautiful!
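Putting it together, the message posted to WoTKit ends up as a simple flat JSON object, something like this (the coordinate and timestamp values below are made up for illustration; the original owntracks fields are simply left in place alongside the two added ones):

```json
{
  "lat": 49.2827,
  "lon": -123.1207,
  "lng": -123.1207,
  "tst": 1433268000,
  "timestamp": 1433268000000
}
```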

Another example: I love Scrabble and I play quite a lot on my phone using various apps. One of these is WordFeud, for which the great people at Feudia organize tournaments. One of the things Feudia keeps track of is each player's rating and ranking. When I first started using Feudia I decided to keep track of my rating, but couldn't find a way to see the history, so once in a while I would save the values in a Google spreadsheet with a couple of charts; FRED and WoTKit gave me an idea of how I could do this automatically.

I created a sensor with two fields, rank and rating, then created a simple flow using an http request node to read the Hall Of Fame page and an html node to extract the info from it. One of the reasons I went this route is that for a while now I had wanted to figure out how the html node works. It took me a while, but after several tries I was able to select the div enclosing the ranking table and extract all the values in its children into an array that looks like this:

[ "#", "1", "\n \n \t\t", "Deuxchevauxtje", "1956,80", "", "#", "2", "\n \n \t\t", "nte...", "1936,37", "", "#", "3", "\n \n \t\t", "Pix...", "1930,94", "", "#", "4", "\n \n \t\t", "por...", "1922,31", ""....]

In this array, I look for my username, then take the rank from the item two positions before it and the rating from the item one position after it. (Only the first 200 players are ranked, so if I ever drop below 200 and my username no longer appears in the list, I simply won't post anything to WoTKit.) You can see all this in my simple flow below:
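For readability, here is the logic of the "extract user info" function node pulled out as a standalone function (the sample cells below are abbreviated, made-up data in the same shape as the real array):

```javascript
// Scan the flattened table cells for the username, then read the rank
// two cells back and the rating one cell ahead (European decimal comma).
function extractUserInfo(allInfo, username) {
  var payload = { found: false };
  for (var i = 0; i < allInfo.length; i++) {
    if (allInfo[i] === username) {
      payload.found = true;
      payload.username = username;
      payload.rank = Number(allInfo[i - 2]);
      payload.rating = Number(allInfo[i + 1].replace(',', '.'));
    }
  }
  return payload;
}

var cells = ["#", "150", "\n \n \t\t", "merlin13", "1691,32", ""];
console.log(extractUserInfo(cells, "merlin13"));
```

If the username is not in the array, `found` stays false, which is what the switch node in the flow keys off to skip the WoTKit post.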

[{"id":"b622d78d.49dd28","type":"wotkit-credentials","nickname":"Default","url":"http://wotkit.sensetecnic.com"},{"id":"67bc4260.9843bc","type":"html","name":"","tag":"div.rnk-tm>","ret":"text","as":"single","x":323,"y":265,"z":"8e66ad17.71995","wires":[["e616f09.f19e91"]]},{"id":"2bc264d6.d43d9c","type":"inject","name":"once a day","topic":"","payload":"","payloadType":"date","repeat":"86400","crontab":"","once":false,"x":103,"y":30.090909004211426,"z":"8e66ad17.71995","wires":[["f5e6e137.0a192"]]},{"id":"f5e6e137.0a192","type":"http request","name":"feudia halloffame","method":"GET","ret":"txt","url":"http://www.feudia.com/wordfeud/halloffame.html","x":191,"y":149,"z":"8e66ad17.71995","wires":[["67bc4260.9843bc"]]},{"id":"e616f09.f19e91","type":"function","name":"extract user info","func":"var allInfo = msg.payload;\nvar payload = {};\npayload.found = false;\nfor (i = 0; i < allInfo.length; i++) {\n if(allInfo[i] === 'merlin13') {\n payload.found = true;\n payload.username = \"merlin13\";\n payload.rank = Number(allInfo[i-2]);\n payload.rating = Number(allInfo[i+1].replace(',','.'));\n }\n}\nmsg.payload = payload;\nmsg.headers = {\n \"Content-Type\":\"application/json\"\n};\nreturn msg;","outputs":1,"valid":true,"x":500,"y":187,"z":"8e66ad17.71995","wires":[["902fe43e.6fd018"]]},{"id":"b664ddf4.499b2","type":"wotkit out","name":"feudia sensor","sensor":"claudiuo.feudia","login":"b622d78d.49dd28","x":769,"y":210,"z":"8e66ad17.71995","wires":[]},{"id":"902fe43e.6fd018","type":"switch","name":"","property":"payload.found","rules":[{"t":"true"},{"t":"else"}],"checkall":"true","outputs":2,"x":578,"y":296,"z":"8e66ad17.71995","wires":[["b664ddf4.499b2","33f1575a.cc0ea8"],["33f1575a.cc0ea8"]]},{"id":"33f1575a.cc0ea8","type":"debug","name":"","active":true,"console":"false","complete":"false","x":759,"y":336,"z":"8e66ad17.71995","wires":[]}]

Added the WoTKit output node and all was done! Well, almost. The payload looked great in debug:

{ "found": true, "username": "merlin13", "rank": 150, "rating": 1691.32 }

but no matter what I tried, I was getting errors from WoTKit when posting the data. Finally I decided to change the debug node to print the entire message, not just the payload, and noticed the content-type was wrong:

{ "topic": "", "payload": { "username": "merlin13", "rank": 150, "rating": 1691.32 }, "statusCode": 200, "headers": {..."content-type": "text/html; charset=UTF-8" } }

So I added a header to my message as seen in the flow above:

msg.headers = {"Content-Type":"application/json"}

and everything worked like magic. Now my sensor gets data once a day, and I have a dashboard with two charts identical to the original ones in my Google spreadsheet. Setting aside the content-type issue, which was my fault, everything was done in less than an hour, and the dashboard now updates automatically so I no longer have to gather the data by hand.

Of course, as you can see, my "sensor" is not really a sensor in the true sense of the word, but the concept is the same: instead of using owntracks or scraping the Feudia page, I could just as well have a sensor hooked up to my Raspberry Pi or an Arduino and send data to WoTKit directly, or to FRED for further processing and then on to WoTKit. The gap between a physical device and a hosted IoT service is closed, with a hosted node-red instance in the middle. I don't know if I can convey how awesome this is!

If you want to see the beauty of FRED and WoTKit in action, you should definitely try these services: as mentioned in the title, not only are they awesome, they are also free. Huge thanks to the Sense Tecnic team, especially Mike and Roberto, for all their awesome work, for offering these great services for free, and for being so patient, helpful and responsive!

Wednesday, April 29, 2015

Cloud9, resin.io, Cylon.js - all coming together

As I mentioned in my previous post, I am really happy I discovered Cylon.js and was able to get basic stuff working. This is all cool, but I wanted to be able to interact with my robot over the net, so I thought it was time to try the API plugins the framework offers. To make things more fun and learn more in the process, I decided to use resin.io for deployment: this way I can update the code and test changes without being close to my Raspberry Pi all the time. I knew it was possible, but I had never tried a git project with multiple remotes; this was the perfect time to learn how that works, since resin.io operates by pushing code to the resin remote, but I also want to be able to push changes to github. And because I don't want to be tied to my local machine, I decided to use Cloud9 for this project and push the code from there directly to both resin and github, which works great as you'll see below. By the way, Cloud9 is similar to Codenvy, but the support for node.js is better (at least from what I know at this time), and having access to the entire VM and the command line makes it awesome; it is like working on a local machine, but a lot better since it is in the cloud and accessible via a browser from anywhere.

This post is not really about the code itself: it is a work in progress that can be seen in my repo; instead, this post is about all of the tools coming together with a special nod to resin.io.

To start, I read a lot of the Cylon.js docs and was able to put together a test robot without an actual device (using the loopback adaptor instead), to which I plan to send commands using one of the API examples on the site. As a side note, the robot code only has generic commands like cmd1, cmd2 and so on, instead of commands like toggle and turnOn: this setup lets me change the actual code a command executes while a client may never need to change. Going back to the API idea, I decided to start with the simplest API plugin (HTTP), even though there are no examples for it on the site. Unfortunately, because I want to access my RasPi from outside my network, I don't know its IP (it will be assigned dynamically by resin), and the HTTP API needs to be configured with an IP. I am pretty sure there are solutions for this, but instead of digging more, I decided to try the MQTT API, which is tied only to a broker and doesn't need a fixed IP. The client code is also very simple at this time, but I hope it will evolve as I find some time; in the end, I plan to issue the API commands via node-red, which integrates very easily with MQTT.
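The generic-command indirection is easy to sketch in plain JavaScript (this is just the idea, not actual Cylon.js code; the names and command bodies are hypothetical):

```javascript
// Clients only ever know the generic names cmd1/cmd2; what each one
// actually does can be reassigned on the robot side without touching clients.
var commands = {
  cmd1: function () { return 'toggle LED'; },   // today cmd1 toggles an LED
  cmd2: function () { return 'read sensor'; }   // today cmd2 reads a sensor
};

function execute(name) {
  if (typeof commands[name] === 'function') {
    return commands[name]();
  }
  return 'unknown command: ' + name;
}

console.log(execute('cmd1')); // → toggle LED
```

Swapping what cmd1 does later means editing only the `commands` table; the MQTT client keeps publishing the same command name.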

It was very easy to start with Cloud9: I connected it to my github account, then created a new node.js workspace; there are plenty of docs on the site. And since Cloud9 gives access to the underlying OS, it was also easy to install libusb-dev (needed for Digispark, as mentioned in my previous post) and all the node modules I need to start with. Here are the commands for reference (the last module is only needed for the client, and I used the --save option so all the modules are registered automatically in package.json):

sudo apt-get install libusb-dev
npm install cylon cylon-digispark cylon-api-mqtt mqtt --save


The next thing was to add resin.io as a secondary remote, which was pretty easy:

git remote add resin git@git.resin.io:username/application_name.git

Then everything works as normal: git add/commit/push. The only special thing I needed to do was figure out how to install libusb-dev in the resin image. After some searching on the web, I found out I could add a "preinstall" script to package.json. This was easy, but it took me quite a while to figure out how to install this library, because the only one apt-get could find was libusb-0.1-4 and not the libusb-dev I needed. After a lot of fiddling, I asked in the resin.io forum and the answer was quite simple: run apt-get update before the apt-get install of libusb-dev, as seen in the current package.json. A new push to the resin remote built the image without errors this time. Great!
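In package.json, the preinstall script ends up looking something like this (a sketch of the idea; the exact script is in my repo):

```json
{
  "scripts": {
    "preinstall": "apt-get update && apt-get install -y libusb-dev"
  }
}
```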

The coolest thing is that when I built this image my Pi was offline, but as soon as I plugged it in hours later, the new image was updated automatically; I know this is documented, but it was so neat to see it working. This is so awesome! The resin.io team really thought of everything, and I can't say how happy I am to be using their service. The small complaints I had in my original post are really minor; resin.io is a great way to update your Pi code remotely. Again, big thanks to the entire team!

Hopefully now that all pieces are in place, I will find some time to write a robot that actually does something, and command it via MQTT from node-red. Soon...

Friday, April 10, 2015

Codenvy and Heroku integration: simply beautiful!

Reading through the Codenvy docs I noticed Heroku mentioned in the PaaS Deployment section, and since I deployed a Java app there a while back, I decided to give it a try. The most interesting part was that I could copy the app directly from Heroku to Codenvy in just a couple of steps, as described on this page. The really cool thing is that I deployed this app a long time ago and don't even have my source code anymore; I know I can clone the app at any time to get it back, but doing it this way, I have the app ready for more development with no need to set up the project again locally in Eclipse. The steps I mentioned were:
  • create an SSH connection between Codenvy and Heroku: just generate a new key for Heroku, copy it and manually save it to my Heroku account;
  • import the existing application: copy its Git URL, then in the Codenvy workspace, File > Import from Location and paste this URL.
That's it: it can't get easier than this! What's even better is that when the app is imported, all the project's Git history and settings are preserved, so there is no need to add Heroku as a Git remote; it is already there.

After I imported the app, I tried to run it on Codenvy using the Jetty + Java runner, but it didn't work. In the end this wasn't a problem with Codenvy but with the pom.xml in my project; I am just mentioning it here in case someone else runs into this issue.

When trying to run the app, I noticed the runner was creating an application.jar deployed under /home/user/jetty9/webapps/ROOT, which is the correct location; but a jar is not a webapp, and indeed invoking my servlet in the browser didn't work. After trying a lot of things and changing project settings, I took a better look at the pom.xml file and noticed packaging was set to jar; I changed it to war and this time the webapp deployed correctly and worked right away, like magic. The root cause seems to be that I originally created my app using the heroku-cli tools, which generated a pom.xml with packaging=jar; things have since changed, and the pom.xml now used by default (as seen in this repo) doesn't specify packaging anymore. I know this should mean the default of "jar" is used, but it makes a big difference on Codenvy: with no packaging specified, the webapp deploys correctly on Codenvy (and it also deploys correctly on Heroku, as I later tried). So if you have an older Java app created from the Heroku template, remove the packaging directive and it will all work.
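For reference, the fix in my case was a one-line change in pom.xml (a minimal fragment, not the full file):

```xml
<!-- was <packaging>jar</packaging>, which made the runner produce
     application.jar instead of a deployable webapp -->
<packaging>war</packaging>
```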

After all this was fixed, deploying the modified app to Heroku was a breeze: just git add/commit/push. I then started the app on Heroku and it worked great. Love it! Thanks again to the Codenvy team for all the awesome work they do!

Wednesday, March 18, 2015

Docker + node-red = awesome!

When I first heard about Docker a few weeks ago, I realized how cool it was, so I started reading about it right away. Because I kept talking about it, I got tasked at work with looking into how we could use it, and a couple weeks later I was deploying two linked containers for my team, which definitely made our development easier, even if all we are using are database containers, at least for now.

But my first thought when reading about Docker was how I could use it on my Raspberry Pi, so I'd stop mixing stuff on the same SD card (which is sometimes not a very good idea, like when I messed up my node-red install because I installed an IDE that used an older version of node.js). I know most software packages can co-exist without issues, but I like to keep things separate, so I have a bunch of SD cards now: one for Java projects, one for node-red and a couple more. Docker seems to be the answer to this, at least for my Raspberry Pi B; the Model As I have are a bit too constrained for Docker, but they are dedicated to other projects anyway.

So I started looking around, and the first site that popped up was the excellent resin.io blog, specifically this article. It sounded awesome, but it required Arch Linux, which I am not familiar with, so I decided to wait a bit. As I was researching Docker for work, I happened to find a new blog article at hypriot.com about a new Docker-compatible image created by this awesome team. This was so great that I immediately cleaned up an SD card and installed the image. As advertised, it worked on the first try: I can't tell you how happy I was to see Docker running on my Pi. And these guys didn't stop at the main SD card image; they also published several Docker images made for the Raspberry Pi. Like I said, an awesome team. Thank you so much for all you do!

I started playing right away with Docker and couldn't wait to come back to it the next day. To my disappointment, though, after I restarted my Pi I kept getting errors no matter what docker command I tried. Given my lack of experience, I thought I had broken something (I also noticed that after changing the password I started to get a warning every time I used sudo, but it turns out this was easily fixed, according to this post, by adding 127.0.0.1 black-pearl to /etc/hosts). After quite a lot of digging, I found a post mentioning how to restart the docker daemon; very simple, and in hindsight I realize I should've thought of it:

sudo /etc/init.d/docker start

Now that all was well, I started to work on what I really wanted to do from the start: create a node-red image, because there wasn't one when I began looking into Docker. Of course, there are several node-red images now, including this one, and since Dave C-J is one of the creators of node-red I trust his image the most; but that image is not for the Raspberry Pi. I started to work on my own image and was able to create something quickly, but after that I spent a few long hours trying to make the rpi-gpio nodes work, without success. In the end, I published my image on Docker Hub, but the rpi-gpio failure kept bugging me, so I ended up deleting it; I kept the Dockerfile in this gist so I can redo it at any time if I ever feel the need. I don't think that will happen, because this morning, doing yet another search on Docker Hub for "rpi nodered", luck was on my side and I found this image from nieleyde; there is no Dockerfile, but I pulled the image immediately and it works great! Thank you so much, nieleyde!

Very important to note in the docker run command provided by nieleyde is the --privileged option (some notes here). When I first started the container, I noticed in the log that the userDir is /root/.node-red. I want access to the flows files, and I want to be able to install more nodes easily without messing with the original image, so I start the container with a volume option (as detailed in the "Overriding Dockerfile image defaults" section of this article):

docker run -it -p 1880:1880 --rm --privileged -v /home/pi/.node-red:/root/.node-red nieleyde/rpi-nodered

This way, everything that happens in the real /root/.node-red user directory is mirrored in my /home/pi/.node-red dir and vice versa, so the flows files, new nodes and library files are shared between these directories. I am not sure this is the best way, but it works for me (well, I still need to verify the newly-added-nodes part, but the flows file works as expected so I hope new nodes will as well; settings.js also works fine, as I will mention later).

The second thing I did to make life easier concerns the flows file, which by default is named flows_<machine_name>.json, for example flows_519c0741e1f0.json. The problem is that the machine name is the container's short ID, which changes every time the container restarts, so the previous flows are no longer accessible (the file is still present, but it is not read because its name no longer matches the machine name). I tried naming the container with the --name option when running it, but the flows file uses only the container ID, not the name. To fix this, now that I have access to the user directory via the volume option, I placed a settings.js file in /home/pi/.node-red that changes the flows file name to flows.json. It worked as I hoped it would: my file overrides the settings.js in the node-red install, as described here. Now each time I restart the container the flows file is the same, so all my saved flows start immediately; this is easy to see in the node-red logs: Flows file : /root/.node-red/flows.json.

In conclusion, Docker is really awesome, and thanks to teams like hypriot and users like nieleyde, both Docker on the Raspberry Pi and node-red in Docker are great to use! Thanks to everyone for all the great work!

Tuesday, January 21, 2014

Amazing graphs

This post is only indirectly related to Raspberry Pi, Arduino and cloud data: I found out about this great website, plotly, by reading this post on Quora: What is a good first project with a raspberry pi?. And while the website itself and the Arduino code look great, I was amazed by the blog content: each graph not only tells a story but is really beautiful as well, like this one.

Check them out: http://blog.plot.ly/

Saturday, August 10, 2013

My Arduino distance-from-home project, before and after Latitude

In my previous post, "Heroku notes", I mentioned an app I had deployed on Heroku that got my Google Latitude coordinates, calculated the distance to my home and posted this distance to ThingSpeak and to my Arduino (using Teleduino). It is a node.js app derived from a NinjaBlocks device, using code originally written by ChrisN (thanks a lot to ChrisN for the code and all his help). My app uses all of ChrisN's original code to get the Latitude position and calculate the distance, but instead of pushing these values to a NinjaBlocks device, it sends them to a ThingSpeak channel. In addition, it calculates PWM values to be sent to an RGB LED connected to my Arduino (in fact, there are two LEDs, one on each side of a board, to make it light up better), which shows red when I am far from home, blue when closer and green when home.
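The distance part boils down to the great-circle (haversine) formula. Here is a minimal sketch in plain JavaScript (my own illustration, not ChrisN's code; the function name and the Earth-radius constant are mine):

```javascript
// Great-circle distance between two lat/lon points, in kilometers.
function haversineKm(lat1, lon1, lat2, lon2) {
  var R = 6371; // mean Earth radius in km
  var toRad = function (d) { return d * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// One degree of longitude at the equator is roughly 111 km.
console.log(haversineKm(0, 0, 0, 1).toFixed(2)); // → 111.19
```

The resulting distance is what gets posted to ThingSpeak and mapped to the red/blue/green PWM values for the LED.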

As a side note, if you are interested in more info about deploying NinjaBlocks app to Heroku, see this How-to.

On August 9th Google discontinued Latitude, which was very unfortunate and upset a lot of people who used it for real things, so I had to find another way to do all this. The easiest option was to find or write a simple Android app that gets the coordinates and posts them to a custom URL. After quite a lot of searching, I found such an app in the Play store: EasyGPSTracker, written by Christian Brosch. It allows setting a custom URL to which it will POST data containing latitude, longitude and a few other optional fields (docs are here). Big thanks to Christian for writing this app; it saved me a lot of time, since I am only a beginner at Android coding.

And since my first Heroku app was a node.js one, I decided to try a Java app this time. I was worried it would take me a very long time, since I am not familiar with maven (which is what Heroku uses for Java app deployment), but Heroku, awesome as it is (I know, I am repeating myself), has quite a few templates to start with, which made writing the app a breeze. This app does basically the same thing as the old one, except this time it gets the POST-ed data from Christian's app, calculates the distance, sends it to ThingSpeak and then sends the PWM values to my Arduino.

I kept the original app name as generated by Heroku, pretty long but I liked it: hidden-hollows-5561, and named my servlet geo, so this is the link. Anyone can try the link, but it won't do anything unless you POST the data in the right format, and even then it will only send back an arbitrary distance. I started working on a GET method that takes two latitude/longitude pairs and returns the distance between them, but never finished; if I do, and anyone is interested, I will post all the details of the GET call.

The next step is to find a way to get rid of the wires. Right now the Arduino uses an Ethernet shield, so I need a wire going to it, and I'm using a wall wart to power it, so... another wire. If I find a way to make it more portable without spending a lot of money on a WiFi shield, I'll update this post.

There is nothing special in the Arduino code since I'm using Teleduino, so there is nothing to post here. The Java code for the Heroku app is also pretty simple, so again, nothing special to post. To calculate the distance I am using a library I found on Google Code, simplelatlng: very easy to use and well documented. Thanks to the author!

Monday, June 24, 2013

Heroku notes

Last week I found out about Heroku, and from all I've read it looks awesome, so I decided to upload and run an app on their cloud. After writing a quick app and testing it locally using foreman (the Heroku toolbelt is great!), I tried to push it to Heroku but got stuck with something like: Heroku error: "Permission denied (public key)". After digging on the web for a while, I pieced some information together; most helpful was Giordano Scalzo's post here. In a nutshell, the steps are:
  1. Create a new key just for Heroku:

    ssh-keygen -t rsa -C "email_address" -f ~/.ssh/id_rsa_heroku

  2. Add the new key to Heroku:

    heroku keys:add ~/.ssh/id_rsa_heroku.pub

  3. Set the HOME variable to the /user_home folder (I kept getting errors without doing this; I guess git was getting confused about where the key was saved).

  4. Create an SSH config file (~/.ssh/config) that indicates which key should be used; this worked for me:

    Host heroku.com
    HostName heroku.com
    IdentityFile "%HOME%/.ssh/id_rsa_heroku"

The nice thing is that now I can switch the machine used for development and just follow steps 3 and 4 above for everything to work without issues. Using more than one machine with more than one Heroku account needs a little more work, but since I am not doing that, I won't go into details.

My app doesn't respond to any HTTP requests (more details about it later; for now, let's just say it runs as a process that posts my Google Latitude location to ThingSpeak and to my Arduino, using Teleduino), so there is no point in posting the app URL here. In any case, now that I was able to deploy and run my app, I can say that Heroku is indeed awesome!