Wednesday, April 29, 2015

Cloud9, resin.io, Cylon.js - all coming together

As I mentioned in my previous post, I am really happy I discovered Cylon.js and was able to get basic stuff working. This is all cool but I wanted to interact with my robot over the net, so I thought it was time to try the API plugins the framework offers. To make things more fun and learn more in the process, I decided to use resin.io for deployment: this way I can update the code and test changes without being close to my Raspberry Pi all the time. I knew it was possible but had never tried to have a git project with multiple remotes; this is the perfect time for me to learn how that works, since resin.io works by pushing code to the resin remote but I also want to be able to push changes to github. And because I don't want to be tied to my local machine, I decided to use Cloud9 for this project and push the code from there directly to both resin and github - which works great, as you'll see below. By the way, Cloud9 is similar to Codenvy but the support for node.js is better (at least from what I know at this time), and having access to the entire VM and the command line makes it awesome; it is like working on a local machine but a lot better, since it is in the cloud and accessible via a browser from anywhere.

This post is not really about the code itself: it is a work in progress that can be seen in my repo; instead, this post is about all of the tools coming together with a special nod to resin.io.

To start, I read a lot of the Cylon.js docs and was able to put together a test robot without an actual device (using loopback instead), to which I plan to send commands using one of the API examples on the site. As a side note, the robot code only has generic commands like cmd1, cmd2 and so on instead of commands like toggle and turnOn, because this setup lets me change the actual code a command executes while a client may never need to change. Going back to the API idea, I decided to start with the simplest API plugin (HTTP) even though there are no examples for it on the site. Unfortunately, because I want to access my RasPi from outside my network, I don't know the IP (which will be assigned dynamically by resin) and the HTTP API needs to be configured with an IP; I am pretty sure there are solutions for this, but instead of digging more I decided to try the MQTT API, which is tied only to a broker and doesn't need a definite IP. The client code is also very simple at this time but I hope it will evolve as I find some time; in the end though, I plan to issue the API commands via node-red, which integrates very easily with MQTT.
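
As a rough sketch of the idea (not my exact code - that's in the repo): the command names and broker URL below are placeholders, and this assumes the cylon and cylon-api-mqtt modules are installed.

```javascript
var Cylon = require("cylon");

Cylon.robot({
  name: "testbot",

  // no real hardware yet - the loopback adaptor and ping driver stand in
  connections: { loopback: { adaptor: "loopback" } },
  devices: { ping: { driver: "ping" } },

  // generic command names: clients always call cmd1, cmd2, ...
  // while the code behind each one is free to change
  commands: function() {
    return { cmd1: this.cmd1, cmd2: this.cmd2 };
  },

  cmd1: function() { return this.ping.ping(); },
  cmd2: function() { return "cmd2 done"; },

  work: function() {}
});

// expose the robot over MQTT instead of HTTP: only a broker is needed,
// no fixed IP for the Pi (broker URL is a placeholder)
Cylon.api("mqtt", { broker: "mqtt://test.mosquitto.org:1883" });

Cylon.start();
```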

It was very easy to start with Cloud9: I connected it to my github account, then created a new node.js workspace - there are plenty of docs on the site. And since Cloud9 gives access to the underlying OS, it was also easy to install libusb-dev (needed for Digispark, as mentioned in my previous post) and all the node modules I need to start with; here are the commands for reference (the last module is only needed for the client, and I used the --save option so all the modules are registered automatically in package.json):

sudo apt-get install libusb-dev
npm install cylon cylon-digispark cylon-api-mqtt mqtt --save


Next thing was to add resin.io as a secondary remote which was pretty easy:

git remote add resin git@git.resin.io:username/application_name.git

Then everything works as normal: git add/commit/push. The only special thing I needed to do was figure out how to install libusb-dev in the resin image. After some searching on the web, I found out I can add a "preinstall" script to package.json. This was easy, but it took me quite a while to figure out how to install this library because the only one found by apt-get was libusb-0.1-4 and not the libusb-dev package I needed. After a lot of fiddling I asked in the resin.io forum and the answer was quite simple: add apt-get update before the apt-get install libusb-dev, as seen in the current package.json. A new push to the resin remote built the image without errors this time. Great!
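
For reference, the preinstall script ends up looking something like this in package.json (a sketch - the exact file is in the repo):

```json
{
  "scripts": {
    "preinstall": "apt-get update && apt-get install -y libusb-dev"
  }
}
```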

The coolest thing is that when I built this image my Pi was offline, but as soon as I plugged it in hours later the new image was applied automatically - I know this is documented, but it was so neat to see it working. This is so awesome! The resin.io team really thought of everything and I can't say how happy I am to be using their service. The small complaints I had in my original post are really minor; resin.io is really a great way to update your Pi code remotely. Again, big thanks to the entire team!

Hopefully now that all pieces are in place, I will find some time to write a robot that actually does something, and command it via MQTT from node-red. Soon...

Wednesday, April 22, 2015

Cylon.js - an amazing robot and IoT framework

A few days ago, on a blog I follow, I noticed an article about the release of Cylon.js 1.0. I had never heard of Cylon.js before, but the article sounded very interesting, mentioning robots and IoT, javascript and support for 35 platforms, so I decided to check it out. I am really happy I did. I have to say from the start that it is an amazing framework with a great design and tons of supported platforms and drivers, which makes it really useful for tons of things: not just robots, as the name implies, but basically anything related to physical computing and the Internet of Things. It makes it incredibly easy to command robots and devices, and the API plugins it already comes with (http, mqtt and socket.io) make it really easy to connect and interact with these devices online. Really great!

Like I said, there are tons of platforms supported (basically anything I can think of is already supported), but since I happened to have a Digispark with an RGB LED shield handy from when I last played with it and node-red, I decided to give it a try. It would probably have been easier to start with an Arduino and avoid a few hiccups, but in the end I am very happy I gave it a try because it worked really well.

The Digispark documentation is really good, but since I ran into a couple of stumbling blocks on my Linux Mint machine (quickly clarified on the IRC chat by a very helpful user) I decided to quickly document the steps here; maybe they'll help somebody some day.

As mentioned in the Ubuntu section of the Digispark docs, the first thing to do is install the cylon-digispark npm module. The next commands use "gort", and while this may not be an issue for anybody else, it was for me: I am not familiar with it and apt-get didn't find it, so I stumbled a bit with the next step. However, when I asked about it on the chat channel I got a reply right away, saying I need to download it from here. The same user also mentioned that after I install it, I should run

gort digispark set-udev-rules

which was a great pointer because the docs were not very clear about what to run next (this one or upload), so this helped me a lot. The next command in the docs though is

gort digispark upload

which didn't work for me no matter what I tried. In the end I looked at the output of the command and decided to try instead

gort digispark install

and this worked right away. Then I cd'd to the examples dir in the cylon-digispark module and the first example I tried, blink, worked like a charm. After trying most of the examples, all I can say is that Cylon.js is indeed awesome and in the end pretty easy, with just a couple of sticking points, mostly due to my lack of Linux experience, I'm sure.

A big thank you to the Hybrid Group team behind this great project!

Friday, April 10, 2015

Codenvy and Heroku integration: simply beautiful!

Reading through the Codenvy docs I noticed Heroku mentioned in the PaaS Deployment section and, since I deployed a Java app there a while back, I decided to give it a try. The most interesting idea was that I could copy the app directly from Heroku to Codenvy in just a couple of steps, as described on this page; the really cool thing is that I deployed this app a long time ago and I don't even have my source code anymore - I know I can clone the app at any time to get it back, but doing it this way I have the app ready for more development, with no need to set up the project again locally in Eclipse. The steps I mentioned were:
  • create an SSH connection between Codenvy and Heroku: just generate a new key for Heroku, copy it and manually save it to my Heroku account;
  • import the existing application: copy its Git URL, then in the Codenvy workspace, File > Import from Location and paste this URL.
That's it: it can't get easier than this! What's even better is that when the app is imported, all the project's Git history and settings are preserved, so there is no need to add Heroku as a Git remote – it is already there.

After I imported the app I tried to run it on Codenvy using the Jetty + Java runner, but it didn't work. In the end this issue wasn't a problem with Codenvy but with the pom.xml in my project; I am just mentioning it here in case someone else runs into this issue.

When trying to run the app I noticed the runner was creating an application.jar which was deployed under /home/user/jetty9/webapps/ROOT, which is the correct location; but a jar is not a webapp, and indeed invoking my servlet in the browser didn't work. After trying a lot of things and changing project settings, I took a better look at the pom.xml file and noticed packaging was set to jar; I changed it to war and this time the webapp was deployed correctly and it worked right away, like magic. The main problem seems to be that I created my app originally using the heroku-cli tools, which generated a pom.xml with packaging=jar; things have changed since, and the new pom.xml used by default (as seen in this repo) doesn't specify packaging anymore. I know this should mean the default of "jar" is used, but it makes a big difference on Codenvy: with no packaging specified the webapp deploys correctly on Codenvy (it also deploys correctly on Heroku, as I later tried). So if you have an older Java app created from the Heroku template, remove the packaging directive and it will all work.
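
For an older app, the fix boils down to one element in pom.xml - either set it to war explicitly, or remove it entirely to match the current Heroku template:

```xml
<!-- inside the <project> element of pom.xml -->
<packaging>war</packaging>
```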

After all this was fixed, deploying the modified app to Heroku was a breeze: just git add/commit/push. I then started the app on Heroku and it worked great. Love it! Thanks again to the Codenvy team for all the awesome work they do!

Thursday, April 02, 2015

Weaved: the perfect tool to access my remote Raspberry Pi

I already mentioned Weaved in passing in a previous post but the latest version is so awesome that I thought it deserves its own article. As noted in my update to that post, after updating to 1.2.8, I was able to setup a TCP service on port 1880 (node-red default editor port), connect to it and from the first try node-red editor worked as expected. I am so happy I didn't give up and I tried again.

And today I had another chance to see the amazing power of Weaved: I had my Raspberry Pi A (so no wired connection available, only a wifi dongle) with me at work, plugged it in and it connected right away to the guest wireless network. At least I thought it connected, because I've done this before and it worked without issues. But Fing, the app on my phone I normally use to find IPs, was not able to see it at all. I know the guest wireless settings have changed since the last time I connected this Pi: not sure how, I have no network skills, but I know most services are now blocked (I assume the discovery service Fing uses, if there is anything like that, is blocked as well). I almost gave up, but then I remembered I had the SSH service from Weaved installed on the SD card, so I decided to give it a try: I logged in to my account and indeed the Pi was reported online; I got the connect info and putty connected right away (also, the My Devices list showed the real IP, so I was able to check that it was indeed an IP that was not showing in the Fing scan results). How awesome is this: a device not visible and not accessible even by another device on the same network was accessed through Weaved without issues! Really amazing!

As far as Weaved pricing goes, the last info I've seen, which is supposed to apply after the beta program ends, was something like this (these terms may change since they are not published on the website right now):
- Personal plan - FREE: 2 devices, up to 5 services, 300 generic notifications/month, mobile apps (iOS already out, Android in beta), no Pro features;
- Maker plan - $25/year: 5 devices, unlimited services, 1500 custom notifications/month, mobile apps, Pro features (longer connection times, device sharing, more storage);
- Maker Pro plan - $99/year: 25 devices, unlimited services, unlimited custom notifications, mobile apps and libraries, Pro features.

The free plan is enough for me personally, but if I decide to upgrade it really won't be an issue to pay a bit over $2 a month for all the added features. There are IoT related services out there charging way more for a lot less.

Weaved does more than just allow connections to remote Raspberry Pis (and recently, BeagleBones and even Intel Edison boards), just read some of the articles on their blog and you'll see what I mean. But for me and probably others as well, Weaved is going to be the main way of accessing a Raspberry Pi remotely, which is amazing in itself.

I hope all this will convince anyone who reads this to give Weaved a try. As for me I owe huge thanks to the Weaved team for all their great work!

Wednesday, March 18, 2015

Docker + node-red = awesome!

When I first heard about Docker a few weeks ago, I realized how cool it was so I started reading about it right away. Because I kept talking about it, I got tasked at work to look into how we can use it and a couple weeks later, I was deploying 2 linked containers for my team that definitely made our development easier, even if all we are using are database containers, at least for now.

But my first thought when reading about Docker was how I could use it on my Raspberry Pi so I don't keep mixing stuff on the same SD card (which sometimes is not a very good idea, like when I messed up my node-red install because an IDE I installed used an older version of node.js). I know most software packages can co-exist without issues, but I like to keep things separate, so I have a bunch of SD cards now: one for Java projects, one for node-red and a couple more. Docker seems to be the answer to this - at least for my Raspberry Pi B; the model As I have are a bit too constrained for Docker, but they are dedicated to other projects anyway.

So, I started looking around and the first site that popped up was the excellent resin.io blog, specifically this article. It sounded awesome, but it required Arch Linux, which I am not familiar with, so I decided to wait a bit. As I was researching Docker for work I happened to find a new blog article at hypriot.com that talked about a new Docker compatible image created by this awesome team. This was so great that I immediately cleaned up an SD card and installed the image. As advertised, it worked from the first try: I can't tell you how happy I was to see Docker running on my Pi. And these guys didn't stop at creating the main SD card image; they also published several Docker images made for Raspberry Pi - like I said, an awesome team. Thank you so much for all you do!

I started playing right away with Docker and couldn't wait to come back to it the next day. To my disappointment though, after I restarted my Pi I kept getting errors no matter what docker command I tried. Given my lack of experience, I thought I broke something (I also noticed that after changing the password I started to get a warning every time I used sudo, but it turns out this was easily fixed, according to this post, by adding 127.0.0.1 black-pearl to /etc/hosts). After quite a lot of digging, I found a post mentioning how to restart the docker daemon - very simple; in hindsight I realize I should've thought of it:

sudo /etc/init.d/docker start

Now that all was well, I started to work on what I really wanted to do from the start: create a node-red image, because there wasn't one when I started looking into Docker. Of course, there are several node-red images now, including this one, and since Dave C-J is one of the creators of node-red I trust his image the most; but this image is not for Raspberry Pi. I started to work on my own image and was able to create something fast, but after that I spent a few long hours trying to make the rpi-gpio nodes work, without success. In the end I published my image on Docker Hub, but the fact that the rpi-gpio nodes didn't work was bugging me, so I ended up deleting it; I kept the Dockerfile in this gist so I can redo it at any time if I ever feel the need. I don't think that will happen, because this morning, doing yet another search on Docker Hub for "rpi nodered", luck was on my side and I found this image from nieleyde; there is no Dockerfile, but I pulled the image immediately and it works great! Thank you so much, nieleyde!

Very important to note in the docker run command provided by nieleyde is the --privileged option (some notes here). When I first started the container, I noticed in the log that the userDir is /root/.node-red; I want to have access to the flows files and also to be able to install more nodes easily without messing up the original image, so I start the container with a volume option (as detailed in the "Overriding Dockerfile image defaults" section of this article):

docker run -it -p 1880:1880 --rm --privileged -v /home/pi/.node-red:/root/.node-red nieleyde/rpi-nodered

This way, everything that happens in the real /root/.node-red user directory is mirrored in my /home/pi/.node-red dir and the other way around, so the flows files, new nodes and library files are shared between these directories. I am not sure this is the best way but it works for me (well, I still need to check the newly added nodes idea, but the flows file works as expected so I hope new nodes will as well; settings.js also works fine, as I will mention later).

The second thing I did to make things easier: the flows file by default is named flows_<machine_name>.json, for example flows_519c0741e1f0.json. The problem is that the machine name is the actual container short ID, and it changes every time the container restarts, so the previous flows are not accessible anymore (the file is still present but is not read, because the name doesn't match the machine name anymore). I tried naming the container with the --name option when running it, but the name is not used for the flows file, only the container ID is. To fix this, now that I have access to the user directory via the volume option, I placed a settings.js file in /home/pi/.node-red that changes the flows file name to flows.json. And it worked as I hoped it would: my file overrides the settings.js file in the node-red install, as described here. Now each time I restart the container the flows file is the same, so all my saved flows start immediately; this can be easily seen in the node-red logs: Flows file : /root/.node-red/flows.json.

In conclusion, Docker is really awesome and due to teams like hypriot and users like nieleyde Docker on Raspberry Pi and node-red in Docker are great to use! Thanks to everyone for all the great work!

Thursday, March 12, 2015

Reformat Raspberry Pi SD cards

If you are using Windows and ever wanted to write a new image to an SD card previously used with a Raspberry Pi, you probably noticed the card looks much smaller than it really is, only a few tens of MB; if I understand correctly, this is because Windows only sees the size of the boot partition and not the other Linux partition. When I first ran into this issue, I reformatted the SD card on my Linux Mint machine, which worked quite well. The second time though I was away from home and had to use a Windows 7 machine. After some digging on the web I found out I can use diskpart, which comes with Windows and works quite well, but there are several steps that need to be done:

C:\temp>diskpart
DISKPART> list disk

This will list all your drives, including the SD card; you need to be very careful to select the SD card and not your hard-drive. Usually it is easy to recognize the SD card because its size is only a few GB (depending on the card you use), compared to the HDD which is usually much larger.

DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> list part
......... list of partitions .........
DISKPART> select part 1
Partition 1 is now the selected partition.
DISKPART> delete part
DiskPart successfully deleted the selected partition.

Now you have to repeat the last 2 steps (select/delete) for as many partitions as you have; the default is 2 partitions, so normally you only have to do this once more. After the last partition is deleted, you create a primary one and exit:

DISKPART> create part pri
DiskPart succeeded in creating the specified partition.
DISKPART> exit

Finally, you remove the card and re-insert it, and Windows will prompt you to format it; no need to do a full format, quick format works great. This process works very well for me - I've done it a lot of times - but it is quite involved.

Last night I ran into another great post on the excellent Raspberry Pi Spy website about how to format Raspberry Pi SD cards using SD Formatter. I won't detail the steps here, the article I mentioned is really good and I do want to thank Matt for such a great post!

Tuesday, March 10, 2015

node-red is best for... everything

Like I said in a previous post, node-red is more and more becoming my first choice for all kinds of projects. I can definitely write code for all these little things, but every time I start a project I first ask myself if it can be done in node-red.

Case in point: yesterday I remembered the Digisparks I got a while back from kickstarter and decided to play with them. As you may know, a Digispark is a tiny Arduino-like device, not 100% compatible (because it uses the ATtiny85 controller unlike Arduino's ATmega168/328) but plenty powerful; one of mine has an RGB shield. When I first got it from the kickstarter project I downloaded the example code from github and after quite a lot of fiddling I got it to work, both on my Linux Mint laptop and my Raspberry Pi. But that was a long time ago, so now that I wanted to play with it a bit more I decided to see if I could make it work with node-red. My first thought was that I could probably use the exec node and issue the same DigiRGB.py command I did last time.

But a quick search pointed me to the digirgb node. I quickly installed it but got make errors related to node-hid. After quite a lot of time spent on the web trying to figure out what might be wrong with my environment, and after installing quite a few extra libraries and packages I found mentioned here and there (like libssl-dev and build-essential), I did what I should've done from the start: read the error message more carefully; this is how I noticed it said libusb.h was missing. I tried:

$ sudo apt-get install libusb-1.0-0

but it was already up to date. Next I tried:

$ sudo apt-get install libusb-1.0-0-dev

and to my surprise this time npm install finished without errors. I connected my Digispark with the RGB led and checked it was visible:

$ lsusb
Bus 001 Device 005: ID 16c0:05df VOTI

I restarted node-red and the digiRGB node was right there. A quick test with an inject node sending a "100,100,100" string turned on the LED from the first try. I know by now I should not be amazed any more that node-red is so great, but I still am, every time - it is simply awesome!

Monday, March 02, 2015

node-red static directory

This weekend I updated to node-red 0.10.4, which has a major change: the userDir where all the user files are stored is now outside the node-red install dir, by default $HOME/.node-red (you can override it with the --userDir option). The steps to upgrade are documented really well here. Another big change is that updating via git pull is not recommended any more; the preferred way is now

    sudo npm install -g node-red

This installs node-red in /usr/local/lib/node_modules/node-red, which is accessible to the root user but should really not be used to store user files. And there is really no reason to do so: extra nodes can be installed directly in the userDir location, the flows and .config.js are saved there as well, settings.js is read from this directory if present, and flows exported to the library are stored there too. There is only one thing that seems like it needs to go in the node-red install dir: static files.

For example, in my case I have a flow that responds to URLs like /books/:type/:topic/:genre/:num and, in order not to have to fill in the type, topic, genre and num params every time, I created a simple HTML page saved in /public/books/index.html that sends various values for these params. Now that node-red separates the user content from the actual node-red content, I would like this page to be in userDir as well. So I started going through the configuration docs trying to figure out how to do it. At first glance, at least the way I read those docs, the only way to do so is to use the httpStatic property; the problem is the docs say "When this property is used, httpAdminRoot must also be used to make editor UI available at a path other than /." I tried to change both httpStatic and httpAdminRoot and couldn't find a good solution, until I decided to leave httpAdminRoot unchanged and just set httpStatic to /home/pi/.node-red/public - and it worked. So now I have index.html in .node-red/public/books that loads in the browser as expected at http://<node-red ip>:1880/books/ while the actual <node-red install dir>/public dir is unchanged and the editor still works at http://<node-red ip>:1880.

I thought this was a problem in the docs but according to Nick the docs are just trying to prevent possible problems so this worked all along. In any case, this is not a big deal in itself but it took me a while to figure it out so I want to document it, to remember it later.

Sunday, February 15, 2015

node-red update, new rpi-gpio node is great

A few days ago a new version of node-red was released: 0.10.1, details on the node-red blog. Tons of new features are in this release as detailed in the article I just mentioned but for me one stands out: the new rpi-gpio node.

A couple of months ago I tried to use a PIR sensor with node-red on my Raspberry Pi, but using interrupts didn't work as expected, as I mentioned here. The new rpi-gpio node was completely rewritten by Dave C-J, as detailed in this thread in the node-red Google group. As mentioned there and also in the node-red release notes, the new node uses the built-in RPi.GPIO python library (part of the newer Raspbian distributions) instead of the wiringpi library; to make sure you have the necessary files, do:

sudo apt-get update
sudo apt-get install python-dev python-rpi.gpio
~/node-red/nodes/core/hardware/nrgpio ver 0

The last command should reply 0.5.8 (or better), which is the version of the RPi.GPIO library.

I installed the new node-red version and followed the steps mentioned above (even if I may not have needed to, since I am using the latest Raspbian, released last December) and the last command returned 0.5.9. I connected the PIR sensor to the Pi as described in this ModMyPi article: VCC to +5V [Pin 2], GND to GND [Pin 6] and OUT to GPIO 7 [Pin 26]. I added a new rpi-gpio in node to the editor, configured it to use pin 26, deployed and it worked from the first try. Simply awesome! Even more awesome: in the new node-red version the node status option is on by default, so I didn't even need to add a debug node: I can see the rpi-gpio node's status reporting 1 as soon as something moves in front of the PIR sensor and 0 when it resets.

One great thing about this PIR sensor that I am not sure I ever mentioned: even when the motion sensor is powered with 5V, the output voltage on the data pin is 3.3V (high) and 0V (low); I found this info in several places - in the ModMyPi article mentioned above, on the Learn Adafruit website and a couple more places like this instructable. This makes it perfect for use with a Raspberry Pi, without any worries about the voltage applied to the GPIO pin.

To end this short post, there is something I've wanted to mention for a while: even if I keep finding new frameworks and services out there (for example, a couple of days ago I discovered Lelylan, and OpenHAB is on my list of things to study deeper), node-red is the service I keep coming back to every time I need to write an app on my Pi, GPIO related or not. It may not have the fancy charts other services have and it may not have the rule engine others do, but nothing beats node-red when you need to come up with real functionality fast, connecting services easily without having to write new code from scratch each time. I really love it! Give it a try - you will love it, too.

Thursday, January 22, 2015

IBM Bluemix and node-red

I heard about Bluemix a few months back, but only a few days ago, after watching a couple of YouTube videos about node-red running in Bluemix, did I decide to give it a try. There is a 30 day free trial after which payment for some services is required; however, after the trial there is a free allowance of 375 GB-hours for each IBM-provided runtime and 375 GB-hours for all other runtimes combined. According to the pricing page, GB-hours = Total GB/App x Number of App Instances x Total Hours running, so 375 free GB-hours per month basically means one app using up to 512 MB of RAM (or more apps with a total of 512 MB of RAM between them) running non-stop, which is pretty cool.
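
A quick sanity check of that math (assuming a 31-day month):

```javascript
// one instance of a 512 MB app, running non-stop for a 31-day month
var gb = 512 / 1024;      // app memory in GB
var instances = 1;
var hours = 24 * 31;      // 744 hours
var gbHours = gb * instances * hours;
console.log(gbHours);     // 372 - just under the 375 GB-hours free allowance
```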

Starting with the node-red boilerplate was a bit bumpy at first: the videos I watched showed different steps, but the service appears to have changed a little since then; currently the steps are: after login, click Create an app, select Web, click Browse Sample Apps and then Explore Samples, choose Node-RED Starter and finally name your app and click Create. In the end though, creating my first app was easy, and using this boilerplate gives you the node.js SDK with node-red preinstalled, along with 2 services: Monitoring and Analytics, free not just during the trial period but after it as well, and Cloudant NoSQL DB, which I believe has a free plan after the trial period - I need to look into it if I decide to keep using it after the trial. Also, this app comes with some pretty cool nodes, like ibmiot, designed to connect the app with IBM's Internet of Things module, and others.

The coolest thing with this setup is that now I have a node-red app running in the cloud, complete with access to the editor, so my Raspberry Pi can take a break from running non-stop and there is no more need for port-forwarding to access my Raspberry Pi's node-red from outside my home network. I tried doing the same thing using Weaved but without luck: I was able to use Weaved to connect to port 1880 and the node-red editor came up without issues, however saving flows didn't work; I guess the underlying code didn't like this setup. Another option to run node-red in the cloud is documented by Chris Mobberley on his awesome Hardware_Hacks blog, but I never got to try it. I assume running node-red in the cloud should also be possible using Heroku, for example, since Heroku is great for hosting node.js apps, but again, I never tried.

I now have a flow running non-stop in Bluemix (which I will go over in a future post) and all I can say is that I am very happy I gave Bluemix a try and I am sure I will continue using it beyond the free trial. If you haven't used it, it is definitely worth checking out. Big thanks to IBM for providing such a great service, to Nicholas O'Leary and Dave Conway-Jones and the others who I believe are responsible for the node-red boilerplate and provide great support in the Bluemix forum, and also to everyone else who steered me to Bluemix through their videos and comments.

[Update] Since Weaved released a new version recently, I decided to give it another try. I uninstalled the previous version 1.2.5 and installed the new 1.2.8, setting up a TCP service on port 1880 (node-red's default editor port); I connected to it and this time the flows saved and worked as expected. This is really awesome! Huge thanks to the Weaved team for all their great work!